Andy's observations as he continues to attempt to know all that is .NET...

Monday, October 22, 2007

Mobile Bingo

Just couldn't help myself in writing a bingo caller for my mobile phone the other week. You see, it was my daughter's birthday and she wanted bingo at the party, and my wife had decided the low-tech approach of pulling numbers out of a bag was the way to go... I wasn't having it: "I could write a .NET app for my phone in 10 minutes to do that," I said. Kind of brave since it was my first mobile phone app, but I paid £300 for this phone so it just has to do more than make calls... You know what, a bit more than 10 minutes, say 30, and I had it working... It just goes to show how easy things have become these days.... I'm thinking of porting whack-a-mole now... although I'm not sure my touch-sensitive screen will last...


 

You can download the app from here


 

Wednesday, October 10, 2007

WPF Extension method

I used my first extension method yesterday with WPF.  I wanted to bring a control to the foreground, which means moving it to the end of its parent's child list.


 

  Parent.Children.Remove(element);

  Parent.Children.Add(element);


 

What I would prefer to do is call SendToFront on the collection with the appropriate child.  Since I don't have any control over the type being used to hold the collection, I would need to resort to extension methods to get a more object-style syntax, resulting in the following code.


 

public static class UIElementCollectionExtensions
{
    public static void SendToFront(this UIElementCollection collection, UIElement element)
    {
        collection.Remove(element);
        collection.Add(element);
    }
}


 

All this is well and good, but it would have been virtually as elegant with an old-fashioned static call on some Util class.  It then dawned on me why these are perhaps so useful: for me, anyway, I rely on IntelliSense to see what I can do with a lot of the WPF controls. With any Util class methods I write I need to already know about them, but extension methods IntelliSense can potentially pick up for me.  So I refactored my code once more into a separate assembly, placing the code into the System.Windows.Controls namespace, so that whenever I use WPF controls and reference my additional extension assembly I get my new methods.  On a large-scale project I can see how this could aid productivity.


 

However, there is something smelly about placing my code into someone else's namespace, so on reflection I think it is far nicer to place all my extension methods inside my own namespace and simply bring them into scope with a using directive for that namespace.
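
For example, a minimal sketch of what I mean (the namespace name is just illustrative):

namespace MyWpfExtensions
{
    using System.Windows;
    using System.Windows.Controls;

    public static class UIElementCollectionExtensions
    {
        public static void SendToFront(this UIElementCollection collection, UIElement element)
        {
            collection.Remove(element);
            collection.Add(element);
        }
    }
}

Then any file that says "using MyWpfExtensions;" gets SendToFront offered by IntelliSense on a UIElementCollection, and any file that doesn't is left untouched.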

Thursday, October 04, 2007

Too much reliance on encryption

A friend of mine was telling me recently how safe manufacturers grade their safes: they don't simply say this is unbreakable/uncrackable (only a fool would say that). What they do say is that you need X amount of TNT or N hours to crack this safe. Clearly limiting their liability, you may think, but actually it's useful and practical information to anyone who owns a safe. Firstly they know that there are no guarantees, but they also know what level of additional security they may need to layer on to get closer to full peace of mind. If it takes 2 hours to crack the safe, the owner could employ a security guard to patrol the safe location every hour, hopefully not leaving a window of opportunity for the cracker to do the deed. When you want to secure digital data, similar considerations need to be taken into account.

I've recently been involved in debating the security of biometric systems used in schools with a biometric firm's Principal Sales Engineer based in the US. The issue we have as a group is that whilst I'm sure all measures are being taken today to secure the data in terms of encryption technology, the plain fact remains that history has shown us that whatever cryptography we use today is likely to be compromised in a reasonable time frame, say 5-10 years. Therefore when we encrypt any sort of data we need to be aware of this, since if the data has uses outside this time window then clearly we cannot rely on this means of security alone.

And this goes to the very heart of the debate on biometrics in schools. The engineer in question dismissed our complaints about encryption technology not being adequate for 10 years plus by first acknowledging this fact (which is a great step forward):

"I personally believe their will be another breakthrough in the next 10

to 15 years. Whether is it quantum computing or the DNA processor they

have been working on for the last 10 years. They can now beat a person

in tic tac toe. 5 years ago they could count to 10 with 80 percent

accuracy. They are much faster because they don't have to calculate

they just know the answer. But it is going to be a while and belive it

or not there are higher levels of encryption out there. There are 512

and even 1028 based encryption. Like the computer industry, there is

always someone out there building a stronger based encryption."


 

Furthermore, that statement shows that the industry as a whole knows we need stronger encryption, because we know it's only a matter of time before it is broken; but he then goes on to say:

"As I said above the great thing about using encryption on keys and or

files is the fact that if there is a problem with a key or the actual

encryption you can encrypt the info with a better encryption or even

encrypt the encryption such as is done with 3 DES. It is DES encrypting

DES encrypting DES. The US government went from a standard of 3 DES to

AES 256. Not because 3 DES had been broken…. It has not. But because

they saw there were some weekensses that could be exploited and maybe in

the next 10 years or so it may be broken. Now do you think that all the

info that they have stored in 3 DES is still in 3 DES… I think not.

They reencrypted it in the new standard."


 

Whilst this is all well and good, there is a piece of this solution that makes it work for the US government but not for the average school: it relies on the person responsible for the re-encryption having guaranteed sole access to the data, in other words no one has taken an illicit copy, or more likely holds some backup media or an old hard disk. Whilst I can imagine that the US government has plenty of physical security measures in place to make sure they own the only copy of the data, I can't imagine that the average school will have similar systems in place, and let's be realistic, they can't, with theft being the obvious risk.


 

In fact I had a similar experience when working for Cisco: we were trying to pitch wireless networking to a large bank. Whilst they accepted the notion that the encryption technology we had chosen prevented illegal access to the network, we could not demonstrate to them that any data sniffed off the network could not be decrypted within a time frame that still made the data useful to an outsider. These guys were smart and truly understood the nature of securing business data.

So to summarise: a responsible biometric manufacturer would secure biometric data as best they can today, but once the software has been deployed, if that data is to be truly secure it needs sufficient physical security measures, provided by the owner, to ensure that in the future the encryption-based solution still has adequate merit. The moment you do not have complete ownership of the data all bets are off... and by their own admission the biometric provider in this case said their guarantee is for approximately 10 years, whereas biometric data for kids is sensitive for 60-70 years.


Friday, September 28, 2007

Patterns, Patterns everywhere

I've not blogged for a while now... mainly due to being snowed under writing Developmentor's "Code Smarter with Design Patterns" course with my co-author Kevin Jones (Course Details) and working at a local firm (yes, Chippenham has industry). My head is now full of a handful of posts I want to make, and I've finally found time to write this one...

Whilst writing the design patterns course I managed to stumble upon various variations of standard patterns. In the course we discuss various forms of the standard Singleton pattern, such as the ThreadScoped singleton, where each thread has access to its own single instance rather than a single instance app-domain wide; this is potentially useful for preventing contention where multiple threads are competing over the use of the singleton.

But oddly enough I stumbled upon another use of the singleton pattern when developing an example of a CopyOnWrite proxy. There are times when multiple threads request a collection of items from some object. If all the threads are only reading from the collection, life is good... If there is the potential for the threads to update the collection for their private use then we should really be returning a copy of the collection to each thread. This could be considered inefficient if the collection is only occasionally privately updated, and that is where the CopyOnWrite proxy comes in. By returning a proxy to the collection, as opposed to the collection itself, we can control access to the collection in such a way as to provide access to the shared view whilst the thread is only reading; the moment it attempts to update the collection, the proxy takes a copy of the collection it is fronting, and from that moment on the proxy wraps the copy. The client is blissfully unaware of this.

Ok, so inside my proxy implementation all the methods that are deemed write operations need to determine if a copy has been made previously and, if not, make one. In other words I have a piece of code that I wish to run once, and only once, for the life of the object. This sounded similar to the singleton, where for a given type I only want one instance; in my case I wanted to define a method in a class that only ever gets run once.

For example I was creating a proxy for a List, thus inside my proxy type I implemented the method

public void Insert(int index, T item)
{
    // Write operation, so make a private copy of the list now
    MakeCopy();

    subject.Insert(index, item);
}

Where subject is the actual List. Clearly I don't want MakeCopy to run every time the Insert method is called, just the first time on this proxy. Simple, you might say: just test a Boolean flag to see if you have made a copy. Whilst this would be guaranteed to work in a single-threaded environment, it would not in a multi-threaded one, so now you end up writing code that performs some kind of synchronization prior to determining whether you have made a copy or not.

In this scenario I could simply create a method called MakeCopyIfNecessary(), call it from each write method, and make that method do the appropriate synchronization; to do this efficiently requires double-checked locking. However this got me thinking whether there was a more reusable way of doing this, so that if I needed this functionality again I could somehow reuse not just the pattern but the code. Below is a type I defined which wraps up an Action delegate such that it will only invoke the delegate once, irrespective of the number of times you call DoOnlyOnce():

public class SingletonAction<T>
{
    object onlyOnceLock = new object();

    private Action<T> action;

    public SingletonAction(Action<T> action)
    {
        this.action = action;
    }

    public void DoOnlyOnce(T arg)
    {
        if (action != null)
        {
            SynchronizedDoOnlyOnce(arg);
        }
    }

    private void SynchronizedDoOnlyOnce(T arg)
    {
        lock (onlyOnceLock)
        {
            if (action != null)
            {
                action(arg);
                action = null;
            }
        }
    }
}


 

Inside my proxy class I create an instance of this type supplying it the corresponding method I wish to only be executed once.

class CopyOnUpdateList<T> : IList<T>
{
    private IList<T> subject;

    private SingletonAction<object> makeCopy;

    public CopyOnUpdateList(IList<T> list)
    {
        makeCopy = new SingletonAction<object>(MakeCopy);
        subject = list;
    }

    // ....

    public void Insert(int index, T item)
    {
        makeCopy.DoOnlyOnce(null);

        subject.Insert(index, item);
    }
}

However, if the copy logic were wrapped up as an anonymous method this would be more true to the values of the singleton pattern, as then the only way the code could be invoked would be via my wrapper; wrapping up an arbitrary method doesn't stop it from being invoked elsewhere.

public CopyOnUpdateList(IList<T> list)
{
    makeCopy = new SingletonAction<object>(delegate
    {
        // Code required to perform the subject copy
    });

    subject = list;
}

Now the "only" way that code can be run is via my wrapper class which ensures it only ever gets run once...

Thursday, May 17, 2007

Obtaining a balanced view

I was asked yesterday to visit Gloucestershire county council to discuss the use of biometrics in schools. The council have set up a committee with the responsibility of forming a policy around the use of biometrics in the county; I must congratulate Gloucestershire here as they are the first government body that I'm aware of that is actually attempting to tackle this issue fully. Currently schools are allowed to spend their money as they wish, which in one sense is great, but on the other side there is always the worry of whether the head of a school can actually devote sufficient time to do full due diligence on a piece of technology. Certainly in the case of biometrics a lot of the issues are not obvious and present new challenges to the school, such as correctly disposing of the data in a safe and secure manner, and understanding the potential long-term consequences of how the data may be used and abused in the future. These issues require you to be a technologist at minimum, but really require someone to be a technology visionary.

Investing time and effort in understanding the implications of biometrics does not need to be done by every school; it needs to be done centrally, where money can be spent employing experts from both sides of the argument to present a balanced and informed view that will enable the decision makers to make an efficient and informed decision. This information needs to be made available in a prominent place, not buried on some government web site.

I also think it's important here to include parents in the decision-making process, providing them with the information in a clear and concise form. There is a precedent here: it's pretty well accepted that smoking is bad for you; people can still smoke, but the government takes the responsible step of making it very clear that this could affect your long-term life expectancy. Perhaps a similar mandatory warning should appear on all acceptance forms for biometric systems, just above the signature line.

"WARNING: Some experts feel that in the future it will be possible for this information to be used to steal your child's identity"

All I'm asking for is a fair representation of the facts; surely we owe it to all decision makers to make both arguments equally available and allow them to make the decision based on the full range of facts.


Monday, May 14, 2007

Minimum Debugging tool set


 

This week I've been teaching DM's Effective .NET course; one theme that runs through the course is the ability to debug applications in the field. The tools in your arsenal here are the obvious ones like perfmon, but also the native Windows debuggers. The native debugger can be used to take snapshots of the suspect process ("dumps" seems to get a giggle from a few students). You therefore need to put the debugger on the client's machine. The typical way to do this is to download the Debugging Tools for Windows, which is a reasonable size once unzipped, 37MB... This includes a graphical debugger and a few other bits and bobs; whilst these tools are useful on a developer's machine they are not needed to take a snapshot, and are therefore not relevant for anyone other than a developer. This prompted a question from a student as to what the minimum set of files needed to take a dump from a user's machine is.

After a little trial and error we came up with the following list of files:


 

    tlist.exe

    dbgeng.dll

    dbghelp.dll

    adplus.vbs

    cdb.exe

This results in a 4.6 MB unzipped install for a client's machine; a far more reasonable size, and one that could be included in every employee's desktop configuration.
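
For reference (this is from memory, so treat the exact switches as approximate), taking a hang-mode dump with just these files looks something like the following, run from the folder they were copied to, where 1234 is the process id of the suspect process and MyApp.exe its image name:

    cscript adplus.vbs -hang -p 1234 -o c:\dumps

or, to sit and wait for the process to fail:

    cscript adplus.vbs -crash -pn MyApp.exe -o c:\dumps

The resulting .dmp files can then be copied back to a developer's machine and opened in WinDbg.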


Wednesday, March 21, 2007

Free Running Threads

Last week I was teaching Developmentor's Effective .NET 2 course; as part of that course we spend at least a day looking at various patterns for building multi-threaded applications. With the rise of multi-core machines it is becoming increasingly important to write algorithms that can scale with the availability of new cores. One of the topics we cover is thread safety, and I typically write an application that simply performs an i++ operation on multiple threads. On a single core this does not typically cause a problem, but on a multi-core machine it means we rarely get the value we expect, since i++ is in fact multiple CPU instructions, e.g.


 

MOV R0,i

INC R0

MOV i,R0

One thread could load i into a register while another thread is doing the same; both then write back the same incremented value, so i ends up incremented by 1 rather than 2. To solve this problem we have a variety of synchronization techniques at our disposal, some lighter weight than others:

Interlocked.Increment, Monitor.Enter/Exit and OS synchronization primitives like Mutex
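
To make the three options concrete, here is a minimal sketch (mine, not the course demo) of the same increment protected each way:

using System.Threading;

class IncrementOptions
{
    static int i;
    static readonly object sync = new object();
    static readonly Mutex mutex = new Mutex();

    static void InterlockedIncrement()
    {
        // Lightest weight: a single atomic CPU instruction
        Interlocked.Increment(ref i);
    }

    static void MonitorIncrement()
    {
        // lock == Monitor.Enter/Exit: user-mode first, kernel only under contention
        lock (sync)
        {
            i++;
        }
    }

    static void MutexIncrement()
    {
        // Heaviest weight: an OS kernel object, so every acquire/release is a kernel transition
        mutex.WaitOne();
        i++;
        mutex.ReleaseMutex();
    }
}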


 

In the stress test I conducted in class I had multiple threads updating a single integer... First I demonstrated on a multi-core machine that i indeed didn't have the correct value after 10 threads had each attempted to increment it 10000 times. I then moved to using Interlocked.Increment and now the result was as expected, but it was slower than a simple i++. All well and good so far... I then moved to using Monitor.Enter and Monitor.Exit and to my amazement that took pretty much the same time as Interlocked... so, as all engineers do, we ran it again because that result was surely just a glitch... but after numerous runs it kept coming out the same... When I first developed this demo I did so on a single-core machine and this was its first outing on a dual core, so what went wrong? All my multi-threading life I've been told that Interlocked is far cheaper than the heavier-weight mutex style of synchronization. I then re-ran the demo with CPU affinity set to a single CPU, and got the results I would have expected, with Interlocked being at least an order of magnitude quicker than the Monitor-based version.

During the lab break I stepped back and had a think about what could have caused this, and the light eventually went on: all the threads were in fact sharing a common variable, and with dual core each core is going to be attempting to cache this value. This has a very negative effect on the cache. Why? Well, the cores try to maintain cache coherency by marking parts of the cache dirty when they update them, forcing the other core to reload its data from main memory... If this wasn't the case, in our example we would have had the wrong numbers again... So far, possibly what you would have expected... however the process of marking a piece of memory as dirty is not as simple as marking a single word; it's less granular than that. The core marks what's called a cache line as dirty, meaning any data on the same cache line as the value being updated is effectively marked as invalid.

So imagine two integers next to each other in memory. Thread A increments integer one and Thread B increments integer two. From a high-level programming language perspective this is perfect: each thread has its own private resource, there is no need to synchronise, and the threads therefore run freely... The perfect threaded app... however, not so fast: if both integers occupy the same cache line we take a performance hit...

To demonstrate this fact I wrote some code that simply implements a parallel i++. There are three scenarios (a rough sketch of the test harness follows the list):

  • Interlocked.Increment( ref i )
  • Interlocked.Increment( ref ManyCounters[Thread.CurrentThread.ManagedThreadId ] )
  • Interlocked.Increment( ref ManyCounters[Thread.CurrentThread.ManagedThreadId * 10000 ] )
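
A rough reconstruction of the kind of harness involved (the names, iteration counts and strides are my guesses; the real code is behind the Code link at the end of the post):

using System;
using System.Diagnostics;
using System.Threading;

class CacheLineDemo
{
    const int Iterations = 10000000;

    static void Run(string label, int threadCount, int stride)
    {
        // stride 0 = one shared counter, 1 = adjacent counters, 1024 = counters on separate cache lines
        long[] counters = new long[threadCount * stride + 1];
        Thread[] threads = new Thread[threadCount];

        for (int t = 0; t < threadCount; t++)
        {
            int slot = t * stride;  // each thread increments its own slot
            threads[t] = new Thread(delegate()
            {
                for (int i = 0; i < Iterations; i++)
                    Interlocked.Increment(ref counters[slot]);
            });
        }

        Stopwatch watch = Stopwatch.StartNew();
        foreach (Thread thread in threads) thread.Start();
        foreach (Thread thread in threads) thread.Join();
        watch.Stop();

        Console.WriteLine("{0} with {1} Threads took {2}", label, threadCount, watch.Elapsed);
    }

    static void Main()
    {
        Run("Single shared counter", 2, 0);
        Run("Multiple counters", 2, 1);
        Run("Multiple Sparse Counters", 2, 1024);
    }
}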


     

Whilst the last two variants will not produce the correct total, the middle version shows that although the threads are not sharing the same variable, the fact that the counters probably all live on the same cache line is the crucial factor. Below are the results of running the code using 1 and 2 threads. In the first case, with the high level of contention, we see that a single thread would have been more efficient.

Single shared counter with 1 Threads took 00:00:00.2065516

Single shared counter with 2 Threads took 00:00:00.4303586

In the second case with multiple counters we are still taking a hit on performance even though the counters are different ints, and thus we are not incrementing the same location

Multiple counters with 1 Threads took 00:00:00.2184554

Multiple counters with 2 Threads took 00:00:00.5177658

Only in this last case, where we ensure the counters are not on the same cache line, do we see both cores being used efficiently.

Multiple Sparse Counters ( ~4k apart ) with 1 Threads took 00:00:00.2180101

Multiple Sparse Counters ( ~4k apart ) with 2 Threads took 00:00:00.1603323

So what does all this mean? Well, it certainly shows that just because your code has no contention at the level of a high-level programming language, it doesn't mean it will have no contention when the code finally meets the hardware. This is obviously a big issue, and more so with the virtualisation in .NET: how do I know the size of the cache lines, or how my data will be laid out in memory... Computing just got fun again.....Code

Thursday, February 08, 2007

Reduce your risk; only store what you really need

I've been drawn further into the campaign to prevent biometric information being used by schools; on Tuesday I attended a briefing session for MPs with the aim of highlighting the issues with adopting this technology in the context of school children. The BBC were present and did an article for BBC online:

http://news.bbc.co.uk/1/hi/uk_politics/6336799.stm

What we as campaigners increasingly find hard to understand is how the Department for Education fails to grasp the difference between data that has a relatively short validity and immutable data that lasts a lifetime (I can't change my fingerprint).

For me one of the best quotes from the DfES with regard to schools holding biometric data is:

"They are well used to handling all kinds of sensitive information to comply with data protection and confidentiality laws.

Schools have historically failed here; a forensic computer science faculty bought hard drives off eBay and extracted school records (http://www.theregister.co.uk/2005/02/17/hard_drive_data/). A colleague also told me recently how he took a school computer out of a skip... Personally I don't condemn the schools here; after all, their primary focus is on education and not on securing personal information. In fact this is also the case in business: the IT systems and their security are additional burdens which do not enhance the core functionality of the business, they are seen as a necessary evil, and you often find they are given second- or third-rate priority.

Information security can never be guaranteed, so we should only gather the least amount of information required to perform our function. Software engineers have long been aware of running applications with least privilege, thus limiting the risk their application exposes a system to if it is hacked; even Microsoft is adopting this strategy at last with Vista. This poses the question: do schools need biometric information in order to educate our children? If the answer is no then it should not be used in schools... since it creates a further burden on a system which is already showing signs of failing under the current security workload.


Friday, February 02, 2007

Custom ToString() for Flag based Enums, and a splatter of Unit Testing

A project that I'm currently working on has an enum type, defined like so:

[Flags]
public enum EventDayMask
{
    NONE = 0,
    SUNDAY = 1,
    MONDAY = 2,
    TUESDAY = 4,
    WEDNESDAY = 8,
    THURSDAY = 16,
    FRIDAY = 32,
    SATURDAY = 64
}


 

Calling ToString on a value of type EventDayMask results in a comma-separated list of the various set bits:

EventDayMask weekend = EventDayMask.SATURDAY | EventDayMask.SUNDAY;


 

Console.WriteLine( weekend.ToString() );


 

Would produce SATURDAY,SUNDAY. Whilst this is a massive improvement on what we had with C and C++, I would in fact like a more appealing string, such as Saturday, Sunday.

You can't override ToString for enums, which means you can't write your custom string generation as part of the enum type itself. There are many blog posts on how you can achieve this for non-flags enums: apply custom attributes to your enum definition, then have a static method on a Utils class that first determines the value of the enum, then determines if there is a custom attribute associated with that value and, if so, uses the string associated with that custom attribute.

E.g

public enum EventDayMask
{
    NONE = 0,

    [Description("Sunday")]
    SUNDAY = 1,

    [Description("Monday")] // etc..
    MONDAY = 2,

    TUESDAY = 4,
    WEDNESDAY = 8,
    THURSDAY = 16,
    FRIDAY = 32,
    SATURDAY = 64
}


 

However with flags-based enums it gets a bit more complex, since the value is not a single enum member but a combination of many valid values, as in the EventDayMask example above. To extend the technique to flags-based enums you need to effectively test the enum value against each possible flag value. I managed to write a version that did this by stepping through each possible value and performing a logical AND against the underlying enum value, and that worked fine, until I added an or'd value to my enum definition.
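
Roughly, that first version looked like this (my reconstruction, not the code that survived into the project; the real version appends the Description attribute text rather than flag.ToString(), and it needs System and System.Text in scope):

public static class FlagsEnumFormatter
{
    public static string FlagsToString(Enum enumValue)
    {
        StringBuilder result = new StringBuilder();
        ulong value = Convert.ToUInt64(enumValue);

        foreach (Enum flag in Enum.GetValues(enumValue.GetType()))
        {
            ulong flagBits = Convert.ToUInt64(flag);

            // Skip the zero member, then test each defined value against the underlying bits
            if (flagBits != 0 && (value & flagBits) == flagBits)
            {
                if (result.Length > 0)
                {
                    result.Append(", ");
                }
                result.Append(flag.ToString());
            }
        }

        // A combined member such as the WEEKEND value added below passes the AND test as well as
        // SATURDAY and SUNDAY, so a weekend value escalates rather than collapsing to "WEEKEND"
        return result.Length > 0 ? result.ToString() : enumValue.ToString();
    }
}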

public enum EventDayMask

{

    ...

[Description("Weekend")]

    WEEKEND = SATURDAY | SUNDAY

}


 

The built-in ToString() works as expected: if the underlying value is SATURDAY | SUNDAY it outputs WEEKEND. Obviously I wanted the same behaviour, and it was at this point that I realised this small task was now going to escalate. A quick cup of tea later and I decided to change my approach: why not simply use the built-in ToString() to generate the initial string and then replace each of its component parts with the value stored in the attribute? This simplified the code greatly... the downside being that I'm tightly coupled to the output format of Enum.ToString(). To counteract this I have a handful of unit tests as part of the project that test this functionality, so if MS ever change the formatting algorithm for Enum.ToString() I'm alerted immediately. This for me is another example of why unit testing is so powerful; I can make expedient decisions that I wouldn't have dared make without it....


 

[AttributeUsage(AttributeTargets.Field, AllowMultiple = false)]
public class EnumValueDescriptionAttribute : Attribute
{
    public EnumValueDescriptionAttribute(string description)
    {
        Description = description;
    }

    public string Description;
}

public static class EnumUtils
{
    private const char ENUM_FLAGGED_VALUE_SEPERATOR_CHARACTER = ',';

    public static string EnumToString(Enum enumValue)
    {
        StringBuilder enumValueAsString = new StringBuilder();

        Type enumType = enumValue.GetType();

        string[] enumToStringParts = enumValue.ToString().Split(ENUM_FLAGGED_VALUE_SEPERATOR_CHARACTER);

        foreach (string enumValueStringPart in enumToStringParts)
        {
            FieldInfo enumValueField = enumType.GetField(enumValueStringPart.Trim());

            EnumValueDescriptionAttribute[] enumDesc = enumValueField.GetCustomAttributes(typeof(EnumValueDescriptionAttribute), false) as EnumValueDescriptionAttribute[];

            if (enumValueAsString.Length > 0)
            {
                enumValueAsString.Append(ENUM_FLAGGED_VALUE_SEPERATOR_CHARACTER);
            }

            if (enumDesc.Length == 1)
            {
                enumValueAsString.Append(enumDesc[0].Description);
            }
            else
            {
                enumValueAsString.Append(enumValueStringPart);
            }
        }

        return enumValueAsString.ToString();
    }
}
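
For what it's worth, the kind of pinning test I mean looks something like this (NUnit syntax; it assumes MONDAY and FRIDAY are decorated with EnumValueDescriptionAttribute values of "Monday" and "Friday"):

using NUnit.Framework;

[TestFixture]
public class EnumUtilsTests
{
    [Test]
    public void FlaggedValueIsFormattedUsingDescriptions()
    {
        EventDayMask mask = EventDayMask.MONDAY | EventDayMask.FRIDAY;

        // EnumToString piggybacks on Enum.ToString()'s "A, B" output; if Microsoft ever
        // change that format this assertion fails and flags the coupling immediately.
        Assert.AreEqual("Monday,Friday", EnumUtils.EnumToString(mask));
    }
}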

Friday, January 12, 2007

TransactionScope and DataAdapters

Recently I had to write some code that persisted changes to two DataTables in a SQL Server database, with both updates inside a single transaction. Since I was using .NET 2.0 I decided to use TransactionScope.

using (TransactionScope tx = new TransactionScope())
{
    firstTableAdapter.Update( firstTable );
    secondTableAdapter.Update( secondTable );

    tx.Complete();
}


 

I was careful to ensure that both adapters used the same database connection object; however, there was still a nasty side effect. First, a bit of background.

In order for a resource manager (in this case SQL Server) to take part in a transaction it must first enlist in the transaction; SQL Server determines whether to do this when a connection is opened. Opening a connection when there is a transaction associated with the current thread will result in SQL Server placing that connection inside a transaction; if no other enlistment has happened then SQL Server will create a local transaction and thus manage the transaction itself.

If another resource manager wishes to enlist in the transaction then no single resource manager can be responsible for coordinating it, and the DTC (Distributed Transaction Coordinator) is invoked. The role of the DTC is to ensure the properties of the transaction hold across multiple resources, such that a failure to commit one set of resources through one resource manager causes a rollback in the others. SQL Server is the only resource manager that currently allows the promotion of a local transaction to a distributed transaction; other resource managers like MSMQ will always create a distributed transaction irrespective of the number of previous enlistments.

Ok, so to sum up: if you enlist multiple resource managers inside a single transaction the DTC is invoked and the transaction becomes a distributed transaction. The obvious consequence is that transaction management is now more expensive, so we want to avoid having a distributed transaction unless we really need one.

When first playing with TransactionScope we can end up with a distributed transaction in cases where, from a high-level logical perspective, it might seem strange.


 

using (TransactionScope tx = new TransactionScope())
{
    conn.Open();
    // Do DB Work
    conn.Close();

    conn.Open();
    // Do DB Work
    conn.Close();

    tx.Complete();
}


 


 

In the above example our transaction is promoted to a distributed transaction. Why? Because on the second Open SQL Server will attempt to enlist in the transaction; because there is already an enlistment, System.Transactions will promote the existing transaction to be managed by the DTC, and the new enlistment will also be managed by the DTC. This does seem odd at first, since you are in fact only using a single resource manager; there is a rumour that this will be fixed in the future. If you do not have the DTC service running on your machine the code above will throw an exception. If you have the DTC running then the code runs to completion and you are none the wiser about the promotion, except for a slight pause. To see when the promotion has taken place, simply call the function below at various points and look at the transaction identifiers.

private static void PrintTransaction(Transaction transaction)
{
    Console.WriteLine("Local Id = {0}", transaction.TransactionInformation.LocalIdentifier);
    Console.WriteLine("Global Id = {0}", transaction.TransactionInformation.DistributedIdentifier);
}
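
For example, sprinkling it around the two Opens in the snippet above (conn being the same connection object as before) shows the promotion happen; the global id stays at Guid.Empty until the second Open:

using (TransactionScope tx = new TransactionScope())
{
    conn.Open();
    PrintTransaction(Transaction.Current);   // Global Id is Guid.Empty: still a local transaction
    conn.Close();

    conn.Open();
    PrintTransaction(Transaction.Current);   // Global Id now has a value: promoted to the DTC
    conn.Close();

    tx.Complete();
}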


 

Back to the data adapter example. When the data adapter attempts to do an update, if the connection is not currently open it opens the connection and performs the update; and if it opened the connection, it politely closes it again when it is done. So when the first Update runs, opening the connection creates a local transaction which gets enlisted; when the second Update runs it also needs to open the connection, which triggers a second enlistment and thus causes the behaviour observed above. To fix the problem we simply have to ensure that the connection is only opened once.

using (TransactionScope tx = new TransactionScope())
{
    conn.Open();
    try
    {
        firstTableAdapter.Update( firstTable );
        secondTableAdapter.Update( secondTable );

        tx.Complete();
    }
    finally
    {
        conn.Close();
    }
}


 

Now when the calls to Update are made, the data adapter sees that the connection is already open, so the SQL commands run in the context of the existing transaction and no promotion is necessary. It is therefore essential that when you create the data adapters you create them with the same connection object you are using inside the transaction scope.
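
For example (a sketch; the table names, connection string and use of SqlCommandBuilder are placeholders for however the adapters are actually configured), both adapters are built over the one connection object used inside the scope:

SqlConnection conn = new SqlConnection(connectionString);

SqlDataAdapter firstTableAdapter = new SqlDataAdapter("SELECT * FROM FirstTable", conn);
SqlDataAdapter secondTableAdapter = new SqlDataAdapter("SELECT * FROM SecondTable", conn);

// The command builders generate the INSERT/UPDATE/DELETE commands that Update() needs,
// all bound to that same connection
SqlCommandBuilder firstBuilder = new SqlCommandBuilder(firstTableAdapter);
SqlCommandBuilder secondBuilder = new SqlCommandBuilder(secondTableAdapter);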

All this poses the question: how can I ensure that I don't accidentally cause a promotion to take place? There are two solutions that come to mind:

  • Disable the DTC on your machine
  • Add a Debug.Assert prior to calling the outer Complete

The only way I could find to determine if a promotion had happened was to look at the Distributed Transaction Identifier.

Debug.Assert(IsInDistributedTransaction() == false);

tx.Complete();

...

private static bool IsInDistributedTransaction()
{
    return ((Transaction.Current != null) &&
            (Transaction.Current.TransactionInformation.DistributedIdentifier != Guid.Empty));
}


 

So whilst TransactionScope has certainly improved the programming model for transactions by hiding a lot of the complexity required for distributed transactions, it is essential to understand how it works if you are to avoid accidentally creating a distributed transaction.

Friday, January 05, 2007

WPF Pong


 

Couldn't keep still over the Xmas break. My kids were playing some of those classic games we used to play as kids, although now inside a tiny console you plug into your TV. This got me thinking about writing a classic game using WPF; not being much of a games writer I kept things simple and decided to implement the classic game Pong... My main aim is to continue to ramp up on WPF. One area I wanted to explore further was the use of content templates and styles: I wanted not only to clearly separate UI layout from UI behaviour, but also to allow different skins to be applied to the game, one being the classic black and white look, another being a more 21st-century look and feel.

To implement skinning in WPF you make use of styles contained in resource dictionaries. When a control references a named style the control tree is walked: for each control its resource dictionary is searched for a style by that name, and if none is found the search continues up the tree. Styles allow you to set properties of the control, very similar to ASP.NET skins. So in the case of the bats you can define a style that sets the colour, width and length of the bats, and apply it to both player one's and player two's bat, keeping both bats looking the same.

In order to allow skins to be swapped I placed all the resource definitions into their own resource dictionaries. Each skin is then an instantiation of a resource dictionary; when the window is launched I simply associate my window's resource dictionary with the currently selected skin. When any of the controls wish to reference styles located in resources they walk up the control tree looking for the resource, and should find it in the resource dictionary associated with the window. If the user wishes to change the current skin I simply change the resource dictionary associated with the window, and the look and feel changes to reflect the new skin.

this.Resources = newSkin;// this is the current window
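
For example, with each skin kept in its own XAML resource dictionary, the swap can look something like this (a sketch; the file name is mine, and it assumes ClassicSkin.xaml is compiled into the project):

// Load the skin's resource dictionary from a XAML file in the project
ResourceDictionary classicSkin =
    (ResourceDictionary)Application.LoadComponent(new Uri("ClassicSkin.xaml", UriKind.Relative));

// Swapping the window's resource dictionary re-skins every control bound with DynamicResource
this.Resources = classicSkin;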


 

Below is an extract from GameWindow.xaml; the style of the stack panel is defined through a resource named StatusPanel. Notice that DynamicResource is used as opposed to StaticResource: if you want any changes in resources to be immediately reflected then you need to bind using a DynamicResource. These are slightly more expensive, since the infrastructure needs to keep track of any changes.

<StackPanel Grid.Column="0" Grid.Row="3" Style="{DynamicResource StatusPanel}" >


 

One really cool feature of WPF is the ability to change the inner content of a control via the template property. I used this to great effect when displaying the scores using block-character-style numbers: a data trigger selects the appropriate inner content template based on the value of the score.

When I have some more free time I want to add WPF animation effects so that the bat shudders when the ball hits it...Oh and my QA engineer ( AKA my son ) wants to enhance the game so you can move the bats forwards and backwards too...( "Just like real tennis." )

Oh I also perhaps need to have some instructions in regard to keys etc...but for now

A and Z player one

K and M player two

And Space to serve..

First to 9 wins...

When I first started hearing about WPF and how designers would now rule over UI, I started to get the same feelings I had when I first saw VB 6. But what seems to be becoming clear is that whilst the designers can really do cool stuff, they need good exposure to the underlying model, so there certainly is still a significant role for us UI developers in exposing the appropriate information, albeit perhaps slightly removed from the coal face...


 

You can download the full source from WPFPong, or for any ClickOnce fans you can install it via ClickOnce.

About Me

I'm a freelance consultant for .NET-based technology. My last real job was at Cisco Systems, where I was a lead architect for Cisco's identity solutions. I arrived at Cisco via acquisition and prior to that worked in small startups. The startup culture is what appeals to me, and that's why I finally left Cisco after seven years..... I now fill my time through a combination of consultancy and teaching for Developmentor... and working on insane startups that nobody with an ounce of sense would look twice at...