Andy's observations as he continues to attempt to know all that is .NET...

Friday, July 31, 2009

Linq for NHibernate

It's been a while coming, but it is now possible to use LINQ against NHibernate. I think it's a shame that MS didn't get behind the Hibernate family when they first created LINQ, rather than spending all that time (and still spending it) trying to create their own ORM; it's a marriage made in heaven, combining a mature ORM with language integrated query.

I’ve yet to try out the integration, but hopefully will be over the next few weeks…
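When I do get to it, I'd expect a query to look something like the sketch below. To be clear, this is just illustrative: the Linq<Product>() extension method over ISession, the Product entity and the already-configured sessionFactory are my assumptions, and the actual method name may well differ in the released provider.

// Illustrative only: assumes the provider exposes an IQueryable over the
// NHibernate ISession via a Linq<T>() extension method, and that a Product
// entity is mapped. Names may differ in the actual provider.
using (ISession session = sessionFactory.OpenSession())
{
    var cheapProducts = from p in session.Linq<Product>()
                        where p.Price < 10m
                        orderby p.Name
                        select p;

    foreach (var product in cheapProducts)
    {
        Console.WriteLine(product.Name);
    }
}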

Click Here for the announcement

Thursday, July 30, 2009

WARNING…Cancellation support may make things go slow…

.NET 4 Tasks offer much better support for task cancellation: unlike QueueUserWorkItem, tasks can be cancelled before they commence, and the Task library offers a standard way for running tasks to detect and report cancellation.  I recently recorded a screencast that demonstrates the new Task API, including the cancellation support.  This blog post isn't so much about the cancellation mechanics as some guidance on how best to use it if you don't want to hurt the throughput of your task.

Below is some code that calculates pi; it expects to be run inside a task and supports the notion of cancellation.

 

private static double AbortableCalculatePi()
{
    // Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    double pi = 1;
    double multiplier = -1;

    const int N_ITERATIONS = 500000000;

    for (int nIter = 3; nIter < N_ITERATIONS; nIter += 2)
    {
        // Check for cancellation on every single iteration
        if (Task.Current.IsCancellationRequested)
        {
            Task.Current.AcknowledgeCancellation();
            return 0.0;
        }

        pi += (1.0 / (double)nIter) * multiplier;
        multiplier *= -1;
    }
    return pi * 4.0;
}





So all well and good, until you benchmark it against the version with no cancellation support and find it takes almost twice as long.  The reason is that the cost of detecting cancellation is high in relation to the work being done.  The important aspect of cancellation is being able to respond in a meaningful time for the client; at present we are being way too aggressive in checking.



One option would be to check only every N iterations…




private static double BetterAbortableCalculatePi()
{
    double pi = 1;
    double multiplier = -1;

    const int N_ITERATIONS = 500000000;

    for (int nIter = 3; nIter < N_ITERATIONS; nIter += 2)
    {
        // Only check for cancellation periodically, not on every iteration
        if ((nIter - 3) % 100000 == 0)
        {
            if (Task.Current.IsCancellationRequested)
            {
                Task.Current.AcknowledgeCancellation();
                return 0.0;
            }
        }

        pi += (1.0 / (double)nIter) * multiplier;
        multiplier *= -1;
    }
    return pi * 4.0;
}





This took a third less time than the previous, more aggressive version.  However, the if block's effect on the pipeline and the additional maths still add a cost over the version that had no cancellation support.



So a third approach is called for, this time refactoring the algorithm to use two loops instead of one, where the check for cancellation is done once per iteration of the outer loop. This results in very little additional cost.




private static double OptimisedAbortableCalculatePi()
{
    double pi = 1;
    double multiplier = -1;

    // The inner loop advances one term per iteration, so it needs half the
    // iteration count of the single-loop versions to do the same work.
    const int N_ITERATIONS = 500000000 / 2;

    const int OUTER_ITERATIONS = 10000;
    const int INNER_ITERATIONS = N_ITERATIONS / OUTER_ITERATIONS;

    int i = 3;
    for (int outerIndex = 0; outerIndex < OUTER_ITERATIONS; outerIndex++)
    {
        for (int nIter = 0; nIter < INNER_ITERATIONS; nIter++)
        {
            pi += (1.0 / i) * multiplier;
            multiplier *= -1;
            i += 2;
        }

        // Check for cancellation just once per outer iteration
        if (Task.Current.IsCancellationRequested)
        {
            Task.Current.AcknowledgeCancellation();
            return 0.0;
        }
    }
    return pi * 4.0;
}





Here are the timings I got from the various approaches:



NoAbortableCalculatePi = 3.14159264958921 took 00:00:03.7357873

AbortableCalculatePi = 3.14159264958921 took 00:00:09.6137173


BetterAbortableCalculatePi = 3.14159264958921 took 00:00:06.3826212


OptimisedAbortableCalculatePi = 3.14159265758921 took 00:00:03.6883268



As the figures show, there is virtually no difference between the first and last run, but a considerable difference when cancellation is inserted into the core of the computation.



So to sum up: whilst cancellation support is a good thing, the frequency with which you check for it can have an impact on the overall performance of your algorithm.  Cancellation is something we want to support, but in general users probably won't need it, so we need to strike the right balance between throughput and responding to cancellation in an appropriate timeframe.

.NET 4 Tasks and UI Programming

Just uploaded a new screencast covering how to marshal results from .NET 4 Tasks back onto the UI thread.  One method is to continue to use the same APIs from previous versions of .NET, namely SynchronizationContext.Post; the Task-based API offers an alternative, and in some cases more elegant, solution using the ContinueWith method.
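To give a flavour of the ContinueWith approach, here is a minimal sketch. CalculatePi and resultLabel are just placeholders of mine; the key pieces are Task.Factory.StartNew, ContinueWith and TaskScheduler.FromCurrentSynchronizationContext, which schedules the continuation back onto the UI thread's synchronization context.

// Minimal sketch: CalculatePi and resultLabel are illustrative placeholders.
// Capture the UI thread's scheduler while on the UI thread (e.g. in a button
// click handler), run the work on a background task, then marshal the result
// back by scheduling the continuation on the UI scheduler.
TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

Task<double> work = Task.Factory.StartNew(() => CalculatePi());

work.ContinueWith(t => resultLabel.Content = t.Result.ToString(),
                  uiScheduler);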

Thursday, July 23, 2009

My First Silverlight 3 App

Had a brief rest from patterns and parallel stuff to have a quick play with Silverlight 3.  I mainly wanted to see the out-of-browser aspect, as I think the idea of being able to build RIAs that also run on the desktop is very compelling…

So what to build? I really like the iPhone weather app, so I thought I'd have a go at reproducing it in Silverlight; below is a screenshot showing it running out of the browser.

[Screenshot: the weather app running out of the browser]

I'm using isolated storage to store the list of weather centres of interest; a more typical line-of-business app would store app config on the web server/cloud, and potentially locally too, to support true client roaming, but I'll leave that for another day.
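For those who haven't played with it, persisting a small list in isolated storage is only a few lines. Below is a minimal sketch of one way to do it; the WeatherSettings class and the "WeatherCentres" key are names I've made up for illustration, while IsolatedStorageSettings.ApplicationSettings is the Silverlight API doing the real work.

// Minimal sketch of persisting a list of locations in Silverlight isolated
// storage. WeatherSettings and the "WeatherCentres" key are illustrative
// names; IsolatedStorageSettings.ApplicationSettings does the real work.
using System.Collections.Generic;
using System.IO.IsolatedStorage;

public static class WeatherSettings
{
    private const string Key = "WeatherCentres";

    public static List<string> Load()
    {
        List<string> centres;
        if (IsolatedStorageSettings.ApplicationSettings.TryGetValue(Key, out centres))
        {
            return centres;
        }
        return new List<string>();
    }

    public static void Save(List<string> centres)
    {
        IsolatedStorageSettings.ApplicationSettings[Key] = centres;
        IsolatedStorageSettings.ApplicationSettings.Save();
    }
}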

One thing to note is that the method for enabling Out Of Browser mode for your application is now different from pre-release versions of Silverlight 3, so many old blog posts are unfortunately wrong now.  The good news is that it is now trivial: open the project properties, and under the Silverlight tab there is an option to enable the app to run out of browser.  This then creates the OutOfBrowserSettings.xml file.

[Screenshot: the Silverlight tab of the project properties, showing the out-of-browser option]

You can download the source here, or to just see the app in action click here.  I'll let you decide if it's as cool as the iPhone…

Thursday, July 16, 2009

Testing on varying number of cores

I've written many blog articles in the past showing that the performance of a piece of parallel code can vary dramatically based on the number of available cores. With that in mind, even when given a machine with 8 cores it's obviously desirable to test your code as if it were running on a machine with substantially fewer.  You can resort to Task Manager, set the process affinity and reduce the number of cores available to the process, but this is tedious.  There is a .NET API that allows you to control which cores are available to a process.  The API requires the use of a bitmask to identify which cores to use; that's a bit (no pun intended) overkill for what I'm trying to do, so I created a facade that allows me to simply say use N cores.

using System;
using System.Diagnostics;

public static class Cores
{
    // Total number of cores on the machine
    public static int Max
    {
        get
        {
            return Environment.ProcessorCount;
        }
    }

    // Number of cores the current process may run on, read from
    // (or applied to) the process affinity bitmask
    public static int CoresInUse
    {
        get
        {
            IntPtr cores = Process.GetCurrentProcess().ProcessorAffinity;

            // Count the set bits in the affinity mask
            int nCores = 0;
            while (cores != IntPtr.Zero)
            {
                if (((int)cores & 1) == 1)
                {
                    nCores++;
                }
                cores = (IntPtr)((int)cores >> 1);
            }
            return nCores;
        }

        set
        {
            if ((value < 1) || (value > Environment.ProcessorCount))
            {
                throw new ArgumentException("Illegal number of cores");
            }

            // Build a mask with the lowest 'value' bits set
            int cores = 1;
            for (int nShift = 0; nShift < value - 1; nShift++)
            {
                cores = 1 | (cores << 1);
            }

            Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)cores;
        }
    }
}





The following code prints out the number of active cores, reduces the number available to 4, and prints the count again:




Console.WriteLine("Using {0} out of {1}" , Cores.CoresInUse , Cores.Max);
Cores.CoresInUse = 4;
Console.WriteLine("Using {0} out of {1}", Cores.CoresInUse, Cores.Max);




Saturday, July 11, 2009

Guerilla .NET Demos from 6th July 2009

Had loads of fun as normal teaching Guerilla .NET with Rich and Marcus.  All the demos from the class are here.

About Me

I'm a freelance consultant for .NET-based technology. My last real job was at Cisco Systems, where I was a lead architect for Cisco's identity solutions. I arrived at Cisco via acquisition and prior to that worked in small startups. The startup culture is what appeals to me, and that's why I finally left Cisco after seven years... I now fill my time through a combination of consultancy and teaching for Developmentor... and working on insane startups that nobody with an ounce of sense would look twice at...