The .NET 4 Task API offers much better support for cancellation: unlike work queued with QueueUserWorkItem, tasks can be cancelled before they commence, and the Task library offers a standard way for running tasks to detect and report cancellation. I recently recorded a screencast demonstrating the new Task API, including the cancellation support. This blog post isn't so much about the cancellation mechanics as some guidance on how best to use it without hurting the throughput of your task.
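For context, here is a sketch of what the caller side might look like. This assumes the Task.Create/Cancel shape from the Parallel Extensions CTP that the listings below use; the exact names are an assumption and may differ in later builds:

```csharp
// Hypothetical caller, assuming CTP-era Task.Create and Task.Cancel.
Task piTask = Task.Create(delegate { Console.WriteLine(AbortableCalculatePi()); });

// ... later, from another thread (e.g. in response to a Cancel button):
piTask.Cancel(); // the task observes this via Task.Current.IsCancellationRequested
```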
Below is some code that calculates pi. It expects to be run inside a task and supports the notion of cancellation.
private static double AbortableCalculatePi()
{
    double pi = 1;
    double multiplier = -1;
    const int N_ITERATIONS = 500000000;

    for (int nIter = 3; nIter < N_ITERATIONS; nIter += 2)
    {
        // Check for cancellation on every single iteration.
        if (Task.Current.IsCancellationRequested)
        {
            Task.Current.AcknowledgeCancellation();
            return 0.0;
        }
        pi += (1.0 / (double)nIter) * multiplier;
        multiplier *= -1;
    }
    return pi * 4.0;
}
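For reference, the loop above is accumulating partial sums of the Leibniz series for pi, which is why the result is multiplied by 4 at the end:

```latex
\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots
             = \sum_{k=0}^{\infty} \frac{(-1)^{k}}{2k+1}
```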
So all is well and good until you benchmark it against the version with no cancellation support and find it runs almost twice as slowly. The reason is that the cost of detecting cancellation is high relative to the work being done. The important aspect of cancellation is being able to respond in a meaningful time for the client; at present we are being far too aggressive in checking.
One option would be to check only every N iterations:
private static double BetterAbortableCalculatePi()
{
    double pi = 1;
    double multiplier = -1;
    const int N_ITERATIONS = 500000000;

    for (int nIter = 3; nIter < N_ITERATIONS; nIter += 2)
    {
        // Only check for cancellation once every 100,000 iterations.
        if ((nIter - 3) % 100000 == 0)
        {
            if (Task.Current.IsCancellationRequested)
            {
                Task.Current.AcknowledgeCancellation();
                return 0.0;
            }
        }
        pi += (1.0 / (double)nIter) * multiplier;
        multiplier *= -1;
    }
    return pi * 4.0;
}
This version took a third less time than the previous, more aggressive one. However, the if block's effect on the instruction pipeline, plus the extra arithmetic for the modulo test, still adds cost compared with the version that had no cancellation support.
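One intermediate option worth considering (a sketch of my own, not one of the benchmarked versions) is to replace the modulo test with a simple countdown, trading the per-iteration division implied by % for a decrement and compare:

```csharp
// Sketch: countdown-based cancellation check, assuming the same
// CTP-era Task.Current API as the listings above.
private static double CountdownAbortableCalculatePi()
{
    double pi = 1;
    double multiplier = -1;
    const int N_ITERATIONS = 500000000;
    const int CHECK_INTERVAL = 100000;
    int countdown = CHECK_INTERVAL;

    for (int nIter = 3; nIter < N_ITERATIONS; nIter += 2)
    {
        // A decrement and compare is cheaper than a modulo per iteration.
        if (--countdown == 0)
        {
            countdown = CHECK_INTERVAL;
            if (Task.Current.IsCancellationRequested)
            {
                Task.Current.AcknowledgeCancellation();
                return 0.0;
            }
        }
        pi += (1.0 / nIter) * multiplier;
        multiplier *= -1;
    }
    return pi * 4.0;
}
```

Even so, there is still a branch inside the hot loop, which motivates the next approach.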
So a third approach is called for, this time refactoring the algorithm to use two loops instead of one, where the check for cancellation is done once per iteration of the outer loop; this results in little additional cost.
private static double OptimisedAbortableCalculatePi()
{
    double pi = 1;
    double multiplier = -1;
    const int N_ITERATIONS = 500000000 / 2;
    const int OUTER_ITERATIONS = 10000;
    const int INNER_ITERATIONS = N_ITERATIONS / OUTER_ITERATIONS;

    int i = 3;
    for (int outerIndex = 0; outerIndex < OUTER_ITERATIONS; outerIndex++)
    {
        // The inner loop carries no cancellation overhead at all.
        for (int nIter = 0; nIter < INNER_ITERATIONS; nIter++)
        {
            pi += (1.0 / i) * multiplier;
            multiplier *= -1;
            i += 2;
        }
        // One cancellation check per chunk of 25,000 inner iterations.
        if (Task.Current.IsCancellationRequested)
        {
            Task.Current.AcknowledgeCancellation();
            return 0.0;
        }
    }
    return pi * 4.0;
}
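The figures below come from a timing harness; something along these lines (an assumed sketch, using System.Diagnostics.Stopwatch and a shortened stand-in for the pi methods above) produces output in the same format:

```csharp
using System;
using System.Diagnostics;

class TimingDemo
{
    // Stand-in for the pi methods above, with a configurable iteration
    // count so the demo runs quickly.
    static double CalculatePi(int iterations)
    {
        double pi = 1, multiplier = -1;
        for (int nIter = 3; nIter < iterations; nIter += 2)
        {
            pi += (1.0 / nIter) * multiplier;
            multiplier *= -1;
        }
        return pi * 4.0;
    }

    static void Main()
    {
        var stopwatch = Stopwatch.StartNew();
        double result = CalculatePi(500000000);
        stopwatch.Stop();
        Console.WriteLine("NoAbortableCalculatePi = {0} took {1}", result, stopwatch.Elapsed);
    }
}
```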
Here are the timings I got from the various approaches:
NoAbortableCalculatePi = 3.14159264958921 took 00:00:03.7357873
AbortableCalculatePi = 3.14159264958921 took 00:00:09.6137173
BetterAbortableCalculatePi = 3.14159264958921 took 00:00:06.3826212
OptimisedAbortableCalculatePi = 3.14159265758921 took 00:00:03.6883268
As the figures show, there is virtually no difference between the first and last runs, but a considerable difference when the cancellation check is inserted into the core of the computation. (The slightly different result from the optimised version comes from its restructured loop bounds including one extra term of the series.) Note also that responsiveness hardly suffers: with 10,000 cancellation checks spread over roughly 3.7 seconds, a cancellation request is still noticed within a fraction of a millisecond.
So, to sum up: while cancellation support is good, the frequency at which you check for it can have a real impact on the overall performance of your algorithm. Cancellation is something we want to support, but in most runs users probably won't use it, so we need to strike the right balance between throughput and responding to cancellation in an appropriate timeframe.