Recently I had to write some code that persisted changes to two DataTables to a SQL Server database, with both updates inside a single transaction. Since I was using .NET 2.0, I decided to use TransactionScope.
using( TransactionScope tx = new TransactionScope() )
{
    firstTableAdapter.Update( firstTable );
    secondTableAdapter.Update( secondTable );
    tx.Complete();
}
I was careful to ensure that both adapters used the same database connection object; however, there was still a nasty side effect. First, a bit of background.
In order for a resource manager (in this case SQL Server) to take part in a transaction it must first enlist in the transaction; SQL Server decides whether to do this when a connection is opened. Opening a connection when there is a transaction associated with the current thread results in SQL Server enlisting that connection in the transaction. If no other enlistment has happened, SQL Server creates a local transaction and manages the transaction itself.
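To make the simple case concrete, here is a minimal sketch (the connection string, table name and method name are just placeholders for illustration): one connection opened inside a TransactionScope enlists with SQL Server, and since no other resource manager is involved the transaction stays local.

using System.Data.SqlClient;
using System.Transactions;

static void UpdateWidget( string connectionString )
{
    using( TransactionScope tx = new TransactionScope() )
    using( SqlConnection conn = new SqlConnection( connectionString ) )
    {
        conn.Open();   // SQL Server enlists here; only one resource manager, so the transaction stays local

        using( SqlCommand cmd = new SqlCommand( "UPDATE Widgets SET Name = 'example' WHERE Id = 1", conn ) )
        {
            cmd.ExecuteNonQuery();
        }

        tx.Complete(); // the local transaction commits when the scope is disposed
    }
}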
If another resource manager wishes to enlist in the transaction then no single resource manager can be responsible for coordinating it, and the DTC (Distributed Transaction Coordinator) is invoked. The role of the DTC is to ensure the properties of the transaction hold across multiple resources, such that a failure to commit through one resource manager causes a rollback in the others. SQL Server is currently the only resource manager that allows the promotion of a local transaction to a distributed transaction; other resource managers such as MSMQ always create a distributed transaction irrespective of the number of previous enlistments.
OK, so to sum up: if you enlist multiple resource managers inside a single transaction, the DTC is invoked and the transaction becomes a distributed transaction. The obvious consequence is that transaction management is now more expensive, so we want to avoid a distributed transaction unless we really need one.
When first playing with TransactionScope you can end up with a distributed transaction in situations where, from a high-level logical perspective, that might seem strange.
using( TransactionScope tx = new TransactionScope() )
{
    conn.Open();
    // Do DB Work
    conn.Close();

    conn.Open();
    // Do DB Work
    conn.Close();

    tx.Complete();
}
In the above example our transaction is promoted to a distributed transaction. Why? Because on the second Open SQL Server attempts to enlist in the transaction again; since an enlistment already exists, System.Transactions promotes the existing transaction to be managed by the DTC, and the new enlistment is also managed by the DTC. This does seem odd at first, since you are in fact only using a single resource manager; there is a rumour that this will be fixed in the future. If you do not have the DTC service running on your machine, the code above will throw an exception. If you do have the DTC running, the code runs to completion and you are none the wiser of the promotion, except for a slight pause. To see when the promotion has taken place, simply make a call to the function below at the various points to see the transaction identifiers.
private static void PrintTransaction( Transaction transaction )
{
    Console.WriteLine( "Local Id = {0}", transaction.TransactionInformation.LocalIdentifier );
    Console.WriteLine( "Global Id = {0}", transaction.TransactionInformation.DistributedIdentifier );
}
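For example, a sketch like the one below (reusing the conn object and the two-open pattern from the earlier snippet) shows the DistributedIdentifier switching from Guid.Empty to a real value at the point of the second Open:

using( TransactionScope tx = new TransactionScope() )
{
    PrintTransaction( Transaction.Current );  // Global Id is Guid.Empty - the transaction is still local

    conn.Open();
    // Do DB Work
    conn.Close();

    PrintTransaction( Transaction.Current );  // still local after the first open/close

    conn.Open();                              // the second Open triggers the promotion
    // Do DB Work
    conn.Close();

    PrintTransaction( Transaction.Current );  // Global Id is now a non-empty Guid - the DTC owns the transaction

    tx.Complete();
}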
Back to the data adapter example. When the data adapter attempts to do an update, if the connection is not currently open it opens the connection and performs the update; and having opened the connection itself, it is polite enough to close it again when it is done. So when the first Update runs, opening the connection creates a local transaction which gets enlisted; when the second Update runs it also needs to open the connection, so a second enlistment occurs and we get the promotion observed above. To fix the problem we simply have to ensure that the connection is only opened once.
using( TransactionScope tx = new TransactionScope() )
{
    conn.Open();
    try
    {
        firstTableAdapter.Update( firstTable );
        secondTableAdapter.Update( secondTable );
        tx.Complete();
    }
    finally
    {
        conn.Close();
    }
}
Now when the calls to Update are made, the data adapter sees that the connection is already open, so the SQL commands run in the context of the existing transaction and no promotion is necessary. It is therefore essential that when you create the data adapters you create them with the same connection object you are using inside the transaction scope.
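If you are wiring the adapters up by hand rather than using designer-generated table adapters, that might look something like the sketch below, where both SqlDataAdapters are built from commands sharing the one SqlConnection (the connection string and table names are placeholders for illustration):

using System.Data;
using System.Data.SqlClient;
using System.Transactions;

static void SaveBothTables( DataTable firstTable, DataTable secondTable )
{
    // One connection shared by both adapters (connection string is a placeholder).
    using( SqlConnection conn = new SqlConnection( "Data Source=.;Initial Catalog=Example;Integrated Security=SSPI" ) )
    {
        SqlDataAdapter firstTableAdapter = new SqlDataAdapter( "SELECT * FROM FirstTable", conn );
        SqlDataAdapter secondTableAdapter = new SqlDataAdapter( "SELECT * FROM SecondTable", conn );

        // The command builders generate INSERT/UPDATE/DELETE commands against the same connection.
        SqlCommandBuilder firstBuilder = new SqlCommandBuilder( firstTableAdapter );
        SqlCommandBuilder secondBuilder = new SqlCommandBuilder( secondTableAdapter );

        using( TransactionScope tx = new TransactionScope() )
        {
            conn.Open();   // open once so both updates enlist in the same local transaction
            try
            {
                firstTableAdapter.Update( firstTable );
                secondTableAdapter.Update( secondTable );
                tx.Complete();
            }
            finally
            {
                conn.Close();
            }
        }
    }
}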
All this poses the question: how can I ensure that I don't accidentally cause a promotion to take place? There are two solutions that come to mind:
- Disable the DTC on your machine
- Add a Debug.Assert prior to calling the outer Complete
The only way I could find to determine if a promotion had happened was to look at the Distributed Transaction Identifier.
Debug.Assert(IsInDistributedTransaction() == false);
tx.Complete();
...
private static bool IsInDistributedTransaction()
{
    return ( Transaction.Current != null ) &&
           ( Transaction.Current.TransactionInformation.DistributedIdentifier != Guid.Empty );
}
So whilst TransactionScope has certainly improved the programming model for transactions by hiding a lot of the complexity required for distributed transactions, it is essential to understand how it works if you are to avoid accidentally creating a distributed transaction.
Comments:
Hi,
This is very good advice. Thanks.
However, that means I have to open and close my connections in my BLL instead of in my DAL. Is that advisable?
choon