lock (sharedObj1)
{
    ...
    lock (sharedObj2)
    {
        ...
    }
}
Note that the order of the locks in the Thread2Work method has been changed to match the order in Thread1Work. First a lock is acquired on sharedObj1, then a lock is acquired on sharedObj2.
Here is the revised version of the complete code listing:
class DeadlockDemo
{
    private static readonly object sharedObj1 = new();
    private static readonly object sharedObj2 = new();

    public static void Execute()
    {
        Thread thread1 = new Thread(Thread1Work);
        Thread thread2 = new Thread(Thread2Work);
        thread1.Start();
        thread2.Start();
        thread1.Join();
        thread2.Join();
        Console.WriteLine("Finished execution.");
    }

    static void Thread1Work()
    {
        lock (sharedObj1)
        {
            Console.WriteLine("Thread 1 has acquired a lock on shared resource 1. " +
                "It is now waiting to acquire a lock on resource 2.");
            Thread.Sleep(1000);
            lock (sharedObj2)
            {
                Console.WriteLine("Thread 1 acquired a lock on resource 2.");
            }
        }
    }

    static void Thread2Work()
    {
        lock (sharedObj1)
        {
            Console.WriteLine("Thread 2 has acquired a lock on shared resource 1. " +
                "It is now waiting to acquire a lock on resource 2.");
            Thread.Sleep(1000);
            lock (sharedObj2)
            {
                Console.WriteLine("Thread 2 acquired a lock on resource 2.");
            }
        }
    }
}
Refer to the original and revised code listings. In the original listing, the Thread1Work and Thread2Work methods immediately acquire locks on sharedObj1 and sharedObj2, respectively. Then Thread1Work is suspended until Thread2Work releases sharedObj2. Similarly, Thread2Work is suspended until Thread1Work releases sharedObj1. Because the two threads acquire locks on the two shared objects in reverse order, the result is a circular dependency and hence a deadlock.
In the revised listing, the two threads acquire locks on the two shared objects in the same order, which eliminates the possibility of a circular dependency. Hence, the revised code listing shows how you can resolve a deadlock in your application by ensuring that all threads acquire locks in a consistent order.
Best practices for thread synchronization
While it is often necessary to synchronize access to shared resources in an application, you should use thread synchronization with care. By following Microsoft's best practices you can avoid deadlocks when working with thread synchronization. Here are some points to keep in mind; a short code sketch illustrating them follows the list:
- When using the lock keyword, or the System.Threading.Lock object in C# 13, use an object of a private or protected reference type to identify the shared resource. The object used to identify a shared resource can be any arbitrary class instance.
- Avoid using immutable types in your lock statements. For example, locking on string objects can cause deadlocks due to interning (because interned strings are essentially global).
- Avoid taking a lock on an object that is publicly accessible.
- Avoid using statements like lock(this) to implement synchronization. If the this object is publicly accessible, deadlocks can result.
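Here is a minimal sketch of these guidelines; the names Counter, syncRoot, and Increment are illustrative and not taken from the listings above. It takes a lock on a private, dedicated object and notes the C# 13 System.Threading.Lock alternative:

class Counter
{
    // A private, dedicated reference-type object identifies the shared resource.
    // Because it is not publicly accessible, no outside code can take a lock on it.
    private static readonly object syncRoot = new();

    // In C# 13 on .NET 9 you can use the dedicated System.Threading.Lock type instead:
    // private static readonly Lock lockObj = new();
    // The lock statement recognizes this type and uses its enter/exit mechanism.

    private static int count;

    public static void Increment()
    {
        lock (syncRoot) // avoid lock(this), lock(typeof(Counter)), or locking on a string
        {
            count++;
        }
    }
}

Because the lock object is private and used for nothing else, no other code can participate in a circular wait on it.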
Note that you can use immutable types to implement thread safety without having to write code that uses the lock keyword. Another way to achieve thread safety is to use local variables to confine your mutable data to a single thread. Local variables and objects are always confined to one thread. In other words, because shared data is the root cause of race conditions, you can eliminate race conditions by confining your mutable data. However, confinement defeats the purpose of multithreading, so it will be useful only in certain cases. A short sketch of the confinement approach follows.
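Below is a minimal sketch of thread confinement; the names ConfinementDemo and SumRange are illustrative and not part of the listings above. Each thread works only on its own local variables and writes to its own array slot, so no lock is required:

class ConfinementDemo
{
    public static void Execute()
    {
        int[] results = new int[2]; // each thread writes only to its own slot

        Thread thread1 = new Thread(() => results[0] = SumRange(0, 500));
        Thread thread2 = new Thread(() => results[1] = SumRange(500, 1000));

        thread1.Start();
        thread2.Start();
        thread1.Join();
        thread2.Join();

        // No lock is needed: all mutable state in SumRange is local to one thread,
        // and each thread touches a different array element.
        Console.WriteLine($"Total: {results[0] + results[1]}");
    }

    static int SumRange(int start, int end)
    {
        int sum = 0; // a local variable, confined to the calling thread
        for (int i = start; i < end; i++)
            sum += i;
        return sum;
    }
}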