Suppose I have an application with two threads that both use the same resource. Normally, a synchronized block keeps these threads from manipulating the resource in a way the user doesn't expect. I understand why you do this, but I need an explanation of how it is done, especially regarding the following part.
How come acquiring the lock itself never results in a race condition? For example, let t1 and t2 be threads that synchronize on some object obj:

t1 checks whether obj is locked; it isn't. t2 checks whether obj is locked; it still isn't. t1 locks obj. t2 locks obj (which it should not be able to do).
How is this prevented?
Asked By : RunOrVeith
Answered By : Wandering Logic
Almost every modern processor has special memory instructions built in specifically to deal with this problem.
For example, many processors have a swap instruction. This atomically swaps a value between a local variable in a thread and a global variable that is shared between threads. (Under the covers, the "local variable" will be stored in a machine register and the "global variable" will be stored in memory.)
Suppose the "lock" is stored in the global variable global_lock_var
and a value of 0
means "unlocked" and a value of 1
means "locked. Then you do a critical section by:
    do:
        local_var = 1
        swap global_lock_var <-> local_var
    while local_var == 1

    # Critical Section
    # (you only get here if the value of global_lock_var was originally 0, so you have
    #  acquired the lock)

    global_lock_var = 0     # release the lock
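To make the same idea concrete in Java (the language the question's synchronized blocks come from), here is a minimal spin-lock sketch built on AtomicBoolean.getAndSet, which behaves like an atomic swap of a boolean. The class name SpinLock and the field name lockVar are just illustrative, not anything from the JDK or the original answer:

    import java.util.concurrent.atomic.AtomicBoolean;

    class SpinLock {
        // true means "locked", false means "unlocked"
        private final AtomicBoolean lockVar = new AtomicBoolean(false);

        void lock() {
            // getAndSet(true) atomically writes true and returns the previous
            // value, just like the swap instruction above: only the thread that
            // sees the previous value false has acquired the lock.
            while (lockVar.getAndSet(true)) {
                // busy wait until the current holder releases the lock
            }
        }

        void unlock() {
            lockVar.set(false);   // release the lock
        }
    }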
It is also possible to solve the locking problem even on a machine that doesn't have atomic instructions, just load and store instructions, but it is a little more complicated. One well-known way to do it is Peterson's algorithm. This requires three variables: flagA (used to indicate that thread A is trying to acquire the lock), flagB (used to indicate that thread B is trying to acquire the lock), and turn (used to indicate who has priority if both are trying to acquire the lock).
    # initialization:
    flagA = false
    flagB = false
    turn  = 'A'

    # thread A:                          # thread B:
    flagA = true                         flagB = true        # "I want the lock"
    turn = 'B'                           turn = 'A'          # "But I'll be polite and give you priority"
    while ((flagB == true) &&            while ((flagA == true) &&
           (turn == 'B')):                      (turn == 'A')):
        # busy wait (do nothing)             # busy wait (do nothing)
    # critical section                   # critical section
    flagA = false                        flagB = false       # "I'm done"
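Here is a corresponding sketch of Peterson's algorithm in Java, assuming exactly two threads. The class and method names are my own; the fields are declared volatile so that, under the Java memory model, these loads and stores are not reordered, which is what the algorithm relies on:

    class PetersonLock {
        private volatile boolean flagA = false;  // "thread A wants the lock"
        private volatile boolean flagB = false;  // "thread B wants the lock"
        private volatile char turn = 'A';        // who yields if both want the lock

        // called only by thread A
        void lockA() {
            flagA = true;                        // "I want the lock"
            turn = 'B';                          // "but I'll give you priority"
            while (flagB && turn == 'B') {
                // busy wait
            }
        }

        void unlockA() {
            flagA = false;                       // "I'm done"
        }

        // called only by thread B (mirror image of thread A)
        void lockB() {
            flagB = true;
            turn = 'A';
            while (flagA && turn == 'A') {
                // busy wait
            }
        }

        void unlockB() {
            flagB = false;
        }
    }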
Even though there are "races" all over the place, the algorithm is accounting for the possibilities. If only one of the threads wants to get into the critical section, then the other thread's flag
variable will be false (so the turn
variable doesn't even get checked.) But if both threads try to acquire the critical section simultaneously then whichever one sets the turn
variable last will lose.
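As a quick check that such a lock actually prevents the race from the question, here is a hypothetical two-thread demo that increments a shared counter inside the critical section, using the SpinLock sketch from above:

    public class Demo {
        private static int counter = 0;

        public static void main(String[] args) throws InterruptedException {
            SpinLock lock = new SpinLock();          // the sketch shown earlier

            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    lock.lock();
                    try {
                        counter++;                   // critical section: one thread at a time
                    } finally {
                        lock.unlock();
                    }
                }
            };

            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start();
            t2.start();
            t1.join();
            t2.join();

            // With the lock this should print 200000 every run;
            // without it, increments can be lost and the total comes out smaller.
            System.out.println(counter);
        }
    }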
Question Source : http://cs.stackexchange.com/questions/29021