[Solved]: Question about threads and locks

Problem Detail: 

I am currently reading Fuss, Futexes and Furwocks: Fast Userland Locking in Linux and came across this quote:

In a fair locking scheme the lock is granted in the order it was requested. This can have negative impact on throughput due to the increased number of context switches. At the same time it can lead to the so called convoy problem. Since the locks are granted in the order of arrival, they all proceed at the speed of the slowest process, slowing down all waiting processes. A common solution to the convoy problem has been to mark the lock available upon release, wake all waiting processes and have them recontend for the lock. This is referred to as random fairness. However, this also leads to the thundering herd problem. Despite this, it can work well on uni-processor systems if the first task to wake releases the lock before being preempted or scheduled, allowing the second herd member to obtain the lock, etc...

I have a few questions about this quote.

First, does a fair locking scheme result in an increased number of context switches because different processes enter the wait queue at different times, and thus, by serving them in the order they arrived, we'd be context switching between multiple processes?

Second, how does granting locks in the order of arrival cause processes to proceed at the speed of the slowest process? Wouldn't this only be the case if the slowest process is granted the lock before the other processes? Similarly, how does having processes contend randomly for the lock solve the convoy problem?

Finally, I don't understand how random fairness is any better on uni-processor systems than on multiprocessor systems. For example, in both cases, all of the waiting processes are woken up, one gets the lock, and the others have to go back to sleep, right? So how does this work well on uni-processor systems?

Asked By : Public Display Name

Answered By : Wandering Logic

Let me answer the last question first. Note that the paper was written in 2002, when multi-core processors were much more expensive than they are today. I think the authors were largely concerned with optimizing for the single-core case.

... how [is] random fairness ... any better on uni-processor systems than on multiprocessor systems?

On a uniprocessor only one process can be scheduled at a time. So everyone gets "woken up," but that just means they get moved from a waiting state to a ready state. Then the scheduler is invoked and it chooses one of the ready processes to run, and lets that process run for some period of time.

The first thing the newly running process does is acquire the lock. Then it does some processing and (if you are not unlucky) releases the lock. In this case the next time the kernel scheduler gets control it chooses another "ready" process and starts it running. If the first process released the lock, then the second process will acquire it, and so on.

On a multiprocessor, on the other hand, all the processes might get "woken up" and then the scheduler starts some/many/all of them running simultaneously on different processors. One of the processes will acquire the lock, but all the others will try to acquire it, immediately fail, and put themselves back to sleep on the wait queue. So on all the processors but one, you wasted several kernel calls and several context switches just to tell a bunch of processes that there was nothing to do.
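To make the wake-all scheme concrete, here is a minimal sketch of a futex-based lock whose unlock wakes every waiter and lets them recontend, as the quote describes. This is illustrative code, not the paper's implementation; the lock-word protocol (0 = free, 1 = held) and the futex() helper are my own simplifications.

    /* Sketch of a "wake everyone and recontend" futex lock. */
    #include <linux/futex.h>
    #include <limits.h>
    #include <stdatomic.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static atomic_int lock_word;  /* 0 = free, 1 = held */

    static long futex(atomic_int *addr, int op, int val)
    {
        return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
    }

    void lock(void)
    {
        int expected = 0;
        /* Every woken process lands back here and recontends. */
        while (!atomic_compare_exchange_strong(&lock_word, &expected, 1)) {
            futex(&lock_word, FUTEX_WAIT, 1);  /* sleep while held      */
            expected = 0;                      /* CAS overwrote it above */
        }
    }

    void unlock(void)
    {
        atomic_store(&lock_word, 0);
        /* Wake ALL waiters ("random fairness"). On a multiprocessor,
           all but one of them will fail the CAS above and go straight
           back to sleep -- the thundering herd. A wake-one scheme
           would pass 1 here instead of INT_MAX. */
        futex(&lock_word, FUTEX_WAKE, INT_MAX);
    }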

does a fair locking scheme result in an increased number of context switches because different processes enter the wait queue at different times, and thus, by serving them in the order they arrived, we'd be context switching between multiple processes?

I think the case they have in mind is if you have multiple processes (on one processor), where each process acquires the same lock repeatedly for short periods of time. In the unfair case the first process can run for a while and acquire and release the lock many times without doing a context switch. Then the second process runs for a while and acquires and releases the lock many times.

In the "fair" case the first process locks/releases and then the second time it tries to lock it must be put to sleep, and the kernel has to do a context switch and wake up the other process. The processes ping-pong back and forth doing "wake up, lock, unlock, work a little while, try-to-lock, get put to sleep."

how does granting locks in the order of arrival cause processes to proceed at the speed of the slowest process? Wouldn't this only be the case if the slowest process is granted the lock before the other processes? Similarly, how does having processes contend randomly for the lock solve the convoy problem?

I think they are making the observation that, on a uniprocessor, shortest-job-first (unfair) has shorter average wait times than does round-robin (fair). But I don't understand how random decreases the average wait times compared to round-robin.
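To put made-up numbers on that observation: suppose two processes are queued on a lock, one needing it for 1 ms and one for 10 ms. Granting the short one first gives waits of 0 ms and 1 ms (average 0.5 ms); granting the long one first, as a FIFO queue would if it arrived first, gives waits of 0 ms and 10 ms (average 5 ms). Random recontention merely picks one of those two orders by chance, so its expected average wait sits between the two rather than below both.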

Question Source : http://cs.stackexchange.com/questions/23785
