Wikipedia says that shared memory comes with a lot of costs associated with cache coherence. But I thought the whole idea of shared memory is that all the CPUs access the same memory? So if one CPU changes that memory, then the other CPUs would see the new value? It would seem like this would require FEWER cache coherence costs. Is the idea that if one CPU changes its local cache before it writes to shared memory, then the other CPUs have to be notified?
Asked By : bernie2436
Answered By : Pavel Zaichenkov
If two processors share one memory but each has its own cache, they can end up holding two different values for the same address.
Imagine that each of the two processors has private L1 and L2 caches, while the L3 cache is shared between them. Suppose processor A reads data from address X in L3 into its L1, and processor B reads the same data from the same address (X in L3) into its own private L1. Then, if processor A modifies the value and writes it back, processor B has no way to notice this without the support of a coherence protocol and would keep the stale value in its own cache.
Basically, you are right. The cache coherence protocol is a mechanism for notifying processors about modifications to shared memory made by other processors.
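This notification traffic is also where the "costs" mentioned in the question come from. Below is a minimal sketch in C (assuming POSIX threads and a 64-byte cache line; the iteration count and padding size are chosen purely for illustration). Two threads increment two separate counters, so there is no logical sharing at all. But if the counters sit in the same cache line, every write by one core invalidates that line in the other core's cache, and the coherence protocol keeps bouncing the line back and forth; with the padding in place, each counter lives in its own line and the traffic disappears.

```c
/* Sketch of false sharing: two threads, two independent counters.
 * With the padding, each counter occupies its own cache line.
 * Remove the padding and the same code typically runs much slower,
 * because the coherence protocol must shuttle the shared line
 * between the two cores on every write. */
#include <pthread.h>
#include <stdio.h>

#define ITERS 100000000L

struct counters {
    volatile long a;   /* written only by thread 1 */
    char pad[64];      /* comment this out to force false sharing */
    volatile long b;   /* written only by thread 2 */
};

static struct counters c;

static void *bump_a(void *arg) { for (long i = 0; i < ITERS; i++) c.a++; return NULL; }
static void *bump_b(void *arg) { for (long i = 0; i < ITERS; i++) c.b++; return NULL; }

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump_a, NULL);
    pthread_create(&t2, NULL, bump_b, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("a=%ld b=%ld\n", c.a, c.b);
    return 0;
}
```

Compile with something like `gcc -O2 -pthread falsesharing.c` and compare the run time with and without the padding; the difference is entirely coherence overhead, since the two threads never touch the same variable.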
The main advantage of the shared memory architecture for a programmer is that there is no need to explicitly describe communication and interaction between processors (as you would with MPI, for instance). The coherence and consistency of the memory are fully the responsibility of the hardware.
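To make that contrast concrete, here is a minimal sketch (again assuming POSIX threads; the array size and thread count are arbitrary) in which two threads sum halves of the same array. There is no explicit send/receive step as there would be in an MPI program: both threads simply load from the shared array, and the hardware keeps the memory they see coherent.

```c
/* Shared-memory style: threads read the same array directly,
 * with no explicit message passing between them. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000

static double data[N];
static double partial[2];            /* one result slot per thread */

static void *sum_half(void *arg) {
    long id = (long)arg;
    long lo = id * (N / 2), hi = lo + N / 2;
    double s = 0.0;
    for (long i = lo; i < hi; i++)
        s += data[i];                /* plain loads from shared memory */
    partial[id] = s;
    return NULL;
}

int main(void) {
    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    pthread_t t[2];
    for (long id = 0; id < 2; id++)
        pthread_create(&t[id], NULL, sum_half, (void *)id);
    for (int id = 0; id < 2; id++)
        pthread_join(t[id], NULL);   /* joins also synchronize the results */

    printf("sum = %f\n", partial[0] + partial[1]);  /* expect 1000000.0 */
    return 0;
}
```

In an MPI version of the same computation, each process would own its half of the data and the partial sums would have to be communicated explicitly (e.g. with a reduce operation); here that coordination is hidden inside the hardware's coherence machinery.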
Best Answer from StackExchange
Question Source : http://cs.stackexchange.com/questions/14240