
[Solved]: Are self-stabilizing algorithms guaranteed to work in parallel?

Problem Detail: 

Self-stabilizing algorithms are extremely useful in distributed systems, but can such an algorithm be applied concurrently at each computational node?

My instinct is to say yes, but I can't help but imagine a 'Game of Life' scenario where the state of the graph toggles back and forth between two states. What keeps the question alive in my mind is the possibility that both such states would be considered 'correct'; but then, if a state were correct, wouldn't the graph stabilize there?

If they are not guaranteed to work in parallel, can you provide an example where convergence is not guaranteed?


A self-stabilizing algorithm is an algorithm (in my experience, applied almost exclusively to graphs and graph-like structures) that, when applied repeatedly, is guaranteed to converge to a correct state, that is, one that meets a given set of requirements. Consider a simple example:

You maintain a network of hospitals and data centers. Each hospital must be connected to exactly one data center, and each data center must be connected to exactly two other data centers. (You can imagine also attempting to minimize latency.) When you set up the network, it is in a correct state—all requirements are met.
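
To make 'correct state' concrete, here is a minimal sketch of that requirement as a predicate, assuming the network is stored as adjacency sets keyed by node name (all names below are made up for illustration):

    def is_legitimate(hospitals, data_centers, links):
        """True iff every hospital touches exactly one data center and every
        data center touches exactly two other data centers."""
        for h in hospitals:
            if sum(1 for n in links[h] if n in data_centers) != 1:
                return False
        for d in data_centers:
            if sum(1 for n in links[d] if n in data_centers) != 2:
                return False
        return True

    # Example: two hospitals, three data centers wired in a triangle.
    hospitals = {"H1", "H2"}
    data_centers = {"D1", "D2", "D3"}
    links = {
        "H1": {"D1"}, "H2": {"D2"},
        "D1": {"H1", "D2", "D3"}, "D2": {"H2", "D1", "D3"}, "D3": {"D1", "D2"},
    }
    print(is_legitimate(hospitals, data_centers, links))  # True

A self-stabilizing algorithm for this network is then a set of local rules that, from any configuration, eventually makes this predicate true again.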

Weeks pass, and a giant storm takes out one of your data centers. Being the fourth-dimensional thinker you are, you planned for situations like this by putting in place a self-stabilizing algorithm that will automatically reconfigure the network back into a correct state. You set up a daemon to oversee the operation: it picks arbitrary nodes in your network and asks them whether they are in a correct state; if not, the daemon corrects that part of the network.

All is well and you can stay asleep in bed (because everything catastrophic happens at 3am).

The trick is, the daemon isn't a necessary part of this system—it's just easier to think about it like this. In reality, each node is its own computational unit—constantly evaluating its own state and taking steps to correct itself if necessary.
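
In the usual guarded-command view, each node repeatedly evaluates a predicate (the guard) over its own state and its neighbours' states and, whenever the guard holds, applies a local correction (the action). A sketch of a single activation, where guard and action are placeholders for whatever rule a particular algorithm defines:

    def local_step(my_state, neighbour_states, guard, action):
        """One activation of a node: if its guard is enabled on the local view,
        apply the corrective action; otherwise leave the state unchanged."""
        if guard(my_state, neighbour_states):
            return action(my_state, neighbour_states)
        return my_state

If no guard is enabled at any node, nothing changes, which is the point made below about the graph remaining stable.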

The introductory paragraphs of the Wikipedia article on the subject give a very good technical overview:

Self-stabilization is a concept of fault-tolerance in distributed computing. A distributed system that is self-stabilizing will end up in a correct state no matter what state it is initialized with. That correct state is reached after a finite number of execution steps.

And, as a piece of requested information that doesn't fit in neatly above:

When the graph is stable, no node is in a faulty state. Since no correction predicate is enabled at any node, no actions are taken, and the graph remains stable.

Asked By : Sean Allred

Answered By : D.W.

I suggest you look for a textbook that provides an introduction to self-stabilizing algorithms (with some definitions, examples, etc.), and work through that textbook. I think that will provide you a much better understanding, and you'll be able to answer this kind of question yourself. Trying to learn a subject by reading Wikipedia and asking questions on StackExchange sites is not the way to go.

For instance, I think you'll find that, yes, of course each distributed node is working concurrently (in parallel); that's what it means for a system to be a distributed system. And yes, for a self-stabilizing algorithm to be correct, one must show that it will never enter an infinite loop of the sort you describe; this is the convergence property.
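
For concreteness, the classic example is Dijkstra's K-state token ring: nodes 0 through n-1 each hold a counter modulo K; node 0 is privileged when its counter equals the last node's and, when activated, increments it; every other node is privileged when its counter differs from its predecessor's and, when activated, copies it. The standard result is that, under a scheduler that activates one privileged node at a time (a central daemon) and with K >= n, the ring reaches a configuration with exactly one privileged node from any initial assignment, and stays in such configurations afterwards. The simulation below is a sketch of that setting; how the same rules behave when several nodes fire simultaneously is a different scheduling model that has to be argued separately, which is essentially the distinction your question is circling.

    import random

    def privileged(x, i):
        """Node 0 holds a privilege when it matches the last node; any other
        node holds one when it differs from its predecessor."""
        return x[0] == x[-1] if i == 0 else x[i] != x[i - 1]

    def fire(x, i, K):
        """Node i's move: node 0 increments modulo K, the others copy their predecessor."""
        x[i] = (x[0] + 1) % K if i == 0 else x[i - 1]

    def stabilize(n=5, seed=1):
        """Let a central daemon pick one privileged node per step, starting from
        an arbitrary (possibly corrupted) state, until exactly one is privileged."""
        rng = random.Random(seed)
        K = n + 1                                   # any K >= n works here
        x = [rng.randrange(K) for _ in range(n)]    # arbitrary initial state
        steps = 0
        while True:
            enabled = [i for i in range(n) if privileged(x, i)]
            if len(enabled) == 1:                   # legitimate: exactly one privilege
                return x, steps
            fire(x, rng.choice(enabled), K)
            steps += 1

    print(stabilize())

Once a single privilege remains, every further move just hands it to the next node around the ring (the closure half of the argument), so the loop above is only checking when convergence has happened.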

Designing an algorithm with these properties is non-trivial, but in general, designing self-stabilizing systems is not easy. No one ever promised this stuff would be easy.


Question Source : http://cs.stackexchange.com/questions/19230
