Answered By : Khaur
There are two answers, depending on how you define "efficient".
Compactness of representation
Telling more with less: NFAs are more efficient.
Converting a DFA to an NFA is trivial: every DFA already is an NFA, so the conversion does not increase the size of the representation at all.
However, there are regular languages for which the smallest DFA is exponentially bigger than the smallest NFA. A typical example is $(a|b)^*b(a|b)^k$ for fixed $k$: an NFA can guess which $b$ is the $(k+1)$-th symbol from the end and get by with $k+2$ states, whereas a DFA must remember the last $k+1$ symbols it has read, which forces $2^{k+1}$ states.
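To make the gap concrete, here is a small Python sketch (my own illustration, not part of the original answer; the names `nfa_delta` and `count_dfa_states` are made up) that builds the $(k+2)$-state NFA for this language and runs the subset construction to count the reachable DFA states:

```python
# Illustrates the blow-up for (a|b)* b (a|b)^k: the NFA has k+2 states,
# while the subset construction yields 2^(k+1) reachable DFA states.

def nfa_delta(state, symbol, k):
    """Transitions of the (k+2)-state NFA; state k+1 is accepting."""
    if state == 0:
        return {0, 1} if symbol == 'b' else {0}   # guess: this 'b' is the right one
    if state <= k:
        return {state + 1}                        # count k more symbols
    return set()                                  # accepting state, no outgoing edges

def count_dfa_states(k):
    """Run the subset construction and count the reachable DFA states."""
    start = frozenset({0})
    seen, frontier = {start}, [start]
    while frontier:
        subset = frontier.pop()
        for symbol in 'ab':
            nxt = frozenset(s for q in subset for s in nfa_delta(q, symbol, k))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return len(seen)

for k in range(1, 8):
    print(k, k + 2, count_dfa_states(k))   # NFA grows linearly, DFA exponentially
```

For $k = 1, \dots, 7$ this prints an NFA size of $k+2$ next to $2^{k+1}$ reachable DFA states, which is exactly the exponential gap claimed above.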
Computation
Running it fast: DFAs are more efficient.
The computers we use today are deterministic in nature, which makes them bad at dealing with non-determinism. There are two common ways of running an NFA deterministically: backtracking on the one hand, which is rather costly, or keeping track of the set of currently active states on the other (sketched below), which means each transition can take up to $N$ times longer (where $N$ is the size of the NFA).
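Here is a minimal Python sketch of the second strategy (again my own illustration; `nfa_accepts` and the example transition table are assumptions, not code from the answer), simulating the NFA by carrying the set of active states:

```python
# Simulate an NFA by tracking the set of states it could currently be in.
# Each input symbol costs a union over up to N successor sets, versus the
# DFA's single table lookup -- the factor-N slowdown mentioned above.

def nfa_accepts(delta, start, accepting, word):
    """delta maps (state, symbol) -> set of successor states."""
    active = {start}
    for symbol in word:
        active = {s for q in active for s in delta.get((q, symbol), set())}
        if not active:        # dead end: no run of the NFA survives
            return False
    return bool(active & accepting)

# Example: the k=2 instance of (a|b)* b (a|b)^k from above.
delta = {
    (0, 'a'): {0}, (0, 'b'): {0, 1},
    (1, 'a'): {2}, (1, 'b'): {2},
    (2, 'a'): {3}, (2, 'b'): {3},
}
print(nfa_accepts(delta, 0, {3}, "abbaa"))  # True: 3rd symbol from the end is 'b'
```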
I just started reading about the theory of computation. If we compare which is more powerful (in terms of the strings they accept), both are the same. But what about efficiency? A DFA should be fast compared to an NFA, since each state has only one outgoing edge per symbol and there is no ambiguity. But in the case of an NFA we have to check all possible cases, and that surely takes time. So can we say a DFA is more efficient than an NFA?
But another part of my brain is also thinking that NFAs exist only in theory, so we cannot compare their efficiency with that of DFAs.
Asked By : avi
Question Source : http://cs.stackexchange.com/questions/9389