I need some help with explaining why a grammar is not LL(1).
Let us take the following grammar:
$$ \begin{align} S \rightarrow & aB \mid bA \mid \varepsilon \\ A \rightarrow & aS \mid bAA \\ B \rightarrow & b \\ \end{align} $$
This is my attempt:
For the grammar to be LL(1), it is a necessary condition that for any strings $c_1\gamma$ and $c_2\beta$, derivable from $S \rightarrow aB$ and $A \rightarrow aS$ respectively, we have $c_1 \ne c_2$.
But both $S \rightarrow aB$ and $A \rightarrow aS$ produce $a$ as their first symbol, hence $c_1 = c_2$ and the grammar is not LL(1).
Is my reasoning right?
Thanks in advance.
Asked By : mrjasmin
Answered By : Hendrik Jan
I am afraid your reasoning is not exactly right.
A grammar is LL(1) if we can predict the production to be used from the current nonterminal $A$ being rewritten and the next terminal $a$ in the input. $\newcommand{\To}{\Rightarrow}$ Using $\To$ to denote leftmost derivation: if $S \To^* uAv \To^* uax$, the production $A \to \alpha$ applied to $A$ at that point must be unique, independently of $x$. The problem that usually arises is when $A$ is rewritten to $\varepsilon$ and the $a$ is produced by the next nonterminal, i.e., by what follows $A$ in $v$.
Try to construct another example.
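To make the condition above concrete, here is a minimal sketch (not part of the original answer) that computes FIRST and FOLLOW sets for the grammar in the question and checks the standard LL(1) test: the select sets of alternatives of the same nonterminal must be pairwise disjoint. The grammar encoding and helper names are my own choices for illustration.

```python
# A minimal sketch: the standard FIRST/FOLLOW construction and the LL(1)
# pairwise-disjointness test, applied to the grammar from the question.
# Nonterminals are uppercase strings, terminals lowercase, [] is epsilon.

grammar = {
    "S": [["a", "B"], ["b", "A"], []],   # S -> aB | bA | epsilon
    "A": [["a", "S"], ["b", "A", "A"]],  # A -> aS | bAA
    "B": [["b"]],                        # B -> b
}
START = "S"
nonterminals = set(grammar)

def first_of_seq(seq, first):
    """FIRST of a sequence of symbols, given FIRST sets of nonterminals."""
    out = set()
    for sym in seq:
        if sym in nonterminals:
            out |= first[sym] - {""}
            if "" not in first[sym]:
                return out
        else:
            out.add(sym)
            return out
    out.add("")  # reached the end: the whole sequence can derive epsilon
    return out

# FIRST sets by fixed-point iteration.
first = {nt: set() for nt in nonterminals}
changed = True
while changed:
    changed = False
    for nt, prods in grammar.items():
        for prod in prods:
            new = first_of_seq(prod, first)
            if not new <= first[nt]:
                first[nt] |= new
                changed = True

# FOLLOW sets by fixed-point iteration ("$" marks the end of input).
follow = {nt: set() for nt in nonterminals}
follow[START].add("$")
changed = True
while changed:
    changed = False
    for nt, prods in grammar.items():
        for prod in prods:
            for i, sym in enumerate(prod):
                if sym in nonterminals:
                    rest = first_of_seq(prod[i + 1:], first)
                    new = (rest - {""}) | (follow[nt] if "" in rest else set())
                    if not new <= follow[sym]:
                        follow[sym] |= new
                        changed = True

def select(nt, prod):
    """Terminals on which an LL(1) parser would choose nt -> prod."""
    f = first_of_seq(prod, first)
    return (f - {""}) | (follow[nt] if "" in f else set())

# LL(1) test: alternatives of the same nonterminal need disjoint select sets.
for nt, prods in grammar.items():
    for i in range(len(prods)):
        for j in range(i + 1, len(prods)):
            clash = select(nt, prods[i]) & select(nt, prods[j])
            if clash:
                print(f"conflict for {nt}: {prods[i]} vs {prods[j]} on {sorted(clash)}")
```

Running it reports which alternatives of the same nonterminal have overlapping select sets, which is exactly the kind of conflict the answer hints at with the $\varepsilon$-production.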
Question Source : http://cs.stackexchange.com/questions/7761