**Problem Detail:**

I'm reading the paper by J. L. Massey and M. K. Sain, "Inverses of Linear Sequential Circuits" (published April 1968), and I cannot understand: what is "feedforward inversion"?

They give a short description of the circuits used for convolutional codes, and briefly discuss catastrophic codes (codes for which a finite number of channel errors can cause an infinite number of decoding errors).

Can anyone explain the term "feedforward inversion", and what do they mean by "delay" in that context?

P.S. I'm interested in this topic only because I want to understand the proof of Theorem 1 in that paper, nothing more.

###### Asked By : esselesse

#### Answered By : Ran G.

The paper deals with sequential encoding and decoding of codes. Maybe you have heard of "block codes": these codes take a message $M\in \{0,1\}^k$, perform some computation, and then output a codeword $c_M \in \{0,1\}^n$.^{(1)} A main drawback of encoding systems for such codes is that they need to know **the entire** message $M$ in order to output $c_M$.

This is not the case for convolutional codes.

### Sequential circuits

Convolutional codes are usually designed so that there is a very simple and *sequential* encoding algorithm. By "sequential" I mean that the system reads $M$ bit by bit, and immediately^{(2)} outputs some part of $c_M$. Say, if the rate is $k/n=1/2$, then for every bit of $M$ given as an input to the encoder, the encoding system outputs the next $2$ bits of the codeword (even without knowing the rest of $M$). After the entire input $M$ has been fed into the encoder, we will have the entire codeword $c_M$.
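As a concrete illustration (my own sketch, not from the paper; the generator polynomials $g_1(D)=1+D^2$ and $g_2(D)=1+D+D^2$ are a standard textbook choice for a rate-$1/2$ code), such a sequential encoder can look like this:

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder with generators 1+D^2 and 1+D+D^2.

    Reads the message bit by bit and emits 2 output bits per input bit,
    without ever needing to see the future of the message.
    """
    s1 = s2 = 0  # two delay (shift-register) cells: s1 = D, s2 = D^2
    out = []
    for b in bits:
        out.append(b ^ s2)       # g1 = 1 + D^2
        out.append(b ^ s1 ^ s2)  # g2 = 1 + D + D^2
        s1, s2 = b, s1           # shift the register
    return out

# Each input bit immediately yields 2 output bits:
conv_encode([1, 0, 0, 0])  # -> [1, 1, 0, 1, 1, 1, 0, 0]
```

Note that the loop body only uses the current bit and the register state, which is exactly the "feed-forward" property described above.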

This sequential nature of encoding is what the paper means by "feed-forward": you always input the next part of the data, and that is enough for the system to proceed and produce output, as a function of the current state of the encoder. This is/was crucial for memoryless implementations (which cannot hold the past bits of the message $M$), or for implementations that could not wait for the entire $M$ because of the large delay that would incur (and thus do not know the future bits of the message $M$).

Thus, a "feedforward inverse" is simply a **sequential** decoding algorithm. (The "inverse" is the algorithm that inverts the code, that is, the decoding system.)

### Delay

The delay $L$ is explained in the first paragraph of the paper (below, not a precise quote):

> An inverse with delay $L$ for a sequential circuit with $k$ inputs and $n$ outputs [= a decoder for the encoder] is a sequential circuit with $n$ inputs and $k$ outputs that, cascaded with the encoder, yields an overall sequential circuit with delay $L$.

That is, take the encoder and connect the decoder directly to the output of the encoder. Now start feeding the input into the encoder. $L$ is the number of "clocks" until you see the same bit at the output of the decoder.
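To make this concrete (again my own sketch, not from the paper): for a rate-$1/2$ encoder with generators $g_1(D)=1+D^2$ and $g_2(D)=1+D+D^2$, one can check that $(1+D)\,g_1 + D\,g_2 = 1$ over GF(2), i.e. $\gcd(g_1,g_2)=1=D^0$, so a feedforward inverse with delay $L=0$ exists. The inverse itself is just another small sequential circuit:

```python
def conv_inverse(symbols):
    """Delay-0 feedforward inverse of the rate-1/2 encoder with
    generators g1 = 1 + D^2 and g2 = 1 + D + D^2 (example choice).

    Since (1+D)*g1 + D*g2 = 1 over GF(2), the combination
    m = (1+D)*y1 + D*y2 recovers the message bit by bit.
    """
    y1_prev = y2_prev = 0  # one delay cell per received stream
    msg = []
    # the received stream interleaves the two encoder outputs y1, y2
    for y1, y2 in zip(symbols[0::2], symbols[1::2]):
        msg.append(y1 ^ y1_prev ^ y2_prev)  # (1+D)*y1 + D*y2
        y1_prev, y2_prev = y1, y2
    return msg

# Cascading encoder and inverse reproduces the message with delay 0:
conv_inverse([1, 1, 0, 1, 1, 1, 0, 0])  # -> [1, 0, 0, 0]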

^{(1)} The alphabet need not be binary; binary is used here just for simplicity.

^{(2)} Or after a fixed number of clocks.


Question Source : http://cs.stackexchange.com/questions/43547
