I am a bit confused about the difference between a Cyclic Redundancy Check and a Hamming code. Both attach a check value to the message, derived from some arithmetic operation over its bits. For Hamming, it is a set of (odd or even) parity bits added to the message; for CRC, it is the remainder of a polynomial division of the message contents.
However, CRC and Hamming codes are often described as fundamentally different ideas. Can someone elaborate on why that is?
Also, why is the CRC compared with the FCS (frame check sequence) to decide whether the received message has errors (e.g. in the VirtualWire library used with Arduino)? Why not just use the FCS from the beginning? (My understanding may be totally flawed here, so please correct me.)
Asked By : Jonathan
Answered By : TEMLIB
- A CRC is defined as the remainder of a polynomial division (over GF(2)). It is efficient for detecting errors: a mismatch between the calculated and received remainders flags a corrupted message. Depending on the size of the CRC, it is guaranteed to detect error bursts up to a certain length (10 consecutive bits zeroed, for example), which makes it well suited to checking communications.
The "FCS" term is used sometimes for some transformed version of the CRC (Ethernet for example) : The purpose is to apply the CRC algorithm to both the data and its FCS to cancel the remainder value and get a constant (just like even parity is ensuring an even number of "1" bits, including the parity bit).
- Hamming codes are both error-detection and error-correction codes. Adding the Hamming check bits guarantees a minimum distance (measured as the number of differing bits) between any two valid codewords. For example, with a distance of 3 bits, you can correct any 1-bit error OR detect any 2-bit error.
Reduced to a single check bit, Hamming codes and CRC (with the polynomial x+1) are both identical to simple parity.
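For comparison, here is a minimal Python sketch of the classic Hamming(7,4) code. The `hamming74_encode`/`hamming74_decode` names are illustrative, but the parity positions and syndrome logic follow the standard construction: parity bits sit at positions 1, 2 and 4, and with a minimum distance of 3 the syndrome pinpoints a single flipped bit so the decoder can correct it.

```python
# Minimal sketch of a Hamming(7,4) code: 4 data bits, 3 parity bits,
# minimum distance 3, so any single-bit error can be located and fixed.
# Bit positions 1..7; positions 1, 2, 4 (the powers of two) hold parity.

def hamming74_encode(d: list) -> list:
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword (even parity)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                    # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4                    # covers positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4                    # covers positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]  # codeword, positions 1..7

def hamming74_decode(c: list) -> list:
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = c[:]                             # do not mutate the caller's list
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]       # recheck parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]       # recheck parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]       # recheck parity over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3      # 0 = clean, else the error position
    if syndrome:
        c[syndrome - 1] ^= 1             # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[4] ^= 1                             # inject a single-bit error
assert hamming74_decode(code) == data    # the decoder corrects it
```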
Best Answer from StackExchange
Question Source : http://cs.stackexchange.com/questions/53719