Question 1: Suppose we use the standard 16-bit CRC protocol. Can we use this protocol to do error correction? If so, how powerful is it?
Part 1: What is the largest x such that the protocol performs x-bit correction?
Part 2: What algorithm would you use to perform this correction? Give me the pseudo-code (or a sensible explanation)
Can you answer this problem and explain how the protocol performs x-bit correction? In other words, can we actually use this protocol for error correction, not just detection?
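As a starting point for Part 2, here is a minimal sketch of one possible correction strategy, assuming the frame is protected by CRC-16-CCITT (polynomial 0x1021, init 0xFFFF) and that at most a single bit is in error; the function names and frame layout (message bytes followed by two big-endian CRC bytes) are illustrative assumptions, not taken from the question. The idea is simply to try flipping each bit of the received frame and keep the flip that makes the CRC check pass again.

```python
# Hedged sketch: brute-force single-bit correction for a CRC-16-CCITT frame.
# Assumes frame = message bytes + 2 CRC bytes (big-endian); names are illustrative.

def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16-CCITT over the given bytes."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def correct_single_bit(frame: bytearray) -> bool:
    """Try to repair at most one flipped bit in place.
    Returns True if the frame checks out (possibly after one correction)."""
    def ok(f: bytes) -> bool:
        return crc16_ccitt(bytes(f[:-2])) == int.from_bytes(f[-2:], "big")

    if ok(frame):
        return True                        # no error detected
    for i in range(len(frame) * 8):        # try every single-bit flip
        frame[i // 8] ^= 1 << (7 - i % 8)
        if ok(frame):
            return True                    # this flip restores a valid CRC
        frame[i // 8] ^= 1 << (7 - i % 8)  # undo the flip and keep searching
    return False                           # not correctable as a single-bit error
```

A faster variant of the same idea precomputes a table mapping each single-bit error's syndrome (the CRC of a frame with exactly that bit flipped) to its bit position, so correction becomes one CRC computation plus a table lookup; this works only as long as all those syndromes are distinct for the block length in use.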