Question: Most programming languages have a built-in integer data type. Usually this representation has a fixed size, thus placing a limit on how large a value can be stored in an integer variable. Describe a representation for integers that has no size restriction (other than the limits of the computer's available main memory), and thus no practical limit on how large an integer can be stored. Briefly show how your representation can be used to implement the operations of addition, exponentiation and multiplication.
Can anyone provide an answer to the given question, with an example?
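
One common answer is a positional representation: store the number as a dynamically sized list (or array) of digits and carry out the familiar pencil-and-paper algorithms on those digits. The list can grow as needed, so the only limit is available memory. The sketch below is one illustrative version of this idea, not necessarily the answer a particular course expects: it assumes base 10, non-negative values only, and a plain machine integer for the exponent of `power`; the function names (`to_digits`, `to_int`, `add`, `multiply`, `power`) are invented for this example.

```python
# Sketch only: an unbounded integer is stored as a list of base-10 digits,
# least-significant digit first, e.g. 304 -> [4, 0, 3]. Signs, other bases,
# and faster algorithms (e.g. Karatsuba) are omitted for clarity.

def to_digits(n):
    """Convert a small built-in int into the digit-list representation."""
    digits = [0] if n == 0 else []
    while n > 0:
        digits.append(n % 10)
        n //= 10
    return digits

def to_int(digits):
    """Convert back to a built-in int (used here only for display)."""
    return sum(d * 10 ** i for i, d in enumerate(digits))

def add(a, b):
    """Schoolbook addition: add digit by digit, propagating a carry."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = carry + (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return result

def multiply(a, b):
    """Schoolbook multiplication: multiply every digit pair and accumulate."""
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            s = result[i + j] + da * db + carry
            result[i + j] = s % 10
            carry = s // 10
        result[i + len(b)] += carry
    while len(result) > 1 and result[-1] == 0:  # strip leading zeros
        result.pop()
    return result

def power(base, exponent):
    """Exponentiation by repeated multiplication (exponent is a plain int)."""
    result = to_digits(1)
    for _ in range(exponent):
        result = multiply(result, base)
    return result

# Example: 2**100 overflows a 64-bit integer but fits easily in this scheme.
print(to_int(power(to_digits(2), 100)))
```

Because addition and multiplication only ever work on one digit (plus a carry) at a time, no intermediate value can overflow, and the result list simply grows by however many digits the answer needs. Real libraries use the same idea but with a much larger base (e.g. 2^32 per "digit") and faster multiplication algorithms.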