Problems
1. Why are real numbers more difficult to represent and process than integers?
2. Why might a programmer choose to represent a data item in IEEE binary128 floating-point format instead of IEEE binary64 floating-point format? What additional costs might be incurred at runtime (when the application program executes) as a result of using the 128-bit instead of the 64-bit format?
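As a starting point for Problem 1, the short C sketch below (assuming `double` is IEEE binary64, as on most modern platforms) illustrates one aspect of the difficulty: many simple decimal reals, such as 0.1, have no exact binary representation, so stored values and arithmetic results carry rounding error that integers never do.

```c
#include <stdio.h>

int main(void) {
    int    i = 1;      /* integers within range are stored exactly   */
    double d = 0.1;    /* 0.1 has no finite binary expansion, so the */
                       /* stored value is only an approximation      */

    printf("i      = %d\n", i);
    printf("d      = %.20f\n", d);              /* reveals the rounding error       */
    printf("equal? = %d\n", 0.1 + 0.2 == 0.3);  /* prints 0: rounding makes the sum */
                                                /* differ from the exact value      */
    return 0;
}
```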
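For Problem 2, the sketch below hints at the trade-off. It assumes GCC or Clang on x86-64, where `double` is IEEE binary64 and the `__float128` extension provides IEEE binary128: the wider format doubles the storage per value, and because binary128 arithmetic is typically emulated in software on this hardware, each operation costs many instructions instead of one.

```c
#include <stdio.h>

int main(void) {
    double     d = (double)1 / 3;       /* binary64: 1 sign, 11 exponent, 52 fraction bits   */
    __float128 q = (__float128)1 / 3;   /* binary128: 1 sign, 15 exponent, 112 fraction bits */

    printf("sizeof(double)     = %zu bytes\n", sizeof d);  /* typically 8  */
    printf("sizeof(__float128) = %zu bytes\n", sizeof q);  /* typically 16 */

    /* On x86-64, binary128 operations are usually calls into soft-float
       library routines rather than single hardware instructions, so a loop
       like this runs noticeably slower than its binary64 counterpart.      */
    for (int k = 0; k < 1000000; k++) {
        d += (double)1 / 7;
        q += (__float128)1 / 7;
    }
    printf("binary64 sum  = %.17g\n", d);
    printf("binary128 sum = %.17g\n", (double)q);  /* printed via a double cast */
    return 0;
}
```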