Why might a programmer choose to represent a data item in IEEE 754 double-precision (64-bit) floating-point format instead of IEEE 754 single-precision (32-bit) floating-point format? What additional costs may be incurred at run time (when the application program executes) as a result of using the 64-bit rather than the 32-bit format?
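
As a concrete illustration of the trade-off the question points at, the following minimal C sketch stores the same decimal constant in both formats and prints it, then prints the sizes of the two types. It assumes a typical platform where `float` is the IEEE 754 32-bit format and `double` the 64-bit format (true on virtually all modern systems, though not required by the C standard); the size difference reported by `sizeof` hints at the extra memory, cache, and bandwidth cost of the wider format.

```c
#include <stdio.h>

int main(void) {
    /* The same decimal constant in both formats: a 32-bit float keeps
       roughly 7 significant decimal digits, a 64-bit double roughly 15-16. */
    float  f = 0.1f;
    double d = 0.1;

    printf("float  0.1 = %.20f\n", (double)f);
    printf("double 0.1 = %.20f\n", d);

    /* Each 64-bit value occupies twice the storage of a 32-bit value,
       so large arrays of doubles take twice the memory and put twice
       the pressure on caches and memory bandwidth. */
    printf("sizeof(float)  = %zu bytes\n", sizeof(float));
    printf("sizeof(double) = %zu bytes\n", sizeof(double));

    return 0;
}
```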