On a 64-bit system, the binary floating-point representation of the number a, i.e. the way the computer stores this number, is the following: 1.1001 × 2^10000000101. Convert this binary floating-point representation to its decimal equivalent. (Hint: the bias for the 64-bit system is 1023, and the exponent shown is the stored, biased exponent.)
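
A minimal Python sketch of the conversion procedure, assuming the significand and the stored (biased) exponent are given as binary strings; the function names are illustrative, and the field values and bias are taken from the problem statement:

```python
def binary_significand_to_decimal(bits: str) -> float:
    """Convert a binary significand such as '1.1001' to its decimal value."""
    integer_part, _, fraction_part = bits.partition(".")
    value = float(int(integer_part, 2))
    for i, bit in enumerate(fraction_part, start=1):
        value += int(bit) * 2.0 ** -i      # each fractional bit is worth 2^-i
    return value


def stored_to_decimal(significand_bits: str, exponent_bits: str, bias: int = 1023) -> float:
    """Recover the decimal number from the stored significand and biased exponent."""
    significand = binary_significand_to_decimal(significand_bits)
    exponent = int(exponent_bits, 2) - bias  # undo the exponent bias
    return significand * 2.0 ** exponent


# For the values given in the problem:
print(stored_to_decimal("1.1001", "10000000101"))
```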