6. [Implementation Testing] (15 points)
You are to implement and test a program that sums 1/x as x runs over all approximately eight million (2^23, one for each pattern of the 23 fraction bits) single precision floating point numbers in the interval [1, 2). You may do this on a server, PC, or Mac of your choice. You are first asked to predetermine estimates of your implementation's computation time (architecture and compiler dependent), result accuracy (algorithm and round-off dependent), and result value (an estimate of the real-valued sum). Specifically:
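As a starting point, here is one possible way (not the required one) to enumerate exactly these values in C. It assumes an IEEE-754 single precision layout, in which every float in [1, 2) has sign bit 0, exponent field 127 (bits 0x3F800000), and an arbitrary 23-bit fraction; the increasing-fraction loop order is just one of many valid summation orders.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float sum = 0.0f;                        /* single precision accumulator */
    for (uint32_t frac = 0; frac < (1u << 23); frac++) {
        uint32_t bits = 0x3F800000u | frac;  /* exponent 127 => value in [1, 2) */
        float x;
        memcpy(&x, &bits, sizeof x);         /* reinterpret bit pattern as a float */
        sum += 1.0f / x;                     /* round-to-nearest division and add */
    }
    printf("single precision sum = %.8f\n", sum);
    return 0;
}
```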
a. Discuss the computational environment for your tests, including the compiler, operating system, machine clock rate (MHz), cycle counts for the relevant instructions, and whether pipelining affects your execution time.
b. Predetermine an estimate of the running time, using single precision variables for all computations, based on any documentation you can find from the hardware manufacturer and/or the compiler and system provider.
c. Predetermine a rough estimate of the exact sum (hint: how many terms are being added, and how large is an "average term"?). A worked sketch of one such estimate appears after this list.
d. Predetermine an estimate of the accuracy. Single precision computation should be done in round-to-nearest mode, as provided by standard C implementations. By the accuracy of the sum we mean the difference between the rounded sum of rounded values and the exact sum of exact values. A rough error model is sketched after this list.
e. Give the measured running time and the computed result for your implementation. Compare these with your estimates of the running time and the approximate sum. Compute the sum in both double precision and single precision, and compare them to obtain a reasonable value for the accuracy of the single precision sum. Compare your result with that of another student who may have performed the sum in a different order. Can you explain the size of the approximation error? (A sketch of such a measurement appears below.)
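For part (c), here is one back-of-the-envelope sketch, not the only acceptable one: the representable single precision values in [1, 2) are uniformly spaced 2^{-23} apart, so the sum is 2^{23} times the average value of 1/x, which is essentially a Riemann sum for the integral of 1/x:

\[
\sum_{x \in [1,2)} \frac{1}{x} \;\approx\; 2^{23} \int_1^2 \frac{dx}{x} \;=\; 2^{23} \ln 2 \;\approx\; 5.81 \times 10^6 .
\]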
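For part (d), one rough error model, which assumes a single running accumulator summed in a simple loop: with unit roundoff u = 2^{-24} for round-to-nearest single precision, the standard worst-case bound for recursive summation of n positive terms with exact sum S is, ignoring higher-order terms,

\[
|\hat{S} - S| \;\lesssim\; (n-1)\, u \, S, \qquad n = 2^{23},\quad u = 2^{-24},\quad S \approx 5.8 \times 10^6,
\]

which gives a pessimistic bound of order 0.5 S, i.e. a few times 10^6. In practice the rounding errors partially cancel, and the observed error for a straightforward loop is typically closer to \(\sqrt{n}\, u\, S\), on the order of 10^3; different summation orders will land in different places within this range, which is part of what (e) asks you to explain.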
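Finally, a minimal sketch of the measurement in part (e), assuming the same bit-pattern enumeration as above. Note the simplifications: clock() has coarse granularity, and the single and double precision sums share one loop here, so their times are conflated; for your timing estimate in (b) you would time the single precision loop alone.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <time.h>

int main(void) {
    float  sum_f = 0.0f;   /* rounded sum of rounded values (single precision) */
    double sum_d = 0.0;    /* double precision reference, much closer to exact */

    clock_t t0 = clock();
    for (uint32_t frac = 0; frac < (1u << 23); frac++) {
        uint32_t bits = 0x3F800000u | frac;   /* floats in [1, 2) */
        float x;
        memcpy(&x, &bits, sizeof x);
        sum_f += 1.0f / x;
        sum_d += 1.0 / (double)x;
    }
    clock_t t1 = clock();

    printf("single precision sum: %.8e\n", (double)sum_f);
    printf("double precision sum: %.17e\n", sum_d);
    printf("difference:           %.6e\n", sum_d - (double)sum_f);
    printf("elapsed:              %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    return 0;
}
```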