The question is:
We are testing some benchmark tasks to showcase the performance of floating-point units (comparing old and new floating-point hardware). One benchmark we are considering runs for 100 seconds (total initial execution time) on the old floating-point hardware. With a new floating-point unit that runs 5 times faster, we observed an overall benchmark speedup of 3 due to the floating-point enhancement. How much of the initial execution time would floating-point instructions have to account for?
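One way to reason about this is Amdahl's Law, speedup = 1 / ((1 - f) + f/s), where f is the fraction of execution time spent in floating-point instructions and s is the floating-point speedup. Below is a minimal sketch (variable names are illustrative, not from the original question) that solves for f under the numbers given above:

```python
# Sketch: solve Amdahl's Law for the fraction f of execution time spent
# in floating-point instructions, assuming an observed overall speedup
# of 3 and an FP unit that runs 5 times faster.

overall_speedup = 3.0   # observed overall benchmark speedup
fp_speedup = 5.0        # new FP unit is 5x faster
total_time = 100.0      # initial execution time in seconds (old hardware)

# Amdahl's Law: overall_speedup = 1 / ((1 - f) + f / fp_speedup)
# Rearranged for f:
f = (1.0 - 1.0 / overall_speedup) / (1.0 - 1.0 / fp_speedup)

print(f"FP fraction of initial execution time: {f:.4f}")            # ~0.8333
print(f"FP seconds out of {total_time:.0f} s: {f * total_time:.1f}")  # ~83.3 s
```

Under these assumptions, floating-point instructions would have to account for about 5/6 of the initial execution time, i.e. roughly 83.3 of the 100 seconds.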