Question:
Suppose that a given architecture does not have hardware support for multiplication, so multiplications have to be done through repeated addition, as was the case on some early microprocessors.
Assume it takes 200 cycles to perform a multiplication in software, and 4 cycles to perform a multiplication in hardware.
What is the overall speedup of the program when multiplications are done in hardware if:
a. 10 percent of the program's execution time is spent doing multiplications?
b. 40 percent of the program's execution time is spent doing multiplications?
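
A minimal sketch of one way to work this out in Python, assuming the given percentages describe the fraction of execution time spent multiplying in the original (software) version of the program, and applying Amdahl's law with a per-multiplication speedup of 200 / 4 = 50:

# Amdahl's law: only the multiplication fraction is accelerated,
# so overall speedup = 1 / ((1 - f) + f / s), where f is the fraction
# of time spent multiplying and s is the local (per-multiply) speedup.

def overall_speedup(fraction, sw_cycles=200, hw_cycles=4):
    local_speedup = sw_cycles / hw_cycles  # 200 / 4 = 50x per multiply
    return 1.0 / ((1.0 - fraction) + fraction / local_speedup)

for f in (0.10, 0.40):
    print(f"fraction = {f:.0%}: overall speedup = {overall_speedup(f):.3f}x")

Under that assumption, part (a) gives 1 / (0.90 + 0.10/50) ≈ 1.11x and part (b) gives 1 / (0.60 + 0.40/50) ≈ 1.64x, illustrating that hardware multiplication helps most when multiplication dominates the run time.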