An important measure of the performance of a machine is the mean time between failures (MTBF). A certain printer attached to a word processor was observed for a period of time during which 10 failures occurred. The times between failures averaged 98 working hours, with a standard deviation of 6 hours. A modified version of this printer was observed for eight failures; its times between failures averaged 94 working hours, with a standard deviation of 8 hours. Can we say, at the 1% significance level, that the modified version of the printer has a smaller MTBF? What assumptions are made in this test? Do you think the assumptions are reasonable for this situation?