Question: A random variable X is known to be distributed either N(1,3) or N(2,3). The null hypothesis is that the mean is equal to 1. The alternative hypothesis is that the mean is 2.
a. With a sample of 10 draws from the distribution, what is the cutoff for a test with probability of Type I error of at most .01?
b. What is the probability of a Type II error?
c. Repeat parts (a) and (b) using a sample size of 100 draws. What does this tell you about keeping the probability of a Type I error constant as the sample size becomes larger?
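The following is a minimal numerical sketch of how one might work parts (a)-(c), not part of the original exercise. It assumes N(1,3) denotes a variance of 3 (so sigma = sqrt(3)); if 3 is instead the standard deviation, replace that line accordingly. It also assumes the test rejects the null when the sample mean exceeds an upper cutoff, which fits a one-sided alternative of mean 2.

```python
import math
from scipy.stats import norm

# Sketch for H0: mu = 1 vs H1: mu = 2 with significance level 0.01.
# Assumption: N(1,3) means variance 3, so sigma = sqrt(3).
sigma = math.sqrt(3)
alpha = 0.01

for n in (10, 100):
    se = sigma / math.sqrt(n)                 # standard error of the sample mean
    cutoff = 1 + norm.ppf(1 - alpha) * se     # reject H0 when the sample mean exceeds this
    beta = norm.cdf(cutoff, loc=2, scale=se)  # P(fail to reject | mu = 2) = Type II error
    print(f"n = {n}: cutoff = {cutoff:.4f}, P(Type II error) = {beta:.4f}")
```

Under these assumptions, the cutoff moves closer to 1 as n grows (since the standard error shrinks), and the Type II error probability falls sharply even though the Type I error probability is held fixed at 0.01, which is the point part (c) is driving at.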