If a computer statistical package were to simulate 1000 random observations from a normal distribution with μ = 200 and σ = 50, what percentage of these observations would you expect to have a value of 300 or more?
Do you think the actual number in the "≥ 300" range would equal the expected number in this range? If so, why? If not, why not?
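As a check on your reasoning, the setup above can be sketched in Python using NumPy and SciPy (the seed and sample size are just illustrative choices): the theoretical tail probability follows from the z-score z = (300 − 200)/50 = 2, and a single simulated run shows how the observed count fluctuates around the expected count.

```python
import numpy as np
from scipy.stats import norm

# Theoretical tail probability P(X >= 300) for X ~ N(mu=200, sigma=50):
# this equals 1 - Phi((300 - 200)/50) = 1 - Phi(2).
p = norm.sf(300, loc=200, scale=50)
print(f"Expected percentage at or above 300: {p:.4%}")  # about 2.28%

# Expected count out of 1000 simulated observations.
print(f"Expected count out of 1000: {1000 * p:.1f}")    # about 22.7

# One simulated run of 1000 observations (seed chosen arbitrarily).
rng = np.random.default_rng(42)
sample = rng.normal(loc=200, scale=50, size=1000)
count = int(np.sum(sample >= 300))
print(f"Observed count in this run: {count}")
```

Because each run of 1000 draws is random, the observed count will typically be near, but not exactly equal to, the expected count of about 23.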