An Internet Service Provider states that the average number of hours its customers are online each day is 3.75. Assume a random sample of 14 of the company's customers is chosen and the average number of hours they are online each day is measured. The sample results are:
3.11 1.97 3.52 4.56 7.19 3.89 7.71
2.12 4.68 6.78 5.02 4.28 3.23 1.29
Based on this sample of 14 customers, how much sampling error exists? Would you expect the sampling error to increase or decrease if the sample size were increased to 40?
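As a check of the arithmetic, the sampling error here is the difference between the sample mean and the claimed population mean of 3.75 hours. A minimal Python sketch (variable names are illustrative):

```python
# Sample of daily online hours for 14 customers (from the problem)
hours = [3.11, 1.97, 3.52, 4.56, 7.19, 3.89, 7.71,
         2.12, 4.68, 6.78, 5.02, 4.28, 3.23, 1.29]

claimed_mean = 3.75                      # provider's stated population mean

sample_mean = sum(hours) / len(hours)    # x-bar
sampling_error = sample_mean - claimed_mean

print(round(sample_mean, 4))             # sample mean, x-bar
print(round(sampling_error, 4))          # sampling error, x-bar - mu
```

Note that a single sample of 40 could, by chance, produce a larger error than this sample of 14; what shrinks with sample size is the standard error, so a smaller sampling error is expected on average.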