# Using the Normal Distribution in Hypothesis Tests

## Introduction to Hypothesis Testing with Normal Distribution

• Hypothesis testing is a statistical method that uses sample data to evaluate a claim about the parameters of a population.
• Hypothesis testing often relies on the normal distribution, especially when the sample size is large (n > 30) or the population is known to be normally distributed.
• In hypothesis testing, we state two competing hypotheses about the population: the null hypothesis (H₀), the assumption to be tested, and the alternative hypothesis (H₁), the claim we accept if the evidence contradicts H₀.

## Constructing a Hypothesis Test

• Begin by identifying a null hypothesis (H₀) and an alternative hypothesis (H₁).
• Determine a significance level (α): the maximum probability you are willing to accept of rejecting the null hypothesis when it is actually true. A common choice is α = 0.05.
• From the sample data, calculate the test statistic, which can be standardised to follow the standard normal (z) distribution.
• If you have a large sample size (n > 30) or if the population standard deviation is known, use the z-test. Compute the test statistic z = (x̄ - μ₀) / (σ/√n), where x̄ is the sample mean, μ₀ is the population mean under the null hypothesis, σ is the population standard deviation, and n is the sample size.
• For smaller samples drawn from a normally distributed population with unknown standard deviation, use the t-test instead of the z-test.
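The z-statistic formula above can be sketched in a few lines of Python. The sample values (x̄ = 52, μ₀ = 50, σ = 6, n = 36) are hypothetical numbers chosen for illustration, not data from the text:

```python
from math import sqrt

def z_statistic(sample_mean, mu0, sigma, n):
    """Standardised test statistic: z = (x̄ - μ₀) / (σ / √n)."""
    return (sample_mean - mu0) / (sigma / sqrt(n))

# Hypothetical example: n = 36 observations, x̄ = 52,
# testing H₀: μ = 50 with known σ = 6
z = z_statistic(52, 50, 6, 36)
print(round(z, 2))  # 2.0, since (52 - 50) / (6 / 6) = 2
```

Note how the denominator σ/√n shrinks as n grows, so the same difference between x̄ and μ₀ yields a more extreme z for larger samples.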

## Making a Decision and Interpreting the Result

• Compare the test statistic to the critical value corresponding to the significance level. The critical value can be found from standard normal distribution tables.
• If the test statistic is more extreme than the critical value in the direction of the alternative hypothesis, we reject the null hypothesis (H₀) in favour of the alternative hypothesis (H₁).
• If the null hypothesis is rejected, the result is said to be statistically significant.
• It’s important not to confuse statistical significance with practical significance. Even a small effect can be statistically significant with large enough sample sizes.
• The choice of significance level is somewhat arbitrary and should reflect the consequences of making a Type I error: rejecting the null hypothesis when it is true.
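The decision rule above can be sketched with Python's standard-library `statistics.NormalDist`, which replaces a printed z-table. The `tail` keyword and the specific z value of 2.0 are illustrative assumptions:

```python
from statistics import NormalDist

def z_decision(z, alpha=0.05, tail="two-sided"):
    """Return True if H₀ is rejected at significance level alpha,
    by comparing z with the standard normal critical value."""
    std = NormalDist()  # standard normal: mean 0, sd 1
    if tail == "two-sided":
        crit = std.inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
        return abs(z) > crit
    if tail == "greater":
        return z > std.inv_cdf(1 - alpha)
    if tail == "less":
        return z < std.inv_cdf(alpha)
    raise ValueError(f"unknown tail: {tail!r}")

# A hypothetical z statistic of 2.0, tested two-sided:
print(z_decision(2.0))        # True  (|2.0| > 1.96, reject H₀ at α = 0.05)
print(z_decision(2.0, 0.01))  # False (|2.0| < 2.576, fail to reject at α = 0.01)
```

The same z can be significant at α = 0.05 but not at α = 0.01, which is why the significance level must be fixed before looking at the data.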

## Type I and Type II Errors

• A Type I error occurs when the null hypothesis H₀ is true, but is rejected. The probability of making a Type I error is equal to the significance level α.
• A Type II error happens when the null hypothesis is false, but is not rejected. The probability of making a Type II error is denoted by β. The power of a test (1 - β) is the probability that it correctly rejects a false null hypothesis.
• The risks of Type I and Type II errors should be balanced, based on the potential consequences of these errors in the given context.
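The trade-off between α, β, and power can be made concrete with a short sketch for a one-sided z-test. The true-mean value μ₁ and the other numbers are hypothetical assumptions for illustration:

```python
from math import sqrt
from statistics import NormalDist

def power_one_sided(mu0, mu1, sigma, n, alpha=0.05):
    """Power of a one-sided (greater) z-test: probability of rejecting
    H₀: μ = mu0 when the true mean is mu1 > mu0."""
    std = NormalDist()
    z_crit = std.inv_cdf(1 - alpha)          # rejection threshold for z
    shift = (mu1 - mu0) / (sigma / sqrt(n))  # standardised true effect
    beta = std.cdf(z_crit - shift)           # P(Type II error) = β
    return 1 - beta                          # power = 1 - β

# Hypothetical scenario: H₀: μ = 50, true μ = 52, σ = 6, n = 36
print(round(power_one_sided(50, 52, 6, 36), 3))
```

Increasing n or α raises the power (lowers β), while a smaller true effect μ₁ − μ₀ lowers it; this is the balancing act the last bullet describes.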