# Conclusion from a Hypothesis Test

**Understanding Conclusions from Hypothesis Testing**

**Hypothesis testing** is a statistical method used to make inferences or draw conclusions about a population based on a sample. It involves setting up a null hypothesis (**H0**) and an alternative hypothesis (**H1**), then deciding whether or not to reject the null hypothesis based on the sample data.

- The null hypothesis (**H0**) is the statement being tested. Usually it is a statement of ‘no effect’ or ‘no difference’.
- The alternative hypothesis (**H1**) is the statement we accept if the data provide enough evidence against H0.
- A **statistically significant result** (usually a result with p-value < 0.05) is strong evidence against the null hypothesis, so you reject the null hypothesis.
- A **not statistically significant result** (usually a p-value > 0.05) means that there is not enough evidence in your data to conclude that an effect exists.

**Interpreting P-Value**

- The **p-value** is a probability that measures the evidence against the null hypothesis provided by the data.
- A small p-value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, so you reject the null hypothesis.
- A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.
- P-values very close to the cutoff (0.05) are considered to be marginal.
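The points above can be illustrated with a small one-sample z-test, a simplified setting where the population standard deviation is assumed known. The numbers are made up for the example:

```python
import math

def normal_cdf(z):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def one_sample_z_test(sample_mean, mu0, sigma, n):
    """Two-sided one-sample z-test (population sigma assumed known).
    Returns the z statistic and its two-sided p-value."""
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    p_value = 2.0 * (1.0 - normal_cdf(abs(z)))
    return z, p_value

# Hypothetical example: H0: mu = 100 vs H1: mu != 100,
# a sample of n = 36 with mean 103, known sigma = 9.
z, p = one_sample_z_test(103, 100, 9, 36)
print(f"z = {z:.2f}, p-value = {p:.4f}")
# p is about 0.046, which is below 0.05, so H0 is rejected.
```

Here the p-value falls just under the 0.05 cutoff, so this is exactly the "marginal" situation the last bullet warns about: the decision would flip with only slightly different data.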

**Making a Decision - Reject or Fail to Reject H0**

- The decision to reject or fail to reject the null hypothesis is based on the p-value and your chosen **significance level**.
- If the p-value ≤ significance level, the test is **statistically significant**, so you reject H0. This means the data provide enough evidence to support the alternative hypothesis.
- If the p-value > significance level, the test is **not statistically significant**, so you fail to reject H0. This means the data do not provide enough evidence to support the alternative hypothesis.
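This decision rule can be sketched as a small helper function; the wording of the returned strings is illustrative:

```python
def decide(p_value, alpha=0.05):
    """Apply the decision rule: reject H0 when p-value <= significance level."""
    if p_value <= alpha:
        return "reject H0 (statistically significant)"
    return "fail to reject H0 (not statistically significant)"

print(decide(0.032))  # reject H0 (statistically significant)
print(decide(0.210))  # fail to reject H0 (not statistically significant)
```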

**Understanding Type I and Type II Errors**

- In hypothesis testing, a **Type I error** occurs when you reject the null hypothesis when it is true. This is like a false positive.
- A **Type II error** occurs when you fail to reject the null hypothesis when it is false. This is like a false negative.
- The probability of a Type I error is denoted by α (alpha) and equals the significance level.
- The probability of Type II error is denoted by β (beta). Ideally, we want both α and β to be as small as possible.
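One way to see that α really is the Type I error rate is a quick Monte Carlo sketch: generate many samples for which H0 is true, run a level-0.05 z-test on each, and count how often the test rejects anyway. The sample size, trial count, and seed below are arbitrary choices for the demonstration:

```python
import math
import random

def normal_cdf(z):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def simulate_type_i_rate(alpha=0.05, n=30, trials=20000, seed=1):
    """Estimate the Type I error rate: draw samples from N(0, 1),
    so H0 (mu = 0) is true, and count how often a two-sided
    z-test rejects at level alpha."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        sample = [rng.gauss(0, 1) for _ in range(n)]
        z = (sum(sample) / n) / (1 / math.sqrt(n))
        p = 2.0 * (1.0 - normal_cdf(abs(z)))
        if p <= alpha:
            rejections += 1
    return rejections / trials

print(f"Estimated Type I error rate: {simulate_type_i_rate():.3f}")
```

The estimated rejection rate comes out close to 0.05: even when there is truly no effect, a level-0.05 test rejects about 5% of the time.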

**Interpreting Confidence Intervals**

- The **confidence interval (CI)** provides a range in which the population parameter is likely to fall.
- A wider confidence interval may suggest more uncertainty or variability in the data.
- If the confidence interval includes the value in the null hypothesis, fail to reject H0. If it doesn’t, reject H0.
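The duality between a confidence interval and a test can be sketched with a 95% CI for a mean with known σ; the numbers are hypothetical and 1.96 is the two-sided 95% critical value of the standard normal:

```python
import math

def confidence_interval(sample_mean, sigma, n, z_crit=1.96):
    """95% confidence interval for the mean (sigma assumed known);
    z_crit = 1.96 is the two-sided critical value for 95% confidence."""
    margin = z_crit * sigma / math.sqrt(n)
    return sample_mean - margin, sample_mean + margin

# Hypothetical example for H0: mu = 100, matching the z-test above:
# a sample of n = 36 with mean 103 and known sigma = 9.
low, high = confidence_interval(103, 9, 36)
print(f"95% CI: ({low:.2f}, {high:.2f})")
mu0 = 100
print("reject H0" if not (low <= mu0 <= high) else "fail to reject H0")
```

The interval here is roughly (100.06, 105.94): it excludes 100, so H0: mu = 100 is rejected at the 5% level, agreeing with a two-sided test at α = 0.05.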

**Hypothesis Testing Limitations**

- Even if a result is statistically significant, it does not necessarily mean it is practically or scientifically significant.
- A significant result only suggests that the effect observed in the sample is unlikely to have occurred through sampling error; it does not prove that the effect holds for the entire population.
- Hypothesis testing only provides evidence for or against the null hypothesis; it does not prove either the null or the alternative hypothesis.
- A study with low power, a small effect size, or a small sample size may not achieve statistical significance, even if an effect is truly present.

**Understanding Power Analysis**

- Power analysis is a method for finding the probability of detecting a significant effect, provided that the effect actually exists.
- It is affected by sample size, significance level (α), effect size, and standard deviation.
- It helps determine the appropriate sample size required to detect the effect.
- High power (0.8 or more) is often desirable in study design because it means the study has a high chance of detecting a statistically significant effect in the sample if it exists in the population.
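A minimal sketch of a power calculation for a two-sided one-sample z-test: the effect size here is Cohen's d = (μ1 − μ0)/σ, and the required sample size is found by a plain search loop rather than an optimized routine:

```python
import math

def normal_cdf(z):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_test_power(effect_size, n, z_crit=1.96):
    """Power of a two-sided one-sample z-test.
    effect_size is Cohen's d = (mu1 - mu0) / sigma;
    z_crit = 1.96 corresponds to alpha = 0.05."""
    shift = effect_size * math.sqrt(n)
    return (1.0 - normal_cdf(z_crit - shift)) + normal_cdf(-z_crit - shift)

def required_n(effect_size, target_power=0.8):
    """Smallest sample size reaching the target power, by direct search."""
    n = 1
    while z_test_power(effect_size, n) < target_power:
        n += 1
    return n

print(f"Power at d = 0.5, n = 32: {z_test_power(0.5, 32):.3f}")
print(f"n needed for 80% power at d = 0.5: {required_n(0.5)}")
```

This shows the trade-offs from the bullets above directly: for a medium effect (d = 0.5) at α = 0.05, roughly 32 observations are needed to reach 80% power, and a smaller effect size pushes the required sample size up sharply.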