Probability

Basic Terminology

  • Probability is a measure of the chance or likelihood of an event happening.
  • A random experiment is one whose outcome is not known until it has happened.
  • An event is a set of outcomes of a random experiment.
  • The sample space is the set of all possible outcomes of a random experiment.

Probability Scale

  • Probability is always between 0 and 1: a probability of 0 means the event is certain not to happen, and a probability of 1 means it is certain to happen.
  • Complementary events are an event and its opposite (the event not happening). The probabilities of complementary events always sum to 1.

Calculating Probability

  • If all outcomes of an experiment are equally likely, the probability of an event happening is the number of favourable outcomes divided by the total number of outcomes.
  • To calculate the probability of an event not occurring, subtract the probability of the event occurring from 1. Both rules are shown in the sketch below.
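
As a quick worked example, here is a minimal sketch in Python; the die-rolling event is an assumed illustration, not taken from the source.

    # Probability of rolling an even number on a fair six-sided die.
    favourable = len([2, 4, 6])   # outcomes counted as favourable
    total = 6                     # total number of equally likely outcomes

    p_even = favourable / total   # 3/6 = 0.5
    p_not_even = 1 - p_even       # complement rule: 1 - 0.5 = 0.5
    print(p_even, p_not_even)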

Combined Events

  • If Events A and B are independent (the occurrence of Event A does not affect the occurrence of Event B), the probability of Event A and Event B both occurring is found by multiplying their probabilities: P(A and B) = P(A) × P(B).
  • The probability of Event A or Event B occurring can be found by adding their probabilities, but if they are not mutually exclusive (they can both happen at the same time) you must subtract the probability of both happening: P(A or B) = P(A) + P(B) − P(A and B). Both rules appear in the sketch after this list.
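
A minimal sketch of both rules, using rolls of a fair die as the events (an assumed example, not from the source):

    # AND rule for independent events: two rolls of a fair die.
    p_six_first = 1 / 6
    p_six_second = 1 / 6
    p_both_sixes = p_six_first * p_six_second         # 1/36

    # OR rule for events that are not mutually exclusive (single roll).
    p_even = 3 / 6                 # {2, 4, 6}
    p_at_least_4 = 3 / 6           # {4, 5, 6}
    p_both = 2 / 6                 # {4, 6} satisfy both events
    p_even_or_4plus = p_even + p_at_least_4 - p_both  # 4/6
    print(p_both_sixes, p_even_or_4plus)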

Experimental Probability vs Theoretical Probability

  • Theoretical probability involves prediction based on what should happen in an ideal situation.
  • Experimental probability is based on what actually happens when we run an experiment. It is calculated as the number of times the event occurred divided by the total number of trials, as in the simulation sketch below.
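
A minimal simulation sketch, assuming a fair coin as the experiment (an assumed example):

    import random

    trials = 10_000
    heads = sum(random.random() < 0.5 for _ in range(trials))

    experimental = heads / trials   # varies slightly from run to run
    theoretical = 0.5               # what should happen for an ideal fair coin
    print(experimental, theoretical)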

Using Probability Trees

  • Probability trees are a useful visual tool for calculating compound probabilities: label each branch with its probability, multiply along a path of branches to get the probability of that sequence of outcomes, and add the probabilities of the different paths that lead to the same event, as in the sketch below.
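
A minimal sketch of the multiply-along-branches, add-across-paths logic, assuming a bag of 3 red and 2 blue counters drawn twice without replacement (an assumed example):

    # First-draw branches.
    p_red1, p_blue1 = 3 / 5, 2 / 5

    # Second-draw branches depend on the first draw (4 counters remain).
    p_blue2_given_red1 = 2 / 4
    p_red2_given_blue1 = 3 / 4

    # Multiply along each path, then add the paths giving "exactly one red".
    p_exactly_one_red = (p_red1 * p_blue2_given_red1
                         + p_blue1 * p_red2_given_blue1)   # 0.3 + 0.3 = 0.6
    print(p_exactly_one_red)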

Conditional Probability

  • Conditional probability is the probability of an event given that another event has occurred. Denoted P(A | B) and read as “probability of A given B”, it is calculated as P(A and B) divided by P(B); see the sketch below.
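
A minimal sketch of the definition, reusing a single roll of a fair die (an assumed example):

    # A = "roll is a six", B = "roll is even".
    p_b = 3 / 6               # P(B): outcomes {2, 4, 6}
    p_a_and_b = 1 / 6         # P(A and B): outcome {6}

    p_a_given_b = p_a_and_b / p_b   # P(A | B) = (1/6) / (1/2) = 1/3
    print(p_a_given_b)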

Expectation and Variance

  • The expected value of a random variable is the long-term average or mean value of the random variable over many repeats of the experiment.
  • The variance measures how spread out the outcomes are around the expected value: it is the average of the squared deviations from the expected value. The sketch below computes both quantities.
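
A minimal sketch computing both quantities for the score on a fair six-sided die (an assumed example):

    # X = score on a fair die; each outcome has probability 1/6.
    outcomes = [1, 2, 3, 4, 5, 6]
    probs = [1 / 6] * 6

    expected = sum(x * p for x, p in zip(outcomes, probs))   # E[X] = 3.5
    variance = sum((x - expected) ** 2 * p
                   for x, p in zip(outcomes, probs))         # about 2.92
    print(expected, variance)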