E(X) and Var(X)

Understanding E(X) and Var(X)

E(X)

  • E(X), also known as the expected value of the random variable X, is a fundamental concept in probability. It is the long-run average of the outcomes over many repetitions of the experiment it represents.
  • To calculate the expected value, multiply each possible outcome by its probability, then add those products. The Greek letter μ (mu) is often used to represent the expected value.
  • If a random variable X has a probability function p(x), the expected value is defined as E(X) = Σ [ x * P(X=x) ].
  • In a uniform distribution, where each outcome has an equal chance of occurring, the expected value can be calculated as the average of all possible values.
  • The expected value doesn’t need to be an obtainable outcome. For instance, if a die roll (values 1 to 6) were repeated many times, the expected value would be 3.5 - a value not possible on any single roll (see the sketch after this list).
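
As a quick illustration of the definition E(X) = Σ [ x * P(X=x) ], here is a minimal Python sketch, assuming the fair six-sided die mentioned above; it computes the expected value directly and compares it with the long-run average of simulated rolls.

```python
import random

# Possible outcomes of a fair six-sided die and their probabilities
# (a uniform distribution: every face is equally likely)
outcomes = [1, 2, 3, 4, 5, 6]
probabilities = [1 / 6] * 6

# E(X) = sum of x * P(X = x) over all possible values x
expected_value = sum(x * p for x, p in zip(outcomes, probabilities))
print(expected_value)  # 3.5 -- not an outcome any single roll can produce

# The long-run average of many simulated rolls approaches E(X)
rolls = [random.choice(outcomes) for _ in range(100_000)]
print(sum(rolls) / len(rolls))  # approximately 3.5
```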

Var(X)

  • The Var(X) or variance of a random variable X measures how spread out the values of X are around the expected value.
  • In more mathematical terms, variance is the expected value of the squared deviation from the mean of the random variable.
  • If a random variable X has a probability function p(x), the variance is defined as Var(X) = E[(X - E[X])^2].
  • Variance can also be calculated as Var(X) = E(X^2) - [E(X)]^2 - the difference between the expectation of the square of X and the square of the expectation of X (both forms are worked through in the sketch after this list).
  • Variance is always non-negative because squared numbers are always positive or zero.
  • The larger the variance, the more spread out the values of X are.
  • The square root of the variance is the standard deviation, which expresses the dispersion in the same units as the original variable X.
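
Continuing with the fair six-sided die used in the E(X) example (an assumed illustration, not a prescribed method), the following Python sketch computes Var(X) with both the defining formula and the shortcut formula, then takes the square root to obtain the standard deviation.

```python
import math

# Fair six-sided die again: p(x) = 1/6 for x in 1..6 (assumed example)
outcomes = [1, 2, 3, 4, 5, 6]
probabilities = [1 / 6] * 6

mu = sum(x * p for x, p in zip(outcomes, probabilities))  # E(X) = 3.5

# Definition: Var(X) = E[(X - E[X])^2]
var_from_definition = sum(p * (x - mu) ** 2 for x, p in zip(outcomes, probabilities))

# Shortcut: Var(X) = E(X^2) - [E(X)]^2
e_x_squared = sum(p * x ** 2 for x, p in zip(outcomes, probabilities))
var_from_shortcut = e_x_squared - mu ** 2

print(var_from_definition)  # 2.9166... (both forms agree)
print(var_from_shortcut)    # 2.9166...

# Standard deviation: the square root of the variance, in the units of X
print(math.sqrt(var_from_definition))  # about 1.71
```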

Knowing how to interpret and calculate E(X) and Var(X) is crucial in understanding the behaviour of random variables and provides the foundation for many statistical methods.