🎲 Intro to Probability Unit 8 Review

8.1 Bernoulli distribution

Written by the Fiveable Content Team • Last updated September 2025

The Bernoulli distribution is a simple yet powerful model for binary outcomes. It's the building block for more complex distributions, describing events with only two possible results: success or failure. Understanding Bernoulli trials is key to grasping probability concepts.

This distribution forms the foundation for analyzing random experiments with yes/no outcomes. From coin flips to quality control, it's widely applied in various fields. Mastering its properties and calculations is crucial for tackling more advanced probability problems in this course.

Bernoulli Distribution

Definition and Probability Mass Function

  • Discrete probability distribution modeling random experiments with two possible outcomes labeled success (1) and failure (0)
  • Probability mass function (PMF) given by P(X = x) = p^x (1 - p)^(1 - x), where x is 0 or 1 and p is the probability of success (a short code sketch follows this list)
  • Named after Swiss mathematician Jacob Bernoulli
  • Special case of binomial distribution with n = 1 trial
  • Support set {0, 1} represents two possible outcomes of the experiment
  • Models binary outcomes in various fields (genetics, quality control, medical testing)
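
As a quick illustration (not part of the original definition), the PMF above can be evaluated with a few lines of Python; the name bernoulli_pmf is just a label chosen for this sketch.

```python
def bernoulli_pmf(x: int, p: float) -> float:
    """P(X = x) = p^x * (1 - p)^(1 - x) for x in {0, 1}."""
    if x not in (0, 1):
        raise ValueError("x must be 0 or 1")
    return p**x * (1 - p) ** (1 - x)

# Fair coin (p = 0.5): success and failure are equally likely
print(bernoulli_pmf(1, 0.5))  # 0.5
print(bernoulli_pmf(0, 0.5))  # 0.5
```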

Applications and Examples

  • Coin flips model Bernoulli distribution with p = 0.5 for a fair coin
  • Quality control uses Bernoulli trials to test whether a product is defective (1) or not (0) (a simulation sketch follows this list)
  • Medical tests often follow Bernoulli distribution (positive result = 1, negative result = 0)
  • Election polls model voter preference as Bernoulli trials (support candidate = 1, not support = 0)
  • Email spam filters classify messages as spam (1) or not spam (0) using Bernoulli distribution
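
To make the quality-control example concrete, here is a rough simulation sketch using only Python's standard library; the 1% defect rate and the helper name bernoulli_trial are assumptions chosen for illustration.

```python
import random

def bernoulli_trial(p: float) -> int:
    """Return 1 (success, here: defective) with probability p, else 0."""
    return 1 if random.random() < p else 0

# Inspect 10,000 items with an assumed 1% defect rate
trials = [bernoulli_trial(0.01) for _ in range(10_000)]
print(sum(trials) / len(trials))  # observed defect fraction, close to 0.01
```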

Parameters of the Bernoulli Distribution

Key Parameters and Characteristics

  • Single parameter p represents probability of success (X = 1) in a single trial
  • Mean (expected value) E[X] = p, representing average outcome over many trials
  • Variance Var(X) = p(1-p) measures spread of distribution
  • Standard deviation σ = √(p(1 - p)) provides a measure of dispersion in the same units as the random variable (computed in the sketch after this list)
  • Mode depends on p value:
    • 1 if p > 0.5
    • 0 if p < 0.5
    • Both 0 and 1 if p = 0.5
  • Moment-generating function M_X(t) = 1 - p + pe^t can be used to derive moments and other properties of the distribution
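
The mean, variance, standard deviation, and mode listed above can be checked with a small sketch; bernoulli_summary is a hypothetical helper written for this guide, not a library function.

```python
import math

def bernoulli_summary(p: float) -> dict:
    """Mean, variance, standard deviation, and mode(s) of Bernoulli(p)."""
    mean = p                   # E[X] = p
    var = p * (1 - p)          # Var(X) = p(1 - p)
    std = math.sqrt(var)       # sigma = sqrt(p(1 - p))
    if p > 0.5:
        modes = [1]
    elif p < 0.5:
        modes = [0]
    else:
        modes = [0, 1]         # both outcomes equally likely when p = 0.5
    return {"mean": mean, "var": var, "std": std, "modes": modes}

print(bernoulli_summary(0.5))  # {'mean': 0.5, 'var': 0.25, 'std': 0.5, 'modes': [0, 1]}
```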

Examples and Applications of Parameters

  • Coin flip: p = 0.5, E[X] = 0.5, Var(X) = 0.25
  • Rolling a six on a fair die (6 = success): p = 1/6, E[X] = 1/6, Var(X) = 5/36
  • Quality control (1% defect rate): p = 0.01, E[X] = 0.01, Var(X) = 0.0099
  • Medical test that gives a correct result 90% of the time (correct result = success): p = 0.9, E[X] = 0.9, Var(X) = 0.09
  • Using the moment-generating function to find E[X^2]: E[X^2] = (d^2/dt^2) M_X(t) evaluated at t = 0, which equals p
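
The moment-generating-function calculation in the last bullet can be reproduced symbolically; this sketch assumes the sympy package is available.

```python
import sympy as sp

t, p = sp.symbols("t p")
M = 1 - p + p * sp.exp(t)          # MGF of Bernoulli(p)

# Second raw moment: differentiate twice and evaluate at t = 0
second_moment = sp.diff(M, t, 2).subs(t, 0)
print(sp.simplify(second_moment))  # p, i.e. E[X^2] = p
```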

Probabilities with Bernoulli Distribution

Calculating Basic Probabilities

  • Probability of success (X = 1) calculated as P(X = 1) = p
  • Probability of failure (X = 0) calculated as P(X = 0) = 1 - p
  • Cumulative distribution function (CDF), sketched in code after this list:
    • F(x) = 0 for x < 0
    • F(x) = 1 - p for 0 ≤ x < 1
    • F(x) = 1 for x ≥ 1
  • Use the PMF to calculate probabilities for specific outcomes: P(X = x) = p^x (1 - p)^(1 - x), where x is 0 or 1
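
A minimal sketch of the piecewise CDF above, assuming plain Python; bernoulli_cdf is just an illustrative name.

```python
def bernoulli_cdf(x: float, p: float) -> float:
    """F(x) = P(X <= x) for a Bernoulli(p) random variable."""
    if x < 0:
        return 0.0        # no probability mass below 0
    elif x < 1:
        return 1 - p      # only the failure outcome (0) lies at or below x
    else:
        return 1.0        # both outcomes lie at or below x

print(bernoulli_cdf(0.5, 0.3))  # 0.7
print(bernoulli_cdf(1.0, 0.3))  # 1.0
```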

Advanced Probability Calculations

  • Apply law of total probability for multiple Bernoulli trials or conditional probabilities
  • Utilize properties of expectation and variance to solve complex probability problems
  • Example: Probability of at least one success in three independent Bernoulli trials (checked by simulation in the sketch after this list)
    • P(at least one success) = 1 - P(all failures) = 1 - (1-p)^3
  • Conditional probability example: P(X = 1 | Y = 1) where X and Y are dependent Bernoulli variables
  • Use Bernoulli distribution to model rare events (small p) in large populations
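
The "at least one success" formula from the list above can be sanity-checked with a rough Monte Carlo sketch; the simulation size and the choice p = 0.3 are arbitrary assumptions for illustration.

```python
import random

def at_least_one_success(p: float, n_trials: int = 3, n_sims: int = 100_000) -> float:
    """Estimate P(at least one success in n_trials independent Bernoulli(p) trials)."""
    hits = sum(
        1 for _ in range(n_sims)
        if any(random.random() < p for _ in range(n_trials))
    )
    return hits / n_sims

p = 0.3
print(at_least_one_success(p))   # roughly 0.657
print(1 - (1 - p) ** 3)          # exact value: 0.657
```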