🔀 Stochastic Processes Unit 6 Review

6.4 Stationary distributions

Written by the Fiveable Content Team • Last updated September 2025

Stationary distributions are key to understanding Markov chains' long-term behavior. They represent the equilibrium state where probabilities remain constant over time. By analyzing stationary distributions, we can predict a system's steady-state performance and make informed decisions.

Computation methods like solving balance equations or using eigenvectors help find stationary distributions. These distributions have important properties, such as invariance under the Markov chain and representing long-run proportions of time in each state. Examples in birth-death processes, random walks, and queueing models illustrate their practical applications.

Definition of stationary distributions

  • A stationary distribution is a probability distribution that remains unchanged over time in a Markov chain or stochastic process
  • Represents the long-run behavior of the system, where the probabilities of being in each state remain constant
  • Formally, a distribution $\pi$ is stationary if it satisfies the equation $\pi P = \pi$, where $P$ is the transition matrix of the Markov chain
    • This means that if the system starts in the stationary distribution, it will remain in that distribution at all future time steps
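As a concrete check, the condition $\pi P = \pi$ can be verified numerically for a small chain. The two-state transition matrix below is a made-up example whose stationary distribution works out to $(2/3, 1/3)$:

```python
# Check the stationarity condition pi P = pi for a small two-state chain.
# Hypothetical transition matrix: from state 0 stay with prob 0.7,
# from state 1 move to state 0 with prob 0.6.
P = [[0.7, 0.3],
     [0.6, 0.4]]

# For this chain the stationary distribution solves
#   pi_0 = 0.7 pi_0 + 0.6 pi_1,  pi_0 + pi_1 = 1  =>  pi = (2/3, 1/3).
pi = [2 / 3, 1 / 3]

# pi P: multiply the row vector pi by the matrix P.
pi_P = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]

print(pi_P)  # each entry matches pi up to rounding
assert all(abs(pi_P[j] - pi[j]) < 1e-12 for j in range(2))
```

Starting the chain in this distribution leaves the state probabilities unchanged at every subsequent step.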

Existence and uniqueness

  • The existence and uniqueness of a stationary distribution depend on the properties of the Markov chain
    • Irreducible: it is possible to reach any state from any other state in a finite number of steps
    • Aperiodic: the chain does not have a periodic structure, meaning returns to a state are not restricted to fixed multiples of some period greater than 1
  • Every finite, irreducible Markov chain has a unique stationary distribution; if the chain is also aperiodic, it converges to that distribution from any starting state
  • A reducible chain may have multiple stationary distributions (one for each closed communicating class), while an infinite-state chain that is transient or null recurrent has none

Relationship to limiting distributions

  • The limiting distribution of a Markov chain describes the long-run behavior of the system: the distribution of the state after many steps, starting from any initial distribution
  • If the chain is irreducible and aperiodic with unique stationary distribution $\pi$, the distribution of the state converges to $\pi$ regardless of the initial state
  • The stationary distribution can be thought of as the equilibrium of the system, while the limiting distribution describes the convergence to that equilibrium over time
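One way to see this convergence is to raise the transition matrix to a high power: every row of $P^n$ approaches the stationary distribution. A minimal sketch with a made-up two-state chain whose stationary distribution is $(2/3, 1/3)$:

```python
# Rows of P^n converge to the stationary distribution as n grows.
P = [[0.7, 0.3],
     [0.6, 0.4]]

def matmul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Pn = P
for _ in range(50):   # compute P^51, far past convergence for this chain
    Pn = matmul(Pn, P)

print(Pn)  # both rows are close to (2/3, 1/3), whatever the starting state
```

Each row of $P^n$ is the distribution after $n$ steps from one starting state, so identical rows mean the initial state has been forgotten.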

Computation methods

Solving balance equations

  • Balance equations, also known as steady-state equations, can be used to find the stationary distribution of a Markov chain
  • The balance equations state that the total probability flow into each state must equal the total probability flow out of that state
  • To solve for the stationary distribution, set up a system of linear equations based on the balance equations and the normalization condition (i.e., the probabilities must sum to 1)
  • Solving this system of equations yields the stationary distribution
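The steps above can be sketched in code: write the balance equations as $(P^\top - I)\,\pi^\top = 0$, replace one redundant equation with the normalization condition, and solve the resulting linear system. The 3-state matrix is a hypothetical example, and NumPy is used for the linear algebra:

```python
import numpy as np

# Hypothetical 3-state transition matrix (irreducible: all entries positive).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

n = P.shape[0]
A = P.T - np.eye(n)   # each row is the balance equation for one state
A[-1, :] = 1.0        # balance equations are redundant; swap one for sum = 1
b = np.zeros(n)
b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(pi)             # stationary distribution, entries sum to 1
```

The balance equations alone have a one-dimensional solution space, which is why one of them can be replaced by the normalization condition without losing information.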

Using eigenvectors of transition matrix

  • The stationary distribution can also be computed using the eigenvectors of the transition matrix
  • The stationary distribution corresponds to the left eigenvector of the transition matrix associated with the eigenvalue 1
  • To find the stationary distribution:
    1. Compute the eigenvalues and left eigenvectors of the transition matrix
    2. Identify the eigenvector corresponding to the eigenvalue 1
    3. Normalize the eigenvector so that its entries sum to 1, yielding the stationary distribution
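The three steps above can be sketched with NumPy (same hypothetical 3-state matrix as before); left eigenvectors of $P$ are obtained as right eigenvectors of $P^\top$:

```python
import numpy as np

# Hypothetical 3-state transition matrix.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# Step 1: eigen-decompose P^T, whose right eigenvectors are
# the left eigenvectors of P.
eigvals, eigvecs = np.linalg.eig(P.T)

# Step 2: locate the eigenvalue 1 (up to floating-point error).
k = np.argmin(np.abs(eigvals - 1.0))
v = np.real(eigvecs[:, k])

# Step 3: normalize so the entries sum to 1.
pi = v / v.sum()
print(pi)
```

Dividing by the (possibly negative) sum also fixes the arbitrary sign that eigenvector routines return.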

Properties

Invariance under Markov chain

  • The stationary distribution is invariant under the action of the Markov chain
  • If the system starts in the stationary distribution, the distribution of states will remain the same at each subsequent time step
  • This property is a consequence of the definition of the stationary distribution, which satisfies $\pi P = \pi$

Long-run proportion of time in each state

  • The stationary distribution has an important interpretation in terms of the long-run behavior of the Markov chain
  • The entries of the stationary distribution vector $\pi$ represent the proportion of time the system spends in each state over the long run
  • Specifically, $\pi_i$ is the long-run proportion of time the system spends in state $i$, regardless of the initial state
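This interpretation can be checked by simulation: run the chain for many steps and compare the empirical time proportions with the stationary distribution. A sketch using a hypothetical two-state chain whose stationary distribution is $(2/3, 1/3)$:

```python
import random

random.seed(0)

# Hypothetical two-state chain with stationary distribution (2/3, 1/3).
P = [[0.7, 0.3],
     [0.6, 0.4]]

counts = [0, 0]
state = 0
steps = 200_000

for _ in range(steps):
    counts[state] += 1
    # Sample the next state from the current row of P.
    state = 0 if random.random() < P[state][0] else 1

proportions = [c / steps for c in counts]
print(proportions)  # close to (2/3, 1/3)
```

The empirical proportions approach $\pi$ even though the walk started deterministically in state 0, illustrating independence from the initial state.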

Examples

Birth-death processes

  • Birth-death processes are a class of Markov chains used to model population dynamics, queueing systems, and other applications
  • In a birth-death process, the system can only transition between adjacent states, representing births and deaths (or arrivals and departures)
  • The stationary distribution of a birth-death process can be found using the balance equations, which take into account the birth and death rates at each state
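For a discrete-time birth-death chain the balance equations telescope into a product formula, $\pi_n \propto \prod_{i=1}^{n} \lambda_{i-1}/\mu_i$, where $\lambda_i$ and $\mu_i$ are the birth and death probabilities. A sketch with made-up rates (each birth rate half the corresponding death rate):

```python
# Stationary distribution of a finite birth-death chain via the
# telescoped balance equations:  pi_n ∝ prod_{i=1..n} lambda_{i-1} / mu_i.
# Hypothetical rates for a 5-state chain (states 0..4):
birth = [1.0, 1.0, 1.0, 1.0]   # lambda_0 .. lambda_3
death = [2.0, 2.0, 2.0, 2.0]   # mu_1 .. mu_4

weights = [1.0]                # unnormalized pi_0
for lam, mu in zip(birth, death):
    weights.append(weights[-1] * lam / mu)

total = sum(weights)
pi = [w / total for w in weights]
print(pi)  # geometric-like decay, since each ratio lambda/mu equals 1/2
```

Because transitions only connect adjacent states, the flow balance between states $n$ and $n+1$ holds pairwise, which is what makes the product formula work.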

Random walks

  • Random walks are stochastic processes where the system moves randomly between states according to certain transition probabilities
  • Examples include the simple random walk on a line or a graph, where the system moves to neighboring states with equal probability
  • The stationary distribution of a random walk depends on the structure of the state space and the transition probabilities
    • For a simple random walk on a connected undirected graph, the stationary distribution is proportional to vertex degree, $\pi_v = \deg(v)/2|E|$; on a finite line with reflecting boundaries, the endpoints therefore carry half the weight of the interior states, and the distribution is uniform only when every state has the same degree (on a cycle, for example)
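For a simple random walk on a connected undirected graph, the stationary distribution is proportional to vertex degree, $\pi_v = \deg(v)/2|E|$. A sketch on a hypothetical path graph with four vertices:

```python
# Stationary distribution of a simple random walk on an undirected graph:
#   pi_v = deg(v) / (2 * |E|).
# Hypothetical path graph on 4 vertices:  0 - 1 - 2 - 3
edges = [(0, 1), (1, 2), (2, 3)]

deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1

two_E = 2 * len(edges)
pi = {v: d / two_E for v, d in deg.items()}
print(pi)  # endpoints get weight 1/6, interior vertices 2/6
```

The endpoint states are visited less often in the long run because only one edge leads into each of them.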

Queueing models

  • Queueing models describe systems where customers arrive, wait in a queue, and are served by one or more servers
  • Common queueing models include M/M/1 (single server with Poisson arrivals and exponential service times) and M/M/c (multiple servers)
  • The stationary distribution of a queueing model represents the long-run distribution of the number of customers in the system
    • For a stable M/M/1 queue with arrival rate $\lambda$, service rate $\mu$, and traffic intensity $\rho = \lambda/\mu < 1$, the stationary distribution is geometric: $\pi_n = (1-\rho)\rho^n$, so the probability of having $n$ customers in the system decreases exponentially with $n$
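The M/M/1 formulas are easy to sketch directly; the arrival and service rates below are hypothetical:

```python
# Stationary distribution of an M/M/1 queue: pi_n = (1 - rho) * rho**n,
# where rho = lambda / mu < 1 is the traffic intensity.
lam, mu = 2.0, 5.0      # hypothetical arrival and service rates
rho = lam / mu          # 0.4 < 1, so the queue is stable

def pi(n):
    """Long-run probability of having n customers in the system."""
    return (1 - rho) * rho ** n

print([pi(n) for n in range(5)])

# Mean number in system follows from the geometric distribution:
L = rho / (1 - rho)
print(L)
```

Performance measures like the mean number in system, $L = \rho/(1-\rho)$, fall out of the geometric form directly.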

Applications

Steady-state behavior of systems

  • Stationary distributions are crucial for understanding the steady-state behavior of systems modeled by Markov chains
  • In many applications, such as queueing systems, inventory management, and population dynamics, the long-run behavior is of primary interest
  • By analyzing the stationary distribution, one can determine key performance measures, such as the average number of customers in a queue or the long-run proportion of time a machine is idle

Ergodicity and convergence

  • Ergodicity is a property of Markov chains that ensures the long-run behavior of the system is independent of the initial state
  • If a Markov chain is ergodic (i.e., irreducible and aperiodic), it will converge to its unique stationary distribution over time
  • This convergence property is essential for the practical application of Markov chains, as it allows for the estimation of long-run averages and the design of efficient simulation algorithms

Simulation and sampling techniques

  • Stationary distributions play a key role in the design and analysis of simulation and sampling techniques for Markov chains
  • Markov chain Monte Carlo (MCMC) methods, such as the Metropolis-Hastings algorithm and Gibbs sampling, rely on constructing a Markov chain with a desired stationary distribution
  • By running the Markov chain for a sufficient number of steps, samples from the stationary distribution can be obtained, enabling the estimation of various quantities of interest
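The idea behind MCMC can be shown with a minimal Metropolis-Hastings sketch on a small discrete state space; the unnormalized target weights and the uniform proposal below are made-up choices, and the normalizing constant cancels in the acceptance ratio:

```python
import random

random.seed(1)

# Hypothetical unnormalized target on states {0, 1, 2, 3}:
# the true stationary distribution is [0.1, 0.2, 0.3, 0.4].
weights = [1, 2, 3, 4]
n_states = len(weights)

def propose(state):
    """Symmetric proposal: pick any state uniformly at random."""
    return random.randrange(n_states)

state = 0
counts = [0] * n_states
steps = 200_000

for _ in range(steps):
    cand = propose(state)
    # Accept with probability min(1, target(cand) / target(state));
    # only the ratio of weights is needed.
    if random.random() < min(1.0, weights[cand] / weights[state]):
        state = cand
    counts[state] += 1

est = [c / steps for c in counts]
print(est)  # close to [0.1, 0.2, 0.3, 0.4]
```

The acceptance rule is designed so that the constructed chain has exactly the target as its stationary distribution, so long-run visit frequencies estimate it.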

Stationary distributions vs limiting distributions

  • While stationary and limiting distributions are closely related, they are distinct concepts
  • A stationary distribution is a probability distribution that remains invariant under the action of the Markov chain
    • If the system starts in the stationary distribution, it will remain in that distribution at all future time steps
  • A limiting distribution, on the other hand, describes the long-run behavior of the Markov chain when starting from any initial distribution
    • If the chain is irreducible and aperiodic, this distribution converges to the unique stationary distribution over time
  • A Markov chain may have a stationary distribution but no limiting distribution (a periodic chain keeps oscillating rather than converging); conversely, whenever a limiting distribution exists, it is itself a stationary distribution

Extensions and generalizations

Quasi-stationary distributions

  • Quasi-stationary distributions are a generalization of stationary distributions for Markov chains with absorbing states
  • In a Markov chain with absorbing states, the system will eventually reach an absorbing state and remain there forever
  • A quasi-stationary distribution describes the long-run behavior of the system conditional on not being absorbed
    • It represents the distribution of states the system visits before absorption, given that it has not yet been absorbed
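One standard way to compute a quasi-stationary distribution is to restrict the transition matrix to the transient states and take the left eigenvector for the largest eigenvalue of that substochastic block; a NumPy sketch with a hypothetical chain whose state 2 is absorbing:

```python
import numpy as np

# Hypothetical chain with transient states {0, 1} and absorbing state 2:
#   P = [[0.5, 0.3, 0.2],
#        [0.4, 0.4, 0.2],
#        [0.0, 0.0, 1.0]]
# Substochastic block over the transient states (rows sum to less than 1):
Q = np.array([[0.5, 0.3],
              [0.4, 0.4]])

eigvals, eigvecs = np.linalg.eig(Q.T)
k = np.argmax(eigvals.real)          # Perron root, strictly less than 1
v = np.abs(np.real(eigvecs[:, k]))

qsd = v / v.sum()
print(qsd)  # conditional distribution over {0, 1} given no absorption yet
```

The corresponding eigenvalue (here 0.8) is the per-step survival probability once the conditioned chain has settled into the quasi-stationary regime.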

Stationary distributions in continuous time

  • The concept of stationary distributions can be extended to continuous-time Markov chains (CTMCs)
  • In a CTMC, the system evolves continuously over time, with transitions occurring according to exponentially distributed holding times
  • The stationary distribution of a CTMC satisfies the global balance equations, which are analogous to the balance equations in discrete-time Markov chains
    • The global balance equations state that the total rate of flow into each state must equal the total rate of flow out of that state
  • Computing the stationary distribution of a CTMC involves solving a system of linear equations based on the global balance equations and the normalization condition
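The CTMC computation mirrors the discrete-time case, with the generator matrix $\mathbf{Q}$ in place of $P$: solve $\pi \mathbf{Q} = 0$ together with $\sum_i \pi_i = 1$. A NumPy sketch with a hypothetical 3-state generator (off-diagonal entries are transition rates, rows sum to zero):

```python
import numpy as np

# Hypothetical 3-state generator matrix: rows sum to 0,
# off-diagonal entries are transition rates.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 2.0,  1.0, -3.0]])

n = Q.shape[0]
A = Q.T.copy()
A[-1, :] = 1.0       # the global balance equations are redundant;
b = np.zeros(n)      # swap one of them for the normalization sum = 1
b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(pi)            # pi Q = 0: rate of flow in equals rate of flow out
```

Each component of $\pi \mathbf{Q} = 0$ is exactly a global balance equation: the total rate into a state equals the total rate out.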