Dynkin system

A Dynkin system, named after Eugene Dynkin, is a collection of subsets of a universal set satisfying a set of axioms weaker than those of a σ-algebra. Dynkin systems are sometimes referred to as λ-systems (Dynkin himself used this term) or d-systems.[1] These set families have applications in measure theory and probability.

A major application of λ-systems is the π-λ theorem (see below).

Definitions

Let Ω be a nonempty set, and let D be a collection of subsets of Ω (i.e., D is a subset of the power set of Ω). Then D is a Dynkin system if

  1. Ω ∈ D,
  2. if A, B ∈ D and A ⊆ B, then B \ A ∈ D,
  3. if A1, A2, A3, ... is a sequence of subsets in D and An ⊆ An+1 for all n ≥ 1, then ⋃n≥1 An ∈ D.

Equivalently, D is a Dynkin system if

  1. Ω ∈ D,
  2. if A ∈ D, then its complement Ac ∈ D,
  3. if A1, A2, A3, ... is a sequence of subsets in D such that Ai ∩ Aj = ∅ for all i ≠ j, then ⋃n≥1 An ∈ D.

The second definition is generally preferred, as it is usually easier to check.

An important fact is that a Dynkin system which is also a π-system (i.e., closed under finite intersections) is a σ-algebra. This can be verified by noting that conditions 2 and 3 together with closure under finite intersections imply closure under countable unions.
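This implication can be checked by brute force on a small finite universe, where countable unions reduce to finite ones. The sketch below (the four-point universe and the example family are illustrative, not from the text) tests the second-definition Dynkin axioms, the π-system property, and the σ-algebra axioms:

```python
from itertools import combinations

# Illustrative finite universe.
OMEGA = frozenset({1, 2, 3, 4})

def is_dynkin(D, omega):
    """Check the second (complement / disjoint-union) set of axioms.
    On a finite universe, checking pairwise disjoint unions suffices,
    since larger disjoint unions follow by induction."""
    return (omega in D
            and all(omega - A in D for A in D)
            and all(A | B in D for A, B in combinations(D, 2) if not A & B))

def is_pi_system(D):
    """Closure under pairwise (hence finite) intersections."""
    return all(A & B in D for A, B in combinations(D, 2))

def is_sigma_algebra(D, omega):
    """Finite-universe sigma-algebra check: Omega, complements, unions."""
    return (omega in D
            and all(omega - A in D for A in D)
            and all(A | B in D for A, B in combinations(D, 2)))

# A family that is both a Dynkin system and a pi-system, and hence,
# as stated above, a sigma-algebra:
D = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), OMEGA}
print(is_dynkin(D, OMEGA), is_pi_system(D), is_sigma_algebra(D, OMEGA))
# → True True True
```

By contrast, a Dynkin system that is not a π-system (e.g. one containing {1,2} and {1,3} but not {1}) fails the σ-algebra check.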

Given any collection J of subsets of Ω, there exists a unique Dynkin system, denoted D(J), which is minimal with respect to containing J. That is, if D′ is any Dynkin system containing J, then D(J) ⊆ D′. D(J) is called the Dynkin system generated by J. Note that D(∅) = {∅, Ω}. For another example, let Ω = {1, 2, 3, 4} and J = {{1}}; then D(J) = {∅, {1}, {2, 3, 4}, Ω}.
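On a small finite universe the generated Dynkin system can be computed by brute-force closure under the axioms. A minimal sketch (the function name is illustrative, and the approach is exponential, so it is feasible only for tiny examples):

```python
from itertools import combinations

def dynkin_generated(J, omega):
    """Smallest Dynkin system containing J: start from J together with
    Omega, then iterate complements and disjoint unions to a fixed point."""
    omega = frozenset(omega)
    D = {omega} | {frozenset(A) for A in J}
    changed = True
    while changed:
        changed = False
        new = {omega - A for A in D}                                # complements
        new |= {A | B for A, B in combinations(D, 2) if not A & B}  # disjoint unions
        if not new <= D:
            D |= new
            changed = True
    return D

# Reproduces the example: Omega = {1,2,3,4}, J = {{1}}.
D = dynkin_generated([{1}], {1, 2, 3, 4})
print(sorted(tuple(sorted(s)) for s in D))
# → [(), (1,), (1, 2, 3, 4), (2, 3, 4)]
```

The fixed point is reached after one round here: the complements of Ω and {1} are ∅ and {2,3,4}, and every disjoint union of members already lies in the family.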

Dynkin's π-λ theorem

If P is a π-system and D is a Dynkin system with P ⊆ D, then σ(P) ⊆ D. In other words, the σ-algebra generated by P is contained in D.

One application of Dynkin's π-λ theorem is proving the uniqueness of a measure that evaluates the length of an interval (known as the Lebesgue measure):

Let (Ω, B, λ) be the unit interval [0,1] with the Lebesgue measure on the Borel sets. Let μ be another measure on Ω satisfying μ[(a,b)] = b − a, and let D be the family of sets S such that μ[S] = λ[S]. Let I = { (a,b), [a,b), (a,b], [a,b] : 0 < a ≤ b < 1 }, and observe that I is closed under finite intersections, that I ⊆ D, and that B is the σ-algebra generated by I. It may be shown that D satisfies the above conditions for a Dynkin system. From Dynkin's π-λ theorem it follows that D in fact includes all of B, which is equivalent to showing that the Lebesgue measure is unique on B.
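The key step, that the agreement family D is a Dynkin system, can be illustrated in a finite model. In the sketch below (the weights are illustrative), two probability measures on a four-point space disagree on some sets, yet the family of sets on which they agree satisfies the Dynkin axioms while failing to be a π-system; this is why agreement on a generating π-system is the essential hypothesis:

```python
from fractions import Fraction as F
from itertools import combinations

OMEGA = frozenset({1, 2, 3, 4})

def powerset(omega):
    xs = sorted(omega)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def mass(weights, S):
    """Measure of S as the sum of its point masses."""
    return sum(weights[x] for x in S)

# Two probability measures on a four-point space (illustrative weights).
mu = {1: F(1, 4), 2: F(1, 4), 3: F(1, 4), 4: F(1, 4)}
nu = {1: F(1, 2), 2: F(0), 3: F(0), 4: F(1, 2)}

# D = family of sets on which the two measures agree.
D = {S for S in powerset(OMEGA) if mass(mu, S) == mass(nu, S)}

# D satisfies the Dynkin axioms (second definition)...
assert OMEGA in D
assert all(OMEGA - A in D for A in D)
assert all(A | B in D for A, B in combinations(D, 2) if not A & B)

# ...but D is NOT a pi-system: {1,2} and {1,3} are in D, while their
# intersection {1} is not -- so D need not be a sigma-algebra.
print(sorted(tuple(sorted(s)) for s in D))
# → [(), (1, 2), (1, 2, 3, 4), (1, 3), (2, 4), (3, 4)]
```

Since the π-system in D here does not generate the full power set, the two measures are free to disagree outside D, exactly as the theorem predicts.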

Application to probability distributions

The π-λ theorem motivates the common definition of the probability distribution of a random variable X in terms of its cumulative distribution function. Recall that the cumulative distribution of a random variable is defined as

  FX(a) = P[X ≤ a],  a ∈ ℝ,

whereas the seemingly more general law of the variable is the probability measure

  LX(B) = P[X⁻¹(B)]  for all B ∈ B(ℝ),

where B(ℝ) is the Borel σ-algebra on ℝ. We say that the random variables X and Y (on two possibly different probability spaces) are equal in distribution (or law), written X =d Y, if they have the same cumulative distribution functions, FX = FY. The motivation for the definition stems from the observation that if FX = FY, then LX and LY agree on the π-system { (−∞, a] : a ∈ ℝ }, which generates B(ℝ), and so by the example above LX = LY.
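Concretely, the CDF alone already determines the probability of every half-open interval, since P[a < X ≤ b] = FX(b) − FX(a). A minimal sketch with a discrete random variable (values and weights are illustrative):

```python
from fractions import Fraction as F

# A discrete random variable: values mapped to probabilities (illustrative).
dist = {0: F(1, 10), 1: F(4, 10), 2: F(3, 10), 5: F(2, 10)}

def cdf(a):
    """F(a) = P[X <= a]: the probability of the pi-system set (-inf, a]."""
    return sum(p for v, p in dist.items() if v <= a)

def prob_interval(a, b):
    """P[a < X <= b], recovered from the CDF alone as F(b) - F(a)."""
    return cdf(b) - cdf(a)

print(prob_interval(0, 2))  # → 7/10, i.e. P[X = 1] + P[X = 2]
```

Exact `Fraction` arithmetic is used so equality checks are not muddied by floating-point rounding.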

A similar result holds for the joint distribution of a random vector. For example, suppose X and Y are two random variables defined on the same probability space (Ω, F, P), with respectively generated π-systems IX and IY. The joint cumulative distribution function of (X, Y) is

  FX,Y(a, b) = P[X ≤ a, Y ≤ b] = P[A ∩ B],

where A = X⁻¹((−∞, a]) ∈ IX and B = Y⁻¹((−∞, b]) ∈ IY. Since

  IX,Y = { A ∩ B : A ∈ IX, B ∈ IY }

is a π-system generated by the random pair (X, Y), the π-λ theorem is used to show that the joint cumulative distribution function suffices to determine the joint law of (X, Y). In other words, (X, Y) and (W, Z) have the same distribution if and only if they have the same joint cumulative distribution function.
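In the discrete case one can recover rectangle probabilities, and hence the joint law, from the joint CDF by inclusion-exclusion. A sketch with an illustrative 2×2 joint distribution:

```python
from fractions import Fraction as F

# A joint distribution on a 2x2 grid (illustrative weights, not independent).
pmf = {(0, 0): F(1, 8), (0, 1): F(3, 8), (1, 0): F(2, 8), (1, 1): F(2, 8)}

def joint_cdf(a, b):
    """F(a, b) = P[X <= a, Y <= b]: the pi-system of south-west quadrants."""
    return sum(p for (x, y), p in pmf.items() if x <= a and y <= b)

def rect_prob(a0, a1, b0, b1):
    """P[a0 < X <= a1, b0 < Y <= b1], by inclusion-exclusion on the CDF:
    F(a1,b1) - F(a0,b1) - F(a1,b0) + F(a0,b0)."""
    return (joint_cdf(a1, b1) - joint_cdf(a0, b1)
            - joint_cdf(a1, b0) + joint_cdf(a0, b0))

print(rect_prob(0, 1, 0, 1))  # → 1/4, which is P[X = 1, Y = 1]
```

Since every point mass is a limit of such rectangles, the joint CDF pins down the whole joint law here, mirroring the π-λ argument above.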

In the theory of stochastic processes, two processes (Xt)t∈T and (Yt)t∈T are known to be equal in distribution if and only if they agree on all finite-dimensional distributions; that is, (Xt1, …, Xtn) =d (Yt1, …, Ytn) for all t1, …, tn ∈ T and n ∈ ℕ.

The proof of this is another application of the π-λ theorem.[2]

Notes

  1. Aliprantis, Charalambos; Border, Kim C. (2006). Infinite Dimensional Analysis: A Hitchhiker's Guide (3rd ed.). Springer.
  2. Kallenberg, Olav. Foundations of Modern Probability. p. 48.

References

  • Gut, Allan (2005). Probability: A Graduate Course. New York: Springer. doi:10.1007/b138932. ISBN 0-387-22833-0.
  • Billingsley, Patrick (1995). Probability and Measure. New York: John Wiley & Sons, Inc. ISBN 0-471-00710-2.
  • Williams, David (2007). Probability with Martingales. Cambridge University Press. p. 193. ISBN 0-521-40605-6.

This article incorporates material from Dynkin system on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.

This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.