Kolmogorov equations

In probability theory, Kolmogorov equations, including Kolmogorov forward equations and Kolmogorov backward equations, characterize stochastic processes. In particular, they describe how the probability that a stochastic process is in a certain state changes over time.

Diffusion processes vs. jump processes

Writing in 1931, Andrei Kolmogorov started from the theory of discrete time Markov processes, which are described by the Chapman–Kolmogorov equation, and sought to derive a theory of continuous time Markov processes by extending this equation. He found that there are two kinds of continuous time Markov processes, depending on the assumed behavior over small intervals of time:

If you assume that "in a small time interval there is an overwhelming probability that the state will remain unchanged; however, if it changes, the change may be radical",[1] then you are led to what are called jump processes.

The other case leads to processes such as those "represented by diffusion and by Brownian motion; there it is certain that some change will occur in any time interval, however small; only, here it is certain that the changes during small time intervals will be also small".[1]

For each of these two kinds of processes, Kolmogorov derived a forward and a backward system of equations (four in all).
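
For orientation, the four equations can be sketched in modern notation (which differs from Kolmogorov's original 1931 formulation), assuming a time-homogeneous jump process with transition matrix P(t) and intensity matrix Q, and a time-homogeneous one-dimensional diffusion with drift \mu, diffusion coefficient \sigma^2 and transition density p(t, x, y) from the initial state x to the terminal state y:

    \frac{d}{dt} P(t) = P(t)\,Q    (jump process, forward)
    \frac{d}{dt} P(t) = Q\,P(t)    (jump process, backward)
    \frac{\partial p}{\partial t} = -\frac{\partial}{\partial y}\left[\mu(y)\,p\right] + \frac{1}{2}\,\frac{\partial^2}{\partial y^2}\left[\sigma^2(y)\,p\right]    (diffusion, forward)
    \frac{\partial p}{\partial t} = \mu(x)\,\frac{\partial p}{\partial x} + \frac{1}{2}\,\sigma^2(x)\,\frac{\partial^2 p}{\partial x^2}    (diffusion, backward)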

History

The equations are named after Andrei Kolmogorov since they were highlighted in his 1931 foundational work.[2]

William Feller, in 1949, used the names "forward equation" and "backward equation" for his more general version of Kolmogorov's pair, covering both jump and diffusion processes.[1] Much later, in 1956, he referred to the equations for the jump process as "Kolmogorov forward equations" and "Kolmogorov backward equations".[3]

Other authors, such as Motoo Kimura, referred to the diffusion (Fokker–Planck) equation as the Kolmogorov forward equation, a name that has persisted.[4]

The modern view

In modern terminology, the forward equation of a jump process is the master equation familiar from the natural sciences, and the forward equation of a diffusion process is the Fokker–Planck equation; the backward equations govern conditional expectations as functions of the initial time and state.

An example from biology

One example from biology is given below:[5]

\frac{\partial p_x(t)}{\partial t} = \beta\,(x-1)\,p_{x-1}(t) - \beta\,x\,p_x(t)

This equation models population growth with birth (a pure birth process), where x is the population index, x_0 is the initial population, \beta is the birth rate per individual, and p_x(t) is the probability of the population having size x at time t.

The analytical solution is:[5]

p_x(t) = \beta\,(x-1)\,e^{-\beta x t} \int_0^t e^{\beta x s}\,p_{x-1}(s)\,ds

This is a formula for the density p_x(t) in terms of the preceding one, i.e. p_{x-1}(t), so the probabilities can be computed recursively starting from p_{x_0}(t) = e^{-\beta x_0 t}.
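
As a concrete illustration, the following Python sketch (a minimal example, not code from the cited source) integrates the truncated forward (master) equation above with a simple Euler scheme and compares the result to the closed-form negative-binomial (Yule) distribution that solves the recursion; the parameters x0, beta, the truncation level x_max and the step size dt are illustrative choices.

import math

def forward_rhs(p, beta):
    # Right-hand side of the truncated master equation
    # dp_x/dt = beta*(x-1)*p_{x-1}(t) - beta*x*p_x(t)  (pure birth process).
    dp = [0.0] * len(p)
    for x in range(len(p)):
        gain = beta * (x - 1) * p[x - 1] if x >= 1 else 0.0
        loss = beta * x * p[x]
        dp[x] = gain - loss
    return dp

def integrate_master_equation(x0, beta, t_end, x_max=200, dt=1e-4):
    # Euler-integrate the forward equation on the truncated state space
    # {0, ..., x_max}; x_max is assumed large enough that the leaked mass is negligible.
    p = [0.0] * (x_max + 1)
    p[x0] = 1.0                        # the population starts at x0 with certainty
    for _ in range(int(t_end / dt)):
        dp = forward_rhs(p, beta)
        p = [pi + dt * dpi for pi, dpi in zip(p, dp)]
    return p

def yule_pmf(x, t, x0, beta):
    # Closed-form distribution of a pure birth (Yule) process started at x0:
    # a negative binomial shifted to x >= x0 (standard result, stated here for checking).
    if x < x0:
        return 0.0
    return (math.comb(x - 1, x0 - 1)
            * math.exp(-beta * x0 * t)
            * (1.0 - math.exp(-beta * t)) ** (x - x0))

if __name__ == "__main__":
    x0, beta, t = 2, 0.5, 1.0          # illustrative parameters
    p_num = integrate_master_equation(x0, beta, t)
    for x in range(x0, x0 + 6):
        print(x, round(p_num[x], 4), round(yule_pmf(x, t, x0, beta), 4))

With these parameters the two columns should agree closely, the small remaining discrepancy coming from the Euler step size and the truncation of the state space.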

References

  1. Feller, W. (1949). "On the Theory of Stochastic Processes, with Particular Reference to Applications", Proceedings of the (First) Berkeley Symposium on Mathematical Statistics and Probability, pp. 403–432.
  2. Kolmogorov, Andrei (1931). "Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung" ("On Analytical Methods in the Theory of Probability"), Mathematische Annalen, 104, 415–458.
  3. Feller, William (1957). "On Boundaries and Lateral Conditions for the Kolmogorov Differential Equations", Annals of Mathematics, 65 (3), 527–570.
  4. Kimura, Motoo (1957). "Some Problems of Stochastic Processes in Genetics", The Annals of Mathematical Statistics, 28 (4), 882–901. JSTOR 2237051.
  5. Logan, J. David and Wolesensky, William R. (2009). Mathematical Methods in Biology. Pure and Applied Mathematics: A Wiley-Interscience Series of Texts, Monographs, and Tracts. John Wiley & Sons. pp. 325–327.