Cramér's theorem (large deviations)

Cramér's theorem is a fundamental result in the theory of large deviations, a subdiscipline of probability theory. It determines the rate function of a sequence of iid random variables. A weak version of this result was first proved by Harald Cramér in 1938.

Statement

The logarithmic moment generating function (which is the cumulant-generating function) of a random variable $X_1$ is defined as:

$$\Lambda(t) = \log \operatorname{E}[\exp(t X_1)].$$

Let $X_1, X_2, \dots$ be a sequence of iid real random variables with finite logarithmic moment generating function, i.e. $\Lambda(t) < \infty$ for all $t \in \mathbb{R}$.

Then the Legendre transform of $\Lambda$:

$$\Lambda^*(x) := \sup_{t \in \mathbb{R}} \left( t x - \Lambda(t) \right)$$

satisfies

$$\lim_{n \to \infty} \frac{1}{n} \log \operatorname{P}\left( \sum_{i=1}^n X_i \geq n x \right) = -\Lambda^*(x)$$

for all $x > \operatorname{E}[X_1]$.
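As a concrete illustration (not part of the original statement), the Legendre transform can be computed in closed form in the standard normal case:

```latex
% Worked example: X_1 \sim \mathcal{N}(0,1), so
% \Lambda(t) = \log \operatorname{E}[e^{tX_1}] = \tfrac{t^2}{2}.
% The supremum in the Legendre transform is attained at t = x:
\Lambda^*(x) = \sup_{t \in \mathbb{R}} \left( t x - \tfrac{t^2}{2} \right) = \tfrac{x^2}{2},
% so the theorem gives, for x > 0,
\operatorname{P}\left( \sum_{i=1}^n X_i \geq n x \right) = e^{-n x^2 / 2 + o(n)}.
```

This matches the familiar Gaussian tail decay, since $\sum_{i=1}^n X_i \sim \mathcal{N}(0, n)$ in this case.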

In the terminology of the theory of large deviations, the result can be reformulated as follows:

If $(X_i)_{i \in \mathbb{N}}$ is a sequence of iid random variables, then the distributions of $\left( \tfrac{1}{n} \sum_{i=1}^n X_i \right)_{n \in \mathbb{N}}$ satisfy a large deviation principle with rate function $\Lambda^*$.
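The Legendre transform above can also be approximated numerically. The following is a minimal sketch for the standard normal case, where the closed form $\Lambda^*(x) = x^2/2$ is known; the function names `cgf_gaussian` and `legendre_transform` are illustrative, not from any library, and the grid-search supremum is a deliberate simplification of a proper convex optimization.

```python
def cgf_gaussian(t, mu=0.0, sigma=1.0):
    # Cumulant-generating function of N(mu, sigma^2): Lambda(t) = mu*t + sigma^2*t^2/2
    return mu * t + 0.5 * (sigma ** 2) * (t ** 2)

def legendre_transform(cgf, x, t_grid):
    # Approximate Lambda*(x) = sup_t (t*x - Lambda(t)) by maximizing over a grid of t values
    return max(t * x - cgf(t) for t in t_grid)

# Grid of t values in [-5, 5] with step 0.001
t_grid = [i / 1000.0 for i in range(-5000, 5001)]

x = 1.5
approx = legendre_transform(cgf_gaussian, x, t_grid)
exact = x ** 2 / 2  # closed form for the standard normal
print(approx, exact)
```

For $x = 1.5$ the supremum is attained at $t = x = 1.5$, which lies on the grid, so the approximation agrees with the exact value $x^2/2 = 1.125$.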

