Antithetic variates

In statistics, the antithetic variates method is a variance reduction technique used in Monte Carlo methods. Because the error of a Monte Carlo estimate decreases only with the square root of the number of sample paths, a very large number of paths is required to obtain an accurate result. The antithetic variates method reduces the variance of the simulation results.[1]

Underlying principle

The antithetic variates technique consists, for every sample path obtained, in taking its antithetic path: given a path \{\varepsilon_1, \dots, \varepsilon_M\}, one also takes \{-\varepsilon_1, \dots, -\varepsilon_M\}. The advantage of this technique is twofold: it reduces the number of normal samples to be taken to generate N paths, and it reduces the variance of the sample paths, improving the precision.

Suppose that we would like to estimate

\theta = E[h(X)] = E[Y].

For that we have generated two samples, Y_1 and Y_2. An unbiased estimate of \theta is given by

\hat{\theta} = \frac{Y_1 + Y_2}{2}.

And

\mathrm{Var}(\hat{\theta}) = \frac{\mathrm{Var}(Y_1) + \mathrm{Var}(Y_2) + 2\,\mathrm{Cov}(Y_1, Y_2)}{4},

so variance is reduced if \mathrm{Cov}(Y_1, Y_2) is negative: independent samples would give \mathrm{Cov}(Y_1, Y_2) = 0, while an antithetic construction makes the covariance negative.
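The paired estimator above can be sketched in a few lines of Python. This is a minimal illustration, not code from the source: the function name `antithetic_estimate` and the choice of U(0, 1) with the map u ↦ 1 − u in the usage example are my own assumptions.

```python
import random
import statistics
from typing import Callable, Tuple

def antithetic_estimate(sample_pair: Callable[[], Tuple[float, float]],
                        n_pairs: int) -> float:
    """Average (Y1 + Y2)/2 over n_pairs antithetic pairs.

    Each call to sample_pair() returns one pair (Y1, Y2) with identical
    marginal distributions; negative Cov(Y1, Y2) reduces the variance.
    """
    pair_means = [0.5 * sum(sample_pair()) for _ in range(n_pairs)]
    return statistics.fmean(pair_means)

# Usage sketch: Y = U with the antithetic pair (u, 1 - u); E[Y] = 1/2.
rng = random.Random(0)

def uniform_pair() -> Tuple[float, float]:
    u = rng.random()
    return (u, 1.0 - u)

estimate = antithetic_estimate(uniform_pair, n_pairs=1000)
print(estimate)  # exactly 0.5 here, since each pair mean is (u + 1 - u)/2
```

In this degenerate usage example the variance vanishes entirely, because each pair mean is constant; in general the reduction depends on how negative the covariance is.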

Example 1

If the law of the variable X follows a uniform distribution over [0, 1], the first sample will be u_1, \dots, u_n, where, for any given i, u_i is obtained from U(0, 1). The second sample is built from u'_1, \dots, u'_n, where, for any given i, u'_i = 1 - u_i. If the u_i are uniform over [0, 1], so are the 1 - u_i. Furthermore, the covariance \mathrm{Cov}(u_i, 1 - u_i) = -\mathrm{Var}(u_i) = -1/12 is negative, allowing for variance reduction.
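The negative covariance can be checked numerically. A quick sketch (sample size and seed are arbitrary choices of mine):

```python
import random

# Empirical check that Cov(u_i, 1 - u_i) is negative.
# Since 1 - U is a deterministic function of U, Cov(U, 1 - U) = -Var(U) = -1/12.
rng = random.Random(42)
n = 100_000
us = [rng.random() for _ in range(n)]
mean_u = sum(us) / n
# Sample covariance of (u, 1 - u); note (1 - u) - (1 - mean_u) = -(u - mean_u).
cov = sum((u - mean_u) * ((1.0 - u) - (1.0 - mean_u)) for u in us) / n
print(cov)  # close to -1/12 ≈ -0.0833
```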

Example 2: integral calculation

We would like to estimate

I = \int_0^1 \frac{1}{1+x}\,dx.

The exact result is I = \ln 2 \approx 0.69314718. This integral can be seen as the expected value of f(U), where

f(x) = \frac{1}{1+x}

and U follows a uniform distribution over [0, 1].

The following table compares the classical Monte Carlo estimate (sample size 2n, where n = 1500) to the antithetic variates estimate (sample size n, completed with the transformed sample 1 - u_i):

Method                 Estimate   Standard deviation
Classical estimate     0.69365    0.00255
Antithetic variates    0.69399    0.00063

The use of the antithetic variates method shows a substantial variance reduction: for the same total number of function evaluations, the standard deviation drops by roughly a factor of four.
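A short script can reproduce this comparison. The seed and implementation details below are my own choices, so the figures obtained will differ slightly from those in the table:

```python
import math
import random

def f(x: float) -> float:
    return 1.0 / (1.0 + x)

rng = random.Random(0)
n = 1500

# Classical Monte Carlo: 2n independent uniforms.
classical = sum(f(rng.random()) for _ in range(2 * n)) / (2 * n)

# Antithetic variates: n uniforms u_i, completed with the transformed 1 - u_i.
draws = [rng.random() for _ in range(n)]
antithetic = sum(f(u) + f(1.0 - u) for u in draws) / (2 * n)

print(classical, antithetic, math.log(2))  # both estimates near ln 2 ≈ 0.6931
```

Running this repeatedly with different seeds shows the antithetic estimate clustering much more tightly around ln 2 than the classical one, consistent with the standard deviations reported above.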


References

  1. Kroese, D. P.; Taimre, T.; Botev, Z. I. (2011). Handbook of Monte Carlo Methods. John Wiley & Sons. Chapter 9.3.