Approximate inference
Approximate inference methods make it possible to learn realistic models from large datasets when exact learning and inference are computationally intractable, by trading accuracy for reduced computation time.
Major classes of methods
- Variational Bayesian methods
- Markov chain Monte Carlo
- Expectation propagation
- Markov random fields
- Bayesian networks
- Loopy and generalized belief propagation
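As an illustration of one class from the list above, the following is a minimal sketch of a Markov chain Monte Carlo method (a random-walk Metropolis-Hastings sampler). It draws approximate samples from a distribution known only up to its normalizing constant, which is the typical setting where exact inference is intractable. The function names and target density here are illustrative choices, not part of any standard API.

```python
import math
import random

def metropolis_hastings(log_unnorm_density, n_samples, x0=0.0, step=1.0, seed=0):
    """Sample from a 1-D distribution known only up to a constant,
    using a Gaussian random-walk proposal (Metropolis-Hastings)."""
    rng = random.Random(seed)
    x = x0
    log_p = log_unnorm_density(x)
    samples = []
    for _ in range(n_samples):
        # Propose a nearby point and accept it with the Metropolis ratio;
        # otherwise stay at the current point.
        x_new = x + rng.gauss(0.0, step)
        log_p_new = log_unnorm_density(x_new)
        if math.log(rng.random()) < log_p_new - log_p:
            x, log_p = x_new, log_p_new
        samples.append(x)
    return samples

# Illustrative target: a standard normal, passed in without its
# normalizing constant (log density -x^2/2 up to an additive constant).
samples = metropolis_hastings(lambda x: -0.5 * x * x, n_samples=20000)
mean = sum(samples) / len(samples)
```

Because the chain only ever evaluates density ratios, the unknown normalizing constant cancels; with enough samples, empirical averages over the chain approximate expectations under the target distribution.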
External links
- Tom Minka, Microsoft Research (Nov 2, 2009). "Machine Learning Summer School (MLSS), Cambridge 2009, Approximate Inference" (video lecture).