LogSumExp

The LogSumExp (LSE) function (also called RealSoftMax[1] or multivariable softplus) is a smooth maximum – a smooth approximation to the maximum function, mainly used by machine learning algorithms.[2] It is defined as the logarithm of the sum of the exponentials of the arguments:

$$\mathrm{LSE}(x_1, \dots, x_n) = \log\left(e^{x_1} + \cdots + e^{x_n}\right).$$

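As an illustration, here is a minimal, numerically naive sketch of this definition in Python (the name `logsumexp_naive` is ours, not a library routine; a numerically stable variant appears in the trick section below):

```python
import math

def logsumexp_naive(xs):
    """Direct evaluation of LSE(x_1, ..., x_n) = log(exp(x_1) + ... + exp(x_n))."""
    return math.log(sum(math.exp(x) for x in xs))

print(logsumexp_naive([1.0, 2.0, 3.0]))  # ~3.4076, slightly above max = 3
```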
In tropical analysis, this is the sum in the log semiring.

Properties

The LogSumExp function's domain is $\mathbb{R}^n$, the real coordinate space, and its range is $\mathbb{R}$, the real line. The approximation to the maximum improves as the largest argument increasingly dominates the others. The LogSumExp function is convex, and is strictly monotonically increasing everywhere in its domain[3] (but it is not strictly convex[4]).

LSE is a smooth maximum because, factoring out the largest term $x_k$,

$$\mathrm{LSE}(x_1, \dots, x_n) = x_k + \log\left(1 + \sum_{i \ne k} e^{x_i - x_k}\right).$$

If one term $x_k$ is much larger than the rest, each remaining exponential $e^{x_i - x_k}$ is small because it has $e^{x_k}$ in the denominator, so applying the tangent line approximation $\log(1 + \varepsilon) \approx \varepsilon$ one gets:

$$\mathrm{LSE}(x_1, \dots, x_n) \approx x_k = \max\{x_1, \dots, x_n\}.$$

Indeed, there are the following tight bounds (assuming $n > 1$; for $n = 1$ the first inequality becomes an equality):

$$\max\{x_1, \dots, x_n\} < \mathrm{LSE}(x_1, \dots, x_n) \le \max\{x_1, \dots, x_n\} + \log(n).$$

The upper bound is attained with equality if and only if all the $x_i$ are equal.

This is because

$$e^{\max_i x_i} \le \sum_{i=1}^{n} e^{x_i} \le n\, e^{\max_i x_i}.$$

The second inequality holds because a sum of $n$ terms is at most $n$ times its maximum term, and the first holds because a sum of positive numbers is at least any one of its terms, in particular the one attaining the maximum; in fact the first inequality is strict if $n > 1$, since the remaining exponentials are positive. Taking logarithms throughout then gives the bounds above.

The lower bound is met only for $n = 1$; otherwise it is strict, though it is approached in the limit as all but one of the arguments tend to negative infinity. The upper bound is met when all the arguments are equal.
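A short numerical check of these bounds (an illustrative sketch; `logsumexp` here already uses the stable shift described in the trick section below):

```python
import math

def logsumexp(xs):
    # stable evaluation: shift by the maximum (see the trick section below)
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

xs = [0.5, 1.0, 3.0]
lse = logsumexp(xs)
# max{x_i} < LSE(x) <= max{x_i} + log(n) for n > 1
assert max(xs) < lse <= max(xs) + math.log(len(xs))
print(lse)                         # ~3.1967
print(logsumexp([2.0, 2.0, 2.0]))  # 2 + log(3) ~ 3.0986: upper bound attained
```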

Writing $\mathbf{x} = (x_1, \dots, x_n)$, the partial derivatives are:

$$\frac{\partial}{\partial x_i} \mathrm{LSE}(\mathbf{x}) = \frac{e^{x_i}}{\sum_{j=1}^{n} e^{x_j}}.$$

This can be calculated via logarithmic differentiation.

Expressing the partial derivatives as a vector, the gradient of LSE is the softmax function, the multivariable analog of the logistic function:

$$\nabla \mathrm{LSE}(\mathbf{x}) = \mathrm{softmax}(\mathbf{x}).$$
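As a sanity check of this gradient identity, here is a small sketch comparing a finite-difference derivative of LSE with the softmax (illustrative code, with our own helper names):

```python
import math

def lse(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def softmax(xs):
    # gradient of LSE: exp(x_i) / sum_j exp(x_j), computed with a max shift
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

xs, h = [0.1, -0.4, 2.0], 1e-6
grad_fd = [(lse(xs[:i] + [xs[i] + h] + xs[i+1:]) - lse(xs)) / h
           for i in range(len(xs))]
print(grad_fd)      # finite differences, numerically close to...
print(softmax(xs))  # ...the softmax components (which sum to 1)
```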

The convex conjugate of LogSumExp is the negative entropy.
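Concretely (a standard computation in convex analysis, stated here for completeness), the conjugate is the negative entropy restricted to the probability simplex:

$$\mathrm{LSE}^*(\mathbf{p}) = \begin{cases} \sum_{i=1}^{n} p_i \log p_i, & \text{if } p_i \ge 0 \text{ and } \sum_{i=1}^{n} p_i = 1, \\ +\infty, & \text{otherwise.} \end{cases}$$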

log-sum-exp trick for log-domain calculations

The LSE function is often encountered when the usual arithmetic computations are performed on a logarithmic scale, as in log probability.

Just as multiplication in the linear scale becomes simple addition in the log scale, addition in the linear scale becomes LSE in the log scale: if $a = e^{x}$ and $b = e^{y}$, then $\log(a + b) = \mathrm{LSE}(x, y)$.

A common purpose of using log-domain computations is to increase accuracy and avoid underflow and overflow problems when very small or very large numbers are represented directly (i.e., in the linear domain) using limited-precision floating-point numbers.

Unfortunately, using LSE directly in this case can again cause overflow/underflow problems, so the following mathematically equivalent formula must be used instead (especially when the accuracy of the above 'max' approximation is not sufficient). Many math libraries, such as IT++, therefore provide a default routine for LSE and use this formula internally.

$$\mathrm{LSE}(x_1, \dots, x_n) = x^* + \log\left(\sum_{i=1}^{n} e^{x_i - x^*}\right),$$

where $x^* = \max\{x_1, \dots, x_n\}$.
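A sketch of the trick in practice (illustrative Python, not the routine of any particular library): shifting by $x^*$ makes the largest exponent $e^0 = 1$, so the intermediate sum can neither overflow nor vanish entirely.

```python
import math

def logsumexp_stable(xs):
    # log-sum-exp trick: subtract x* = max(xs) so the largest
    # exponent is exp(0) = 1, which cannot overflow
    x_star = max(xs)
    return x_star + math.log(sum(math.exp(x - x_star) for x in xs))

xs = [1000.0, 1000.5, 999.0]
# naive evaluation fails here: math.exp(1000.0) raises OverflowError
print(logsumexp_stable(xs))  # ~1001.10, computed without overflow
```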

A strictly convex log-sum-exp type function

LSE is convex but not strictly convex. We can define a strictly convex log-sum-exp type function[5] by adding an extra argument set to zero:

$$\mathrm{LSE}_0^+(x_1, \dots, x_n) := \mathrm{LSE}(0, x_1, \dots, x_n).$$

This function is a proper Bregman generator (strictly convex and differentiable). It is encountered in machine learning, for example, as the cumulant of the multinomial/binomial family.
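A minimal sketch of this variant (the name `lse0` is ours), reusing the stable evaluation above; note that for a single argument it reduces to the softplus function $\log(1 + e^x)$:

```python
import math

def lse0(xs):
    # strictly convex variant: prepend an argument fixed at zero,
    # i.e. LSE0+(x_1, ..., x_n) = LSE(0, x_1, ..., x_n)
    ys = [0.0] + list(xs)
    m = max(ys)
    return m + math.log(sum(math.exp(y - m) for y in ys))

# single argument: lse0([x]) = log(1 + e^x), the softplus function
print(lse0([2.0]), math.log(1 + math.exp(2.0)))  # both ~2.1269
```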


References

  1. Zhang, Aston; Lipton, Zack; Li, Mu; Smola, Alex. "Dive into Deep Learning, Chapter 3 Exercises". www.d2l.ai. Retrieved 27 June 2020.
  2. Nielsen, Frank; Sun, Ke (2016). "Guaranteed bounds on the Kullback-Leibler divergence of univariate mixtures using piecewise log-sum-exp inequalities". Entropy. 18: 442. arXiv:1606.05850. Bibcode:2016Entrp..18..442N. doi:10.3390/e18120442.
  3. El Ghaoui, Laurent (2017). Optimization Models and Applications.
  4. "convex analysis - About the strictly convexity of log-sum-exp function - Mathematics Stack Exchange". stackexchange.com.
  5. Nielsen, Frank; Hadjeres, Gaetan (2018). "Monte Carlo Information Geometry: The dually flat case". arXiv:1803.07225. Bibcode:2018arXiv180307225N.

