Convex function
In mathematics, a real-valued function defined on an n-dimensional interval is called convex if the line segment between any two points on the graph of the function does not lie below the graph between those two points. Equivalently, a function is convex if its epigraph (the set of points on or above the graph of the function) is a convex set. A twice-differentiable function of a single variable is convex if and only if its second derivative is nonnegative on its entire domain.[1] Well-known examples of convex functions of a single variable include the squaring function x² and the exponential function e^x. In simple terms, a convex function is shaped like a cup ∪, and a concave function is shaped like a cap ∩.
Convex functions play an important role in many areas of mathematics. They are especially important in the study of optimization problems where they are distinguished by a number of convenient properties. For instance, a strictly convex function on an open set has no more than one minimum. Even in infinite-dimensional spaces, under suitable additional hypotheses, convex functions continue to satisfy such properties and as a result, they are the most well-understood functionals in the calculus of variations. In probability theory, a convex function applied to the expected value of a random variable is always bounded above by the expected value of the convex function of the random variable. This result, known as Jensen's inequality, can be used to deduce inequalities such as the arithmetic–geometric mean inequality and Hölder's inequality.
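As a quick illustration of Jensen's inequality, the following sketch (a hypothetical NumPy snippet; the sample distribution and the choice f(x) = x² are arbitrary) compares f(E[X]) with E[f(X)] on simulated data.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)  # any random sample will do

f = lambda t: t ** 2  # a convex function

lhs = f(x.mean())   # f(E[X]), estimated from the sample mean
rhs = f(x).mean()   # E[f(X)], estimated from the sample

# Jensen's inequality predicts f(E[X]) <= E[f(X)]
print(f"f(E[X]) ~ {lhs:.3f} <= E[f(X)] ~ {rhs:.3f}")
```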
Convex down and convex up
In introductory-level math books, the term convex is often conflated with the opposite term concave: a convex function is referred to as "convex downward" or "concave upward", while a concave function is referred to as "convex upward" or "concave downward". However, the "up" and "down" modifiers are not universally used in mathematics, and they exist mostly to avoid confusing students with an extra term for concavity.
If the term convex is used without an "up" or "down" modifier, then it refers strictly to a cup-shaped graph. (For example, Jensen's inequality refers to an inequality involving a convex function and makes no mention of "convex up" or "convex down".)
Definition
Let X be a convex set in a real vector space and let f : X → R be a function.
- f is called convex if, for all x1, x2 in X and all t in [0, 1]: f(tx1 + (1 − t)x2) ≤ t f(x1) + (1 − t) f(x2) (a numerical check of this inequality is sketched after this list).
- f is called strictly convex if, for all x1 ≠ x2 in X and all t in (0, 1): f(tx1 + (1 − t)x2) < t f(x1) + (1 − t) f(x2).
- A function f is said to be (strictly) concave if −f is (strictly) convex.
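The defining inequality can be checked numerically on sample points. The sketch below is only an illustration (the helper is_convex_on_samples is made up for this purpose): a True result merely fails to find a counterexample, while a False result disproves convexity.

```python
import numpy as np

def is_convex_on_samples(f, xs, ts=np.linspace(0.0, 1.0, 21), tol=1e-12):
    """Test f(t*x1 + (1-t)*x2) <= t*f(x1) + (1-t)*f(x2) on all pairs from xs."""
    for x1 in xs:
        for x2 in xs:
            for t in ts:
                if f(t * x1 + (1 - t) * x2) > t * f(x1) + (1 - t) * f(x2) + tol:
                    return False
    return True

xs = np.linspace(-3, 3, 25)
print(is_convex_on_samples(np.exp, xs))            # True: exp is convex
print(is_convex_on_samples(lambda x: x ** 3, xs))  # False: x**3 is not convex on [-3, 3]
```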
Properties
Functions of one variable
- Suppose f is a function of one real variable defined on an interval, and let R(x1, x2) = (f(x2) − f(x1)) / (x2 − x1)
- (note that R(x1, x2) is the slope of the chord joining (x1, f(x1)) and (x2, f(x2)); the function R is symmetric in (x1, x2)). f is convex if and only if R(x1, x2) is monotonically non-decreasing in x1, for every fixed x2 (or vice versa). This characterization of convexity is quite useful for proving the following results.
- A convex function f of one real variable defined on some open interval C is continuous on C and admits left and right derivatives, and these are monotonically non-decreasing. As a consequence, f is differentiable at all but at most countably many points; the set on which f is not differentiable can, however, still be dense. If C is closed, then f may fail to be continuous at the endpoints of C (an example is shown in the examples section).
- A differentiable function of one variable is convex on an interval if and only if its derivative is monotonically non-decreasing on that interval. If a function is differentiable and convex then it is also continuously differentiable.
- A differentiable function of one variable is convex on an interval if and only if its graph lies above all of its tangents:[2]:69
- f(y) ≥ f(x) + f′(x)(y − x) for all x and y in the interval. Equivalently, a differentiable function of one variable is convex if and only if its epigraph is a convex set. In particular, if f′(c) = 0, then c is a global minimum of f.
- A twice differentiable function of one variable is convex on an interval if and only if its second derivative is non-negative there; this gives a practical test for convexity. Visually, a twice differentiable convex function "curves up", without any bends the other way (inflection points). If its second derivative is positive at all points then the function is strictly convex, but the converse does not hold. For example, the second derivative of f(x) = x⁴ is f′′(x) = 12x², which is zero for x = 0, but x⁴ is strictly convex.
- If f is a convex function of one real variable and f(0) ≤ 0, then f is superadditive on the positive reals: f(a + b) ≥ f(a) + f(b) for all a, b > 0.
- Proof. Since f is convex and f(0) ≤ 0, applying the defining inequality to the points a and 0 gives, for every t in [0, 1], f(ta) = f(ta + (1 − t)·0) ≤ t f(a) + (1 − t) f(0) ≤ t f(a).
- From this we have: f(a) + f(b) = f((a + b)·a/(a + b)) + f((a + b)·b/(a + b)) ≤ (a/(a + b)) f(a + b) + (b/(a + b)) f(a + b) = f(a + b). (A numerical check of this property is sketched after this list.)
- A function f is midpoint convex on an interval C if f((x + y)/2) ≤ (f(x) + f(y))/2 for all x and y in C.
- This condition is only slightly weaker than convexity. For example, a real-valued Lebesgue measurable function that is midpoint-convex is convex: this is a theorem of Sierpinski.[3] In particular, a continuous function that is midpoint convex will be convex.
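The superadditivity property above can be spot-checked numerically. The sketch below is illustrative only; the function f(x) = x² − 1 is an arbitrary convex choice with f(0) ≤ 0.

```python
import numpy as np

# Superadditivity of a convex f with f(0) <= 0: f(a + b) >= f(a) + f(b) for a, b > 0.
f = lambda x: x ** 2 - 1  # convex, with f(0) = -1 <= 0

rng = np.random.default_rng(1)
a, b = rng.uniform(0.01, 10.0, size=(2, 1000))
assert np.all(f(a + b) >= f(a) + f(b) - 1e-9)
print("superadditivity holds on all sampled pairs")
```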
Functions of several variables
- A twice continuously differentiable function of several variables is convex on a convex set if and only if its Hessian matrix of second partial derivatives is positive semidefinite on the interior of the convex set (a numerical check of this criterion is sketched after this list).
- Any local minimum of a convex function is also a global minimum. A strictly convex function will have at most one global minimum.[4]
- For a convex function the sublevel sets {x | f(x) < a} and {x | f(x) ≤ a} with a ∈ R are convex sets. However, a function whose sublevel sets are convex sets may fail to be a convex function. A function whose sublevel sets are convex is called a quasiconvex function.
- Jensen's inequality applies to every convex function f. If X is a random variable taking values in the domain of f, then f(E[X]) ≤ E[f(X)], where E denotes the mathematical expectation.
- A first-order homogeneous function of two positive variables x and y (i.e. f(ax, ay) = a f(x,y) for each a,x,y > 0) that is convex in one variable must be convex in the other variable.[5]
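The Hessian criterion above lends itself to a simple numerical check. The sketch below is a rough illustration (the helper hessian_is_psd and the quadratic example are made up): it inspects the smallest Hessian eigenvalue at a few sample points.

```python
import numpy as np

def hessian_is_psd(hess, points, tol=1e-9):
    """Check that hess(p) is positive semidefinite at each sample point p."""
    return all(np.linalg.eigvalsh(hess(p)).min() >= -tol for p in points)

# f(x, y) = x**2 + x*y + y**2 has the constant Hessian [[2, 1], [1, 2]].
hess = lambda p: np.array([[2.0, 1.0], [1.0, 2.0]])
points = [np.zeros(2), np.ones(2), np.array([3.0, -1.0])]
print(hessian_is_psd(hess, points))  # True: the eigenvalues are 1 and 3
```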
Operations that preserve convexity
- f is concave if and only if −f is convex.
- Nonnegative weighted sums:
- if w1, …, wn ≥ 0 and f1, …, fn are all convex, then so is w1f1 + ⋯ + wnfn. In particular, the sum of two convex functions is convex.
- this property extends to infinite sums, integrals and expected values as well (provided that they exist).
- Elementwise maximum: let {fi}, i ∈ I, be a collection of convex functions. Then their pointwise supremum g(x) = sup_{i ∈ I} fi(x) is convex. The domain of g is the collection of points where the expression is finite. Important special cases:
- If f and g are convex functions then so is m(x) = max{f(x), g(x)} (a numerical check is sketched after this list).
- If f(x, y) is convex in x then g(x) = sup_{y ∈ C} f(x, y) is convex in x, even if C is not a convex set.
- Composition:
- If f and g are convex functions and g is non-decreasing over a univariate domain, then h(x) = g(f(x)) is convex. For example, if f is convex, then so is exp(f(x)), because e^x is convex and monotonically increasing.
- If f is concave and g is convex and non-increasing over a univariate domain, then h(x) = g(f(x)) is convex.
- Convexity is invariant under affine maps: that is, if f is convex with domain Df ⊆ R^m, then so is g(x) = f(Ax + b), where A is an m×n matrix and b ∈ R^m, with domain Dg = {x ∈ R^n : Ax + b ∈ Df}.
- Minimization: If f(x, y) is convex in (x, y), then g(x) = inf_{y ∈ C} f(x, y) is convex in x, provided that C is a convex set and that g(x) > −∞ for some x.
- If f is convex, then its perspective g(x, t) = t f(x/t), with domain {(x, t) : x/t is in the domain of f, t > 0}, is convex.
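The maximum and composition rules above can also be spot-checked on random points. In this sketch (illustrative only; chord_gap is a made-up helper) the gap is non-negative exactly when the chord inequality holds at the sampled triple.

```python
import numpy as np

def chord_gap(f, x1, x2, t):
    """t*f(x1) + (1-t)*f(x2) - f(t*x1 + (1-t)*x2); non-negative for convex f."""
    return t * f(x1) + (1 - t) * f(x2) - f(t * x1 + (1 - t) * x2)

f1 = lambda x: (x - 1) ** 2             # convex
f2 = lambda x: np.abs(x)                # convex
h = lambda x: np.maximum(f1(x), f2(x))  # pointwise maximum of convex functions
g = lambda x: np.exp(f1(x))             # exp(f1): exp is convex and non-decreasing

rng = np.random.default_rng(2)
x1, x2 = rng.uniform(-2, 2, size=(2, 2000))
t = rng.uniform(0, 1, size=2000)
print(np.all(chord_gap(h, x1, x2, t) >= -1e-8))  # True
print(np.all(chord_gap(g, x1, x2, t) >= -1e-8))  # True
```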
Strongly convex functions
The concept of strong convexity extends and parametrizes the notion of strict convexity. A strongly convex function is also strictly convex, but not vice versa.
A differentiable function f is called strongly convex with parameter m > 0 if the following inequality holds for all points x, y in its domain:[6]
(∇f(x) − ∇f(y))ᵀ(x − y) ≥ m‖x − y‖² (with the Euclidean norm),
or, more generally,
⟨∇f(x) − ∇f(y), x − y⟩ ≥ m‖x − y‖²,
where ‖·‖ is any norm. Some authors, such as [7], refer to functions satisfying this inequality as elliptic functions.
An equivalent condition is the following:[8]
f(y) ≥ f(x) + ∇f(x)ᵀ(y − x) + (m/2)‖y − x‖² for all x, y in the domain.
It is not necessary for a function to be differentiable in order to be strongly convex. A third definition[8] for a strongly convex function, with parameter m, is that, for all x, y in the domain and t ∈ [0, 1],
f(tx + (1 − t)y) ≤ t f(x) + (1 − t) f(y) − (m/2) t(1 − t)‖x − y‖².
Notice that this definition approaches the definition for strict convexity as m → 0, and is identical to the definition of a convex function when m = 0. Despite this, functions exist that are strictly convex but are not strongly convex for any m > 0 (see example below).
If the function f is twice continuously differentiable, then it is strongly convex with parameter m if and only if ∇²f(x) ⪰ mI for all x in the domain, where I is the identity and ∇²f is the Hessian matrix, and the inequality ⪰ means that ∇²f(x) − mI is positive semi-definite. This is equivalent to requiring that the minimum eigenvalue of ∇²f(x) be at least m for all x. If the domain is just the real line, then ∇²f(x) is just the second derivative f′′(x), so the condition becomes f′′(x) ≥ m. If m = 0, then this means the Hessian is positive semidefinite (or, if the domain is the real line, that f′′(x) ≥ 0), which implies the function is convex, and perhaps strictly convex, but not strongly convex.
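As a rough numerical counterpart of this criterion, the sketch below (the helper and the quadratic example are invented for illustration) estimates the strong-convexity parameter as the smallest Hessian eigenvalue seen over sample points.

```python
import numpy as np

def strong_convexity_parameter(hess, points):
    """Smallest Hessian eigenvalue over the sampled points; a value <= 0 gives
    no evidence of strong convexity there."""
    return min(np.linalg.eigvalsh(hess(p)).min() for p in points)

# f(x) = x^T A x with A = [[2, 0], [0, 5]] has the constant Hessian 2A.
hess = lambda p: np.array([[4.0, 0.0], [0.0, 10.0]])
rng = np.random.default_rng(3)
points = [rng.uniform(-1, 1, 2) for _ in range(10)]
print(strong_convexity_parameter(hess, points))  # 4.0 -> strongly convex with m = 4
```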
Assuming still that the function is twice continuously differentiable, one can show that the lower bound on ∇²f implies that it is strongly convex. Using Taylor's theorem there exists
z ∈ {tx + (1 − t)y : t ∈ [0, 1]}
such that
f(y) = f(x) + ∇f(x)ᵀ(y − x) + (1/2)(y − x)ᵀ∇²f(z)(y − x).
Then
(y − x)ᵀ∇²f(z)(y − x) ≥ m‖y − x‖²
by the assumption about the eigenvalues, and hence we recover the second strong convexity equation above.
A function f is strongly convex with parameter m if and only if the function
x ↦ f(x) − (m/2)‖x‖²
is convex.
The distinction between convex, strictly convex, and strongly convex can be subtle at first glance. If f is twice continuously differentiable and the domain is the real line, then we can characterize it as follows:
- f convex if and only if f′′(x) ≥ 0 for all x.
- f strictly convex if f′′(x) > 0 for all x (note: this is sufficient, but not necessary).
- f strongly convex if and only if f′′(x) ≥ m > 0 for all x.
For example, let f be strictly convex, and suppose there is a sequence of points (xn) such that f′′(xn) = 1/n. Even though f′′(xn) > 0 at every point, the function is not strongly convex because f′′(x) becomes arbitrarily small.
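A concrete numerical instance of this phenomenon, using the exponential function (which reappears in the examples below):

```python
import numpy as np

# exp is strictly convex on the real line: its second derivative exp(x) is
# positive everywhere.  It is not strongly convex, because exp(x) -> 0 as
# x -> -infinity, so no fixed m > 0 stays below the second derivative.
xs = np.array([-1.0, -10.0, -100.0])
print(np.exp(xs))  # positive, but arbitrarily close to zero
```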
A twice continuously differentiable function f on a compact domain that satisfies f′′(x) > 0 for all x in the domain is strongly convex. The proof of this statement follows from the extreme value theorem, which states that a continuous function on a compact set has a maximum and minimum.
Strongly convex functions are in general easier to work with than convex or strictly convex functions, since they are a smaller class. Like strictly convex functions, strongly convex functions have unique minima on compact sets.
Uniformly convex functions
A uniformly convex function,[9][10] with modulus φ, is a function f that, for all x, y in the domain and t ∈ [0, 1], satisfies
f(tx + (1 − t)y) ≤ t f(x) + (1 − t) f(y) − t(1 − t) φ(‖x − y‖),
where φ is a function that is non-negative and vanishes only at 0. This is a generalization of the concept of strongly convex function; by taking φ(α) = (m/2)α² we recover the definition of strong convexity.
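For instance, f(x) = x² satisfies the inequality above with the strong-convexity modulus φ(α) = (m/2)α² for m = 2, in fact with equality; the following sketch (an illustrative check on random samples, not a general test) confirms this numerically.

```python
import numpy as np

f = lambda x: x ** 2
phi = lambda a: (2 / 2) * a ** 2  # (m/2) * a**2 with m = 2

rng = np.random.default_rng(4)
x, y = rng.uniform(-10, 10, size=(2, 1000))
t = rng.uniform(0, 1, size=1000)
gap = t * f(x) + (1 - t) * f(y) - f(t * x + (1 - t) * y) - t * (1 - t) * phi(np.abs(x - y))
print(np.allclose(gap, 0.0))  # True: x**2 is uniformly (indeed strongly) convex with m = 2
```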
Examples
Functions of one variable
- The function f(x) = x² has f′′(x) = 2 > 0 at all points, so f is a convex function. It is also strongly convex (and hence strictly convex too), with strong convexity constant 2.
- The function f(x) = x⁴ has f′′(x) = 12x² ≥ 0, so f is a convex function. It is strictly convex, even though the second derivative is not strictly positive at all points. It is not strongly convex (a numerical illustration is sketched after this list).
- The absolute value function is convex (as reflected in the triangle inequality), even though it does not have a derivative at the point x = 0. It is not strictly convex.
- The function f(x) = |x|^p for p ≥ 1 is convex.
- The exponential function f(x) = e^x is convex. It is also strictly convex, since f′′(x) = e^x > 0, but it is not strongly convex since the second derivative can be arbitrarily close to zero. More generally, the function g(x) = exp(f(x)) is logarithmically convex if f is a convex function. The term "superconvex" is sometimes used instead.[11]
- The function f with domain [0, 1] defined by f(0) = f(1) = 1 and f(x) = 0 for 0 < x < 1 is convex; it is continuous on the open interval (0, 1), but not continuous at 0 and 1.
- The function x³ has second derivative 6x; thus it is convex on the set where x ≥ 0 and concave on the set where x ≤ 0.
- Examples of functions that are monotonically increasing but not convex include f(x) = √x and g(x) = log x.
- Examples of functions that are convex but not monotonically increasing include h(x) = x² and k(x) = −x.
- The function f(x) = 1/x has f′′(x) = 2/x³, which is greater than 0 if x > 0, so f(x) is convex on the interval (0, +∞). It is concave on the interval (−∞, 0).
- The function f(x) = 1/x², with f(0) = +∞, is convex on the interval (0, +∞) and convex on the interval (−∞, 0), but not convex on the interval (−∞, +∞), because of the singularity at x = 0.
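The second-derivative behaviour behind the x⁴ example above can be displayed directly; the short sketch below is illustrative only.

```python
import numpy as np

# f(x) = x**4 has second derivative 12*x**2: non-negative everywhere (so f is
# convex) and zero only at x = 0, yet f is strictly convex.  Strong convexity
# fails because the infimum of the second derivative is 0.
xs = np.linspace(-2, 2, 9)
print(12 * xs ** 2)          # >= 0 everywhere, equal to 0 at x = 0
print((12 * xs ** 2).min())  # 0.0, so no m > 0 works
```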
Functions of n variables
- The LogSumExp function, also called the softmax function, is a convex function (a numerical check is sketched after this list).
- The function −log det(X) on the domain of positive-definite matrices is convex.[2]:74
- Every real-valued linear transformation is convex but not strictly convex, since if f is linear, then f(a + b) = f(a) + f(b). This statement also holds if we replace "convex" by "concave".
- Every real-valued affine function, i.e., each function of the form f(x) = aᵀx + b, is simultaneously convex and concave.
- Every norm is a convex function, by the triangle inequality and positive homogeneity.
- The spectral radius of a nonnegative matrix is a convex function of its diagonal elements.[12]
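The convexity of LogSumExp can be spot-checked on random points. The sketch below uses a hand-rolled, numerically stable implementation written here for illustration (it is not imported from a library).

```python
import numpy as np

def logsumexp(x):
    """Numerically stable log(sum(exp(x_i)))."""
    m = x.max()
    return m + np.log(np.exp(x - m).sum())

rng = np.random.default_rng(6)
violations = 0
for _ in range(2000):
    x, y = rng.normal(size=(2, 5))
    t = rng.uniform()
    if logsumexp(t * x + (1 - t) * y) > t * logsumexp(x) + (1 - t) * logsumexp(y) + 1e-9:
        violations += 1
print(violations)  # 0: no violation of the chord inequality found
```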
See also
- Concave function
- Convex optimization
- Convex conjugate
- Geodesic convexity
- Kachurovskii's theorem, which relates convexity to monotonicity of the derivative
- Logarithmically convex function
- Pseudoconvex function
- Quasiconvex function
- Invex function
- Subderivative of a convex function
- Jensen's inequality
- Karamata's inequality
- Hermite–Hadamard inequality
- K-convex function
Notes
- "Lecture Notes 2" (PDF). www.stat.cmu.edu. Retrieved 3 March 2017.
- Boyd, Stephen P.; Vandenberghe, Lieven (2004). Convex Optimization (pdf). Cambridge University Press. ISBN 978-0-521-83378-3. Retrieved October 15, 2011.
- Donoghue, William F. (1969). Distributions and Fourier Transforms. Academic Press. p. 12. ISBN 9780122206504. Retrieved August 29, 2012.
- "If f is strictly convex in a convex set, show it has no more than 1 minimum". Math StackExchange. 21 Mar 2013. Retrieved 14 May 2016.
- Altenberg, L., 2012. Resolvent positive linear operators exhibit the reduction phenomenon. Proceedings of the National Academy of Sciences, 109(10), pp.3705-3710.
- Dimitri Bertsekas (2003). Convex Analysis and Optimization. Contributors: Angelia Nedic and Asuman E. Ozdaglar. Athena Scientific. p. 72. ISBN 9781886529458.
- Philippe G. Ciarlet (1989). Introduction to numerical linear algebra and optimisation. Cambridge University Press. ISBN 9780521339841.
- Yurii Nesterov (2004). Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers. pp. 63–64. ISBN 9781402075537.
- C. Zalinescu (2002). Convex Analysis in General Vector Spaces. World Scientific. ISBN 9812380671.
- H. Bauschke and P. L. Combettes (2011). Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer. p. 144. ISBN 978-1-4419-9467-7.
- Kingman, J. F. C. (1961). "A Convexity Property of Positive Matrices". The Quarterly Journal of Mathematics. 12: 283–284. doi:10.1093/qmath/12.1.283.
- Cohen, J.E., 1981. Convexity of the dominant eigenvalue of an essentially nonnegative matrix. Proceedings of the American Mathematical Society, 81(4), pp.657-658.
References
- Bertsekas, Dimitri (2003). Convex Analysis and Optimization. Athena Scientific.
- Borwein, Jonathan, and Lewis, Adrian. (2000). Convex Analysis and Nonlinear Optimization. Springer.
- Donoghue, William F. (1969). Distributions and Fourier Transforms. Academic Press.
- Hiriart-Urruty, Jean-Baptiste, and Lemaréchal, Claude. (2004). Fundamentals of Convex analysis. Berlin: Springer.
- Krasnosel'skii M.A., Rutickii Ya.B. (1961). Convex Functions and Orlicz Spaces. Groningen: P.Noordhoff Ltd.
- Lauritzen, Niels (2013). Undergraduate Convexity. World Scientific Publishing.
- Luenberger, David (1984). Linear and Nonlinear Programming. Addison-Wesley.
- Luenberger, David (1969). Optimization by Vector Space Methods. Wiley & Sons.
- Rockafellar, R. T. (1970). Convex analysis. Princeton: Princeton University Press.
- Thomson, Brian (1994). Symmetric Properties of Real Functions. CRC Press.
- Zălinescu, C. (2002). Convex analysis in general vector spaces. River Edge, NJ: World Scientific Publishing Co., Inc. pp. xx+367. ISBN 981-238-067-1. MR 1921556.