Rosenbrock function

In mathematical optimization, the Rosenbrock function is a non-convex function, introduced by Howard H. Rosenbrock in 1960, which is used as a performance test problem for optimization algorithms.[1] It is also known as Rosenbrock's valley or Rosenbrock's banana function.

Plot of the Rosenbrock function of two variables. Here a = 1 and b = 100, and the minimum value of zero is at (1, 1).

The global minimum is inside a long, narrow, parabolic-shaped flat valley. Finding the valley is trivial; converging to the global minimum, however, is difficult.

The function is defined by

f(x, y) = (a - x)^2 + b(y - x^2)^2

It has a global minimum at (x, y) = (a, a^2), where f(x, y) = 0. Usually these parameters are set such that a = 1 and b = 100. Only in the trivial case where a = 0 is the function symmetric and the minimum at the origin.
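With the common parameter choice a = 1 and b = 100, the definition above translates directly into a few lines of Python (the function name is an arbitrary choice):

```python
def rosenbrock(x, y, a=1.0, b=100.0):
    """2D Rosenbrock function: f(x, y) = (a - x)^2 + b*(y - x^2)^2."""
    return (a - x) ** 2 + b * (y - x ** 2) ** 2

# The global minimum f(a, a^2) = 0 lies at (1, 1) for a = 1:
print(rosenbrock(1.0, 1.0))  # 0.0
print(rosenbrock(0.0, 0.0))  # 1.0 -- the valley floor rises slowly away from the minimum
```

Note that the first term penalizes distance from x = a while the second, much more heavily weighted term forces the point onto the parabola y = x^2, which is what produces the curved valley.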

Multidimensional generalisations

Two variants are commonly encountered.

Animation of Rosenbrock's function of three variables. [2]

One is the sum of uncoupled 2D Rosenbrock problems, and is defined only for even N:

f(x) = f(x_1, x_2, ..., x_N) = sum_{i=1}^{N/2} [100(x_{2i-1}^2 - x_{2i})^2 + (x_{2i-1} - 1)^2] [3]

This variant has predictably simple solutions.
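The uncoupled variant can be transcribed directly; a sketch in Python with NumPy (the function name is illustrative):

```python
import numpy as np

def rosenbrock_uncoupled(x):
    """Sum of N/2 independent 2D Rosenbrock problems; len(x) must be even."""
    x = np.asarray(x, dtype=float)
    assert x.size % 2 == 0, "defined only for even N"
    odd = x[0::2]   # x_1, x_3, ... (1-based indexing in the formula)
    even = x[1::2]  # x_2, x_4, ...
    return float(np.sum(100.0 * (odd ** 2 - even) ** 2 + (odd - 1.0) ** 2))

# Each independent 2D subproblem is minimized at (1, 1),
# so the global minimum is at the vector of all ones:
print(rosenbrock_uncoupled([1.0, 1.0, 1.0, 1.0]))  # 0.0
```

Because the subproblems do not interact, the minimizer of the whole sum is simply the concatenation of the 2D minimizers, which is why the solutions are predictably simple.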

A second, more involved variant is

f(x) = sum_{i=1}^{N-1} [100(x_{i+1} - x_i^2)^2 + (1 - x_i)^2] [4]

This variant has exactly one minimum for N = 3 (at (1, 1, 1)) and exactly two minima for 4 <= N <= 7: the global minimum at the point of all ones and a local minimum near x = (-1, 1, ..., 1). This result is obtained by setting the gradient of the function equal to zero and noticing that the resulting equation is a rational function of x. For small N the polynomials can be determined exactly, and Sturm's theorem can be used to determine the number of real roots, while the roots can be bounded in the region of |x_i| < 2.4.[5] For larger N this method breaks down due to the size of the coefficients involved.
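A sketch of the coupled variant in Python; evaluating it at the vector of all ones and at the point (-1, 1, ..., 1) illustrates the two candidate minima (the true local minimizer lies near, not exactly at, the second point):

```python
import numpy as np

def rosenbrock_coupled(x):
    """Extended Rosenbrock: sum_{i=1}^{N-1} 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

print(rosenbrock_coupled(np.ones(7)))                  # 0.0 -- the global minimum
print(rosenbrock_coupled([-1, 1, 1, 1, 1, 1, 1]))      # 4.0 -- near the local minimum
```

Unlike the uncoupled variant, each term here links consecutive coordinates, so the minimizers cannot be read off component by component.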

Stationary points

Many of the stationary points of the function exhibit a regular pattern when plotted.[5] This structure can be exploited to locate them.
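Stationary points are found by setting the gradient to zero; a minimal sketch that evaluates a hand-derived analytic gradient of the coupled variant (the derivation is an assumption to verify, not taken from the cited work):

```python
import numpy as np

def rosenbrock_grad(x):
    """Hand-derived gradient of sum_{i=1}^{N-1} 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    # Contribution of each term to its "x_i" coordinate:
    g[:-1] += -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
    # Contribution of each term to its "x_{i+1}" coordinate:
    g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
    return g

# The vector of all ones is stationary (it is the global minimum):
print(np.allclose(rosenbrock_grad(np.ones(5)), 0.0))  # True
```

Candidate stationary points from a numerical search can be screened by checking that this gradient is (numerically) zero there.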

Rosenbrock roots exhibiting hump structures

Optimization examples

Nelder-Mead method applied to the Rosenbrock function

The Rosenbrock function can be efficiently optimized by adapting the coordinate system, without using any gradient information and without building local approximation models (in contrast to many derivative-free optimizers). The following figure illustrates an example of optimizing the 2-dimensional Rosenbrock function by adaptive coordinate descent from a given starting point. A solution with a near-zero function value can be found after 325 function evaluations.

Using the Nelder–Mead method from a given starting point with a regular initial simplex, a minimum with a small function value is found after 185 function evaluations. The figure below visualizes the evolution of the algorithm.
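A runnable sketch of the Nelder–Mead approach using SciPy's implementation; the starting point (-1.2, 1) and the tolerances below are illustrative choices, not the settings used for the figure:

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(v):
    """2D Rosenbrock with a = 1, b = 100."""
    x, y = v
    return (1.0 - x) ** 2 + 100.0 * (y - x ** 2) ** 2

# Derivative-free simplex search from the classic test start (-1.2, 1):
res = minimize(rosenbrock, x0=[-1.2, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
print(res.x, res.fun)  # converges to (1, 1) with a near-zero function value
```

The simplex repeatedly reflects and contracts as it follows the curved valley floor, which is why the method needs many function evaluations but no derivatives.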


References

  1. Rosenbrock, H.H. (1960). "An automatic method for finding the greatest or least value of a function". The Computer Journal. 3 (3): 175–184. doi:10.1093/comjnl/3.3.175. ISSN 0010-4620.
  2. Simionescu, P.A. (2014). Computer Aided Graphing and Simulation Tools for AutoCAD users (1st ed.). Boca Raton, FL: CRC Press. ISBN 978-1-4822-5290-3.
  3. Dixon, L. C. W.; Mills, D. J. (1994). "Effect of Rounding Errors on the Variable Metric Method". Journal of Optimization Theory and Applications. 80: 175–179. doi:10.1007/BF02196600.
  4. "Generalized Rosenbrock's function". Retrieved 2008-09-16.
  5. Kok, Schalk; Sandrock, Carl (2009). "Locating and Characterizing the Stationary Points of the Extended Rosenbrock Function". Evolutionary Computation. 17 (3): 437–53. doi:10.1162/evco.2009.17.3.437. hdl:2263/13845. PMID 19708775.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.