Learning vector quantization

In computer science, learning vector quantization (LVQ) is a prototype-based supervised classification algorithm. LVQ is the supervised counterpart of vector quantization systems.

Overview

LVQ can be understood as a special case of an artificial neural network, more precisely, it applies a winner-take-all Hebbian learning-based approach. It is a precursor to self-organizing maps (SOM) and related to neural gas, and to the k-nearest neighbor algorithm (k-NN). LVQ was invented by Teuvo Kohonen.[1]

An LVQ system is represented by prototypes which are defined in the feature space of observed data. In winner-take-all training algorithms one determines, for each data point, the prototype which is closest to the input according to a given distance measure. The position of this so-called winner prototype is then adapted, i.e. the winner is moved closer if it correctly classifies the data point or moved away if it classifies the data point incorrectly.

An advantage of LVQ is that it creates prototypes that are easy to interpret for experts in the respective application domain.[2] LVQ systems can be applied to multi-class classification problems in a natural way. It is used in a variety of practical applications. See the 'Bibliography on the Self-Organizing Map (SOM) and Learning Vector Quantization (LVQ)'.

A key issue in LVQ is the choice of an appropriate measure of distance or similarity for training and classification. Recently, techniques have been developed which adapt a parameterized distance measure in the course of training the system, see e.g. (Schneider, Biehl, and Hammer, 2009)[3] and references therein.
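As a simple illustration of such a parameterized metric (a sketch only, not the exact formulation of Schneider, Biehl, and Hammer, 2009), a diagonal relevance-weighted squared Euclidean distance attaches a non-negative relevance factor to each feature and adapts these factors during training together with the prototypes:

    import numpy as np

    def relevance_distance(x, w, lam):
        # Diagonal relevance-weighted squared Euclidean distance:
        #   d_lam(x, w) = sum_j lam_j * (x_j - w_j)^2
        # lam holds one non-negative relevance factor per feature
        # (typically normalized to sum to one).
        diff = x - w
        return np.dot(lam, diff * diff)

Features with large relevance factors then dominate the winner selection, while irrelevant features are effectively suppressed.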

LVQ has also been applied to the classification of text documents.

Algorithm

Below follows an informal description.
The algorithm consists of three basic steps. The algorithm's input is:

  • the number of neurons M the system will have (in the simplest case equal to the number of classes)
  • the weight vector w_i of each neuron i
  • the class label c_i assigned to each neuron i
  • the learning rate η, which determines how fast the neurons learn
  • and an input list L containing all the vectors whose labels are already known (the training set).

The algorithm's flow is:

  1. For the next input x (with label y) in L find the closest neuron w_m,
    i.e. d(x, w_m) = min_i d(x, w_i), where d is the metric used (Euclidean, etc.).
  2. Update w_m. Intuitively, the winner w_m is moved closer to the input x if x and w_m carry the same label, and further away if they do not:
    w_m ← w_m + η · (x − w_m) if c_m = y (closer together)
    or w_m ← w_m − η · (x − w_m) if c_m ≠ y (further apart).
  3. While there are vectors left in L go to step 1, else terminate.

Note: x and the w_i are vectors in feature space.
A more formal description can be found here: http://jsalatas.ictpro.gr/implementation-of-competitive-learning-networks-for-weka/
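A minimal sketch of the training loop described above, in the spirit of the basic LVQ1 rule (squared Euclidean distance, fixed learning rate; all function and variable names are illustrative rather than taken from any particular implementation):

    import numpy as np

    def train_lvq1(X, y, prototypes, proto_labels, eta=0.1, epochs=10):
        # X            : (n_samples, n_features) training vectors (the list L)
        # y            : (n_samples,) known labels of the training vectors
        # prototypes   : (M, n_features) initial weight vectors w_i
        # proto_labels : (M,) class label c_i of each neuron/prototype
        # eta          : learning rate
        W = prototypes.astype(float).copy()
        for _ in range(epochs):
            for x, label in zip(X, y):
                # Step 1: find the winner, i.e. the closest prototype.
                dists = np.sum((W - x) ** 2, axis=1)
                m = np.argmin(dists)
                # Step 2: move the winner towards or away from the input.
                if proto_labels[m] == label:
                    W[m] += eta * (x - W[m])   # same label: closer together
                else:
                    W[m] -= eta * (x - W[m])   # different label: further apart
        return W

    def classify(x, W, proto_labels):
        # Assign x the label of its nearest prototype.
        return proto_labels[np.argmin(np.sum((W - x) ** 2, axis=1))]

In practice the learning rate is usually decreased over the course of training, and the prototypes are commonly initialized from class-wise means or from randomly chosen training vectors.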


References

  1. T. Kohonen. Self-Organizing Maps. Springer, Berlin, 1997.
  2. T. Kohonen (1995), "Learning vector quantization", in M.A. Arbib (ed.), The Handbook of Brain Theory and Neural Networks, Cambridge, MA: MIT Press, pp. 537–540
  3. P. Schneider, B. Hammer, and M. Biehl (2009). "Adaptive Relevance Matrices in Learning Vector Quantization". Neural Computation. 21 (10): 3532–3561. CiteSeerX 10.1.1.216.1183. doi:10.1162/neco.2009.10-08-892. PMID 19635012.

Further reading

  • LVQ for WEKA: Implementation of LVQ variants (LVQ1, OLVQ1, LVQ2.1, LVQ3, OLVQ3) for the WEKA Machine Learning Workbench.
  • lvq_pak official release (1996) by Kohonen and his team
  • LVQ for WEKA: Another implementation of LVQ in Java for the WEKA Machine Learning Workbench.
  • GMLVQ toolbox: An easy-to-use implementation of Generalized Matrix LVQ (matrix relevance learning) in MATLAB.