In computer science, learning vector quantization (LVQ) is a prototype-based supervised classification algorithm. LVQ is the supervised counterpart of vector quantization systems.
LVQ can be understood as a special case of an artificial neural network; more precisely, it applies a winner-take-all Hebbian learning-based approach. It is a precursor to self-organizing maps (SOM) and related to neural gas and the k-nearest neighbor algorithm (k-NN). LVQ was invented by Teuvo Kohonen.
An LVQ system is represented by prototypes which are defined in the feature space of observed data. In winner-take-all training algorithms one determines, for each data point, the prototype which is closest to the input according to a given distance measure. The position of this so-called winner prototype is then adapted, i.e. the winner is moved closer if it correctly classifies the data point or moved away if it classifies the data point incorrectly.
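Written compactly for a winner prototype $w_m$, an input $x$, and a learning rate $\eta$ (the symbols are introduced here for illustration; this is the standard LVQ1 update):

$$w_m \leftarrow \begin{cases} w_m + \eta\,(x - w_m) & \text{if } w_m \text{ classifies } x \text{ correctly},\\ w_m - \eta\,(x - w_m) & \text{otherwise}. \end{cases}$$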
An advantage of LVQ is that it creates prototypes that are easy to interpret for experts in the respective application domain.
LVQ systems can be applied to multi-class classification problems in a natural way.
A key issue in LVQ is the choice of an appropriate measure of distance or similarity for training and classification. Recently, techniques have been developed which adapt a parameterized distance measure in the course of training the system; see, e.g., Schneider, Biehl, and Hammer (2009) and references therein.
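As an illustration, in matrix relevance LVQ (the setting of Schneider, Biehl, and Hammer, 2009) the fixed squared Euclidean distance is replaced by a parameterized quadratic form

$$d_\Lambda(x, w) = (x - w)^\top \Lambda\, (x - w), \qquad \Lambda = \Omega^\top \Omega,$$

where the factorization $\Lambda = \Omega^\top \Omega$ keeps the measure positive semi-definite and the matrix $\Omega$ is adapted by gradient steps together with the prototypes.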
LVQ has also been applied to the classification of text documents.
An informal description of the algorithm follows.
The algorithm consists of three basic steps. Its input is:
how many neurons $M$ the system will have (in the simplest case it is equal to the number of classes)
what weight $w_i$ each neuron has, for $i = 0, 1, \ldots, M-1$
the corresponding label $c_{w_i}$ to each neuron $w_i$
how fast the neurons are learning, i.e. the learning rate $\eta$
and an input list $L$ containing all the vectors of which the labels are known already (training set).
The algorithm's flow is:
1. For the next input $x$ (with label $y$) in $L$, find the closest neuron $w_m$, i.e. $d(x, w_m) = \min_i d(x, w_i)$, where $d$ is the distance measure in use.
2. Update $w_m$: the winner is pulled towards the input if it carries the correct label and pushed away otherwise, i.e. $w_m \leftarrow w_m + \eta \cdot (x - w_m)$ if $c_{w_m} = y$, and $w_m \leftarrow w_m - \eta \cdot (x - w_m)$ if $c_{w_m} \neq y$.
3. While there are vectors left in $L$, go to step 1; otherwise terminate.
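A minimal sketch of this procedure in Python/NumPy might look as follows; the function names, the per-class prototype initialisation, and the choice of squared Euclidean distance are illustrative assumptions rather than part of the description above.

```python
import numpy as np

def train_lvq1(X, y, prototypes_per_class=1, eta=0.1, epochs=20, seed=0):
    """Train an LVQ1 classifier on data X with integer labels y.

    Returns the prototype matrix W and the prototype labels c_w.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    # Initialise each prototype on a randomly chosen training point of
    # its class (one common heuristic; other initialisations are possible).
    protos, labels = [], []
    for c in classes:
        idx = rng.choice(np.flatnonzero(y == c),
                         size=prototypes_per_class, replace=False)
        protos.append(X[idx])
        labels.extend([c] * prototypes_per_class)
    W = np.vstack(protos).astype(float)
    c_w = np.array(labels)

    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            x, label = X[i], y[i]
            # Step 1: find the winner, the prototype closest to x
            # (squared Euclidean distance).
            m = np.argmin(np.sum((W - x) ** 2, axis=1))
            # Step 2: attract the winner if its label matches, repel it otherwise.
            if c_w[m] == label:
                W[m] += eta * (x - W[m])
            else:
                W[m] -= eta * (x - W[m])
    return W, c_w

def predict_lvq(W, c_w, X):
    """Assign each row of X the label of its nearest prototype."""
    d = np.sum((X[:, None, :] - W[None, :, :]) ** 2, axis=2)
    return c_w[np.argmin(d, axis=1)]

# Example usage on two synthetic Gaussian classes:
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(4.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
W, c_w = train_lvq1(X, y, eta=0.05)
print(predict_lvq(W, c_w, X[:5]))  # labels of the first five training points
```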