Competitive learning is a form of unsupervised learning in artificial neural networks, in which nodes compete for the right to respond to a subset of the input data. A variant of Hebbian learning, competitive learning works by increasing the specialization of each node in the network. It is well suited to finding clusters within data.
Models and algorithms based on the principle of competitive learning include vector quantization and self-organizing maps (Kohonen maps).
There are three basic elements to a competitive learning rule:
A set of neurons that are all the same except for some randomly distributed synaptic weights, and which therefore respond differently to a given set of input patterns
A limit imposed on the "strength" of each neuron
A mechanism that permits the neurons to compete for the right to respond to a given subset of inputs, such that only one output neuron (or only one neuron per group) is active (i.e. "on") at a time. The neuron that wins the competition is called a "winner-take-all" neuron.
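The three elements above can be sketched as a single winner-take-all step. The sketch below uses NumPy; the weight initialization and the one-active-neuron rule follow the description, while the array sizes and the random seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A set of neurons identical except for randomly distributed weights,
# so each responds differently to a given input pattern.
n_neurons, n_inputs = 4, 3
weights = rng.normal(size=(n_neurons, n_inputs))

def compete(x, weights):
    """Winner-take-all: only the neuron closest to x turns 'on'."""
    distances = np.linalg.norm(weights - x, axis=1)
    winner = int(np.argmin(distances))
    outputs = np.zeros(len(weights))
    outputs[winner] = 1.0  # the single active ("on") neuron
    return winner, outputs

x = rng.normal(size=n_inputs)
winner, outputs = compete(x, weights)
print(winner, outputs)
```

Exactly one entry of `outputs` is 1; all others are 0, mirroring the "only one output neuron is active at a time" rule.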
Accordingly, the individual neurons of the network learn to specialize on ensembles of similar patterns and in so doing become 'feature detectors' for different classes of input patterns.
By recoding sets of correlated inputs to one of a few output neurons, competitive networks essentially remove redundancy in the representation, an operation that is an essential part of processing in biological sensory systems.
Competitive learning is usually implemented with neural networks that contain a hidden layer commonly known as the "competitive layer". Every competitive neuron m is described by a vector of weights w_m = (w_m1, ..., w_md)^T, m = 1, ..., M, and calculates a similarity measure between the input data x = (x_1, ..., x_d)^T ∈ R^d and the weight vector w_m.
For every input vector, the competitive neurons "compete" with each other to see which one of them is the most similar to that particular input vector. The winner neuron m sets its output o_m = 1, and all the other competitive neurons set their outputs o_i = 0, i = 1, ..., M, i ≠ m.
Usually, in order to measure similarity the inverse of the Euclidean distance between the input vector x and the weight vector w_m is used, i.e. 1 / ||x − w_m||, so the winner is the neuron that minimizes ||x − w_m||.
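A minimal training loop following this formulation can be sketched as follows. The winner is the neuron minimizing the Euclidean distance, and only the winner (output 1) updates its weights; the standard update of moving the winner toward the input, the learning rate, the two-cluster data, and initializing weights from random data points are illustrative assumptions not fixed by the text above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two illustrative clusters; in practice x would come from your data set.
data = np.vstack([rng.normal(0.0, 0.1, size=(50, 2)),
                  rng.normal(5.0, 0.1, size=(50, 2))])

n_neurons = 2
# Initialize weight vectors from randomly chosen data points (assumed choice).
weights = data[rng.choice(len(data), size=n_neurons, replace=False)].copy()
eta = 0.1  # learning rate (assumed value)

for epoch in range(20):
    for x in data:
        # Similarity is the inverse Euclidean distance, so the winner
        # is the neuron m minimizing ||x - w_m||.
        m = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
        # Only the winner (o_m = 1) adapts; all losers (o_i = 0) stay put.
        weights[m] += eta * (x - weights[m])

# After training, each neuron specializes on one cluster of inputs.
print(np.round(weights, 1))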
Learning is observable in animal and human behavior, but learning is also a topic of computer science. This course links algorithms from machine learning with biological phenomena of synaptic plasticity.
Neural gas is an artificial neural network, inspired by self-organizing maps and introduced in 1991 by Thomas Martinetz and Klaus Schulten. Neural gas is a simple algorithm for finding an optimal representation of data based on feature vectors. The method was called "neural gas" because the movement of the feature vectors during the learning step resembles a gas spreading uniformly through a space.
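The neural gas idea can be sketched in a few lines: for each input, every unit is ranked by its distance to the input, and all units adapt with a strength that decays with their rank, so the units drift through the space like gas particles. The uniform toy data, learning rate, and neighborhood range below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.uniform(0, 1, size=(200, 2))  # illustrative data to be covered

n_units = 5
weights = rng.uniform(0, 1, size=(n_units, 2))
eta, lam = 0.1, 1.0  # learning rate and neighborhood range (assumed values)

for x in data:
    # Rank every unit by its distance to the input; there is no fixed
    # grid as in a self-organizing map, hence the "gas" picture.
    dists = np.linalg.norm(weights - x, axis=1)
    ranks = np.argsort(np.argsort(dists))  # rank 0 = closest unit
    # All units adapt, with a strength decaying exponentially in rank.
    weights += eta * np.exp(-ranks / lam)[:, None] * (x - weights)

print(np.round(weights, 2))  # units spread over the data distribution
```

Unlike strict winner-take-all competitive learning, every unit moves on every input, which helps avoid "dead" units that never win.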