In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment.
Radial basis function (RBF) networks typically have three layers: an input layer, a hidden layer with a non-linear RBF activation function, and a linear output layer. The input can be modeled as a vector of real numbers $\mathbf{x} \in \mathbb{R}^n$. The output of the network is then a scalar function of the input vector, $\varphi : \mathbb{R}^n \to \mathbb{R}$, and is given by

$$\varphi(\mathbf{x}) = \sum_{i=1}^{N} a_i \, \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right)$$
where $N$ is the number of neurons in the hidden layer, $\mathbf{c}_i$ is the center vector for neuron $i$, and $a_i$ is the weight of neuron $i$ in the linear output neuron. Functions that depend only on the distance from a center vector are radially symmetric about that vector, hence the name radial basis function. In the basic form, all inputs are connected to each hidden neuron. The norm is typically taken to be the Euclidean distance (although the Mahalanobis distance appears to perform better with pattern recognition), and the radial basis function is commonly taken to be Gaussian:

$$\rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right) = \exp\!\left[-\beta_i \left\|\mathbf{x} - \mathbf{c}_i\right\|^2\right]$$
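The forward pass defined by these two equations is straightforward to implement. Below is a minimal NumPy sketch of the unnormalized Gaussian RBF network; the function name `rbf_forward` and its argument layout are illustrative choices, not taken from the source.

```python
import numpy as np

def rbf_forward(x, centers, betas, weights):
    """Unnormalized Gaussian RBF network output for a single input vector x.

    x       : (n,)   input vector
    centers : (N, n) center vectors c_i, one per hidden neuron
    betas   : (N,)   width parameters beta_i
    weights : (N,)   output weights a_i
    """
    # Squared Euclidean distance from x to every center c_i
    sq_dist = np.sum((centers - x) ** 2, axis=1)
    # Gaussian activations rho(||x - c_i||) = exp(-beta_i * ||x - c_i||^2)
    activations = np.exp(-betas * sq_dist)
    # Linear combination in the output neuron
    return weights @ activations
```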
The Gaussian basis functions are local to the center vector in the sense that

$$\lim_{\|\mathbf{x}\| \to \infty} \rho\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|\right) = 0,$$
i.e. changing parameters of one neuron has only a small effect for input values that are far away from the center of that neuron.
Given certain mild conditions on the shape of the activation function, RBF networks are universal approximators on a compact subset of $\mathbb{R}^n$. This means that an RBF network with enough hidden neurons can approximate any continuous function on a closed, bounded set with arbitrary precision.
The parameters $a_i$, $\mathbf{c}_i$, and $\beta_i$ are determined in a manner that optimizes the fit between $\varphi$ and the data.
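The text does not prescribe a particular fitting procedure. One common scheme is to fix the centers (for example, at a random subset of the training inputs or at k-means centroids) and a shared width $\beta$, and then solve for the output weights by linear least squares. The sketch below follows that assumption; the helper name `fit_rbf_weights` is hypothetical.

```python
import numpy as np

def fit_rbf_weights(X, y, centers, beta):
    """Fit the output weights a_i by linear least squares, keeping the
    centers c_i and a shared width beta fixed.

    X : (m, n) training inputs
    y : (m,)   training targets
    """
    # Design matrix: Phi[j, i] = exp(-beta * ||x_j - c_i||^2)
    sq_dists = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    Phi = np.exp(-beta * sq_dists)
    # Least-squares solution of Phi @ a ~= y
    a, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return a
```

With centers taken, say, as a random subset of the training inputs, the resulting weights can be used directly in the forward pass sketched above.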
In addition to the above unnormalized architecture, RBF networks can be normalized.
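In the normalized architecture, each basis function is divided by the sum of all basis functions, so the hidden-layer activations sum to one at every input. A minimal sketch of that variant, reusing the same illustrative conventions as above:

```python
import numpy as np

def normalized_rbf_forward(x, centers, betas, weights):
    """Normalized Gaussian RBF network output for a single input vector x.

    The activations are divided by their sum, so they form a
    partition of unity over the input space.
    """
    sq_dist = np.sum((centers - x) ** 2, axis=1)
    activations = np.exp(-betas * sq_dist)
    return weights @ (activations / np.sum(activations))
```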