
Lecture: Learning control laws with DS

Description

This lecture covers learning control laws with Dynamical Systems (DS) for robots. It explains why dynamics learned with standard machine learning algorithms can be unstable, and introduces the Stable Estimator of Dynamical Systems (SEDS) and Linear Parameter Varying Dynamical Systems (LPV-DS). The lecture models the robot as a point mass moving according to a time-invariant autonomous dynamical system, discusses the choice of training data, and demonstrates fitting with Support Vector Regression and Gaussian Mixture Regression. It also touches on fitting with neural networks and concludes by emphasizing that learning a control law is a regression problem: the velocity is regressed on the state.
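The closing point (regressing velocity on state) can be sketched on synthetic data. Everything below is an illustrative assumption, not the lecture's actual pipeline: a linear DS stands in for the demonstrated motion, and plain least squares stands in for SVR, GMR, or SEDS.

```python
import numpy as np

# Hypothetical sketch: treat the robot as a point mass following an autonomous
# DS  x_dot = f(x), and learn f by regressing velocity on state.
rng = np.random.default_rng(0)

# Synthetic "demonstrations": samples of a stable linear DS x_dot = A_true @ x.
A_true = np.array([[-1.0, 0.5],
                   [-0.5, -1.0]])  # eigenvalues with negative real parts -> stable
X = rng.uniform(-1, 1, size=(200, 2))                  # sampled states
Xdot = X @ A_true.T + 0.01 * rng.normal(size=X.shape)  # noisy velocities

# Linear regression: minimize ||X @ B - Xdot||^2, then A_hat = B.T.
B, *_ = np.linalg.lstsq(X, Xdot, rcond=None)
A_hat = B.T
print(np.round(A_hat, 2))  # close to A_true
```

Note that nothing in this plain regression enforces stability of the learned DS; that is exactly the gap SEDS and LPV-DS address by constraining the fit.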

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.

In course

MICRO-462: Learning and adaptive control for robots

To cope with constant and unexpected changes in their environment, robots need to adapt their paths rapidly and appropriately without endangering humans. This course presents methods to react within mi

Related concepts (32)

Segmented regression

Segmented regression, also known as piecewise regression or broken-stick regression, is a method in regression analysis in which the independent variable is partitioned into intervals and a separate line segment is fit to each interval. Segmented regression analysis can also be performed on multivariate data by partitioning the various independent variables. Segmented regression is useful when the independent variables, clustered into different groups, exhibit different relationships between the variables in these regions.
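A minimal sketch of the idea, assuming the breakpoint is known in advance (estimating it from data is the harder part of segmented regression): partition the independent variable and fit a separate line to each interval.

```python
import numpy as np

# Illustrative broken-stick data with an assumed breakpoint at x = 5.
x = np.linspace(0, 10, 100)
y = np.where(x < 5, 2 * x + 1, -x + 16)  # slope 2 then slope -1

# Fit one degree-1 polynomial (slope, intercept) per interval.
coeffs = [np.polyfit(x[mask], y[mask], 1)
          for mask in ((x < 5), (x >= 5))]
print(np.round(coeffs, 2))  # -> [[ 2.  1.], [-1. 16.]]
```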

Nonlinear regression

In statistics, nonlinear regression is a form of regression analysis in which observational data are modeled by a function which is a nonlinear combination of the model parameters and depends on one or more independent variables. The data are fitted by a method of successive approximations. In nonlinear regression, a statistical model of the form y = f(x, β) relates a vector of independent variables x to its associated observed dependent variable y. The function f is nonlinear in the components of the parameter vector β, but otherwise arbitrary.
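A short sketch of fitting by successive approximations, using SciPy's `curve_fit` (least squares) on a model that is nonlinear in one of its parameters; the exponential form and the numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def f(x, a, b):
    # Nonlinear in the parameter b (appears inside the exponential).
    return a * np.exp(b * x)

x = np.linspace(0, 2, 50)
y = f(x, 1.5, -0.8)  # noise-free data generated from known parameters

# Iterative least-squares fit starting from an initial guess p0.
beta, _ = curve_fit(f, x, y, p0=(1.0, -1.0))
print(np.round(beta, 3))  # recovers roughly (1.5, -0.8)
```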

Deming regression

In statistics, Deming regression, named after W. Edwards Deming, is an errors-in-variables model which tries to find the line of best fit for a two-dimensional dataset. It differs from the simple linear regression in that it accounts for errors in observations on both the x- and the y- axis. It is a special case of total least squares, which allows for any number of predictors and a more complicated error structure.
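A sketch of the special case δ = 1 (equal error variance on both axes, i.e. orthogonal regression), computing the closed-form Deming slope from the sample moments; the data points are illustrative.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2 * x + 1  # noise-free line for illustration

xbar, ybar = x.mean(), y.mean()
sxx = np.mean((x - xbar) ** 2)
syy = np.mean((y - ybar) ** 2)
sxy = np.mean((x - xbar) * (y - ybar))

delta = 1.0  # assumed ratio of y-error variance to x-error variance
slope = (syy - delta * sxx
         + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
intercept = ybar - slope * xbar
print(slope, intercept)  # -> 2.0 1.0
```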

Regression toward the mean

In statistics, regression toward the mean (also called reversion to the mean, and reversion to mediocrity) is the phenomenon where if one sample of a random variable is extreme, the next sampling of the same random variable is likely to be closer to its mean. Furthermore, when many random variables are sampled and the most extreme results are intentionally picked out, it refers to the fact that (in many cases) a second sampling of these picked-out variables will result in "less extreme" results, closer to the initial mean of all of the variables.
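The second case described above (picking out extreme results, then resampling) can be demonstrated with a small simulation; the normal distribution and the cutoff of 2 standard deviations are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
first = rng.normal(0, 1, n)   # first sampling of the random variable
second = rng.normal(0, 1, n)  # independent second sampling, same distribution

extreme = first > 2.0         # intentionally pick out the extreme first results
print(first[extreme].mean())  # well above 2 by construction
print(second[extreme].mean()) # near 0: the second sampling regresses to the mean
```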

Nonparametric regression

Nonparametric regression is a category of regression analysis in which the predictor does not take a predetermined form but is constructed according to information derived from the data. That is, no parametric form is assumed for the relationship between predictors and dependent variable. Nonparametric regression requires larger sample sizes than regression based on parametric models because the data must supply the model structure as well as the model estimates.
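One common nonparametric estimator is Nadaraya–Watson kernel regression, sketched below: the prediction is a data-driven weighted average with no predetermined functional form. The Gaussian kernel and bandwidth value are illustrative assumptions.

```python
import numpy as np

def kernel_regression(x_query, x_train, y_train, bandwidth=0.3):
    # Gaussian kernel weight between the query point and every training point.
    w = np.exp(-0.5 * ((x_query - x_train) / bandwidth) ** 2)
    # Prediction is the kernel-weighted average of the observed responses.
    return np.sum(w * y_train) / np.sum(w)

x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x)  # the model "structure" comes entirely from these samples

print(kernel_regression(np.pi / 2, x, y))  # close to sin(pi/2) = 1
```

The bandwidth controls the bias–variance trade-off, which is why nonparametric fits need comparatively large samples, as noted above.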

Related lectures (32)

Learning Modulation for DS (MICRO-462: Learning and adaptive control for robots)

Explores learning and adaptive control for robots, emphasizing modulation of dynamical systems to improve stability and enable reactive motion.

Multilayer Neural Networks: Deep Learning (PHYS-467: Machine learning for physicists)

Covers the fundamentals of multilayer neural networks and deep learning.

Neural Networks: Multilayer Learning (PHYS-467: Machine learning for physicists)

Covers the fundamentals of multilayer neural networks and deep learning, including back-propagation and network architectures like LeNet, AlexNet, and VGG-16.

Introduction to Neural Networks (PHYS-467: Machine learning for physicists)

Introduces neural networks, focusing on multilayer perceptrons and training with stochastic gradient descent.

Learning and Adaptive Control for Robots: SEDS & LPV-DS (MICRO-462: Learning and adaptive control for robots)

Explores learning and adaptive control for robots through SEDS and LPV-DS, emphasizing stability, non-linear dynamics, and optimization.