
# Boltzmann Machine

Description

This lecture covers the Boltzmann Machine, focusing on expectation consistency, clustering of data, and writing down the probability distribution the machine defines. The instructor explains how the machine is trained, how samples are generated from it, and how the likelihood of the data is maximized using gradient descent.
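As a rough sketch of the training idea described above (not the lecture's actual code), the example below implements a tiny fully-visible Boltzmann machine. The pattern names and toy data are made up for illustration; because the model is small, all states can be enumerated, so the likelihood gradient, the difference between data correlations and model correlations, is computed exactly rather than via sampling.

```python
import itertools
import numpy as np

# Minimal fully-visible Boltzmann machine on n binary (+/-1) units.
# Energy: E(s) = -0.5 * s^T W s - b^T s, with p(s) proportional to exp(-E(s)).
# For small n we enumerate all 2^n states, so the maximum-likelihood
# gradient (data expectation minus model expectation) is exact.

n = 4
states = np.array(list(itertools.product([-1, 1], repeat=n)), dtype=float)

def model_probs(W, b):
    energies = -0.5 * np.einsum('si,ij,sj->s', states, W, states) - states @ b
    p = np.exp(-energies)
    return p / p.sum()

# Toy "data": patterns where neighbouring units tend to agree.
data = np.array([[1, 1, 1, 1], [-1, -1, -1, -1], [1, 1, -1, -1]], dtype=float)

W = np.zeros((n, n))
b = np.zeros(n)
lr = 0.1
for _ in range(200):
    p = model_probs(W, b)
    # Positive phase: pairwise correlations under the data distribution.
    data_corr = data.T @ data / len(data)
    # Negative phase: pairwise correlations under the model distribution.
    model_corr = (states * p[:, None]).T @ states
    grad_W = data_corr - model_corr
    np.fill_diagonal(grad_W, 0.0)          # no self-connections
    W += lr * grad_W
    b += lr * (data.mean(axis=0) - p @ states)
```

After training, the model assigns high probability to the training patterns; in a realistic setting with many units, the negative-phase expectation is intractable and is approximated with Gibbs sampling instead of exact enumeration.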




Instructor

In course

PHYS-467: Machine learning for physicists

Machine learning and data analysis are becoming increasingly central in sciences including physics. In this course, fundamental principles and methods of machine learning will be introduced and practiced.

Related concepts (206)

Cluster sampling

In statistics, cluster sampling is a sampling plan used when mutually homogeneous yet internally heterogeneous groupings are evident in a statistical population. It is often used in marketing research. In this sampling plan, the total population is divided into these groups (known as clusters) and a simple random sample of the groups is selected. The elements in each cluster are then sampled. If all elements in each sampled cluster are sampled, then this is referred to as a "one-stage" cluster sampling plan.

Convenience sampling

Convenience sampling (also known as grab sampling, accidental sampling, or opportunity sampling) is a type of non-probability sampling that involves the sample being drawn from that part of the population that is close to hand. This type of sampling is most useful for pilot testing. Convenience sampling is not often recommended for research due to the possibility of sampling error and lack of representation of the population. But it can be handy depending on the situation. In some situations, convenience sampling is the only possible option.

Sampling error

In statistics, sampling errors are incurred when the statistical characteristics of a population are estimated from a subset, or sample, of that population; this can produce biased results. Since the sample does not include all members of the population, statistics of the sample (often known as estimators), such as means and quartiles, generally differ from the statistics of the entire population (known as parameters). The difference between the sample statistic and the population parameter is the sampling error.
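A minimal illustration of this definition, using synthetic data: the sampling error is simply the gap between the sample mean (statistic) and the population mean (parameter).

```python
import random
import statistics

# Sampling error sketch on synthetic data: the gap between a sample
# statistic and the corresponding population parameter.
rng = random.Random(42)
population = [rng.gauss(100, 15) for _ in range(10_000)]
mu = statistics.fmean(population)          # population parameter

sample = rng.sample(population, 50)
xbar = statistics.fmean(sample)            # sample statistic (estimator)
sampling_error = xbar - mu
```

For a sample of size 50 from this population, the typical magnitude of the error is the standard error, roughly 15 / sqrt(50), about 2.1; larger samples shrink it.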

Normal distribution

In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is $f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$. The parameter $\mu$ is the mean or expectation of the distribution (and also its median and mode), while the parameter $\sigma$ is its standard deviation. The variance of the distribution is $\sigma^2$. A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate.
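The density formula above translates directly into code; this is a plain transcription of the pdf, with `normal_pdf` as an illustrative name rather than a library function.

```python
import math

# Gaussian pdf evaluated directly from the formula
# f(x) = 1 / (sigma * sqrt(2*pi)) * exp(-(x - mu)**2 / (2 * sigma**2))
def normal_pdf(x, mu=0.0, sigma=1.0):
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
```

The density peaks at the mean $\mu$, where it equals $1/(\sigma\sqrt{2\pi})$, and falls off symmetrically on either side.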

Survey sampling

In statistics, survey sampling describes the process of selecting a sample of elements from a target population to conduct a survey. The term "survey" may refer to many different types or techniques of observation. In survey sampling it most often involves a questionnaire used to measure the characteristics and/or attitudes of people. Different ways of contacting members of a sample once they have been selected is the subject of survey data collection.

Related lectures (1,000)

Eigenstate Thermalization Hypothesis

Explores the Eigenstate Thermalization Hypothesis in quantum systems, emphasizing the random matrix theory and the behavior of observables in thermal equilibrium.

Efficient Stochastic Numerical Methods

Explores efficient stochastic numerical methods for modeling and learning, covering topics like the Analytical Engine and kinase inhibitors.

Quantum Information

Explores the CHSH operator, self-testing, eigenstates, and quantifying randomness in quantum systems.

Gaussian Mixture Models: Data Classification

Explores denoising signals with Gaussian mixture models and EM algorithm, EMG signal analysis, and image segmentation using Markovian models.

Generative Models: Self-Attention and Transformers

Covers generative models with a focus on self-attention and transformers, discussing sampling methods and empirical means.