Belief propagation, also known as sum–product message passing, is a message-passing algorithm for performing inference on graphical models, such as Bayesian networks and Markov random fields. It calculates the marginal distribution for each unobserved node (or variable), conditional on any observed nodes (or variables). Belief propagation is commonly used in artificial intelligence and information theory, and has demonstrated empirical success in numerous applications, including low-density parity-check codes, turbo codes, free energy approximation, and satisfiability. The algorithm was first proposed by Judea Pearl in 1982, who formulated it as an exact inference algorithm on trees; it was later extended to polytrees. While the algorithm is not exact on general graphs, it has been shown to be a useful approximate algorithm.

Given a finite set of discrete random variables $X_1, \ldots, X_n$ with joint probability mass function $p$, a common task is to compute the marginal distributions of the $X_i$. The marginal of a single $X_i$ is defined to be

$$p_{X_i}(x_i) = \sum_{\mathbf{x}' : x'_i = x_i} p(\mathbf{x}'),$$

where $\mathbf{x}' = (x'_1, \ldots, x'_n)$ is a vector of possible values for the $X_j$, and the notation $\mathbf{x}' : x'_i = x_i$ means that the sum is taken over those $\mathbf{x}'$ whose $i$th coordinate is equal to $x_i$. Computing marginal distributions using this formula quickly becomes computationally prohibitive as the number of variables grows. For example, given 100 binary variables $X_1, \ldots, X_{100}$, computing a single marginal $p_{X_i}$ using $p$ and the above formula would involve summing over $2^{99} \approx 6.34 \times 10^{29}$ possible values for the remaining variables. If it is known that the probability mass function factors in a convenient way, belief propagation allows the marginals to be computed much more efficiently.

Variants of the belief propagation algorithm exist for several types of graphical models (Bayesian networks and Markov random fields in particular). We describe here the variant that operates on a factor graph. A factor graph is a bipartite graph containing nodes corresponding to variables $V$ and factors $F$, with edges between variables and the factors in which they appear. We can write the joint mass function as

$$p(\mathbf{x}) = \prod_{a \in F} f_a(\mathbf{x}_a),$$

where $\mathbf{x}_a$ is the vector of neighboring variable nodes to the factor node $a$.
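To make the message-passing idea concrete, below is a minimal sketch of sum–product belief propagation on a tiny tree-structured factor graph, written in Python with NumPy. The variable names, factor tables, and graph structure (a chain of three binary variables with factors f_a, f_b, f_c) are illustrative assumptions, not taken from any particular source or library; the brute-force marginal at the end is included only as a sanity check, which is feasible for such a small model. Because the graph is a tree, the message fixed point gives the exact marginals.

```python
# Minimal sketch of sum-product belief propagation (illustrative example).
import itertools
import numpy as np

# Chain of three binary variables: p(x1,x2,x3) proportional to f_a(x1,x2) f_b(x2,x3) f_c(x1).
# The factor tables below are made up for illustration.
cards = {"x1": 2, "x2": 2, "x3": 2}                 # variable cardinalities
factors = {
    "f_a": (("x1", "x2"), np.array([[1.0, 2.0], [3.0, 1.0]])),
    "f_b": (("x2", "x3"), np.array([[2.0, 1.0], [1.0, 4.0]])),
    "f_c": (("x1",),      np.array([1.0, 3.0])),
}

# Initialise all variable->factor and factor->variable messages to ones.
v2f = {(v, f): np.ones(cards[v]) for f, (vs, _) in factors.items() for v in vs}
f2v = {(f, v): np.ones(cards[v]) for f, (vs, _) in factors.items() for v in vs}

def neighbours(v):
    """Factors in which variable v appears."""
    return [f for f, (vs, _) in factors.items() if v in vs]

# On a tree, a few synchronous sweeps suffice for the messages to converge.
for _ in range(5):
    # Variable -> factor: product of incoming factor messages, excluding the target factor.
    for (v, f) in v2f:
        msg = np.ones(cards[v])
        for g in neighbours(v):
            if g != f:
                msg *= f2v[(g, v)]
        v2f[(v, f)] = msg
    # Factor -> variable: multiply in the other variables' messages and sum them out.
    for (f, v) in f2v:
        vs, table = factors[f]
        msg = np.zeros(cards[v])
        for assignment in itertools.product(*(range(cards[u]) for u in vs)):
            val = table[assignment]
            for u, xu in zip(vs, assignment):
                if u != v:
                    val *= v2f[(u, f)][xu]
            msg[assignment[vs.index(v)]] += val
        f2v[(f, v)] = msg

# Beliefs: product of incoming factor messages, normalised to sum to one.
for v in cards:
    belief = np.ones(cards[v])
    for f in neighbours(v):
        belief *= f2v[(f, v)]
    print(v, belief / belief.sum())

# Brute-force check: enumerate every joint assignment (only feasible for tiny models).
order = list(cards)
joint = np.zeros(tuple(cards[v] for v in order))
for assignment in itertools.product(*(range(cards[v]) for v in order)):
    val = 1.0
    for f, (vs, table) in factors.items():
        val *= table[tuple(assignment[order.index(u)] for u in vs)]
    joint[assignment] = val
joint /= joint.sum()
print("x1 marginal (brute force):", joint.sum(axis=(1, 2)))
```

The key point the sketch illustrates is the complexity argument above: the brute-force loop touches all $2^3$ joint assignments (and would touch $2^{99}$ of them per marginal with 100 variables), whereas each message update only sums over the variables of a single factor.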