Publication

Gradient estimates of return

Related publications (32)

ActiveAx(ADD): Toward non-parametric and orientationally invariant axon diameter distribution mapping using PGSE

Jean-Philippe Thiran, David Paul Roger Romascano, Alessandro Daducci, Muhamed Barakovic, Tim Bjørn Dyrby

Purpose: Non-invasive axon diameter distribution (ADD) mapping using diffusion MRI is an ill-posed problem. Current ADD mapping methods require knowledge of axon orientation before performing the acquisition. Instead, ActiveAx uses a 3D sampling scheme to e ...
2019

Dimensionally Tight Bounds for Second-Order Hamiltonian Monte Carlo

Nisheeth Vishnoi, Oren Rami Mangoubi

Hamiltonian Monte Carlo (HMC) is a widely deployed method to sample from high-dimensional distributions in Statistics and Machine learning. HMC is known to run very efficiently in practice and its popular second-order "leapfrog" implementation has long bee ...
Neural Information Processing Systems (NIPS), 2018
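
The entry above concerns the second-order "leapfrog" integrator inside HMC. For reference, here is a minimal, self-contained leapfrog HMC sampler in Python; the standard Gaussian target, step size, and trajectory length are illustrative choices, and the sketch does not reproduce the paper's dimensional analysis.

import numpy as np

def leapfrog(q, p, grad_log_p, step_size, n_steps):
    # Second-order "leapfrog" integration of Hamiltonian dynamics.
    q, p = q.copy(), p.copy()
    p += 0.5 * step_size * grad_log_p(q)            # half step for momentum
    for _ in range(n_steps - 1):
        q += step_size * p                          # full step for position
        p += step_size * grad_log_p(q)              # full step for momentum
    q += step_size * p
    p += 0.5 * step_size * grad_log_p(q)            # closing half step
    return q, p

def hmc(log_p, grad_log_p, q0, step_size=0.1, n_steps=20, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    q = np.array(q0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(q.shape)
        q_new, p_new = leapfrog(q, p, grad_log_p, step_size, n_steps)
        # Metropolis correction for the discretization error of the integrator.
        log_accept = (log_p(q_new) - 0.5 * p_new @ p_new) - (log_p(q) - 0.5 * p @ p)
        if np.log(rng.uniform()) < log_accept:
            q = q_new
        samples.append(q.copy())
    return np.array(samples)

# Illustrative target: a standard Gaussian in 10 dimensions.
d = 10
draws = hmc(lambda q: -0.5 * q @ q, lambda q: -q, np.zeros(d))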

Efficient Variance-Reduced Learning Over Multi-Agent Networks

Ali H. Sayed, Bicheng Ying, Kun Yuan

This work develops a fully decentralized variance-reduced learning algorithm for multi-agent networks where nodes store and process the data locally and are only allowed to communicate with their immediate neighbors. In the proposed algorithm, there is no ...
IEEE Computer Society, 2018
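
For orientation, the sketch below combines an SVRG-style variance-reduced stochastic gradient with adapt-then-combine averaging over a doubly-stochastic neighbor matrix. The least-squares local costs, ring topology, and step size are hypothetical placeholders; this is not the paper's exact recursion.

import numpy as np

def decentralized_svrg(A, X, y, step=0.05, outer=20, inner=50, seed=0):
    # Schematic decentralized learning: variance-reduced stochastic gradients
    # (SVRG-style) followed by adapt-then-combine averaging with neighbors.
    # A: doubly-stochastic combination matrix; X[k], y[k]: agent k's local data.
    rng = np.random.default_rng(seed)
    K, d = len(X), X[0].shape[1]
    w = np.zeros((K, d))
    for _ in range(outer):
        w_snap = w.copy()                            # per-agent snapshot point
        full_grad = np.array([X[k].T @ (X[k] @ w_snap[k] - y[k]) / len(y[k])
                              for k in range(K)])    # full local gradient at snapshot
        for _ in range(inner):
            psi = np.empty_like(w)
            for k in range(K):
                i = rng.integers(len(y[k]))          # one local sample
                g_now = X[k][i] * (X[k][i] @ w[k] - y[k][i])
                g_snap = X[k][i] * (X[k][i] @ w_snap[k] - y[k][i])
                psi[k] = w[k] - step * (g_now - g_snap + full_grad[k])   # adapt
            w = A @ psi                              # combine with neighbors
    return w

# Hypothetical setup: 4 agents on a ring, least-squares local costs.
rng = np.random.default_rng(1)
K, d = 4, 3
w_true = rng.standard_normal(d)
X = [rng.standard_normal((50, d)) for _ in range(K)]
y = [Xk @ w_true + 0.01 * rng.standard_normal(50) for Xk in X]
A = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])             # doubly-stochastic ring
w_hat = decentralized_svrg(A, X, y)                  # rows approach w_true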

Distributed Coupled Learning Over Adaptive Networks

Ali H. Sayed, Sulaiman A S A E Alghunaim

This work develops an effective distributed algorithm for the solution of stochastic optimization problems that involve partial coupling among both local constraints and local cost functions. While the collection of networked agents is interested in discov ...
IEEE, 2018

Decentralized exact coupled optimization

Ali H. Sayed, Kun Yuan, Sulaiman A S A E Alghunaim

This work develops an exact converging algorithm for the solution of a distributed optimization problem with partially-coupled parameters across agents in a multi-agent scenario. In this formulation, while the network performance is dependent on a collecti ...
2017

Exact Diffusion for Distributed Optimization and Learning---Part I: Algorithm Development

Ali H. Sayed, Bicheng Ying, Kun Yuan

This work develops a distributed optimization strategy with guaranteed exact convergence for a broad class of left-stochastic combination policies. The resulting exact diffusion strategy is shown in Part II to have a wider stability range and superior conv ...
2017
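
A compact sketch of the adapt-correct-combine structure around which exact diffusion is built is given below; the quadratic local costs and the fully connected combination matrix are illustrative assumptions, and the stability and convergence analysis is deferred to Part II of the paper.

import numpy as np

def exact_diffusion(A, grad_fns, d, mu=0.01, iters=500):
    # Adapt-correct-combine sketch of exact diffusion.
    # A: left-stochastic combination matrix (columns sum to 1);
    # grad_fns[k]: gradient of agent k's local cost (assumed quadratic below).
    K = len(grad_fns)
    A_bar = 0.5 * (A + np.eye(K))                   # \bar{A} = (I + A) / 2
    w = np.zeros((K, d))
    psi_prev = w.copy()
    for _ in range(iters):
        psi = np.array([w[k] - mu * grad_fns[k](w[k]) for k in range(K)])   # adapt
        phi = psi + w - psi_prev                                            # correct
        w = A_bar.T @ phi                           # combine: w_k = sum_l a_lk * phi_l
        psi_prev = psi
    return w

# Hypothetical quadratic local costs J_k(w) = 0.5 * ||w - b_k||^2 over a fully
# connected network; the rows of the result approach the average of the b_k.
K, d = 5, 3
b = np.random.default_rng(0).standard_normal((K, d))
grad_fns = [lambda w, bk=b[k]: w - bk for k in range(K)]
A = np.full((K, K), 1.0 / K)                        # doubly-stochastic combination matrix
w_hat = exact_diffusion(A, grad_fns, d)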

Comparative assessment of selected sugarcane biorefinery-centered systems in Brazil: A multi-criteria method based on sustainability indicators

Edgard Gnansounou, Elia Mercedes Ruiz Pachon, Pavel Vaskan, Catarina Marciano Alves

This work proposes a new sustainability assessment framework aiming to compare selected biorefinery options that are required to provide the same services to a community. To this end, the concept of a biorefinery-centered system helps to develop a joint resources a ...
Elsevier, 2017
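
As a generic illustration of multi-criteria scoring over sustainability indicators (not the specific framework proposed in the paper), a weighted-sum aggregation of min-max normalized indicators might look as follows; the options, indicators, and weights are invented for the example.

import numpy as np

def mcda_scores(indicators, weights, benefit):
    # Weighted-sum multi-criteria scoring over min-max normalized indicators.
    # indicators: options x criteria matrix; weights: criterion importances;
    # benefit: True where larger is better, False where smaller is better.
    X = np.asarray(indicators, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    norm = (X - lo) / np.where(hi > lo, hi - lo, 1.0)
    norm = np.where(benefit, norm, 1.0 - norm)       # flip "less is better" criteria
    return norm @ np.asarray(weights, dtype=float)

# Invented example: three biorefinery options scored on four indicators
# (GHG emissions, water use, jobs created, net energy ratio).
indicators = [[1.2, 30.0, 120, 4.1],
              [0.9, 42.0, 150, 3.6],
              [1.5, 25.0,  90, 4.8]]
weights = [0.30, 0.20, 0.25, 0.25]
benefit = [False, False, True, True]
print(mcda_scores(indicators, weights, benefit))     # higher score = preferred option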

How close are the eigenvectors and eigenvalues of the sample and actual covariance matrices?

Andreas Loukas

How many samples are sufficient to guarantee that the eigenvectors and eigenvalues of the sample covariance matrix are close to those of the actual covariance matrix? For a wide family of distributions, including distributions with finite second moment and ...
2017
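
The question in the abstract can be probed numerically: the snippet below compares the eigenvalues and eigenvectors of the sample covariance with those of a known covariance as the sample size grows. The diagonal covariance and Gaussian data are illustrative choices; the paper's finite-sample bounds are not reproduced here.

import numpy as np

def eigen_comparison(Sigma, sample_sizes, seed=0):
    # Compare eigenvalues/eigenvectors of the sample covariance with those of
    # the known covariance Sigma as the number of samples grows.
    rng = np.random.default_rng(seed)
    d = Sigma.shape[0]
    true_vals, true_vecs = np.linalg.eigh(Sigma)
    L = np.linalg.cholesky(Sigma)
    for n in sample_sizes:
        Z = rng.standard_normal((n, d)) @ L.T        # zero-mean samples with covariance Sigma
        S = Z.T @ Z / n                              # sample covariance
        vals, vecs = np.linalg.eigh(S)
        val_err = np.max(np.abs(vals - true_vals))
        # Alignment |<u_i, v_i>| per eigenvector pair (1 means aligned, up to sign).
        align = np.abs(np.sum(vecs * true_vecs, axis=0)).min()
        print(f"n={n:7d}  max eigenvalue error={val_err:.3f}  worst alignment={align:.3f}")

# Illustrative covariance with well-separated eigenvalues.
Sigma = np.diag(np.arange(1.0, 11.0))
eigen_comparison(Sigma, [100, 1_000, 10_000, 100_000])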

Diffusion gradient boosting for networked learning

Ali H. Sayed, Bicheng Ying

Using duality arguments from optimization theory, this work develops an effective distributed gradient boosting strategy for inference and classification by networked clusters of learners. By sharing local dual variables with their immediate neighbors thro ...
2017
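
For background, the sketch below is ordinary centralized functional gradient boosting for squared loss with regression stumps; the paper's contribution, coordinating networked learners by exchanging dual variables with immediate neighbors, is not reproduced here.

import numpy as np

def fit_stump(x, r):
    # Least-squares regression stump on a single feature (threshold + two constants).
    best_err, best = np.inf, None
    for t in np.unique(x)[:-1]:                      # largest value would leave an empty side
        left, right = r[x <= t], r[x > t]
        pred_l, pred_r = left.mean(), right.mean()
        err = ((left - pred_l) ** 2).sum() + ((right - pred_r) ** 2).sum()
        if err < best_err:
            best_err, best = err, (t, pred_l, pred_r)
    return best

def boost(x, y, n_rounds=50, lr=0.1):
    # Functional gradient boosting for squared loss: each round fits a stump to
    # the current residuals (negative gradients) and adds it to the ensemble.
    F = np.zeros_like(y, dtype=float)
    ensemble = []
    for _ in range(n_rounds):
        t, pl, pr = fit_stump(x, y - F)
        F += lr * np.where(x <= t, pl, pr)
        ensemble.append((t, pl, pr))
    return ensemble, F

# Hypothetical 1-D regression problem.
rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, size=200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)
ensemble, fitted = boost(x, y)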

Distributor of neurons in a neocortical column

Henry Markram, Felix Schürmann, Georges Khazen, Martin Telefont

Computer-implemented methods, software, and systems for determining a distribution of neuronal cells across a portion of a brain are described. One computer-implemented method for determining a target distribution of one or more neuronal cells across a por ...
2014
