Ask any question about EPFL courses, lectures, exercises, research, news, etc. or try the example questions below.
DISCLAIMER: The Graph Chatbot is not programmed to provide explicit or categorical answers to your questions. Rather, it transforms your questions into API requests that are distributed across the various IT services officially administered by EPFL. Its purpose is solely to collect and recommend relevant references to content that you can explore to help you answer your questions.
We present a new approach to model visual scenes in image collections, based on local invariant features and probabilistic latent space models. Our formulation provides answers to three open questions: (1) whether the invariant local features are suitable f ...
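As a rough illustration of the entry above: local invariant descriptors are typically quantized into "visual words", and a probabilistic latent space (pLSA-style aspect) model is then fitted to the resulting image-by-word counts with EM. The corpus size, vocabulary, and number of aspects below are made up for the sketch; this is not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "bag of visual words": counts of n_w quantized local descriptors in n_d images.
n_d, n_w, n_z = 6, 10, 2
counts = rng.integers(1, 5, size=(n_d, n_w)).astype(float)

# pLSA parameters: p(z|d) per image and p(w|z) per latent aspect, fitted by EM.
p_z_d = rng.dirichlet(np.ones(n_z), size=n_d)      # shape (n_d, n_z)
p_w_z = rng.dirichlet(np.ones(n_w), size=n_z)      # shape (n_z, n_w)

for _ in range(50):
    # E-step: responsibilities p(z|d,w) proportional to p(z|d) * p(w|z).
    joint = p_z_d[:, :, None] * p_w_z[None, :, :]  # (n_d, n_z, n_w)
    resp = joint / joint.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from expected counts.
    nk = counts[:, None, :] * resp
    p_w_z = nk.sum(axis=0)
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_z_d = nk.sum(axis=2)
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)

print(np.round(p_z_d, 2))  # per-image mixture over latent aspects
```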
In a recent paper, we reported promising automatic speech recognition results obtained by appending spectral entropy features to PLP features. In the present paper, spectral entropy features are used along with PLP features in the framework of multi-stream ...
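A minimal sketch of the entropy feature mentioned in the speech entry above: spectral entropy is computed per frame from the normalized short-term power spectrum and appended to the standard feature vector. The frame sizes, the synthetic signal, and the placeholder PLP matrix below are assumptions for illustration; a real front-end would supply the PLP features, and the multi-stream/multi-band formulation in the papers is not reproduced here.

```python
import numpy as np

def spectral_entropy(frame, n_fft=256, eps=1e-12):
    """Entropy (bits) of the normalized short-term power spectrum of one frame."""
    spectrum = np.abs(np.fft.rfft(frame, n=n_fft)) ** 2
    p = spectrum / (spectrum.sum() + eps)          # normalize to a distribution
    return -np.sum(p * np.log2(p + eps))

# Toy setup: 25 ms frames, 10 ms hop, 8 kHz synthetic signal (stand-in for speech).
sr, frame_len, hop = 8000, 200, 80
signal = np.random.randn(sr)
frames = [signal[i:i + frame_len] for i in range(0, len(signal) - frame_len, hop)]
entropy_feats = np.array([[spectral_entropy(f)] for f in frames])

# Per the abstract, entropy features are appended to PLP features; the PLP matrix
# here is a random placeholder with a typical 13-dimensional frame vector.
plp_feats = np.random.randn(len(frames), 13)
augmented = np.hstack([plp_feats, entropy_feats])
print(augmented.shape)
```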
Recently, we introduced a simple variational bound on mutual information that resolves some of the difficulties in the application of information theory to machine learning. Here we study a specific application to Gaussian channels. It is well known that ...
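For the Gaussian-channel setting in the entry above, one standard variational lower bound on mutual information, the Barber–Agakov form I(X;Y) ≥ H(X) + E[log q(x|y)] (which may or may not coincide with the bound studied in the paper), can be checked numerically against the closed-form value 0.5·log(1 + σ_x²/σ_n²). The decoder family and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_x, sigma_n = 1.0, 0.5           # signal and noise std for the channel Y = X + N
n = 200_000

x = rng.normal(0.0, sigma_x, n)
y = x + rng.normal(0.0, sigma_n, n)

# Exact mutual information of the Gaussian channel (in nats).
exact = 0.5 * np.log(1.0 + sigma_x**2 / sigma_n**2)

def barber_agakov_bound(a, s):
    """I(X;Y) >= H(X) + E[log q(x|y)] with a linear-Gaussian decoder q(x|y) = N(a*y, s^2)."""
    h_x = 0.5 * np.log(2 * np.pi * np.e * sigma_x**2)
    log_q = -0.5 * np.log(2 * np.pi * s**2) - (x - a * y) ** 2 / (2 * s**2)
    return h_x + log_q.mean()

# The optimal linear decoder recovers the exact value; a mismatched one gives a looser bound.
a_opt = sigma_x**2 / (sigma_x**2 + sigma_n**2)
s_opt = np.sqrt(sigma_x**2 * sigma_n**2 / (sigma_x**2 + sigma_n**2))
print(exact, barber_agakov_bound(a_opt, s_opt), barber_agakov_bound(0.5, 1.0))
```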
The goals of neural processing assemblies are varied, and in many cases still rather unclear. However, a possibly reasonable subgoal is that sensory information may be encoded efficiently in a population of neurons. In this context, Mutual Information is a l ...
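As a minimal illustration of mutual information as a measure of coding efficiency, the quantity can be computed directly from a stimulus–response joint distribution. The 2×2 table below is hypothetical.

```python
import numpy as np

def mutual_information(joint):
    """I(S;R) in bits from a joint probability table p(s, r)."""
    joint = joint / joint.sum()
    ps = joint.sum(axis=1, keepdims=True)   # marginal p(s)
    pr = joint.sum(axis=0, keepdims=True)   # marginal p(r)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz]))

# Hypothetical joint distribution over 2 stimuli and 2 responses of a single neuron.
p_sr = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
print(mutual_information(p_sr))   # about 0.28 bits
```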
Sparse approximations to Bayesian inference for nonparametric Gaussian Process models scale linearly in the number of training points, allowing for the application of these powerful kernel-based models to large datasets. We show how to generalize the binar ...
Department of Statistics, University of California, Berkeley, CA, 2004
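The linear-in-n scaling mentioned in the sparse Gaussian-process entry above comes from conditioning on a small set of m inducing points instead of all n training points. The sketch below shows a subset-of-regressors style predictive mean on a toy regression problem; the paper itself addresses the classification case, and the kernel, data, and inducing-point placement here are assumptions.

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel between two 1-D input vectors."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

rng = np.random.default_rng(1)
n, m, noise = 500, 20, 0.1
X = rng.uniform(-3, 3, n)
y = np.sin(X) + noise * rng.normal(size=n)
Z = np.linspace(-3, 3, m)                     # m inducing inputs, m << n

Kuu = rbf(Z, Z) + 1e-8 * np.eye(m)
Kuf = rbf(Z, X)

# Subset-of-regressors predictive mean: cost is O(n m^2) rather than O(n^3).
A = noise**2 * Kuu + Kuf @ Kuf.T
Xs = np.linspace(-3, 3, 5)
Ksu = rbf(Xs, Z)
mean = Ksu @ np.linalg.solve(A, Kuf @ y)
print(np.round(mean, 2), np.round(np.sin(Xs), 2))
```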
Classifier performance is often enhanced through combining multiple streams of information. In the context of multi-stream HMM/ANN systems in ASR, a confidence measure widely used in classifier combination is the entropy of the posteriors distribution outp ...
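A minimal sketch of how such a confidence measure can be used, assuming an inverse-entropy weighting rule over hypothetical per-frame posteriors from two ANN streams (the specific combination rules studied in the paper may differ):

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Entropy (bits) of a posterior distribution over classes."""
    return -np.sum(p * np.log2(p + eps))

# Hypothetical frame-level phoneme posteriors from two streams.
stream_a = np.array([0.7, 0.1, 0.1, 0.1])   # confident: low entropy
stream_b = np.array([0.3, 0.3, 0.2, 0.2])   # uncertain: high entropy

h_a, h_b = entropy(stream_a), entropy(stream_b)

# Inverse-entropy weighting: trust the stream whose posteriors are sharper.
w_a, w_b = 1.0 / h_a, 1.0 / h_b
w_a, w_b = w_a / (w_a + w_b), w_b / (w_a + w_b)
combined = w_a * stream_a + w_b * stream_b
print(h_a, h_b, combined)
```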
Binary weights are favored in electronic and optical hardware implementations of neural networks as they lead to improved system speeds. Optical neural networks based on fast ferroelectric liquid crystal binary level devices can benefit from the many order ...
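As a rough sketch of training under a binary-weight constraint, the snippet below uses a BinaryConnect-style scheme, keeping latent real-valued weights but using only their signs in the forward pass, which is not necessarily the algorithm used in this work. The data and perceptron-style update are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data generated by a ±1 weight vector.
X = rng.normal(size=(200, 8))
w_true = rng.choice([-1.0, 1.0], size=8)
y = np.sign(X @ w_true)

# Latent real-valued weights; the "hardware" only ever sees their signs.
w_real = rng.normal(scale=0.1, size=8)
lr = 0.01
for _ in range(200):
    w_bin = np.sign(w_real)                  # binary weights used in the forward pass
    pred = np.sign(X @ w_bin)
    errs = pred != y
    # Perceptron-style correction applied to the latent real-valued weights.
    w_real += lr * (X[errs] * y[errs][:, None]).sum(axis=0)

print("accuracy:", np.mean(np.sign(X @ np.sign(w_real)) == y))
```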