Deep Learning Works in Practice. But Does it Work in Theory?
Author summary: In recent years, the application of deep learning has represented a breakthrough in the mass spectrometry (MS) field, improving the assignment of the correct sequence of amino acids from observable MS spectra without prior knowledge, also known ...
Recent successes in deep learning for vision and natural language processing are attributed to larger models, which come with energy-consumption and scalability issues. Current training of digital deep-learning models primarily relies on backpropagation that ...
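As a point of reference, the backpropagation this abstract refers to amounts to applying the chain rule backwards through the network and descending the resulting gradient. Below is a minimal NumPy sketch of one such training loop on a toy two-layer model; the architecture, data, and learning rate are illustrative assumptions, not anything from the work itself.

```python
# Minimal sketch of backpropagation: gradients flow backwards through a
# tiny two-layer network. Toy task and shapes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                  # toy inputs
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # toy binary targets

W1 = rng.normal(scale=0.5, size=(3, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
lr = 0.1

for step in range(200):
    # forward pass
    h = np.tanh(X @ W1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))       # sigmoid output
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    # backward pass: chain rule applied layer by layer
    dlogits = (p - y) / len(X)                # dL/d(pre-sigmoid logits)
    dW2 = h.T @ dlogits
    dh = dlogits @ W2.T
    dW1 = X.T @ (dh * (1 - h ** 2))           # tanh'(z) = 1 - tanh(z)^2

    W1 -= lr * dW1                            # gradient-descent update
    W2 -= lr * dW2

print(f"final loss: {loss:.3f}")
```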
In this master's thesis, multi-agent reinforcement learning is used to teach robots to build a self-supporting structure connecting two points. To accomplish this task, a physics simulator is first designed using linear programming. Then, the task of buildin ...
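For readers unfamiliar with the approach, a static-equilibrium check for a structure can be posed as a linear-programming feasibility problem: find non-negative contact forces that balance gravity. The sketch below, a heavily simplified assumption using scipy.optimize.linprog and a single 2D block, is not the thesis's simulator; it only illustrates the idea.

```python
# Equilibrium as an LP feasibility problem: a block of weight w rests on
# two support points at x = 0 and x = 1, centre of mass at x = cx.
# Unknowns: the two normal forces f0, f1 >= 0 (illustrative assumptions).
import numpy as np
from scipy.optimize import linprog

w, cx = 10.0, 0.7

# Equality constraints:
#   f0 + f1     = w       (vertical force balance)
#   0*f0 + 1*f1 = w * cx  (torque balance about x = 0)
A_eq = np.array([[1.0, 1.0],
                 [0.0, 1.0]])
b_eq = np.array([w, w * cx])

# Zero objective: any feasible non-negative solution means the block stands.
res = linprog(c=[0.0, 0.0], A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 2)
print("statically stable:", res.success, "forces:", res.x)
```

Moving the centre of mass outside the supports (e.g. cx = 1.2) makes the LP infeasible, which is exactly how such a check flags a collapsing configuration.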
Bowers et al. argue that deep neural networks (DNNs) are poor models of biological vision because they often learn to rival human accuracy by relying on strategies that differ markedly from those of humans. We show that this problem is worsening as DNNs ar ...
Explainable AI (XAI) methods aim to describe the decision process of deep neural networks. Early XAI methods produced visual explanations, whereas more recent techniques generate multimodal explanations that include textual information and visual represent ...
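One concrete instance of the early visual explanations mentioned here is gradient saliency: the magnitude of the class score's gradient with respect to the input pixels. Below is a minimal PyTorch sketch; the untrained toy CNN and input size are illustrative assumptions, not the models studied in the paper.

```python
# Vanilla gradient saliency: how much does each input pixel affect the
# predicted class score? The stand-in classifier is an assumption.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # one input image
score = model(x)[0].max()                         # predicted-class score
score.backward()                                  # d(score)/d(pixels)

# The saliency map is the per-pixel magnitude of the input gradient,
# reduced over the colour channels.
saliency = x.grad.abs().max(dim=1).values         # shape (1, 32, 32)
print(saliency.shape)
```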
Epilepsy is one of the most common neurological disorders, characterized by recurrent and unpredictable seizures. Wearable systems can be used to detect the onset of a seizure and notify family members and emergency units for rescue. The majority of ...
The successes of deep learning for semantic segmentation can be attributed, in part, to its scale: a notion that encapsulates the size of these computational architectures and of the labeled datasets they are trained on. These resource requirements hi ...
The recent advance of large language models (LLMs) demonstrates that these large-scale foundation models achieve remarkable capabilities across a wide range of language tasks and domains. The success of the statistical learning approach challenges our unde ...
Abstractive summarization has seen substantial improvements in recent years, mostly due to advances in neural language modeling, language-model pretraining, and the scaling of models and datasets. While large language models generate summaries that are fluent, coherent, ...
The success of self-supervised learning in computer vision and natural language processing has motivated pretraining methods on tabular data. However, most existing tabular self-supervised learning models fail to leverage information across multiple data t ...
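A common pretext task behind such tabular self-supervised pretraining is masked-feature reconstruction: corrupt some cells of each row and train an encoder to recover them from the remaining columns. The PyTorch sketch below is a generic illustration under assumed architecture and mask rate, not the paper's actual method.

```python
# Masked-feature reconstruction on tabular rows: zero out a random subset
# of cells and train encoder + decoder to recover them. All hyperparameters
# and the toy data are illustrative assumptions.
import torch
import torch.nn as nn

n_features, mask_rate = 16, 0.3
encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 64))
decoder = nn.Linear(64, n_features)           # reconstructs the original row
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

x = torch.randn(256, n_features)              # a batch of toy tabular rows
for _ in range(100):
    mask = (torch.rand_like(x) < mask_rate).float()
    x_corrupt = x * (1 - mask)                # zero out the masked cells
    recon = decoder(encoder(x_corrupt))
    # loss only on masked cells: the model must infer them from the rest
    loss = ((recon - x) ** 2 * mask).sum() / mask.sum().clamp(min=1)
    opt.zero_grad(); loss.backward(); opt.step()

print(f"pretraining loss: {loss.item():.4f}")
```

After pretraining, the encoder's representations would typically be reused for a downstream task (e.g. by attaching a small classification head), which is the point of the pretext task.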