Building Word Embeddings for Solving Natural Language Processing
Related publications (151)
This study presents a self-supervised Bayesian Neural Network (BNN) framework using airborne Acoustic Emission (AE) to identify different Laser Powder Bed Fusion (LPBF) process regimes, such as Lack of Fusion, conduction mode, and keyhole, without ground-tr ...
In this paper, we develop a Multi-Task Learning (MTL) model to achieve dense predictions for comic panels to, in turn, facilitate the transfer of comics from one publication channel to another by assisting authors in the task of reconfiguring their narrativ ...
Abstractive summarization has seen big improvements in recent years, mostly due to advances in neural language modeling, language model pretraining, and scaling models and datasets. While large language models generate summaries that are fluent, coherent, ...
EPFL, 2023
Robustness of medical image classification models is limited by their exposure to the candidate disease classes. Generalized zero shot learning (GZSL) aims at correctly predicting seen and unseen classes and most current GZSL approaches have focused on the s ...
Cham, 2023
There is a strong incentive to develop computational pathology models to i) ease the burden of tissue typology annotation from whole slide histological images; ii) transfer knowledge, e.g., tissue class separability from the withheld source domain to the d ...
2023
Text-to-image models, such as Stable Diffusion, can generate high-quality images from simple textual prompts. With methods such as Textual Inversion, it is possible to expand the vocabulary of these models with additional concepts, by learning the vocabula ...
BMVA, 2023
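As a point of reference for how Textual Inversion concepts are typically consumed, the following is a minimal sketch (not taken from the cited paper) that loads a learned concept embedding into a Stable Diffusion pipeline via the Hugging Face diffusers library; the model checkpoint, concept repository, and placeholder token "<cat-toy>" are illustrative choices.

```python
# Illustrative sketch only: loading a Textual Inversion concept into Stable Diffusion
# with Hugging Face diffusers. The model and concept IDs below are examples,
# not the assets used in the cited paper.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a learned concept embedding; this registers a new placeholder token
# (here "<cat-toy>") in the tokenizer and text-encoder vocabulary.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

# The new token can then be used like any other word in a prompt.
image = pipe("A photo of a <cat-toy> on a wooden table").images[0]
image.save("concept_sample.png")
```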
Unsupervised Domain Adaptation Regression (DAR) aims to bridge the domain gap between a labeled source dataset and an unlabelled target dataset for regression problems. Recent works mostly focus on learning a deep feature encoder by minimizing the discrepa ...
IEEE, 2023
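The excerpt cuts off before naming the discrepancy measure; a common choice in feature-level domain adaptation is the Maximum Mean Discrepancy (MMD). The sketch below is a generic PyTorch illustration of an RBF-kernel MMD term between source and target feature batches under that assumption, not the loss used in the cited paper.

```python
# Hypothetical sketch: RBF-kernel Maximum Mean Discrepancy (MMD) between source
# and target feature batches, a common discrepancy term in unsupervised domain
# adaptation. Generic illustration only, not the cited paper's method.
import torch

def rbf_mmd(source: torch.Tensor, target: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """source: (n, d) and target: (m, d) feature matrices from a shared encoder."""
    def kernel(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Pairwise squared Euclidean distances, then a Gaussian kernel.
        d2 = torch.cdist(a, b, p=2).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))

    k_ss = kernel(source, source).mean()
    k_tt = kernel(target, target).mean()
    k_st = kernel(source, target).mean()
    return k_ss + k_tt - 2 * k_st

# Usage: add the discrepancy to the supervised regression loss on source data, e.g.
# total_loss = mse_loss(pred_src, y_src) + lambda_mmd * rbf_mmd(feat_src, feat_tgt)
```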
Vision-Language Pre-training (VLP) has advanced the performance of many vision-language tasks, such as image-text retrieval, visual entailment, and visual reasoning. The pre-training mostly utilizes lexical databases and image queries in English. Previous w ...
Association for Computational Linguistics (ACL), 2023
Current machine learning models for vision are often highly specialized and limited to a single modality and task. In contrast, recent large language models exhibit a wide range of capabilities, hinting at a possibility for similarly versatile models in co ...
Recent transformer language models achieve outstanding results in many natural language processing (NLP) tasks. However, their enormous size often makes them impractical on memory-constrained devices, requiring practitioners to compress them to smaller net ...
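The excerpt does not say which compression technique the paper adopts; as one generic baseline for fitting a transformer onto memory-constrained devices, the sketch below applies PyTorch post-training dynamic quantization to the linear layers of a pretrained BERT checkpoint (the model name is chosen for illustration).

```python
# Illustrative sketch: post-training dynamic quantization of a transformer's
# linear layers to INT8 with PyTorch. A generic compression baseline, not
# necessarily the method proposed in the cited paper.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Replace nn.Linear weights with INT8 representations; activations are
# quantized dynamically at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# The quantized model keeps the same forward interface but has a smaller
# memory footprint, at some cost in accuracy.
```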