
Prakhar Gupta

This person is no longer with EPFL

Related publications (7)

Natural Language Processing (NLP) driven categorisation and detection of discourse in historical US patents

Jérôme Baudry, Nicolas Christophe Chachereau, Bhargav Srinivasa Desikan, Prakhar Gupta

Patents have traditionally been used in the history of technology as an indication of the thinking process of the inventors, of the challenges or “reverse salients” they faced, or of the social groups influencing the construction of technology. More recent ...
2022

Learning computationally efficient static word and sentence representations

Prakhar Gupta

Most Natural Language Processing (NLP) algorithms involve, in one way or another, the use of distributed vector representations of linguistic units (primarily words and sentences), also known as embeddings. These embeddings come in two flavours, namely, sta ...
EPFL, 2021

Lightweight Cross-Lingual Sentence Representation Learning

Martin Jaggi, Prakhar Gupta, Zhuoyuan Mao

Large-scale models for learning fixed-dimensional cross-lingual sentence representations, such as LASER (Artetxe and Schwenk, 2019b), lead to significant improvements in performance on downstream tasks. However, further increases and modifications based on such ...
Association for Computational Linguistics (ACL), 2021

Design Patterns for Resource-Constrained Automated Deep-Learning Methods

Prakhar Gupta

We present an extensive evaluation of a wide variety of promising design patterns for automated deep-learning (AutoDL) methods, organized according to the problem categories of the 2019 AutoDL challenges, which set the task of optimizing both model accura ...
2020

Better Word Embeddings by Disentangling Contextual n-Gram Information

Martin Jaggi, Matteo Pagliardini, Prakhar Gupta

Pre-trained word vectors are ubiquitous in Natural Language Processing applications. In this paper, we show how training word embeddings jointly with bigram and even trigram embeddings results in improved unigram embeddings. We claim that training word em ...
2019

Learning Word Vectors for 157 Languages

Prakhar Gupta, Edouard Grave

Distributed word representations, or word vectors, have recently been applied to many tasks in natural language processing, leading to state-of-the-art performance. A key ingredient to the successful application of these representations is to train them on ...
2018

Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features

Martin Jaggi, Matteo Pagliardini, Prakhar Gupta

The recent tremendous success of unsupervised word embeddings in a multitude of applications raises the obvious question of whether similar methods could be derived to improve embeddings (i.e. semantic representations) of word sequences as well. We present a simpl ...
2017
