Laboratoire d'Histoire des Sciences et des Techniques
Associated publications (41)
For a long time, natural language processing (NLP) has relied on generative models with task-specific, manually engineered features. Recently, there has been a resurgence of interest in neural networks in the machine learning community, obtaining state ...
This article decomposes the R&D-patent relationship at the industry level to shed light on the sources of the worldwide surge in patent applications. The empirical analysis is based on a unique data set that includes five patent indicators computed for 18 ...
This paper describes a new patent-based indicator of inventive activity. The indicator is based on counting all the priority patent applications filed by a country's inventors, regardless of the patent office in which the application is filed, and can ther ...
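The counting rule described above can be made concrete with a toy illustration. The sketch below uses entirely hypothetical application records (the fields and country codes are assumptions, not the paper's dataset) and simply tallies priority filings by the inventors' country, ignoring the receiving patent office.

```python
# An illustrative sketch of the counting idea (toy data, not the paper's
# dataset): count priority patent applications by the country of the
# inventors, regardless of which patent office received the filing.
from collections import Counter

# Hypothetical records: (application id, inventor country, filing office, is_priority)
applications = [
    ("A1", "CH", "EPO", True),
    ("A2", "CH", "USPTO", True),
    ("A3", "CH", "EPO", False),   # not a priority filing, excluded
    ("A4", "FR", "EPO", True),
]

priority_counts = Counter(
    country for _, country, _, is_priority in applications if is_priority
)
print(priority_counts)  # Counter({'CH': 2, 'FR': 1})
```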
This paper seeks to understand how motives to patent affect the use of the patent portfolio with a particular focus on motives aimed at the monetization of intellectual property. The analysis relies on data from an international survey conducted by the Eur ...
We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is a ...
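To make the idea of one architecture serving several labelling tasks more tangible, here is a minimal sketch, not the paper's actual model: a single shared word-embedding table feeds small task-specific linear heads for part-of-speech tagging and named entity recognition. The vocabulary, dimensions, and label counts are illustrative assumptions.

```python
# A minimal multi-task sketch: one shared embedding lookup, one linear head
# per task, so several sequence-labelling tasks reuse the same word features.
import numpy as np

rng = np.random.default_rng(0)

vocab = {"the": 0, "cat": 1, "sat": 2, "<unk>": 3}   # toy vocabulary
emb_dim = 8
embeddings = rng.normal(scale=0.1, size=(len(vocab), emb_dim))  # shared layer

# One linear head per task, mapping the shared embedding to task labels.
task_heads = {
    "pos": rng.normal(scale=0.1, size=(emb_dim, 5)),   # 5 hypothetical POS tags
    "ner": rng.normal(scale=0.1, size=(emb_dim, 3)),   # 3 hypothetical NER tags
}

def predict(tokens, task):
    """Return one label index per token for the requested task."""
    ids = [vocab.get(t, vocab["<unk>"]) for t in tokens]
    feats = embeddings[ids]              # shared representation
    scores = feats @ task_heads[task]    # task-specific scoring
    return scores.argmax(axis=1)

print(predict(["the", "cat", "sat"], "pos"))
print(predict(["the", "cat", "sat"], "ner"))
```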
Word embedding is a feature learning technique which aims at mapping words from a vocabulary into vectors of real numbers in a low-dimensional space. By leveraging large corpora of unlabeled text, such continuous space representations can be computed for c ...
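As a rough illustration of mapping words to low-dimensional real-valued vectors from unlabeled text (a generic construction, not the specific method of any paper listed here), one can build a word-word co-occurrence matrix over a toy corpus and reduce it with a truncated SVD. The corpus, window size, and dimensionality below are assumptions for the example.

```python
# A minimal sketch: co-occurrence counts from unlabeled text, then SVD to get
# a low-dimensional embedding per word.
import numpy as np

corpus = ["the cat sat on the mat", "the dog sat on the rug"]
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a +/-2 word window.
window = 2
counts = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                counts[idx[w], idx[sent[j]]] += 1

# Low-dimensional embeddings from the top singular vectors.
dim = 3
u, s, _ = np.linalg.svd(counts, full_matrices=False)
embeddings = u[:, :dim] * s[:dim]
print({w: embeddings[idx[w]].round(2) for w in ["cat", "dog"]})
```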
Word embeddings resulting from neural language models have been shown to be a great asset for a large variety of NLP tasks. However, such an architecture can be difficult and time-consuming to train. Instead, we propose to drastically simplify the word embe ...
Recently, there has been a lot of effort to represent words in continuous vector spaces. Those representations have been shown to capture both semantic and syntactic information about words. However, distributed representations of phrases remain a challeng ...
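One common baseline that phrase-level representations are measured against is simply averaging the vectors of the phrase's words. The sketch below shows that baseline only (it is not the method of the paper above); the word vectors are random stand-ins for embeddings learned elsewhere.

```python
# A minimal sketch: represent a phrase by averaging its word vectors.
import numpy as np

rng = np.random.default_rng(1)
word_vecs = {w: rng.normal(size=4) for w in ["new", "york", "city"]}  # toy vectors

def phrase_vector(words):
    """Average the word vectors of a phrase (unknown words are skipped)."""
    vecs = [word_vecs[w] for w in words if w in word_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(4)

print(phrase_vector(["new", "york"]).round(2))
```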
A popular application in Natural Language Processing (NLP) is Sentiment Analysis (SA), i.e., the task of extracting contextual polarity from a given text. The social network Twitter provides an immense amount of text (called tweets) generated by users ...
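To illustrate what "extracting contextual polarity" means in its simplest form (this is a toy lexicon baseline, unrelated to the paper's actual model), a tweet can be scored by summing hand-assigned word polarities. The lexicon and example tweets are purely illustrative.

```python
# A minimal sentiment-analysis sketch: classify a tweet by summed word polarity.
POLARITY = {"love": 1, "great": 1, "good": 1, "hate": -1, "awful": -1, "bad": -1}

def tweet_sentiment(text: str) -> str:
    """Label a tweet positive, negative, or neutral from a tiny polarity lexicon."""
    score = sum(POLARITY.get(w.strip(".,!?").lower(), 0) for w in text.split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(tweet_sentiment("I love this phone, great battery!"))   # positive
print(tweet_sentiment("Awful service, I hate waiting."))      # negative
```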
Word embeddings resulting from neural language models have been shown to be successful for a large variety of NLP tasks. However, such an architecture can be difficult and time-consuming to train. Instead, we propose to drastically simplify the word embed ...