Augmenting large language models with chemistry tools
Related publications (44)
Beliefs inform the behaviour of forward-thinking agents in complex environments. Recently, sequential Bayesian inference has emerged as a mechanism to study belief formation among agents adapting to dynamical conditions. However, we lack critical theory to ...
The structural characterization of supported molecular catalysts is challenging due to the low density of active sites and the presence of several organic/organometallic surface groups resulting from the often complex surface chemistry associated with supp ...
A Consideration of Typologies for Housing the Sick as a Spatial Manifestation of Knowledge. ...
EPFL Press, 2022
Under resource constraints, LLMs are usually fine-tuned with additional knowledge using Parameter-Efficient Fine-Tuning (PEFT), typically with Low-Rank Adaptation (LoRA) modules. In fact, LoRA injects a new set of small trainable matrices to adapt an LLM to a new ...
2024
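The abstract above describes the core LoRA mechanism: a frozen pretrained weight matrix is adapted by adding a product of two small trainable matrices. A minimal sketch of that idea, assuming a single linear layer (an illustration only, not the paper's implementation; all names here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight matrix W of a linear layer, and a low rank r << d.
d_out, d_in, r = 8, 8, 2
W = rng.standard_normal((d_out, d_in))      # frozen during fine-tuning
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init

def lora_forward(x):
    """Frozen path W @ x plus the low-rank adapter path B @ (A @ x)."""
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)

# With B initialized to zero, the adapted layer starts out identical
# to the pretrained one; training then updates only A and B.
assert np.allclose(lora_forward(x), W @ x)

# The adapter trains r*(d_in + d_out) parameters instead of d_in*d_out.
print(A.size + B.size, "trainable vs", W.size, "frozen parameters")
```

Because `r` is small, the number of trainable parameters grows linearly rather than quadratically in the layer width, which is what makes LoRA attractive under resource constraints.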
The photochemical thiol-ene reaction is an efficient method for rapid and chemoselective formation of thioether linkages under mild conditions. It has found widespread use in small-molecule synthesis as well as peptide and protein chemistry. While high-thr ...
WILEY, 2023
By taking a journey through the events that happened during Professor David A. Evans' lifetime in the context of chemical synthesis and drug discovery, this in-focus article reflects upon Professor Evans' lifelong scientific and pedagogical impacts on the ...
AMER CHEMICAL SOC, 2022
In this dissertation, we propose multiple methods to improve transfer learning for pretrained language models (PLMs). Broadly, transfer learning is a powerful technique in natural language processing, where a language model is first pre-trained on a data-r ...
Large language models (LLMs) have been leveraged for several years now, obtaining state-of-the-art performance in recognizing entities from modern documents. For the last few months, the conversational agent ChatGPT has "prompted" a lot of interest in the ...
Large language models (LLMs) have demonstrated human-level performance on a vast spectrum of natural language tasks. However, it is largely unexplored whether they can better internalize knowledge from structured data, such as a knowledge graph, or from ...