Kimi Summary
This paper proposes a new research task: translation between drug molecules and indications. An indication refers to the medical use of a drug for treating a particular disease, condition, or symptom. The paper first reviews the importance and cost of drug discovery and the potential of Large Language Models (LLMs) for translating between molecules and their textual descriptions. The authors note that, although LLMs perform well on Natural Language Processing (NLP) tasks, translation between drug molecules and indications remains relatively underexplored.
The paper then introduces the Simplified Molecular-Input Line-Entry System (SMILES) for representing drug molecules and examines the potential of LLMs on this task. The authors define two subtasks: generating indications from a drug's SMILES string (drug-to-indication) and generating the corresponding SMILES string from a set of indications (indication-to-drug).
The study uses MolT5 models based on the T5 architecture and evaluates them on two public datasets, DrugBank and ChEMBL. The experiments show that larger MolT5 models perform better across all configurations and tasks, while fine-tuning the models degrades performance.
To further improve performance, the authors also pretrain the smallest MolT5 model from scratch with a custom tokenizer. On the drug-to-indication task, this custom model performs better on DrugBank than on ChEMBL, which may be attributed to a stronger signal between drug indications and SMILES strings in the DrugBank dataset.
The paper also discusses the limitations of the current approach and outlines directions for future work, including improved data representations, larger models and more data, and possible architectural improvements. The authors argue that addressing these issues could significantly improve performance on the task and ultimately facilitate the discovery of novel drugs.
Finally, the paper presents the experimental methodology, including the data sources, models, and feature extraction techniques used, and discusses data availability.
The contributions of this paper are:
Proposing a new task: translation between drug molecules and indications.
Conducting extensive experiments with T5-based LLMs on two datasets (DrugBank and ChEMBL) and discussing how addressing the current bottlenecks could significantly improve performance on the task.
Overall, this paper offers a new perspective and an experimental foundation for research on drug discovery and molecular design with AI and LLMs.
Abstract
A drug molecule is a substance that changes an organism’s mental or physical state. Every approved drug has an indication, which refers to the therapeutic use of that drug for treating a particular medical condition. While Large Language Models (LLMs), a generative Artificial Intelligence (AI) technique, have recently demonstrated effectiveness in translating between molecules and their textual descriptions, there remains a gap in research regarding their application in facilitating the translation between drug molecules and indications (which describe the disease, condition, or symptoms for which a drug is used), or vice versa. Addressing this challenge could greatly benefit the drug discovery process. The capability of generating a drug from a given indication would allow for the discovery of drugs targeting specific diseases or targets and ultimately provide patients with better treatments. In this paper, we first propose a new task, the translation between drug molecules and corresponding indications, and then test existing LLMs on this new task. Specifically, we consider nine variations of the T5 LLM and evaluate them on two public datasets obtained from ChEMBL and DrugBank. Our experiments show the early results of using LLMs for this task and provide a perspective on the state-of-the-art. We also emphasize the current limitations and discuss future work that has the potential to improve the performance on this task. The creation of molecules from indications, or vice versa, will allow for more efficient targeting of diseases and significantly reduce the cost of drug discovery, with the potential to revolutionize the field of drug discovery in the era of generative AI.
Introduction
Drug discovery is a costly process1 that identifies chemical entities with the potential to become therapeutic agents2. Due to its clear benefits and significance to health, drug discovery has become an active area of research, with researchers attempting to automate and streamline drug discovery3,4. Approved drugs have indications, which refer to the use of that drug for treating a particular disease, condition, or symptoms5. They specify whether the drug is intended for treatment, prevention, mitigation, cure, relief, or diagnosis of that particular ailment. The creation of molecules from indications, or vice versa, will allow for more efficient targeting of diseases and significantly reduce the cost of drug discovery, with the potential to revolutionize the field.
Large Language Models (LLMs) have become one of the major directions of generative Artificial Intelligence (AI) research, with highly performant models like GPT-36, GPT-47, LLaMA8, and Mixtral9 developed in recent years and services like ChatGPT reaching over 100 million users10,11. LLMs utilize deep learning methods to perform various Natural Language Processing (NLP) tasks, such as text generation12,13 and neural machine translation14,15. The capabilities of LLMs are due in part to their training on large-scale textual data, making the models familiar with a wide array of topics. LLMs have also demonstrated promising performance in a variety of tasks across different scientific fields16,17,18,19. Since LLMs work with textual data, the first step is usually finding a way to express a problem in terms of text or language.
An image or a diagram is a typical way to present a molecule, but methods for obtaining textual representations of molecules do exist. One such method is the Simplified Molecular-Input Line-Entry System (SMILES)20, which is often regarded as a language for describing molecules. As SMILES strings represent drugs in textual form, we can assess the viability of LLMs in translation between drug molecules and their indications. In this paper, we consider two tasks: drug-to-indication and indication-to-drug, where we seek to generate indications from the SMILES strings of drugs, and SMILES strings from possible indications, respectively. Translation between drugs and their corresponding indications could ultimately help in finding treatments for diseases that currently have none.
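To make the textual nature of SMILES concrete, the short sketch below parses a standard textbook example (aspirin) with RDKit; it is an illustration only and not part of the experiments described in this paper.

```python
# Minimal illustration of SMILES as a textual molecular representation, using RDKit.
from rdkit import Chem

aspirin = "CC(=O)OC1=CC=CC=C1C(=O)O"  # acetylsalicylic acid (textbook example)
mol = Chem.MolFromSmiles(aspirin)     # returns None if the string is not valid SMILES
print(mol.GetNumAtoms())              # 13 heavy atoms
print(Chem.MolToSmiles(mol))          # canonical form: CC(=O)Oc1ccccc1C(=O)O
```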
Research efforts have attempted de-novo drug discovery through the use of AI, including graph neural networks21,22 and, more recently, forms of generative AI23. There are numerous existing efforts for molecular design and drug discovery using AI, such as GPT-based models using scaffold SMILES strings accompanied with desired properties of the output molecule24. Others have used T5 architecture for various tasks, such as reaction prediction25 and converting between molecular captions and SMILES strings26. Additional work in the field is centered around the generation of new molecules from gene expression signatures using generative adversarial networks27, training recurrent neural networks on known compounds and their SMILES strings, then fine-tuning for specific agonists of certain receptors28, or using graph neural networks to predict drugs and their corresponding indications from SMILES29. As such, there is an established promise in using AI for drug discovery and molecular design. Efforts to make data more friendly for AI generation of drugs also include the development of the Self-Referencing Embedded Strings (SELFIES)30, which can represent every valid molecule. The reasoning is that such a format will allow generative AI to construct valid molecules while maintaining crucial structural information in the string. The collection of these efforts sets the stage for our attempt at generating drug indications from molecules.
With advancements in medicinal chemistry leading to an increasing number of drugs designed for complex processes, it becomes crucial to comprehend the distinctive characteristics and subtle nuances of each drug. In this direction, researchers have released many resources, including datasets that bridge medicines and chemical ingredients like TCMBank31,32, models for generating high-quality molecular representations to facilitate Computer-Aided Drug Design (CADD)33, and models for drug-drug interactions34,35. This has also led to the development of molecular fingerprints, such as the Morgan fingerprint36 and the MAP4 fingerprint37, which use unique algorithms to vectorize the characteristics of a molecule. Fingerprint representations are fast to compute and retain many of a molecule's features38. Molecular fingerprinting methods commonly receive input in the form of SMILES strings, which serve as a linear notation for representing molecules in their structural forms, taking into account the different atoms present, the bonds between atoms, and other key characteristics, such as branches, cyclic structures, and aromaticity20. Since SMILES is a universal method of communicating the structure of different molecules, it is appropriate to use SMILES strings for generating fingerprints. Mol2vec39 feeds Morgan fingerprints to the Word2vec40 algorithm by converting molecules into their textual representations. Bidirectional Encoder Representations from Transformers (BERT)41-based models have also been used for obtaining molecular representations, including MolBERT42 and ChemBERTa43, which are pretrained BERT instances that take SMILES strings as input and perform downstream tasks on molecular representation and molecular property prediction, respectively. Other efforts in using AI for molecular representations include generating novel molecular graphs through the use of reinforcement learning, decomposition, and reassembly44 and the prediction of 3D representations of small molecules based on their 2D graphical counterparts45.
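As a brief illustration of fingerprinting from SMILES (a sketch with RDKit; the radius and bit-length are common defaults, not settings taken from the cited works):

```python
# Sketch: Morgan fingerprints from SMILES strings and their Tanimoto similarity (RDKit).
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

smi_a, smi_b = "CCO", "CCN"  # ethanol vs. ethylamine, toy inputs
fp_a = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi_a), 2, nBits=2048)
fp_b = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi_b), 2, nBits=2048)
print(DataStructs.TanimotoSimilarity(fp_a, fp_b))  # similarity in [0, 1]
```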
In this paper, we evaluate the capabilities of MolT5, a T5-based model, in translating between drugs and their indications through the two tasks, drug-to-indication and indication-to-drug, using the drug data from DrugBank and ChEMBL. The drug-to-indication task utilizes SMILES strings for existing drugs as input, with the matching indications of the drug as the target output. The indication-to-drug task takes the set of indications for a drug as input and seeks to generate the corresponding SMILES string for a drug that treats the listed conditions.
We employ all available MolT5 model sizes for our experiments and evaluate them separately across the two datasets. Additionally, we perform the experiments under three different configurations:
1. Evaluation of the baseline models on the entire available dataset
2. Evaluation of the baseline models on 20% of the dataset
3. Fine-tuning the models on 80% of the dataset followed by evaluation on the 20% subset
We found that larger MolT5 models outperformed the smaller ones across all configurations and tasks. It should also be noted that fine-tuning MolT5 models has a negative impact on the performance.
Following these preliminary experiments, we train the smallest available MolT5 model from scratch using a custom tokenizer. This custom model performed better on DrugBank data than on ChEMBL data for the drug-to-indication task, perhaps due to a stronger signal between the drug indications and SMILES strings in the DrugBank dataset, owing to the level of detail in its indication descriptions. Fine-tuning the custom model on 80% of either dataset did not degrade performance for either task, and some metrics saw improvement due to fine-tuning. Overall, however, fine-tuning for the indication-to-drug task did not consistently improve performance, which holds for both the ChEMBL and DrugBank datasets.
While the performance of the custom tokenizer approach is still not satisfactory, there is promise in using a larger model and having access to more data. With a wealth of high-quality data for training models on translation between drugs and their indications, it may become possible to improve performance and facilitate novel drug discovery with LLMs.
In this paper, we make the following contributions:
1. We introduce a new task: translation between drug molecules and indications.
2. We conduct various experiments with T5-based LLMs and two datasets (DrugBank and ChEMBL), considering 16 evaluation metrics across all experiments. In addition, we discuss the current bottlenecks that, if addressed, have the potential to significantly improve performance on the task.
Results
Evaluation of MolT5 models
We performed initial experiments using MolT5 models from Hugging Face (model links: https://huggingface.co/laituan245/molt5-small/tree/main, https://huggingface.co/laituan245/molt5-base/tree/main, https://huggingface.co/laituan245/molt5-large/tree/main). MolT5 offers three model sizes and fine-tuned models of each size, which support each task of our experiments. For experiments generating drug indications from SMILES strings (drug-to-indication), we used the fine-tuned MolT5-smiles-to-caption models, and for generating SMILES strings from drug indications (indication-to-drug), we used the MolT5-caption-to-smiles models. For each of our tables, we use the following flags: FT (denotes experiments where we fine-tuned the models on 80% of the dataset and evaluated on the remaining 20% test subset), SUB (denotes experiments where the models are evaluated solely on the 20% test subset), and FULL (denotes experiments evaluating the models on the entirety of each dataset).
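For reference, a baseline evaluation run can be sketched as follows; the fine-tuned checkpoint name is assumed to follow the laituan245/molt5-* naming on Hugging Face and may need adjusting, and the generation settings are illustrative rather than the exact ones we used.

```python
# Sketch: drug-to-indication style inference with a MolT5 checkpoint from Hugging Face.
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = "laituan245/molt5-large-smiles2caption"  # assumed checkpoint name
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # toy input, not drawn from DrugBank or ChEMBL
inputs = tokenizer(smiles, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```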
Table 1 Evaluation metrics used in the experiments.
For evaluating drug-to-indication, we employ the natural language generation metrics BLEU46, ROUGE54,55,56, and METEOR57, as well as the Text2Mol53 metric, which computes similarity scores for SMILES-indication pairs. For evaluating indication-to-drug, we measure exact SMILES string matches, Levenshtein distance47, SMILES BLEU scores, the Text2Mol similarity metric, three molecular fingerprint metrics (MACCS48,49, RDK48,50, and Morgan FTS48,51, where FTS stands for fingerprint Tanimoto similarity48), and the proportion of returned SMILES strings that are valid molecules. The final metric for evaluating SMILES generation is the Fréchet ChemNet Distance (FCD), which measures the distance between two distributions of molecules based on their SMILES strings52. Table 1 presents both drug-to-indication and indication-to-drug metrics, including their descriptions, values, and supported intervals.
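To illustrate how several of the indication-to-drug metrics in Table 1 can be computed for a single ground-truth/generated pair, the sketch below uses RDKit and the python-Levenshtein package; our actual evaluation follows the MolT5 scripts and may differ in details.

```python
# Sketch: exact match, Levenshtein distance, validity, and MACCS FTS for one SMILES pair.
from rdkit import Chem, DataStructs
from rdkit.Chem import MACCSkeys
import Levenshtein  # pip install python-Levenshtein

def pair_metrics(true_smiles: str, pred_smiles: str) -> dict:
    m_true, m_pred = Chem.MolFromSmiles(true_smiles), Chem.MolFromSmiles(pred_smiles)
    metrics = {
        "exact_match": true_smiles == pred_smiles,
        "levenshtein": Levenshtein.distance(true_smiles, pred_smiles),
        "valid": m_pred is not None,  # validity: RDKit can parse the generated string
    }
    if m_true is not None and m_pred is not None:
        metrics["maccs_fts"] = DataStructs.TanimotoSimilarity(
            MACCSkeys.GenMACCSKeys(m_true), MACCSkeys.GenMACCSKeys(m_pred))
    return metrics

print(pair_metrics("CC(=O)Oc1ccccc1C(=O)O", "CC(=O)Oc1ccccc1C(=O)N"))
```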
Table 2 lists four examples of inputs and our model outputs for both drug-to-indication and indication-to-drug tasks using the large MolT5 model and ChEMBL data. Molecular validity is determined using RDKit (https://www.rdkit.org/docs/index.html), an open-source toolkit for cheminformatics, with the reason for invalidity given. Indication quality is determined by the Text2Mol string similarity between the ground-truth and generated indications. We can observe that the model could output valid molecules, expressed as SMILES strings, for a given indication, and meaningful indications, such as cancer, for a given molecule. However, there are some misspelling issues in the generated indications due to the small size of the T5 model. We hypothesize that LLMs with more parameters could significantly improve the validity of the generated molecules and indications.
Table 2 First four rows: example SMILES strings from the indication-to-drug task; Last four rows: example MolT5 indication generations from the drug-to-indication task.
Table 3 DrugBank drug-to-indication results.
Table 4 ChEMBL drug-to-indication results.
Tables 3 and 4 show the results of MolT5 drug-to-indication experiments on DrugBank and ChEMBL data, respectively. Larger models tended to perform better across all metrics for each experiment. Across almost all metrics for the drug-to-indication task, on both the DrugBank and ChEMBL datasets, the models performed best on the 20% subset data. At the same time, both the subset and full-dataset evaluations yielded better results than the fine-tuning experiments. As MolT5 models are trained on molecular captions, fine-tuning using indications could introduce noise and weaken the signal between input and target text. The models performed better on DrugBank data than on ChEMBL data, which may be due to the level of detail provided by DrugBank for its drug indications.
Table 5 DrugBank indication-to-drug results.
Table 6 ChEMBL indication-to-drug results.
Tables 5 and 6 show the results of MolT5 indication-to-drug experiments on DrugBank and ChEMBL data, respectively. The tables indicate that fine-tuning the models on the new data worsens performance, reflected in FT experiments yielding worse results than SUB or FULL experiments. Also, larger models tend to perform better across all metrics for each experiment.
In our drug-to-indication and indication-to-drug experiments, we see that fine-tuning the models causes them to perform worse across all metrics. Additionally, larger models perform better on our tasks. However, in our custom tokenizer experiments, we pretrain MolT5-Small without the additional SMILES-to-caption or caption-to-SMILES fine-tuning. By fine-tuning this custom pretrained model on our data for the drug-to-indication and indication-to-drug tasks, we aim to achieve improved results.
Evaluation of custom tokenizer
Table 7 Results for MolT5 augmented with custom tokenizer, drug-to-indication.
Table 8 Results for MolT5 augmented with custom tokenizer, indication-to-drug.
Tables 7 and 8 show the evaluation of MolT5 pretrained with the custom tokenizer on the drug-to-indication and indication-to-drug tasks, respectively. For drug-to-indication, the model performed better on the DrugBank dataset, reflected across all metrics. This performance difference may be due to a stronger signal between drug indications and SMILES strings in the DrugBank dataset, as its indication descriptions contain more detail. Fine-tuning the model on 80% of either dataset did not worsen drug-to-indication performance as it did in the baseline results, and some metrics showed improvement. The results for indication-to-drug are more mixed: the model does not consistently perform better on either dataset, and fine-tuning affects the evaluation metrics inconsistently.
Discussion
In this paper, we proposed a novel task of translating between drugs and indications, considering both drug-to-indication and indication-to-drug subtasks. We focus on generating indications from the SMILES strings of existing drugs and generating SMILES strings from sets of indications. Our experiments are the first attempt at tackling this problem. After conducting experiments with various model configurations and two datasets, we hypothesized potential issues that need further work. We believe that properly addressing these issues could significantly improve the performance of the proposed tasks.
The signal between SMILES strings and indications is poor. In the original MolT5 task (translation between molecules and their textual descriptions), "similar" SMILES strings often had similar textual descriptions. In the drug-to-indication and indication-to-drug tasks, similar SMILES strings might have completely different textual descriptions because they are different drugs, and their indications also differ. The converse holds as well: drugs with very different SMILES strings may have similar indications. The lack of a direct relationship between drugs and indications makes it hard to achieve high performance on the proposed tasks. We hypothesize that introducing an intermediate representation that drugs (or indications) map to may improve performance. For example, mapping a SMILES string to its caption (the MolT5 task) and then mapping the caption to an indication may be a potential future direction of research.
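A minimal sketch of this two-stage idea is shown below; the first stage reuses an existing SMILES-to-caption checkpoint (name assumed), while the second-stage caption-to-indication model is a hypothetical placeholder that would still need to be trained.

```python
# Sketch of the hypothesized two-stage mapping: SMILES -> caption -> indication.
from transformers import T5Tokenizer, T5ForConditionalGeneration

def translate(model_name: str, text: str) -> str:
    tok = T5Tokenizer.from_pretrained(model_name)
    model = T5ForConditionalGeneration.from_pretrained(model_name)
    ids = tok(text, return_tensors="pt")
    out = model.generate(**ids, num_beams=5, max_length=256)
    return tok.decode(out[0], skip_special_tokens=True)

caption = translate("laituan245/molt5-large-smiles2caption", "CC(=O)Oc1ccccc1C(=O)O")
# The second checkpoint is hypothetical: no public caption-to-indication model exists yet.
indication = translate("path/to/caption-to-indication-model", caption)
```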
The signal between drugs and indications is not the only issue: the data is also scarce. Since we do not consider random molecules and their textual descriptions but drugs and their indications, the available data is limited by the number of drugs. In the case of both ChEMBL and DrugBank datasets, the number of drug-indication pairs was under 10000, with the combined size also being under 10000. Finding ways to enrich data may help establish a signal between SMILES strings and indications and could be a potential future avenue for exploration.
Overall, the takeaway from our experiments is that larger models tend to perform better. By using a larger model and having more data (or data that has a stronger signal between drug indications and SMILES strings), we may be able to successfully translate between drug indications and molecules (i.e., SMILES strings) and ultimately facilitate novel drug discovery.
We note that our experiments did not involve human evaluation of the generated indications and relied entirely on automated metrics. We acknowledge that such metrics may not correlate well with human judgment58,59,60. At the same time, manually reviewing thousands of indications would have been expensive and labor-intensive. Future work could consider incorporating humans in the loop or using LLMs to assess the quality of generated indications.
Experiments with other models and model architectures can be another avenue for exploration. Some potential benefits may include better performance, lower latency, and improved computational complexity. As an example, our current method uses the transformer architecture, which has an overall time complexity of O(n^2·d + n·d^2) (where n is the sequence length and d is the embedding dimension), with O(n^2·d) being the time complexity of the attention layer alone. On the other hand, State Space Models (SSMs), such as Mamba61, scale linearly with the sequence length.
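As a rough, back-of-envelope illustration of this gap (example values only), the ratio between the quadratic attention term and a linear-scaling term grows with the sequence length:

```python
# Illustrative scaling comparison: quadratic attention cost vs. linear SSM-style cost.
d = 768                                    # example embedding dimension
for n in (256, 1024, 4096):                # example sequence lengths
    attention_ops = n * n * d              # ~n^2 * d term of a transformer layer
    linear_ops = n * d                     # ~n * d scaling of an SSM-style layer
    print(n, attention_ops // linear_ops)  # ratio equals n: 256, 1024, 4096
```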
Methods
Figure 1
Overview of the methodology of the experiments: drug data is compiled from ChEMBL and DrugBank and utilized as input for MolT5. Our experiments involved two tasks: drug-to-indication and indication-to-drug. For the drug-to-indication task, SMILES strings of existing drugs were used as input, producing drug indications as output. Conversely, for the indication-to-drug task, drug indications of the same set of drugs were the input, resulting in SMILES strings as output. Additionally, we augmented MolT5 with a custom tokenizer in pretraining and evaluated the resulting model on the same tasks.
This section describes the dataset, analysis methods, ML models, and feature extraction techniques used in this study. Figure 1 shows the flowchart of the process. We adjust the workflow of existing models for generating molecular captions to instead generate indications for drugs. By training LLMs on the translation between SMILES strings and drug indications, we endeavor to one day be able to create novel drugs that treat medical conditions.
Data
Our data comes from two databases, DrugBank62 and ChEMBL63, which we selected due to the different ways they represent drug indications. DrugBank gives in-depth descriptions of how each drug treats patients, while ChEMBL provides a list of medical conditions each drug treats. Table 9 outlines the size of each dataset, as well as the lengths of the SMILES strings and indication texts. In the case of DrugBank, we had to request access to use the drug indication and SMILES data. The ChEMBL data was available without request but required setting up a local database to query and parse the drug indications and SMILES strings into a workable format. Finally, we prepared a pickle file for each database to allow for metric calculation following the steps presented in MolT526.
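For reference, the indication and SMILES data can be pulled from a local ChEMBL dump roughly as follows; this sketch assumes the standard ChEMBL SQLite release (table and column names can vary between versions) and is not the exact query used in our pipeline.

```python
# Sketch: joining drug indications with canonical SMILES from a local ChEMBL SQLite dump.
import sqlite3

conn = sqlite3.connect("chembl.db")  # path to the locally installed ChEMBL database
rows = conn.execute("""
    SELECT cs.canonical_smiles, di.mesh_heading
    FROM drug_indication AS di
    JOIN compound_structures AS cs ON cs.molregno = di.molregno
""").fetchall()
print(len(rows), rows[:2])
```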
Table 9 Dataset Details.
Models
We conducted initial experiments using the MolT5 model, which is based on the T5 architecture26. The T5 basis gives the model a textual modality through pretraining on the natural-language Colossal Clean Crawled Corpus (C4)64, while pretraining on 100 million SMILES strings from the ZINC-15 dataset65 gives it a molecular modality.
In our experiments, we utilized fine-tuned versions of the available MolT5 models: SMILES-to-caption, fine-tuned for generating molecular captions from SMILES strings, and caption-to-SMILES, fine-tuned for generating SMILES strings from molecular captions. However, we seek to evaluate the model’s capacity to translate between drug indications and SMILES strings. Thus, we use drug indications in the place of molecular captions, yielding our two tasks: drug-to-indication and indication-to-drug.
The process of our experiments begins with evaluating the baseline MolT5 model for each task on the entirety of the available data (3004 pairs for DrugBank, 6127 pairs for ChEMBL), on a 20% subset of the data (601 pairs for DrugBank, 1225 pairs for ChEMBL), and then fine-tuning the model on 80% (2403 pairs for DrugBank, 4902 pairs for ChEMBL) of the data and evaluating on that same 20% subset.
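A condensed sketch of the fine-tuning configuration is given below, using an 80/20 split and the Hugging Face Seq2SeqTrainer; the hyperparameters and the toy data are illustrative placeholders rather than our exact settings.

```python
# Sketch: 80/20 split and fine-tuning a MolT5 checkpoint on (SMILES, indication) pairs.
from sklearn.model_selection import train_test_split
from transformers import (T5Tokenizer, T5ForConditionalGeneration,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

pairs = [("CCO", "example indication text")] * 10  # placeholder; load real DrugBank/ChEMBL pairs here
train_pairs, test_pairs = train_test_split(pairs, test_size=0.2, random_state=42)

tok = T5Tokenizer.from_pretrained("laituan245/molt5-small")
model = T5ForConditionalGeneration.from_pretrained("laituan245/molt5-small")

def encode(smiles, indication):
    # Tokenize the SMILES input and attach the tokenized indication as labels.
    x = tok(smiles, truncation=True, padding="max_length", max_length=512)
    x["labels"] = tok(indication, truncation=True, padding="max_length", max_length=256)["input_ids"]
    return x

train_ds = [encode(s, i) for s, i in train_pairs]

args = Seq2SeqTrainingArguments(output_dir="molt5-drug2ind", num_train_epochs=3,
                                per_device_train_batch_size=8)  # illustrative values
Seq2SeqTrainer(model=model, args=args, train_dataset=train_ds, tokenizer=tok).train()
```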
After compiling the results of the preliminary experiments, we decided to use a custom tokenizer with the MolT5 model architecture. While the default tokenizer leverages the T5 pretraining on English text, our reasoning is that treating SMILES as a language in its own right, and tokenizing each string into its component atoms and bonds, could improve the model's understanding of SMILES strings and thus improve performance.
MolT5 with custom tokenizer
Figure 2
MolT5 and custom tokenizers: MolT5 tokenizer uses the default English language tokenization and splits the input text into subwords. The intuition is that SMILES strings are composed of characters typically found in English text, and pretraining on large-scale English corpora may be helpful. On the other hand, the custom tokenizer method utilizes the grammar of SMILES and decomposes the input into grammatically valid components.
The tokenizer we selected for custom pretraining of MolT5 came from previous work on adapting transformers for SMILES strings66. This tokenizer separates SMILES strings into their individual atoms and bonds. Figure 2 illustrates the behavior of both the MolT5 and custom tokenizers. Due to computational limits, we only performed custom pretraining of the smallest available MolT5 model, with 77 million parameters. Our pretraining approach utilized the model configuration of MolT5 and JAX (https://jax.readthedocs.io/en/latest/index.html) / Flax (https://github.com/google/flax) to execute the span-masked language model objective on the ZINC dataset65. Following pretraining, we assessed model performance on both datasets. The experiments comprised three conditions: fine-tuning on 80% (2403 pairs for DrugBank, 4902 pairs for ChEMBL) of the data and evaluating on the remaining 20% (601 pairs for DrugBank, 1225 pairs for ChEMBL), evaluating on the 20% subset without fine-tuning, and evaluating on 100% (3004 pairs for DrugBank, 6127 pairs for ChEMBL) of the data.
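For illustration, grammar-aware SMILES tokenization of the kind described above can be approximated with the widely used regular expression from the reaction-prediction literature; this is a sketch, and the exact tokenizer from ref. 66 may differ.

```python
# Sketch: regex-based SMILES tokenization into atoms, bonds, ring closures, and brackets.
import re

SMILES_PATTERN = re.compile(
    r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
)

def tokenize_smiles(smiles: str) -> list[str]:
    return SMILES_PATTERN.findall(smiles)

print(tokenize_smiles("CC(=O)Oc1ccccc1C(=O)O"))
# ['C', 'C', '(', '=', 'O', ')', 'O', 'c', '1', 'c', 'c', 'c', 'c', 'c', '1', 'C', '(', '=', 'O', ')', 'O']
```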
Data availability
ChEMBL and DrugBank datasets are publicly available.