AI in Drug Discovery - A Practical View From the Trenches
It has never been my intent to use this blog as a personal soapbox, but I feel the need to respond to a recent article on AI in drug discovery.
A recent viewpoint by Allan Jordan in ACS Medicinal Chemistry Letters suggests that we are nearing the zenith of the hype curve for Artificial Intelligence (AI) in drug discovery and that this hype will be followed by an inevitable period of disillusionment. Jordan goes on to discuss the hype created around computer-aided drug design and draws parallels to current efforts to incorporate AI technologies into drug discovery. While the author does make some reasonable points, he fails to highlight specific problems or to define what he means by AI. This is understandable. While the term AI is used frequently, most available definitions are still unclear. Wikipedia defines AI as “intelligence demonstrated by machines”, not a particularly helpful phrase. We wouldn’t consider a person who can walk around a room without bumping into things particularly intelligent. However, a robot that can do the same thing might be considered an AI. It is unfortunate that the popular press has inspired expectations of sentient computers, akin to HAL 9000 from the movie “2001: A Space Odyssey”, that will replicate human thought processes.
I prefer to focus on machine learning (ML), a relatively well-defined subfield of AI. In essence, machine learning can be thought of as “using patterns in data to label things.” In drug discovery, machine learning can take a number of different forms, ranging from the classification of phenotypes in cellular images to models for predicting physical properties and biological activity. Much of the recent hype around AI has centered on the application of deep neural networks, often referred to as “deep learning”. While neural networks are often described as being inspired by the human brain, they are really just systems for fitting non-linear relationships to data. Over the last few years, advances in machine learning have impacted drug discovery in a number of areas.
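To make the “fitting non-linear relationships” point concrete, here is a minimal sketch, using scikit-learn and entirely synthetic data, of a small neural network learning a non-linear function from examples. Nothing here is specific to chemistry; it is just the core idea stripped bare.

```python
# A minimal sketch: a small neural network fitting a non-linear
# relationship (y = sin(x)) from noisy synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
X = rng.uniform(-3, 3, size=(500, 1))            # one synthetic "descriptor"
y = np.sin(X).ravel() + rng.normal(0, 0.1, 500)  # noisy non-linear response

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[1.5]]))  # should be close to sin(1.5) ~ 1.0
```

Deep learning elaborates on this same idea with more layers, more data, and richer input representations.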
Image analysis - The application of deep learning methods has brought about impressive improvements in the speed and accuracy achieved in standard image classification benchmarks. These improvements have already impacted drug discovery efforts. Many of the same algorithms used to identify cat pictures on the internet can be applied to the identification of specific cellular phenotypes.
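As a rough illustration, the sketch below shows the general shape of a convolutional image classifier in Keras, here framed as a two-class phenotype problem. The input dimensions and class labels are placeholder assumptions, not a production pipeline.

```python
# A hedged sketch of a convolutional network for image classification,
# framed as a hypothetical two-class cellular phenotype problem.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),  # e.g. phenotype A vs. phenotype B
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(images, labels, epochs=10)  # images: an (N, 128, 128, 3) array
```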
Organic synthesis planning - Recent publications have shown that machine learning methods are capable of generating reasonable routes to synthetically challenging molecules. In his viewpoint, Jordan argues that continued use of ML in synthesis planning will become self-reinforcing, with the same reactions being overused. In practice, this will probably not be an issue. ML researchers have dealt with biased data in a number of other fields and have techniques for addressing unbalanced datasets, one of which is sketched below. There are, however, two other areas that may have a more significant impact on the use of ML in synthesis planning. The first of these is data availability. The vast majority of electronic data on organic reactions is the property of a small number of entities. If these entities decline to supply the data, the field will have very few options. While it may be possible to create a public database of reactions from chemical patents, the extraction process is non-trivial and the scope of the extracted reactions may be limited. The other confounding factor is the absence of negative data. In order to be truly effective, a synthesis planning system should have access to reactions that worked as well as those that didn’t. Unfortunately, unsuccessful reactions are rarely published in the literature or in patents, so it may be difficult to truly determine the scope of a reaction.
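As one example of how unbalanced data can be handled, the hypothetical sketch below uses scikit-learn’s built-in class weighting to up-weight a rare “failed reaction” class. The descriptors and labels are synthetic placeholders, not real reaction data.

```python
# A minimal sketch of one common remedy for unbalanced data: class
# weighting. A toy classifier predicting reaction success up-weights
# the rare "failed" class so it is not ignored during training.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))            # placeholder reaction descriptors
y = (rng.random(1000) < 0.9).astype(int)   # 90% "worked", 10% "failed"

clf = RandomForestClassifier(class_weight="balanced", random_state=0)
clf.fit(X, y)
```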
Predictive models - Computational chemists have been using machine learning to build QSAR models for more than 15 years. More recently, many computational chemists have augmented their more traditional machine learning approaches with deep learning methods. These methods have been employed in a few different ways. One of the first approaches was to use deep learning to build models based on widely used molecular descriptors. These efforts tended to generate models that, at best, provided a modest improvement over the current state of the art. In a 2015 publication, a group from Merck and the University of Toronto showed that deep learning produced an average R² increase of 0.04 over the standard method used at Merck. Subsequent work from Google and Stanford showed that deep learning could also be used to create new molecular descriptors. To date, these methods, referred to as graph convolutions, have also shown only marginal improvements over the current state of the art. Graph convolutions have only existed for a few years, so it is difficult to assess their impact. Recent developments show promise, and it will be interesting to see whether new representations will produce better predictive models. While some have argued that deep learning methods will be more effective on large datasets, this has yet to be demonstrated in a convincing fashion. Another potential downside of deep neural networks in QSAR is their lack of interpretability. The ultimate goal of most predictive models in medicinal chemistry is to help a discovery team interpret SAR and decide which molecules to synthesize next. To be most effective, a model should enable a team to determine the impact of specific parts of a molecule on the property, or properties, being predicted.
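For readers who have not built a QSAR model, the sketch below shows the conventional descriptor-based workflow: RDKit Morgan fingerprints as inputs and a random forest as the learner. The molecules and activity values are illustrative placeholders.

```python
# A hedged sketch of a conventional QSAR workflow: fingerprint
# descriptors plus an off-the-shelf regressor.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor

smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CCN"]  # placeholder molecules
activity = [0.5, 1.2, 0.8, 0.3]                 # placeholder activities

def fingerprint(smi):
    """Convert a SMILES string to a 2048-bit Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    return np.array(fp)

X = np.array([fingerprint(s) for s in smiles])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, activity)
```

The deep learning variants discussed above either feed descriptors like these into a neural network or, in the graph convolution case, learn the descriptors directly from the molecular graph.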
Quantum chemistry - A number of recent publications have reported the use of machine learning methods to reproduce quantum chemical potentials. The ANI-1 method can generate DFT-level potentials, which would typically take more than two days to calculate, in less than a second. Recent reports have also shown that these methods can be used to calculate gradients and perform molecular dynamics. While these results appear promising, the extensibility and applicability of the methods are still open questions.
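The underlying idea is simple to sketch, even though methods like ANI-1 use carefully designed atomic-environment descriptors rather than the synthetic stand-ins below: fit a flexible regressor to reproduce DFT energies, then use the cheap fitted model in place of the expensive calculation.

```python
# A minimal, hypothetical sketch of the idea behind ML potentials:
# map atomic-environment descriptors to DFT-computed energies.
# Both the descriptors and the "DFT" energies here are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
descriptors = rng.normal(size=(5000, 64))  # placeholder environment features
dft_energies = descriptors @ rng.normal(size=64) + np.sin(descriptors).sum(axis=1)

potential = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500,
                         random_state=0)
potential.fit(descriptors, dft_energies)
# Once trained, predictions take milliseconds rather than days of DFT.
```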
Machine learning is also having a dramatic impact on computational biology with advances in gene expression profiling, high-throughput sequencing, protein expression, and numerous other areas.
Jordan makes the point that AI will not have the capacity to impact new areas such as PROTACs or RNA therapies. It is true that any machine learning model is only as good as the data used to train it. Hopefully, as we improve our ability to computationally capture the fundamental nature of molecular interactions, machine learning will have more of an impact on new areas. One exciting area of machine learning research, transfer learning, may be able to provide such an impact. In transfer learning, a larger, more general dataset is used to enable a model to learn fundamental concepts; the model is then fine-tuned on a more specific dataset. This technique is now widely used in image analysis, where training a model on a set of general images from the internet can enable the model to learn concepts like edges and objects. The model can then be fine-tuned for specific tasks like identifying tumors in radiology images. In a similar fashion, QSAR models can be initially trained on a large amount of public data (e.g. properties from quantum chemical calculations) and then fine-tuned to predict physical properties like aqueous solubility.
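A minimal sketch of this fine-tuning workflow in Keras, assuming an ImageNet-pretrained backbone and a placeholder two-class task; the shapes and labels are assumptions for illustration only:

```python
# A hedged sketch of transfer learning: reuse a pretrained backbone,
# freeze it, and train a new head on a smaller task-specific dataset.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

base = ResNet50(weights="imagenet", include_top=False,
                input_shape=(224, 224, 3))
base.trainable = False  # keep the general features learned on ImageNet

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),  # e.g. tumor vs. no tumor
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(task_images, task_labels, epochs=5)  # fine-tune on the small set
```

The QSAR analogue follows the same pattern: pretrain on abundant data, then fine-tune on the scarce property of interest.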
In his viewpoint, Jordan points out that adopting AI requires significant investment and buy-in. While it is true that machine learning may not yet be accessible to the medicinal chemist at the bench, it is certainly within the reach of any practicing computational chemist. The appearance of toolkits like scikit-learn and Keras has made it straightforward for any competent programmer to implement a machine learning model. The DeepChem project has made this even easier by integrating machine learning with widely used cheminformatics methods. Projects like Google’s AutoML, auto-sklearn, and auto-keras have automated best practices for model parameter tuning. We have also reached a point where it is no longer necessary to invest in expensive computer hardware in order to run machine learning models. Services such as Amazon Web Services, Google Cloud, and Microsoft Azure make it possible to lease the powerful GPU systems necessary to run machine learning programs for less than one US dollar per hour. As with any modeling exercise, ML practitioners should have a sufficient command of statistics to assess the quality of their models. It is far too easy to be fooled by a poorly validated model.
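As a small illustration of that last point, cross-validation in scikit-learn takes only a few lines and gives a far more honest picture of model quality than a single train/test split. The data here is synthetic.

```python
# A minimal validation sketch: 5-fold cross-validation reports the
# spread of model performance, not just one lucky (or unlucky) split.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 10))
y = X[:, 0] * 2 + rng.normal(0, 0.5, 200)  # synthetic response

scores = cross_val_score(RandomForestRegressor(random_state=0), X, y,
                         cv=5, scoring="r2")
print(f"R2 = {scores.mean():.2f} +/- {scores.std():.2f}")
```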
Another interesting aspect of Jordan’s viewpoint is the desire for AI systems to mimic the thought processes of a medicinal chemist and “make nonobvious and illogical connections across incomplete data sets.” I would argue that the decisions of a medicinal chemist are rarely illogical and are usually influenced by prior exposure to discovery programs or by reading the literature. These decisions come from the same “chemists who made this also made this” concept that Jordan points out as a flaw in AI systems. While machine learning is not currently integrated with the medicinal chemistry literature, one can imagine a day in the near future when it will be possible to ask questions like “show me any precedented replacements for an oxazole that reduced hERG binding while maintaining metabolic stability.” Tools like this could dramatically expand the scope of a medicinal chemist’s knowledge beyond personal experience. Another “illogical” process employed by medicinal chemists is “pushing on the walls” of a protein binding site. Medicinal chemists will often use their intuition to intentionally introduce clashes between a ligand and a protein binding site. These clashes can sometimes reveal areas where a protein is flexible and may even reveal new pockets that can be used for optimization. Given sufficient data, it should be possible to train an ML system to detect areas where an approach like this might be productive. Again, ML can be used as an adjunct to, rather than a replacement for, the chemist’s intuition.
Any new development will have its share of zealots as well as its naysayers. It is definitely the case that AI in general, and AI in drug discovery in particular, has received more than its share of attention recently. Unfortunately, much of the current hype stems from a fundamental lack of understanding of the techniques being used, as well as their intent. I can understand this confusion. Any machine learning method is simply trying to establish a relationship between a set of observables, X, and a response, Y. After watching a number of recent conference presentations on AI in drug discovery, I am often unclear on X, Y, and the nature of the actual problem being solved. It is important for us to cut through the hype and focus on areas where we can make an impact. This is an exciting time for machine learning research. New developments coming from organizations like Google, Facebook, Microsoft, and Twitter are creating software with the potential to impact a wide array of fields, including drug discovery. One important aspect of machine learning research is that, unlike computational chemistry and cheminformatics, almost all of the new developments are free and Open Source. This willingness to share code is pushing the field forward and enabling rapid advances. Like any field, AI is experiencing a hype phase that is creating unreasonable expectations. It is my sincere hope that, rather than falling into a period of despair, we embrace these new developments and use them as another tool to get medicines to patients.
This work is licensed under a Creative Commons Attribution 4.0 International License.