A commonly heard phrase in the pharmaceutical industry is that ‘drug discovery is in crisis’. Over many years, the costs of drug development have escalated while success rates have fallen ever lower.

Recent estimates suggest that it costs in excess of $2.5 billion to develop a new drug. This is not just bad news for the profitability of pharmaceutical companies; it is bad news for all of us, as it limits the treatments that are available and pushes up the costs of the treatments that do exist.

In drug discovery, as in many other areas, AI has the potential to change the game – to make drug discovery quicker, cheaper and more effective, both reducing the cost of development and aiding in the identification of novel medicines.

Drug discovery is a complex multistep process, but it can broadly be grouped into three areas: the identification of targets (these are the naturally occurring cellular or molecular structures involved in the disease); the development of a specific drug molecule that will modulate the activity of that target; and ensuring the end product is safe for humans to take. 

AI has been used for decades within computational approaches to drug discovery but has only recently started to offer the types of impacts that could really change the drug discovery pipeline. It is in the area of developing potential drug molecules that we currently have least traction but perhaps most promise for change.

One of the biggest challenges in using AI in this area is the data – both its quantity and its heterogeneity and quality. It is difficult even to obtain data for most of the steps in the drug development pipeline. Using AI in drug discovery is often like training an algorithm to recognise pictures of cats when you have no pictures of cats, only a relatively small number of out-of-focus, badly annotated pictures of dogs and elephants.

One way around the data challenge is to use AI techniques on relatively small amounts of high-quality data that are specific to a given target. In standard drug discovery, once a potential drug molecule has been found, human experts look at all the available data and suggest new candidate molecules that should be more effective or safer. This iterative process continues until the molecules are considered ready for trials. Recent work has shown that an AI algorithm can make better candidate suggestions than human experts, and so turn a potential drug molecule into a safe and effective version more quickly and more cheaply.
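The shape of that iterative cycle – propose variants of the current best candidate, score them, keep the best, repeat – can be sketched in a few lines of code. This is a toy illustration only: the scoring function and the way variants are proposed are invented placeholders, not any real potency or safety model.

```python
import random

random.seed(0)

def score(candidate):
    """Toy stand-in for evaluating a candidate's potency/safety.
    Rewards candidates close to a hidden 'ideal' feature vector."""
    ideal = [5, 3, 8, 2]
    return -sum(abs(a - b) for a, b in zip(candidate, ideal))

def propose_variants(candidate, n=10):
    """Toy stand-in for suggesting small modifications to a candidate."""
    variants = []
    for _ in range(n):
        v = candidate[:]
        i = random.randrange(len(v))
        v[i] += random.choice([-1, 1])  # one small change per variant
        variants.append(v)
    return variants

def optimise(start, rounds=20):
    best = start
    for _ in range(rounds):
        # Greedy selection: keep the best-scoring suggestion each round,
        # including the current best, so the score never gets worse.
        best = max(propose_variants(best) + [best], key=score)
    return best

lead = optimise([0, 0, 0, 0])
print(lead, score(lead))
```

In real drug discovery the "score" comes from slow, expensive lab experiments, which is exactly why an algorithm that needs fewer rounds – by making better suggestions – saves so much time and money.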

The more general problem is with novel targets and molecules. Where we do not yet have extensive experimental data, this is more challenging for humans and for AI. Could AI predict an effective, safe drug candidate without needing extensive experimentation?

In this context, people have focused on specific tasks within the pipeline – for example, using AI to search the space of potential drug molecules. This space is vast – estimated at around 10 to the power of 60 molecules (for a sense of scale, there are only about 10 to the power of 24 stars in the observable universe). It is impossible to calculate the properties of all these molecules, but AI is starting to be able to explore this space in a way humans and other types of algorithms cannot. Other types of AI algorithms, borrowed from image processing, have been used to predict far more accurately than ever before how well a potential drug molecule will bind to a given target, both with and without information on the target.
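A back-of-envelope calculation makes clear why that space can never be searched exhaustively. Even granting a wildly optimistic assumption – scoring a billion molecules every second – the arithmetic is hopeless:

```python
# Back-of-envelope: why exhaustive search of chemical space is impossible.
molecules = 10**60          # estimated size of drug-like chemical space
rate = 10**9                # optimistic: a billion molecules scored per second
seconds_per_year = 60 * 60 * 24 * 365

years = molecules / (rate * seconds_per_year)
print(f"{years:.1e} years")  # vastly longer than the age of the universe
```

This is why the interesting question is not how fast we can score molecules, but how intelligently an algorithm can decide which tiny fraction of the space is worth scoring at all.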

Many challenges remain: none of these methods is accurate enough to be used without significant amounts of wet-lab experimentation. All of them require human interpretation, and there are still real questions about how much generality any of them can or will achieve.

But AI algorithms and techniques are already changing the way drug discovery is done, and as the algorithms improve, as we gain a better understanding of how to handle and represent the data, and also what data to collect, their benefits can only continue to grow.

Professor Charlotte Deane is Professor of Structural Bioinformatics and Head of the Department of Statistics at Oxford University.
