Fixing the drugs pipeline

Drug design: The more pharmaceutical companies spend on research and development, the less they have to show for it. What has gone wrong—and how can it be fixed?

THE pharmaceutical industry is one of the biggest and most lucrative in the world, with annual sales of around $400 billion. Pfizer, a drug giant, rivals Microsoft in market capitalisation, and the two are exceeded in size only by General Electric. Pfizer and other giants such as GlaxoSmithKline, Aventis and Merck routinely report multi-billion-dollar profits. But despite its outward strength, the industry is ailing. The “pipelines” of forthcoming drugs on which its future health depends have been drying up for some time.

As long ago as 1995, Jürgen Drews, then the head of research at Hoffmann-La Roche, warned that leading drug companies were not generating novel therapeutic agents at a rate sufficient to sustain themselves. Globally, research funding has doubled since 1991, but the number of new drugs emerging each year has fallen by half (see chart). Last year, for example, America's Food and Drug Administration (FDA) approved only 21 “new molecular entities”—industry jargon for new drugs—down from 53 in 1996. The more they spend on research, the less the pharma giants seem to have to show for it.

This sorry state of affairs is due to a combination of factors, the relative importance of which is hotly debated. One is the industry's obsession with producing “blockbusters”—drugs with annual sales of $1 billion or more. As they search for such bestsellers, firms may mistakenly be passing up smaller, but still profitable, opportunities. And as patents expire and revenues from old drugs dry up, costs are rising inexorably, not just in research, but also in sales and marketing.

So far, the industry has responded with cost-cutting and organisational changes. The underlying challenge, however, is to address the innovation deficit. But how? Flashy new drug-discovery technology has lost its shine, having consumed huge sums of money during the 1990s with little to show for it. Yet something new is undoubtedly needed, not least because the diseases of current interest to researchers—such as cancer and Alzheimer's—are more complex than, say, heart disease, which means that treatments will take longer to develop.

Drug company, heal thyself

Creating a drug is not easy. Once a potentially treatable disease is chosen, a target molecule, usually a protein, must be identified whose activity a drug can modify to produce the desired effect. Next, chemical compounds are made and tested against this target. The most promising of the “hits” are selected and optimised to suit a profile of “drug-like” properties. These optimised hits become “leads” that are tested, first in animal models of the human disease and then, if all goes well, in humans. According to an oft-quoted figure from the Tufts Centre for the Study of Drug Development, in Medford, Massachusetts, the entire process typically costs $900m and takes 15 years. Only one in 1,000 compounds tested makes it into human trials, and only one in five of those emerges as a drug.
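
Those odds compound quickly. The back-of-the-envelope sketch below (in Python) uses only the figures quoted above; the per-year spending line is simple division, an illustration rather than a number from the article.

```python
# Attrition arithmetic using the figures quoted above:
# 1 in 1,000 tested compounds reaches human trials; 1 in 5 of those becomes a drug.
compounds_per_trial_entry = 1_000   # compounds tested per compound entering human trials
trial_entries_per_drug = 5          # trial entrants per approved drug
total_cost_per_drug = 900e6         # dollars, the oft-quoted Tufts figure
years_per_drug = 15

compounds_per_drug = compounds_per_trial_entry * trial_entries_per_drug
print(f"Compounds tested per approved drug: {compounds_per_drug:,}")      # 5,000
print(f"Odds for any one tested compound: {1 / compounds_per_drug:.4%}")  # 0.0200%
print(f"Average spend per development year: ${total_cost_per_drug / years_per_drug / 1e6:.0f}m")  # $60m
```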

That is why expectations were high when two much-hyped technologies—combinatorial chemistry and high-throughput screening—appeared on the scene in the 1990s. They promised to speed up the development of new drugs by exploiting automation: the ability to generate and test many new compounds quickly would, it was hoped, increase the rate at which new leads were produced. The approaches looked promising, in that they generated lots of hits. But while the quantity improved, the quality did not. The number of new leads going into clinical testing did not increase, and enthusiasm for the new technologies waned.

Similarly, the sequencing of the human genome was expected to revolutionise the process of drug discovery. It is undeniably a remarkable achievement, but looked at squarely, it represents a “parts list” of genes whose connection with disease is still obscure. It has also provided thousands of potential targets for new drugs that researchers must sift through. The flood of information has caused a kind of “paralysis by novelty” that the industry is only now starting to come to terms with, says Michael Gilman, head of research at Biogen Idec, a firm based in Cambridge, Massachusetts. If the industry can overcome this paralysis, however, that novelty spells opportunity. The genome is estimated to contain around 5,000 pharmaceutically relevant genes. According to Arthur Sands, the boss of Lexicon Genetics, a firm based in The Woodlands, Texas, the 100 bestselling drugs target 43 genes between them, and the top 200 just 47. “The whole industry is running on less than 50 genes,” he says.

As a result, much effort is now being focused on using combinatorial chemistry and high-throughput screening more appropriately than in the past, and finding new ways to identify targets, determine the structure of proteins, and test compounds for activity and behaviour. The key aim is to distinguish winners from losers as early as possible. Failure can occur at any point in the discovery process, and the later the failure, the more costly the loss. A target may be important but not chemically tractable; drug compounds generated may not work, or not work well enough; a promising lead may turn out to be toxic. “Fail early, fail cheap” is the industry's mantra.

Overhauling the pipeline

Starting at the beginning of the process, a major reason drugs fail is misunderstood biology. So “validating” a target—identifying it and making sure it is physiologically significant—is an area of much activity. A standard way to study a target gene's function is to create mice in which the gene has been “knocked out” from every cell. Recent refinements in knockout techniques have speeded up the process and made it more precise. Lexicon, for example, is analysing 5,000 mouse genes and studying the physiology and behaviour of mice to discover novel drug targets in a number of areas, including cognitive and neurodegenerative disorders. Similarly, Exelixis, a firm based in South San Francisco, is validating targets using knockout techniques in mice, zebrafish, worms and fruitflies.

Once the biology of a target has been nailed down, a drug must be found that fits the target precisely. In theory, computers can be used both to predict how well a drug will match a target of known structure, and to tailor-make drugs from scratch. But the more complex the target, the harder it is to model drug interactions. In practice, in silico biology has yet to deliver on its promise: very few successful drugs have actually been made this way.

The wider the range of candidate compounds, the better the chances of finding a good fit with the target. Hence the appeal of combinatorial chemistry, which can quickly produce mixtures of millions of different compounds. But this tends to produce compounds that are very similar to each other. In contrast, “click chemistry”, a new approach invented by Barry Sharpless, a Nobel laureate at the Scripps Research Institute in La Jolla, California, produces a greater diversity of structures by snapping more carefully chosen molecular building blocks together in various combinations. Lexicon's pharmaceuticals division is looking for new drugs using this method.

The enthusiasm for combinatorial chemistry also diverted attention away from compounds derived from natural products. Once a mainstay of pharmaceutical research, and the source of antibiotics and anti-cancer drugs, natural products were left behind in the rush to automation because they are complex and hard to make. But now they are making a comeback in the form of novel twists on combinatorial techniques.

Complex sugars, for example, perform various functions on the surfaces of living cells, and play a role in diseases such as viral infections and cancer. Peter Seeberger of the Swiss Federal Institute of Technology, in Zurich, has automated the production of a limited number of these molecules, reducing the time required to make them from months to hours, and has produced a candidate vaccine for malaria. Stuart Schreiber of Harvard University prefers compounds that resemble natural products. He is using a combinatorial strategy called “diversity-oriented synthesis” to exploit some of the desirable properties of natural products.

Yet another approach to finding promising compounds comes from Astex Technology of Cambridge, England. It has automated X-ray crystallography—a technique used to determine the three-dimensional structure of proteins—to screen large collections of very small molecules and see whether they stick to a particular target in a useful way. Another problem with combinatorial chemistry, says Harren Jhoti of Astex, is that over time, the compounds it produced were getting larger and larger. Astex points to evidence showing that the larger a drug candidate, the more likely it is to fail on the way to market. Its approach, in contrast, starts with tiny fragments, and adds on other bits where necessary.

But even the most promising drug candidates are no good if they are too toxic. A critical part of analysing leads is finding out how a drug will act in the body, by performing a so-called ADMET study (for absorption, distribution, metabolism, excretion and toxicity). Toxicology is a particular bête noire, says Chris Lipinski, a senior researcher at Pfizer. Software can be used to design compounds with favourable properties, but toxicology is difficult to predict computationally. Cell-based tests are better guides than they were in the past, says Mark Murcko, head of technology at Vertex Pharmaceuticals in Cambridge, Massachusetts. But the ultimate test for ADMET studies is animals, usually mice. Even then, because animals are imperfect models, toxicity may only become apparent in human trials. Ideally, toxicology studies would be completed earlier in the process, but so far that has proved difficult.
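
The article does not say what such software actually checks, but Dr Lipinski is best known for his “rule of five”, a widely used rule of thumb for whether a compound has “drug-like” absorption properties (the A in ADMET, rather than toxicity). Below is a minimal sketch of that kind of property filter in Python: the thresholds are the published rule-of-five limits, while the Compound class, the function name and the example values are purely illustrative and not drawn from any real software package or molecule.

```python
from dataclasses import dataclass

@dataclass
class Compound:
    name: str
    mol_weight: float      # molecular weight in daltons
    log_p: float           # octanol-water partition coefficient (lipophilicity)
    h_bond_donors: int
    h_bond_acceptors: int

def rule_of_five_violations(c: Compound) -> int:
    """Count how many of Lipinski's rule-of-five limits a compound breaks.

    The usual rule of thumb: a compound breaking more than one limit is
    likely to be poorly absorbed when taken orally.
    """
    return sum([
        c.mol_weight > 500,
        c.log_p > 5,
        c.h_bond_donors > 5,
        c.h_bond_acceptors > 10,
    ])

# Illustrative values only, not measurements of a real compound.
candidate = Compound("example-lead", mol_weight=480.0, log_p=4.2,
                     h_bond_donors=2, h_bond_acceptors=7)
if rule_of_five_violations(candidate) <= 1:
    print("Passes the rule-of-five screen; worth profiling further.")
else:
    print("Likely poor oral absorption: fail early, fail cheap.")
```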

Evidently there is no shortage of new ways to generate and evaluate drug candidates. One has to be careful, though, not to confuse technical success with progress, says Geoffrey Duyk, a former head of research at Exelixis who is now a venture capitalist. Despite the abundance of tools, he says, most drug discovery and development efforts still fail because of a lack of understanding of how the drugs work, and an inability to predict reliably how the human body will handle them. Until recently, biology essentially involved grinding things up and looking at the pieces. Now, researchers must work from the bottom up to construct a comprehensive understanding of biological processes—a huge task.

The common theme that unites all of these novel approaches to drug discovery, however, is that, rather than abandoning established practices for new-fangled automated and computational techniques, they combine the best of both. “What we need is a combination of old ways and new ways,” says Dr Drews, who is now a partner at Bear Stearns Health Innoventures. This means the right blend of physiology, pharmacology and target-oriented chemistry on the one hand and genomics, molecular modelling and structural biology on the other. The pharmaceutical industry, it seems, was too quick to turn its back on the past. It must now combine old and new techniques if it is to prosper in the future.

This article appeared in the Technology Quarterly section of the print edition under the headline "Fixing the drugs pipeline"

From the March 13th 2004 edition
