The Rant Goes On…

In August, I gave a talk at the Boston ACS meeting about the contribution of academics to molecular modeling. Okay, their lack of contribution to molecular modeling. I might even have had a slide that read, “If all academic research into modeling disappeared tomorrow, it would make no difference… to the drug discovery industry. Discuss.”

How could I have suggested such an outrageous thing? A year ago, I was putting together a talk for Paul Labute’s CCG UGM on the general state of molecular modeling (my “History, Hubris and Greed” talk), and I made a slide, which I reused in my Boston ACS talk, listing all the things I had found really useful in my career at OpenEye. Out of a total of nine things, only one was from academia (Poisson-Boltzmann theory as developed by Barry Honig). Most of them, such as AM1BCC charges and MMFF, came from industry, and some (e.g. SMILES) came from government. I found this surprising, considering how little time industry and government scientists get to spend doing novel work. So my Boston talk was essentially a rationalization of the peculiar ineffectiveness of academic research as applied to molecular modeling.

Was I suggesting that academics are lazy or dumb? Of course not. I would, however, draw a parallel with the traders on Wall Street—the thieving bastards who drove the world to an economic precipice only two years ago. Although it’s tempting to think ill of this group, the reality is that they were merely following the incentives laid out for them (public risk and private reward being a really bad idea). Similarly, I think that academics, especially those in the field of drug discovery, have inherited the wrong set of incentives: rather than helping build a firm foundation for the field, these incentives encourage the application of flawed and poorly understood science. Many academics are aware that the incessant need to publish, the endless and often fruitless grant cycles, and the pressure to be monetarily successful have pushed real scientific research onto the back burner. But I think few know how the system got to be this way. Although my explanation is based largely on the history of science in the United States, I think that’s a safe place to start, since the U.S. is so scientifically dominant and usually acts as a role model to the rest of the world.

At the end of the nineteenth century there were nine universities in England and more than two hundred in the United States [Ref #1]. Why the discrepancy? The economy of England was bigger than that of the U.S., yet it had far fewer institutes of higher learning. The reason was simply that in America anyone could start a university: all you needed was some land (typically granted by the government), some funding from the local rich folk, and off you went. In England such things required government coordination, royal patronage, etc. In the U.S., then, so-called “land-grant” universities went up all over the place. Some failed quickly, but some are still with us as major institutions. The point is (as always): where did the money come from? It came, for the most part, from industrialists, typically those who recognized the general social benefit of a university but who also expected some direct benefit by way of technical expertise and research. This was still the case as late as the 1940s. Departments, at least scientific ones, would get the majority of their funding from industrial partners. This led to some spectacular advances in industry. The example I like to spotlight is the development of fluidized-bed catalytic cracking of crude oil into high-octane gasoline, an effort funded by a consortium of oil companies and led by researchers at MIT. Fluidized-bed cracking probably had the second biggest impact on the Second World War of any American technological invention, because it allowed the U.S. to manufacture fuel far more efficiently than Japan or Germany. The next part of the story, of course, revolves around the invention that had the biggest impact: the atomic bomb.

The reason the Manhattan Project has anything to do with the effectiveness of current academic molecular modeling research is that it made politicians realize that science was not something they could ignore. As a consequence of the success of Los Alamos, Roosevelt asked the wholly remarkable Vannevar Bush (who built the differential analyzer, an early analog computer, and anticipated hypertext with his “memex”) to work out what science funding should look like after the war; the answer reached Truman’s desk in 1945. Vannevar understood the incredible leverage that basic research could exert on practical science and wrote a monograph, Science, the Endless Frontier, proselytizing for pure, un-politicized government support of science. His wish was partially granted when the government created the National Science Foundation (NSF). Unfortunately, the government also built up the NIH, which became the more powerful institution. The National Institutes of Health was (and remains) aimed squarely at health issues, and was (and remains) the creation of political forces. As a consequence, one of the deep-rooted problems in our field is that although the NSF could in theory fund the type of basic research we desperately need, research that would provide the basis for true ‘drug engineering’, this impinges on the territory of the NIH, which has utterly failed to support that mandate. Even today we hear nonsense spewing forth from those who run that organization about “translational” research, which really translates to “just make something work.” It is a classic Catch-22: the basic science that is needed to really transform the field is not funded, in part because the organization that is supposed to perform that role has to answer to political directives.

The final blow to useful academic contribution came in 1980 with the Bayh-Dole Act, which required universities to make something of the government-funded research they did—i.e., to commercialize it. The story of how Bayh-Dole came about is fascinating. Universities could already patent work that was funded by the government, but there was a patchwork of agencies and procedures. Bayh-Dole not only simplified the process but also shifted the onus from “could patent and commercialize” to “should patent and commercialize”. In my opinion, there has been no more damaging Act of Congress, at least where science is concerned. The way it was sold to the good people up on the Hill was that there were all these patents from research funded by the government and only five percent had actually been licensed. Clearly universities needed some incentives to get off their butts and start helping American industry (sotto voce: beat the Japanese, who seemed to be taking over industrial leadership). The truth is that most of these patents came from research funded by the Department of Defense, and the military had always made it really simple for contractors funded by DOD grants to patent and license anything they wanted. There was no barrier; it was just that 95% of the patents weren’t very useful. These facts were never presented to those who voted, fairly overwhelmingly, for Bayh-Dole. Thirty years later, careful examination finds little support for the view that the Act helped American productivity [Ref #1], although it has probably reduced that of American science [Ref #2].

With Bayh-Dole we have the complete picture: funding is heavily weighted towards the practical, not the fundamental, and universities push students and professors to be useful, to make things they can patent and hawk to industry. So what could possibly stop brilliant inventions from revolutionizing the field of health care? Well, nothing, really, other than a fundamental misunderstanding of what makes science valuable. Science is about measurement, and the prediction of measurement. The sheer lack of the former and the sheer overabundance of shoddy, over-parameterized, over-hyped examples of the latter defines the state of the field today. Furthermore, because there are now far fewer links between the worlds of academia and industry than there were before the 1940s, academics typically don’t even know what problems industry cares about! I asked a major employer of molecular modelers in pharma how many academic groups they could hire from and expect the new hires to get right to work without additional training. The answer was two. Two, out of the myriad of groups that believe they do molecular modeling relevant to drug discovery.

My suggestions for improving matters come in two flavors: the practical and the impractical. My first impractical wish is that Bayh-Dole be repealed. That isn’t going to happen, because we do not elect politicians capable of understanding the nature of science. If only they’d listened to Vannevar! We nearly got ideal government support of pure science, but reality fell short. My next impractical suggestion is to cut off all funding for anything to do with molecular modeling, unless it could be considered basic research—and even then only if it were coupled with an experimental component.

Why should industry fund basic research? That is a role for government. Government used to do this, providing long grants that allowed researchers to tackle hard, basic problems. (Talk to the people about to retire in academic departments; they know.) It is, however, entirely reasonable to ask industry to fund “translational” research. Let industry decide what needs to be translated, because they—not academics, not politicians—know what the problems are. This is my number one practical suggestion: that private companies shoulder the cost of translational research, and provide the incentives, as they used to. True, they would effectively be taxed twice, but I think judicious tax breaks could still make this attractive. My second suggestion is that academics simply ignore Bayh-Dole. I’ve had a lot of academics tell me they can’t help trying to become millionaires, that Bayh-Dole forces them to act this way. This is not true. The operations within universities, the offices of science and technology that seek to patent and to license, are entirely at the mercy of the bench-level scientists. They don’t have the skills to know what is truly novel and what is not: they rely on what the scientists tell them. So. Don’t. Tell them. America was changed forever by the simple civil disobedience of its citizens in the 1960s. What would happen if academics were to rebel—even a little—and stop supporting elements of the system that are ruining science?

Finally, I suggest that academia and industry need to talk more directly, to exchange ideas, agendas and—most important from the industrial side—data. There are some real problems, both applied and theoretical, that could be addressed. Don’t quote me on this, but I might even pay academia to work on problems that would actually have an impact on drug discovery.

In conclusion, academics don’t make much of a difference. They should. I’m not blaming the players, I’m blaming the game. We just have to wake up and realize that it’s really our game.

References:
1: David Mowery, Richard Nelson, Bhaven Sampat, Arvids Ziedonis, “Ivory Tower and Industrial Innovation: University-Industry Technology Transfer Before and After the Bayh-Dole Act” (Stanford Business Books, 2004)
2: Jennifer Washburn, “University, Inc.: The Corporate Corruption of Higher Education” (Basic Books, 2005)

Preserved Comments

The research group I was in did molecular modeling of actin/myosin, bacteriorhodopsin as a proton pump, virus capsid interactions, and other non-drug-discovery-based research. In retrospect, I would say that none of the people I worked with would be able to apply their modeling experience to drug discovery, given that they were interested in understanding the target system and didn't have a chemistry background.

In that respect then, yes, most modeling groups don't have an impact on the drug discovery industry. But all? You've pointed out that there are at least two relevant groups.

You listed a few things out of industry which were really useful for you. I'll take MMFF. That's one of many force fields, and I no longer know enough about the field to judge how MMFF (commercial) compares to MM3/MM4 (academic), GROMOS/GROMACS (academic), OPLS (academic), or CHARMM (academic) or, for some I've never heard of before, AMOEBA (academic) and X-Pol (academic). Strange how there are so many academic ones outside of the drug discovery arena.

Perhaps your talk should have been about the effectiveness of profit on commercial research as applied to molecular modeling?

Some other quick rejoinders to your rant: who decides what counts as "an experimental component"? For those academics who think Bayh-Dole forces them to work to be millionaires - ask them to point out more than a handful of such millionaires. Perhaps the problem here is more that the licensing offices at the universities have no incentives to deemphasize their importance.

"What would happen if academics were to rebel—even a little—and stop supporting elements of the system that are ruining science?" Like proprietary software? ;)

- Andrew Dalke

Hi Ant,

You write that academics don't make much of a difference, but OpenEye benefits every day from academia.

For example, the AM1 in AM1/BCC comes from U. Texas. The concept of an empirical force field, of which MMFF is one example, comes from the Weizmann Institute and was further developed at various other universities. Implicit solvent you know about. I think the idea of ligand-protein docking comes from Tack Kuntz at UCSF, followed soon after by Art Olson's early work. Protein crystallography comes from academia. The use of the Hessian matrix to estimate entropy, and scaled particle theory, both used in Szybki, come from academia. The liquid theory underlying Szmap comes from academia.

And I'll bet that academia trained every scientist at OpenEye.

Dismissing the genuine contributions of academia is not a useful step toward improving it. A more thorough analysis is needed if you want to go beyond ranting to a serious conversation.

Possible discussion topic for a future CUP meeting?

Regards,
Mike

Hi Mike (Gilson; attribution with permission),

First, saying that everyone at OpenEye was trained in academia is not a fair point: was there an alternative? That's a bit like saying there's nothing wrong with high school education in the USA because look at all the great scientists who went through it.

Secondly, my point is that the useful application of theory does not get done in academia, not that good, general-purpose theory is not done there: it is, and I would like to see more government support for it, not less. My argument is that government support of applied research does not work because those granting the money and those getting the money have little idea what the real problems in industry actually are, and because they do not have to know, they don't. In the past they needed to know, because industry was where the funding came from.

Looking at your examples: you mentioned that Szmap uses liquid theory. It doesn't; it uses statistical mechanics, the great, general work of Boltzmann. WaterMap uses inhomogeneous fluid theory, not Szmap, and I am far from convinced it is useful. The use of a Hessian to estimate entropy is another pure statistical-mechanical approach found in any good textbook, e.g. McQuarrie, i.e. it is another example of general theory. AM1 is a great example: it is parameterized to the heats of formation of molecules, something that is not useful to my field. Chris Bayly had the exceptional insight, while at Merck, that it could be modified to produce accurate charges; this is now the standard method of achieving the most accurate charges without going beyond semi-empirical QM. Most force fields were, and to some extent are (e.g. GAFF), useless for the small molecules used in pharma; MMFF, done in industry, changed this. Docking was 'invented' by Jeff Blaney and Tack Kuntz (Jeff had the idea, Tack made it happen) and is an example of academia trying to tackle a real problem in industry; I don't consider it a great success. In fact, its dominance in academic spheres is, to my mind, an example of how academia misses the point of what is truly useful in pharma: simpler methods, like our POSIT, which uses ligand information, are much more effective at producing the pose of a molecule than docking, and docking itself is useless at predicting affinity!
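
(As an aside, for readers without McQuarrie to hand, here is a sketch of the textbook harmonic-oscillator result I have in mind, written in generic notation rather than as whatever exact form Szybki implements: given normal-mode frequencies \( \nu_i \) obtained from the eigenvalues of the mass-weighted Hessian, the vibrational entropy is

\[
S_{\mathrm{vib}} = k_B \sum_i \left[ \frac{h\nu_i / k_B T}{e^{h\nu_i / k_B T} - 1} - \ln\!\left(1 - e^{-h\nu_i / k_B T}\right) \right],
\]

which is exactly the kind of general statistical mechanics I am talking about: nothing in it is specific to drug discovery.)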

I could go on, but I think it's clear we are arguing different cases. You are saying "look at all these things that arose from basic research, so Ant is wrong". But that's not my point. I'm saying academia has become fundamentally ineffective at addressing practical problems, so much so that most of the recent advances have come from commercial companies or scientists in industry, and I think I give plenty of evidence for this, whether you feel I have or not. This is a very different situation from both other fields and from the historical precedent of the first half of the twentieth century. Am I not constructive? I think I am: I give concrete, meaningful suggestions. I clearly haven't convinced you there is a problem, which is my failure, but that doesn't mean there isn't a problem. My opinion has been formed from a wide sampling of interactions with people in industry; I suspect you would say the same, replacing "industry" with "academia". Hence the problem, hence the divide.

Ant

A great post, and your detailing of the problems with the NIH not funding basic drug discovery research is especially spot on. However, your opinion of academic modelers reminds me of a story told by physicist Robert Serber about a conference of theoretical physicists in Vancouver. Several of them, including Oppenheimer, Weisskopf, Rabi and Pauli, were out in a boat on a lake. Someone asked what would happen if this boatload of theorists sank. Without batting an eyelid Oppenheimer replied, "It won't do any permanent good...". I think of a similar sentiment regarding academic modelers.

Given the current spate of layoffs and declining morale, I doubt that industry can keep coming up with the kind of fundamental inventions you talk about (SMILES, AM1BCC, etc.), which were primarily the result of research by a few first-rate researchers. If that's the case, academia needs to shoulder the burden of fundamental research even more. Unfortunately that seems even less likely, leading to a double whammy.

Although your suggestion to get rid of Bayh-Dole is laudable, I think the problems with the decline of the hallowed and ancient art of measurement go deeper. You did mention turn-of-the-century American academia, where university research could be funded by wealthy philanthropists. That was surely one of the cardinal reasons for the rise of American science after WW1. Consider the Rockefellers and the Carnegies, who gave fellowships to American physicists to learn quantum mechanics in the great European centers of Cambridge, Copenhagen and Göttingen and carry this knowledge back to the US to build great schools; Oppenheimer's school at Berkeley comes to mind. Such funding from businessmen could also be applied to 'big physics', as was demonstrated by Oppenheimer's friend Ernest Lawrence and his massive cyclotrons. The problems with 'big science' started after the war, when increasingly bigger machines were needed to discover increasingly smaller particles. While a good deal of this research involved measurement, it also started to lead to the kind of competition for funds for their own sake that we see today. Scientists started to get more interested in securing funding than in the research emerging from that funding. Is the situation any better today? Not at all, partly because of the NIH. And the recent example of Texas A&M planning to implement a system of 'calculating' faculty 'worth' seems the kind of thing that's just going to make the situation worse.

But as I rant in this post, I think the biggest casualty of all this (big science, the NIH and Bayh-Dole) is that scientists have become victims of the false equivalence between money and fundamental discovery. They have started thinking that more expensive science actually means better science. Something like the relatively cheap measurement of solvation energies (a point which you nicely belabored in your J. Phys. Chem. SAMPL paper) seems not only unfashionable but also sounds like it won't lead to new knowledge. This is a misguided view which threatens to take us down into the abyss.

However, there are still pinpricks of light. This year's Nobel Prize was given to physicists who discovered a stunning new form of carbon by peeling off layers from graphite using Scotch tape. Measuring solvation energies is likely not going to directly bag a Nobel, but perhaps it can again convince academics (and perhaps even industry) that cheap, measurement-focused fundamental science is actually important. Let's hope so.

-Ash Jogalekar

Thanks for the marvelous post! I love the physicists-in-a-boat story; I thought I had heard them all, but not that one.

I completely agree with your contention with regard to big science, and with the content of your blog entry on 'small science'. Yet another consequence of the Manhattan Project is that big science is equated with big success and big application. Which, to be honest, occasionally it is. And if the role of government is anything, it should be to take on tasks beyond the scale of industry for the common good. But as you point out, the sheer political energy needed to support big science leads to hype, waste and the squeezing out of small science. Astronomers are finding this out the hard way with the James Webb Space Telescope. No doubt, if successful, it will be a wonderful project, but cost overruns are killing a lot of smaller, useful projects in their field. If it blows up on the launch pad, misses its L2 point, or fails to operate once there, astronomers will have lost a generation of results.

I think the pinpricks you see, such as the graphene Nobel prize, will always be there. Science to me is an elemental force- it will continue regardless. It's just a pity we do not do a better job of harnessing its potential.

Ant

It would be interesting to get your thoughts on what the fundamental basic research questions in molecular modeling are. That is, what are the problems you "might even pay academia to work on"? Hilbert's 23 problems in mathematics probably helped guide research in a useful way. http://en.wikipedia.org/wiki/H... Something similar in the molecular modeling field would be interesting and probably helpful.

Your point about the need for more experimental measurement is well taken. Look forward to hearing about the next SAMPL challenge.

jjlangham

I'm glad to hear about your interest in SAMPL. As you may be aware, some of the data for the SAMPL3 challenge is now available for download, and the other datasets will be available shortly. You can find more info about it on our SAMPL page or go directly to the SAMPL website to register for an account if you'd like to download the data and participate. - Matt Geballe

Interesting post. You know, I actually have thought about that, including offering 'X-prize' incentives to acceptable solutions. Watch this space. - Anonymous