AI-focused startups seeking pharma deals have to go through people like Greg Meyers, the chief digital and technology officer at Bristol Myers Squibb.
Endpoints News recently talked with Meyers about his views on artificial intelligence, including where he’s convinced the technology has value and where it’s mostly hype. At Bristol Myers, he plays a key role in determining what is and isn’t worth the $100 billion pharma company’s time and effort.
This interview has been substantially edited for length and clarity.
Andrew Dunn:
You worked up the ranks at J&J, Novartis and Biogen, and then left the drug industry for Motorola and Syngenta. Why’d you leave the industry, and what lessons did you learn?
Greg Meyers:
I didn’t ever want to be painted as the pharma guy. I wanted to make sure I had a full perspective as a technologist.
There are a lot of constraints on technology in biopharma in terms of the regulations we face. When you go into places where the risks are lower, you start to question: if you take another month to do this, do you actually de-risk it? Some of my assumptions, such as the longer it takes, the safer something is, actually turned out not to be true. In many cases, the faster you move, the safer it is, because you have the ability to change course faster.
Coming back to life sciences, I’m always measuring up whether we’re moving fast enough, being too conservative or too bureaucratic.
Dunn:
You hear a lot of pitches from startups using AI in drug discovery. What do you see as the biggest gaps between the AI bio field and what Bristol Myers wants?
Meyers:
There’s too much media attention on lead identification, probably because it sells headlines: some big AI lab created a protein-folding tool and now we won’t need scientists anymore. The truth is the vast majority of leads that can be hit will fail.
We actually just put a molecule into Phase 1 for sickle cell disease. We’ve built a CELMoD, which is basically a molecule that can degrade proteins. If you can degrade the protein that inhibits the production of fetal hemoglobin, you can ameliorate the symptoms of sickle cell disease.
What does that have to do with AI? When you develop a CELMoD, you’ve got an engineering problem. We made subtle changes to the molecule, nothing that would change its functionality, but that allowed us to stabilize it so it could do its job.
We’re seeing a ton of value; 100% of our small-molecule programs and 50% of our large molecules won’t make their way to the wet lab until a predictive model suggests that experiment will work.
We’re building a lot of this from scratch because the software that you need is inextricably linked to the data you have, and the data is unique. It’s got its own level of dirtiness and cleansing you need to do, so we find using these open-source models and tools really gets us to where we need to go.
Dunn:
Chai launched earlier this month as the latest AI startup advancing a protein structure model. Is this healthy competition? Or is there a risk of redundancy, given all the different tasks inside the R&D process? From what you’re saying, it sounds like the field could have more impact focusing on optimization. Do we really need another protein language model?
Meyers:
This competition is great. It’s pretty normal, too. If you rewind the clock to maybe 1998, there were about 12 different search engines. You didn’t know whether Lycos or Yahoo or Google were going to win. The market clears out who really has a substantive product, and who has a solution looking for a problem. It’s probably going to be the same here.
I’ll take AlphaFold2 as an example. The new one’s not open-sourced, funny how that works, but AlphaFold2 is something we use almost daily. We don’t use it the way the media would think, though, to predict novel proteins. It doesn’t really do that. It does a good job with well-characterized proteins, already solved by X-ray crystallography behind the scenes, and it allows you to look at adjacencies, like subtle changes, to them.
Dunn:
A lot of these startups are more about a single model or product than building an end-to-end biopharma company. What does that say about today’s startup landscape?
Meyers:
There’s a lot of venture money flowing into AI. If we’re honest, there are probably companies that don’t have great product-market fit or great ideas, but because the word GenAI is on the pitch deck, they’re going to get money.
In a place like Bristol Myers Squibb, we don’t want to spend time stitching together 100 different point solutions that all think they’re the one-stop shops. You end up having a mall of one-stop shops.
You have to engineer these things in a cohesive way if you want to transform your workflow. I don’t think that’s the fault of any startups; that’s just the nature of the lack of maturity of where things are.
Dunn:
For startups seeking partnerships, what do they need to understand?
Meyers:
As a startup, when you look at life sciences, there is a systematic oversimplification of the way we actually operate.
I’ll go back to the example of obsessing over lead identification, when in fact it’s being able to predict something like absorption or lipophilicity that achieves breakthroughs.
But when you look at it from the outside, it looks like what we really need is a scientist in a box. The technology is not in that credible place now. It’s very complicated: maybe 5% of cellular interactions have actually been modeled. The other 95% of biology is largely a black box.
Dunn:
It feels like there’s some conflict with the VC world seeking out huge visions of foundation models and generalizability. Then I’m hearing you say the needs of today’s big pharmas are specific, applied models. Is that going to be a natural tension for the field?
Meyers:
I think you’re onto something. The venture system probably values simple, easy-to-understand business models. A lot of what exists today tends to be more the ingredients that you need that might be able to achieve a really valuable breakthrough. But that’s not a simple, easy-to-explain business model.
Dunn:
Given your career in IT, how do you decide when it makes sense to roll out AI-powered technologies, or really any new technology, to thousands of workers at a large company?
Meyers:
A demo is worth 1,000 slides. Most big companies probably start off with 1,000 slides, and then there’s this huge process to get everyone aligned on whether we should or shouldn’t attempt something. Once you decide to attempt it, it becomes very high-stakes for people. What should be a proof of concept becomes a project where people’s reputations are staked on its success.
I like to do things very quietly and small. We’re going to fail nine out of 10 times. The only way you’re going to know what AI can do is getting your hands dirty. You need to make sure there’s no shame or blame or embarrassment in those failures.
When you actually have something to show people, you’re either going to get, “This is the dumbest idea I’ve ever heard,” or, “Oh my God, that’s amazing.” You don’t need to write even one slide, because all the organizational energy gets channeled into moving that thing forward.
Dunn:
You’ve rolled out an internal version of ChatGPT. I know everyone loves to talk about humans in the loop, and that AI won’t replace people. But do you see any jobs or tasks, especially things like medical writing, adverse-event reporting, regulatory submission preparations, at the highest risk of being replaced?
Meyers:
My base case is not jobs being replaced. I do think tasks will change, but not really any differently than in the past.
Adverse events, that’s something we’ve done. We have AI now reading all of our events and categorizing them. You get a ton of noise in pharmacovigilance. How do you get your best pharmacovigilance people the cases they really need to focus on?
This work might have taken days of trawling through what you’re getting, but now, with cases automatically prioritized, it doesn’t require fewer people. It just makes better use of their time. That’s the case for most of these things.