Reaching New Frontiers With Founding CEO & Current Director of Generate Biomedicines 

On this episode of TD Cowen’s Biotech Decoded Podcast Series, Geoff von Maltzahn, general partner at Flagship Pioneering and founding CEO and current Director of Generate Biomedicines, joins Yaron Werber, Biotechnology Analyst. They discuss the AI revolution in biotech and how generative biology has impacted the drug development process.

They also discuss Geoff’s approach towards designing and building companies, and building a culture centered around machine learning where data science is a driver and not just an aggregator of data. This is key given that AI models get wiser the more data flows into them, tapping into an “intelligence of scale” in which greater scale compounds discovery prowess.  

Press play to listen to the podcast.

Transcript

Speaker 1:

Welcome to TD Cowen Insights, a space that brings leading thinkers together to share insights and ideas shaping the world around us. Join us as we converse with the top minds who are influencing our global sectors.

Yaron Werber:

Thank you for joining us for another exciting episode in our Biotech Decoded Podcast Series. I’m Yaron Werber, senior biotechnology analyst at TD Cowen. I’m super excited to be joined today by Geoff von Maltzahn in this episode, Reaching New Frontiers, to discuss how generative biology is bending the challenges of scale and reaching new frontiers in drug development.

Geoff is general partner at Flagship Pioneering. He’s an inventor, entrepreneur, CEO and co-founder of multiple companies that integrate biology and data science to transform human health and sustainability. Most prominent to our conversation today, Geoff was the founding CEO of Generate Biomedicines, a company that is employing a cutting edge generative AI platform to develop novel protein therapeutics. Geoff today remains involved in Generate as a director and is incubating several new companies that have machine intelligence at their core. Geoff, always great to see you and thank you so much for joining us. We appreciate it.

Geoff von Maltzahn:

Thank you, likewise. And it’s great to be here.

Yaron Werber:

I’m really excited about this podcast specifically, since this is really about innovation. There’s a lot going on right now in biology, in new therapeutics, both on the small molecule and obviously on the large molecule side, and in weaving AI into it. And generative AI has really vaulted into the mainstream. There’s a lot of debate and intrigue now about how it’s going to get woven into daily life, its applications and future impact, and in biotech there’s definitely an AI revolution going on in terms of new therapeutics development. How did we get here, and what does this really mean for innovation?

Geoff von Maltzahn:

Beautiful question. I’ll start by rewinding the clock a little bit. 20 years ago I was an undergraduate at MIT, focused on chemical engineering, but I fell in love with biology. That wasn’t instantaneous. In fact, it was one of these love affairs that started with joining a biology lab, then mustering the courage to quit three months later because it was really monotonous. And as I looked around, I started to imagine that I might have decades of monotony ahead of me.

It triggered a set of conversations that have become the best guiding light I can point to in 20 years, which was that biology would inevitably have a transition from slow, expensive guesswork to predictive success. And although that sounds really simple, its implications are extraordinary, and they might not even have a parallel: when a subset of chemical systems made that transition, the whole world changed; a subset of electrical systems made it, and the whole world changed; mechanical systems the same.

So chunks of life having that same transition are going to be a really big deal. And the reason machine intelligence appears to be so pivotal in allowing that transition is a couple fold. One, whether we like it or not, biology mostly doesn’t work in ways that our brains are well suited to understand and that human language is adept at capturing. When you have thousands of quantitatively causal contributors to a cellular process, human sentences don’t usually describe those things well.

2D diagrams in a textbook, on a chalkboard, et cetera, similarly do them an injustice. And unfortunately physics hasn’t helped us a whole lot. It’s of course helped us get to the moon, but a beautiful thing called the diffraction limit of light means that when you put the most interesting, and smallest, biological systems under a microscope, everything gets blurry below 20 or 200 nanometers or so, depending on the tools that you’re using. Which means that these things we call proteins, and the biomolecules they interact with in the form of DNA and RNA, we don’t get to see how they really work.

We don’t have videos of biology doing its most majestic things, and first-principles physics models of how those events happen, how a protein folds itself, how two proteins touch or bind to one another, also haven’t been particularly successful. So that’s left a wide-open playing field for machines. And to oversimplify it, the ability of machines to recognize patterns at scale that we cannot, and to start from the fundamentals of biology being the most sophisticated information technology on the planet, affords them the unique position to learn biology from scratch in ways that we haven’t had access to.

Yaron Werber:

Got it. And so there is a huge need looking at… Is it looking at big data based on genetics? Is it based on a structure-function relationship at the pathway level? Maybe level-set us. At what point do you see where the innovation is happening right now?

Geoff von Maltzahn:

Yeah, great question. So unfortunately, despite the biology-is-the-information-technology sentiment I just described, for the most part biological data sets are really messy. They’re relatively sparse, and there isn’t an easy-to-articulate or easy-to-conceptualize scenario where AI can just run wild and figure out biology.

I don’t think it’s going to do that by reading every paper, in that half or more of all papers can’t be reproduced by well-meaning scientists trying to reproduce the exact same thing. And back to that set of inadequacies of human narrative, much of what we’ve been describing as biology is built around the way our brains can conceptualize things, and less so the way biology truly works.

So the number of places where there are large, high-quality data sets in biology from which one can extract valuable outcomes is still relatively low. But I can give you some insight into the way we thought about the foundation of Generate in that regard, in that we felt privileged to be able to at least start with two such domains that to us might be applicable to creating protein therapeutics in entirely new ways.

Yaron Werber:

Okay. So right before we dive into that: one of the intersections with a virtual, sort of AI-based approach is that there’s the concept of data mining, learning, potentially linking new biological processes together, but then ultimately a lot of this needs to get translated from an AI platform into actual real-world experimentation. What does that intersection really mean in terms of, literally, assay design? As you say, the way we design assays as humans is limited in many ways, and obviously data integration as well.

Geoff von Maltzahn:

If one can use machine intelligence in ways that make useful predictions, it helps if there’s some filter in between those predictions and, particularly in medicine, setting real-world medical decisions or products in motion. And in the realm of drug discovery, of course, we as a field have been building those filters in various forms for 100-plus years. So in the platform we built, all of the hallucinations, all of the grounded predictions that come out of the model, are evaluated quantitatively in assays that allow us to determine whether or not the prediction is in fact accurate for the quantitative outcome that matters, what the biological consequences of that are, and whether a given biomolecule is eventually ready to be a therapeutic.

What you just hinted at is actually a really powerful subtlety of an era of machine intelligence, which is that all of those assays I just described, we have been designing and implementing them with, of course, the presumption, accurate thus far, that their output is going to flow into human intelligence, from which small numbers of people are going to make decisions about what to do next.

And because human intelligence and machine intelligence are an only partially overlapping Venn diagram, when you design assays for machine intelligence, you have to remove some of those filters of, “Well, this is how we’ve done it,” which might’ve subtly entered our subconscious a decade ago, and instead figure out what could allow the machine to learn most voraciously from quantitative data. And in some cases there are simple heuristics, like machines typically extract more value from the quantitative losers in an assay than humans tend to.

We tend to zoom in on, “All right, which was the best in the high-throughput screen, or what are the hits and what can we learn about them?” But that can discount or erase all of the value of learning from each of the predictions that was unsuccessful and having the model become wiser as a result. Additionally, there are some differences in the way that one would prioritize cycle time versus throughput.

In some cases, we found that the human inclination toward high throughput is, for machine intelligence, in fact better traded for speed: fast cycle times with iterative generation allow one to climb the mountain more rapidly than slow high-throughput campaigns. And there are other subtleties to it. And we, the world, are going to be learning how best to wield this extraordinary new tool of intelligence in the way that we discover medicines.
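A deliberately toy simulation can sketch the cycle-time point above. The fitness landscape, assay budget, and zoom-in rule below are all invented for illustration; this is not a description of Generate’s actual platform.

```python
def assay(x):
    """Hidden quantitative fitness landscape; the optimum sits at x = 7.3."""
    return -(x - 7.3) ** 2

# One-shot "high-throughput" screen: 20 evenly spaced candidates in [0, 100].
grid = [100 * i / 19 for i in range(20)]
one_shot = max(grid, key=assay)

# Iterative campaign with the same total budget: 5 rounds of 4 candidates,
# each round regenerating designs inside a window around the best so far.
lo, hi = 0.0, 100.0
for _ in range(5):
    candidates = [lo + (hi - lo) * i / 3 for i in range(4)]
    best = max(candidates, key=assay)
    width = (hi - lo) / 4          # zoom in for the next round
    lo, hi = best - width, best + width

print(abs(best - 7.3) < abs(one_shot - 7.3))  # prints True
```

With the same budget of 20 assays, the iterative campaign finishes within about 0.01 of the optimum here, while the one-shot screen can get no closer than its grid spacing allows.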

Yaron Werber:

Yeah. So that’s a super interesting concept, because as you said, you think of screening tens of thousands, hundreds of thousands, whatever that throughput is going to be, and you’re looking for those 37 hits, and you’re discounting all the other data. And all that other data is obviously fairly powerful, especially when you’re trying to validate and develop a model. In the past you’ve talked about scale being the enemy of the startup, and what you’re talking about right now is looking at intelligence at a different scale and really powering that up.

When Generate Biomedicines is trying to develop a new protein therapeutic, be it an antibody, let’s say, do you start with a validated pathway and a known protein? And at that point do you try to really optimize how the best protein-based therapeutic, whether it’s an antibody or something else, is going to be developed? Do you need to know the structure-function relationship of the pathway? And as you noted, when you’re changing the epitope, you’re changing the stoichiometry, the binding geometry, the energy permutations, maybe literally the spatial conformations of the interactions with the receptor, if that’s what you’re looking at. And you’re looking at “disappointing” or negative results. How does the model then factor that in?

Geoff von Maltzahn:

Yeah, great question. Well, let me zoom out quickly; I’ll ground the foundation of Generate and then give you a specific perspective. So for your amusement, somebody asked me recently whether Generate uses generative AI, and I said, “Well, that’s why we named it that five years ago.” So we started the company five years ago. It was based on explorations we started about three years prior, and in those explorations at Flagship we were recognizing that physics was making relatively little progress. Maybe we could set physics aside and begin to apply machine learning to learn, implicitly, what it means at the level of DNA code for a protein to have a given category of function, or even better, to have a specific quantitative function.

And although that sounds simple, DNA is of course the code of life, and that means that the sequence of DNA that encodes every single antibody used in therapeutics today, every protein in biotechnology, has the quantitative parameters that drive those hundreds of billions of dollars of annual revenue encoded in that sequence. And if we knew what those were, then we might be able to read, i.e., look at DNA and know what it’s saying quantitatively about the function of the protein, and write, meaning create completely new proteins that haven’t existed but outperform what we’ve been able to do thus far.

We had the virtue of two large, high-quality data sets to get going. One was all genomes that have ever been sequenced, and we created the equivalent of a human large language model, applied to the language of life, from every DNA sequence we could get our hands on. The second was the regime of three-dimensional protein crystal structures and protein-protein interactions. We fed every one of those into our models with the idea that maybe we could learn the single most valuable function in all of biotech, which is to predict what something would need to look like to stick at a very precise location on a target while not sticking to other stuff.

Simplistically, that’s all antibody value, that’s all peptide value, and that is most protein therapy value. So if we could figure out how to generate compositions of what an antibody needs to look like to adhere, it might offer an advantage. And we surmised that maybe the way amino acids like to interact inside of proteins and between proteins would have a common rulebook that, if learned, might allow us to perform that generation.

Those public data sets sort of helped us crawl. It was inelegant. I have a six-month-old at home, so I’m full of crawling analogies right now. And yet it was locomotion, the way our daughter Zelda is appropriately proud of her progress. But what has really allowed these models to blast off is this sort of bespoke reinvention of what quantitative assays, and what quantities of data, could flow into the models, allowing us to continuously evaluate generative predictions for their therapeutic merit.

And so to your point, if you sort of put a dotted line between generating the ideal protein therapeutic for a target versus figuring out the ideal target for a given disease, right now we’re entirely focused on the first. If you just took the CDRs of an antibody, 60 amino acids or so are involved in determining what an antibody binds to. Every one of those has 20 options; there are 20 amino acids, which means there are 20 to the 60th power potential antibodies. That is on the order of the atoms of the entire universe, every galaxy, every star, every planet, of combinatorial diversity.
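The arithmetic above can be checked directly; the 60-position CDR length and 20-letter amino-acid alphabet are the figures quoted in the passage.

```python
import math

# Sanity-check the combinatorics quoted above: ~60 CDR positions,
# 20 possible amino acids at each.
cdr_positions = 60
amino_acids = 20

sequence_space = amino_acids ** cdr_positions
print(f"20^60 ≈ 10^{math.log10(sequence_space):.0f}")  # prints 20^60 ≈ 10^78
```

Common estimates put the number of atoms in the observable universe around 10^78 to 10^82, so CDR sequence space alone is indeed of the same order.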

When you think of it in that light, it’s actually more miraculous that our B cells, or a mouse’s B cells, or a llama’s B cells, or a high-throughput assay ever come up with a viable answer than it is to imagine that those answers are always optimal.

And the power of a generative platform is that you can start to move away from sequence being your search space and understand where function space resides: what sequences exhibit a common function, and what a gradient of positive function looks like, at times predicting sequences that are hugely distinct from one another but are next-door neighbors in the functional realm. So what all that allows is for us to put more antibodies against a desired target than any prior approach we’re aware of, and to precisely position them against a given epitope. And that diversity of winners allows one to start to embed therapeutic advantages at multiple levels of the antibody.

Higher potency, better dosing regimens, longer half-life, erased manufacturing liabilities. And the rate at which this speeds up biotech is pretty spectacular to us. We think the best moat in the future is going to be very simple: it’ll be the best drugs. Other moats won’t be very effective. And we are really pleased and impressed by the way these tools can allow us to achieve that.

Yaron Werber:

And so the model is very much sequence variability that ultimately, based on the model, generates obviously different structures of the antibody in that case, and more specific binders. And you can look at the CDRs to obviously look at potency. Are you also varying a lot of the Fc domain relating to manufacturing, and how to titrate up ADCC, titrate down immunogenicity, as need be?

Geoff von Maltzahn:

Yeah. I mean simplistically, any function of a protein that confers a therapeutic advantage is within the remit of these models. The one that we’ve started with is the feat that gets you on the field for all biotechnology that relies upon binding to a target. But we found that the models can simultaneously overlay manufacturability, resistance to aggregation, and other parameters that help you create a better therapeutic. And these models get wiser the more data flows into them, which, back to your scale point, has some really interesting implications for future biotech innovation, in that thus far, as one scales, some of the virtues of small human teams pushing forward the frontiers of innovation start to dissipate.

While the benefits of economies of scale step into the picture, companies have typically had their discovery prowess plateau. That may not be the case, and doesn’t appear to be the case, if you can tap into intelligence of scale, where the scale at which you are discovering brings greater intelligence and discovery prowess with it. What we’ve seen is that our antibody programs help us get better at peptide programs, and vice versa. And antibody programs make other antibody programs better.

That has powerful implications for the general application of machine intelligence to the future of biology, and we may start to see a subset of startups that do this to a really elite degree continually accrue more platform value and become more prolific in their ability to prosecute programs as they grow.

Yaron Werber:

And so you have a model that is… It’s somewhere still young, in the building mode. It’s generating a lot of data, and you need to validate some of those findings to fine-tune the model and allow it to feed back and learn from the data. So a lot of it, to your point, now needs to be translated into three-dimensional real life and manufacturing of these antibody modalities, let’s say if we’re talking about an antibody format, and then fed back in a high-throughput way to look at structure-function at that receptor, let’s say if we’re looking at a ligand or receptor specifically, and feed that back into the model.

So inevitably you do have a scale problem to solve for in manufacturing and testing, because the model, I would imagine, would probably outpace your ability to generate antibodies and test them in real life. Can you talk about that process?

Geoff von Maltzahn:

I’ll give you a quick snapshot. So we have brought, as far as we’re aware, the first generative AI antibody into clinical testing. Our first program was focused on identifying antibodies that would hit a portion of the spike protein of COVID that hasn’t evolved almost at all since the first variants came out, and it might not be able to evolve, in that it happens to be this sort of amazing mousetrap that allows a fusogen to push two membranes together, as opposed to the ACE2-binding part, which has mutated voraciously since arrival. We have our next program moving to the clinic before the end of this year. And this has allowed us to do a number of categories of things. So from a binding perspective, first, as our models started to go from crawling to better locomotion, when I was still leading the team, I challenged the computational group to try to take the top $50 billion of antibody therapeutic sales and generate antibodies that would hit the exact same target.

In fact, the same epitope of the same target, in the same binding pose, with a comparable structural interface, with comparable or better affinity, without being anywhere near the parent intellectual property. Within three months, they were able to do that for 100% of them. And to put that in context, I think if you had a big pharmaceutical company’s resources, you probably wouldn’t be able to do that in five years.

It’s also allowed us to generate antibodies to epitopes that, as far as we’re aware, neither immune systems nor prior empirical approaches have found valid solutions to, including scenarios where straight-out-of-the-computer predictions function as an antibody to one of those sites that, as far as we know, hasn’t been targeted previously. So that allows one to access either de-risked biology while erasing therapeutic liabilities, or novel biology in new ways.

So you’re right. We have a portfolio prioritization challenge that doesn’t usually exist for a company five years old. We have 17 programs. I mentioned two going to the clinic, with more going to the clinic next year. And the rate-limiting steps for us don’t appear to be on the discovery side. Some of the things that we’re doing might otherwise have been decade-long campaigns. So it does create a unique question of, “All right, well then what is limiting?”

Something is always limiting when you’re building a company. It’s often one of three things: your opportunity, your resources or your team. And for most of the past 20 years that has been opportunity, in that a biotech company has had an insight into an area of biology that might be best applied to a few therapeutic programs.

Here, if what we’re seeing is right, then there are universal rules that govern how proteins function. Proteins are the anchor of biotech: antibodies are a subset of them, peptides are another portion, and gene therapies are another portion. Therefore the order in which one does things merits a lot of scrutiny. So we’re focused on de-risked biology, on things where manufacturing isn’t a major limitation, and we are applying our models to the manufacturability of protein therapeutics as well.

As far as we can tell, they’re well suited to that, because the production of a protein inside of a cell has been a very difficult thing for our human brains, or any prior modeling approaches, to really understand. But there are some meaningful advantages to these approaches now.

Yaron Werber:

And within the model, what are the limitations, and how important is it for an understanding of the function of the target protein to be inputted into the model? If you’re looking to bind to a novel structure, you have to really understand how that epitope, that area, will impact the overall functionality of the target pathway, right? So maybe talk a little bit about that. How much new experimentation do you need to do on the biology itself, even de-risked biology?

Geoff von Maltzahn:

For that reason, we’re trying in our initial portfolio to isolate as much technology risk as we can and limit the amount of biology risk that we’re taking. And it seems that we’re able to embed sufficiently meaningful therapeutic advantages to antibodies against de-risked protein targets that we can live for a short period of time in a virtuous place of relatively low biology risk, but while adding important medicines to what patients can access.

Of course, inevitably we’re going to be in high-risk biology as well. And what you’re describing, which is being able to choose areas where our advantages can be brought to bear and with the highest fidelity translate to an outcome that matters in the clinic, is something that we think a lot about. If you oversimplify what we can do just from a technology perspective, if one has a novel target, instead of saying, “Let me look for an antibody that sticks to this target somewhere,” we may have the ability to say, “Let’s generate antibodies across a range of affinities for every one of the epitopes that this target presents, and let’s conduct a meritocracy of which of those antibodies elicits a biological response that appears to be most efficacious.” That allows one to start to potentially access the best drug for a target much more methodically than our current tools.

Yaron Werber:

That’s fantastically interesting, and it obviously really pushes the boundaries. So if you look at Generate Biomedicines, you’ve raised about $700 million so far, since you essentially founded the company, through Series C. That’s without the capital coming from the Amgen deal. What suggests that the platform is working? As you mentioned, there’s one IND going in, there’s actually two, and then a host coming potentially even next year. And you’re looking at I&I, you’re looking at ID, infectious disease, and you’re obviously looking at oncology too as an initial priority set. How well is it working so far, and where will you be a year from now?

Geoff von Maltzahn:

Yeah. I’ll give you two categories of examples that matter to me, and you can tell me whether they resonate. The first category is what leads us to believe that this is going to create valuable therapeutics. The second category is what leads us to believe that this is going to change the rules of biomedicine. The first has evident value in the short term. The second in some ways is much more aligned with the reason we asked the questions that became Generate in the first place.

None of us get to live very long. Maybe we have a little bit of time; a few decades, might be zero. It’s much more fulfilling to try to work on things that may just change the very rules that a whole category operates on. And although to some of my family members protein as a word sounds boring, because they think bacon or soy, these are the most amazing machines in the world.

So if we figure out how to generate them, the implications are really vast. So I’ll give you examples of both. We’ve been able to take antibodies for de-risked targets and improve their therapeutic potency by more than an order of magnitude without straying from the epitope, while improving dosing frequency and reducing other liabilities. That, in our mind, offers the ability to take areas where the biology has been demonstrated and bring the best therapeutic, or what to us may be a better representation of a therapeutic intervention, into the clinic.

And it appears that we can do that for a very large number of programs. On the change-the-rules side of things, here’s one fun example. So we took asparaginase, which is an enzyme used in a subset of cancers. It’s bacteria-derived, and therefore of course it causes immunogenicity upon administration to patients. And we asked a crazy question, which was, “What if we could rebuild asparaginase so that it would be entirely composed of peptides that are already inside us, or that are hard for our innate immune system to present and recognize?”

In order to do that, we had to generate versions of asparaginase where we had to simultaneously change more than a hundred amino acids. Again, that’s, in that case, 19 to the 100th power. And with roughly a coin flip’s odds, those would still function as an asparaginase. Probability-wise, that’s like jumping off of Earth and landing on another planet somewhere in the universe that’s habitable for human life.
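The same back-of-the-envelope check applies to the redesign space described above, using the roughly 100 changed positions and 19 alternative amino acids per position quoted in the passage.

```python
import math

# ~100 positions changed simultaneously, each with 19 alternative amino acids.
positions_changed = 100
alternatives = 19

redesign_space = alternatives ** positions_changed
print(f"19^100 ≈ 10^{math.log10(redesign_space):.0f}")  # prints 19^100 ≈ 10^128
```

That is roughly 50 orders of magnitude beyond the CDR sequence space discussed earlier, which is what makes a coin-flip hit rate in that space so striking.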

Sometimes they functioned as better asparaginases, with higher [inaudible 00:31:10] over Km. And I give that as an example because that would’ve been impossible with prior tools of rational design and data science, or mutagenesis and directed-evolution-based modulation of that protein. It may just mean that we’re going to be able to make things like gene therapy vectors that are dramatically safer and that our immune system doesn’t pay attention to. And it certainly means that we’re going to be able to create complex proteins that are generated and optimized to be the best tool for performing a given task.

Yaron Werber:

And that specific example with asparaginase, you’re varying a hundred amino acids out of how many in that protein?

Geoff von Maltzahn:

Couple hundred.

Yaron Werber:

So it’s half the protein being varied while maintaining the underlying functional integrity. What’s gospel at that point? Is it literally the binding pocket, the active moiety of the protein, or can you vary that as well?

Geoff von Maltzahn:

We’ve been able to vary that as well. But you’re right that what you just described as gospel is going to be important to figure out for the future of intellectual property in biotech, in that just saying, “I found this amazing sequence, please protect things that rhyme with it, that are 90% similar, 95% similar,” is not going to be the moat that it has been over the past few decades, in that these algorithms appear to be able to jump over those moats almost while touching outer space.

Those will probably function more like copyright law. So in order to fend off the level of mastery that machines can have over protein function, one will have to really figure out what the conserved moieties are, and what that regime of optimal function space looks like in sequences, subsets of sequences, arrangements of atoms or side chains, and the like.

Yaron Werber:

Yeah. So when you’re putting a company like this together, what’s the hardest part to get right? Is it the code? Is it the model? Is it the biological computation that goes into it?

Geoff von Maltzahn:

I’ll give you an answer that may sound surprising. I think getting the culture and ways of working right is the hardest. For much of the past couple of decades, data science teams have been on the receiving end, almost like a core facility in many organizations, and human brilliance has been driving the bus. Now the cloud has parted a bit, and the mountain range of intelligence appears to have higher peaks of machine times human. And figuring out how you create that sense of trust and love and collaboration and adherence to the mission, what the cycles of interaction are, and people almost taking on another PhD’s worth of expertise in order to speak the language of colleagues they hadn’t formerly worked hand in hand with, is really important.

There’s a lot of subtleties to it. I have intense admiration for the life form of Generate for that reason. There’s a level of pride and collaboration instilled in the ways of working that’s really amazing and hard to get right.

Yaron Werber:

Yeah. When you’re looking at someone from the outside, when they look at Generate, how do they measure success and how can they tell if the model is continuing to get better?

Geoff von Maltzahn:

Good question. So I haven’t been in the shoes of an investor trying to figure out the difference between buzzwords and authentic capability, but I’ll ride with your question. I would spend as much time on this as I would on portfolio assessment, in that the companies that are going to have the ability to change the rules are going to exhibit some of the things that I’ve described: intense collaboration, individual leaders who are bought in and who have real mastery.

Of course that includes deeply understanding whether the ML team, or comparable team, is extraordinary. I would spend almost as much time asking the people outside of that team what they think and what they know about the machine intelligence strategy, how they would describe the frequency with which data flows into it, and how they experience predictions that come out of the machine. And one probably can pretty quickly discern when a company has its very being built around machine intelligence versus the other end of the continuum, just saying things in a buzzword realm, or the things in between, where it is a useful component but isn’t defining the very pace of innovation in the end.

Yaron Werber:

And then maybe finally, is the model completely internal, or can the model go out, as you said before, access databases, acquire experimentation from Medline and PubMed, do its own searches, and learn didactically from that too?

Geoff von Maltzahn:

Great question. So the simple answer is both. There are reasonable debates over whether companies like OpenAI, creating human large language models, have any moat relative to competitors that could train a similar-size model, by writing an appropriate check, on all of the vast human language that’s available online. So long as you are only using publicly available data, I think advantages will be fleeting and will democratize more rapidly than people may think. And therefore, that inference I was describing before, whether a biotech company has the ability to generate valuable, high-quality data in quantities that matter for creating products that are actually valuable, that pool of data, whether or not it’s proprietary, who else would have access to it, and how hard it would be for others to emulate, is going to be one of the most important determinants of competitive advantage.

Yaron Werber:

So I think I’ve looked recently: when ChatGPT 3.5 was tested on its ability to predict disease just from clinical data, I think it got about 75% of diagnoses right. And I think version four, or whatever the next version in development is, is up to nearly a hundred now. So it’s almost like the role of the physician, increasingly, is what we all learned in med school eventually: all we’re going to do is be the collectors of information that can be fed into something else that will actually put the diagnosis together. And to your point, what’s going to be really interesting, we look at Relay, which does a lot of protein simulation and has what we believe is now a validated platform to look at undruggable targets, in the chemical space obviously.

Up to now it’s been in oncology, and one of the challenges is that not all targets are created equal. Many times you end up validating a previously unvalidated, undruggable target by designing a great therapeutic against it and then seeing whether it works. It sounds like what you’re doing now is not quite pushing there; you’re looking at validated targets and pathways that have been druggable and ultimately led to a clinical benefit. And as you mentioned, you can continue to innovate in that fashion. At what point do you start looking at, I don’t want to call them undruggable, maybe pathways that haven’t been explored yet or against which a new therapeutic hasn’t been developed?

Geoff von Maltzahn:

Yeah. I didn’t intend to imply that we’re only active there, but it’s a virtue to be able to work in areas that are low risk and high value. Those usually don’t exist, because if you’re using tools comparable to others’, you would come up with comparable answers to things that are already de-risked. The therapeutic advantage in the molecules we’ve created comes from the distinctiveness of our platform. But the most exciting things we’re working on are in the realm of, “Oh, nobody could ever do that before,” or where the degree to which we can prosecute it is very distinctive.

So several of those 17 programs are either in areas that you would describe as undruggable or involve, for example, the ability to access epitopes with a level of specificity for a particular cancer type or another disease that wouldn’t have been feasible with current approaches. And that’s a big part of the mission of Generate and our near-term strategy.

Yaron Werber:

Yeah. Great. Let’s move to my favorite part of each podcast, something a little personal with a little touch of humor. And this one we haven’t asked anybody before. If you could have one superpower, what would it be? And you can only choose one.

Geoff von Maltzahn:

Ooh, that’s awesome. I’m not confining myself to comic book ones. I don’t know if this exists, but time travel comes to mind. And the reason is, what I love about startups is that you’re trying to push yourself to the very edge, figure out something that might seem like speculation to the world but that in your mind the world will definitively have in the future, and then devise how to actually make it feasible. So at times it feels like a version of time travel.

Of course, the rate at which one can assess the validity of that hypothesis is dictated by the way we experience time. It’d be awesome to be able to hop around a little bit, experience arcs of innovation, experience my kids when they’re 60 years old, 30 years old, alongside the ages I get them at right now.

Yaron Werber:

And if you could only choose to go back in time or go forward in time, which one would you choose?

Geoff von Maltzahn:

Oh, that’s got to be forward.

Yaron Werber:

It was cold and dark in the dark ages.

Geoff von Maltzahn:

You might be trying to push the remote.

Yaron Werber:

I literally do sit every once in a while and try to figure out what exactly we’re all doing here and how much time we’re all going to have left on this planet. And I don’t want to get into a discussion about whether the planet is warming on its own, whether this is part of a normal cycle, or whether we’re just accelerating it. We’re probably not going to be around, and maybe we wouldn’t have been around regardless of our innovation. If you look, the planet changes constantly; it’s a slow, ongoing process. But what this really means, it begins to show how limited our capabilities really are, how little we really know, and just think about how little understanding we actually have, not just of biology but of the chemical and physical world, right? There’s got to be more to this than what we appreciate and know so far.

Geoff von Maltzahn:

Well, going back to your medical school example, the models just prior to the one that got 75% couldn’t even take the test. It was the equivalent of handing a student a test and them saying, “What’s this?” Just think about, with any extrapolation, what the future holds. There are so many reasons to be pessimistic about the trajectory of the life form of humanity. A reason to be intensely optimistic is that we’re in the climax of the movie of intelligence and the movie of biology.

And in fact, this wave isn’t just going to touch biology. Some of the stuff that I’m working on right now is the implications of generative AI for the world of materials science. Can we pull a century of materials science progress into the near future with these advances? That’s going to be a really interesting race of sorts: the problems we’ve created and their compounding nature, versus the rate at which we can potentially invent our way out of them.

Yaron Werber:

Are you talking about materials like different polymers and different alloys, or are we also talking about potentially different fuel sources?

Geoff von Maltzahn:

The things that we’re primarily thinking about are the critical nodes for sustainability. So how can we access dramatic advances in batteries, solar cells, CO2 capture and utilization, and others? We’ve defined the progress of humanity on the basis of materials, and in both biology and materials, one of the most beautiful reasons to be optimistic is that, with any reasonable assumptions, you could add up all of Mother Nature’s experiments and conclude that, from a biological and materials perspective, she’s only been able to test one drop of water out of the earth’s entire ocean of potential. And maybe these tools are going to help us traverse those open waters.

Yaron Werber:

Yeah. That’s very cool. What’s the one story or memory when you think about your childhood that you look back today and say, “Wow, that part of me really hasn’t changed at all?”

Geoff von Maltzahn:

All right. I’ve got two examples that come to mind. One is, as a kid I broke my collarbone three times, my arms five times, and my hand once. I don’t break bones these days, but that was more an expression of prioritizing things independent of personal pain. And I think my affinity for startups has a similarity to it: you have to be willing to embrace pain and challenges all the time, and the wonderful aspect of that is it makes you grow. The second is, somebody told me once that when you swallow bubble gum, it stays in your system for seven years. I thought [inaudible 00:46:26]. This might not be what you wanted, but I decided to buy two rolls of Bubble Tape, the six-foot-long bubble gum, and just eat them like a sandwich. And it turns out it doesn’t stay in your system.

Yaron Werber:

It does not. I hope not. What flavor was that? That’s like eating a 60-ounce steak.

Geoff von Maltzahn:

Both of them were the bright pink original gum flavor.

Yaron Werber:

That’s a good one though. If you had to choose any of them, that’s a good one. Well, great, Geoff, always good to see you. Thank you so much for joining us. We really appreciate it. And we’ll continue to follow the story closely.

Geoff von Maltzahn:

Cool. Thank you, Yaron. Wonderful to be here.

Speaker 1:

Thanks for joining us. Stay tuned for the next episode of TD Cowen Insights.

