Brian Naughton | Sun 28 September 2025 | biotech | biotech ai

This is a continuation of my past articles on protein binder design. Here I'll cover the state-of-the-art in AI antibody design.

Antibodies and antibody fragments (e.g., Fab, scFv, VHH) are particularly important in biotech, because they are highly specific, adaptable to almost any target, and have a proven track record as therapeutics. Full antibodies also have Fc regions, so they can activate the immune system as well as bind. In this article I'll just use the term "antibody", but many of the design approaches discussed below generate these smaller antibody fragments.

A menagerie of antibody fragments (Engineered antibody fragments and the rise of single domains, Nature Biotech, 2005)

Last year we saw a lot of progress in mini-binder design (especially BindCraft), but this year there has been a lot of activity in antibody and peptide design too, as it becomes clear that there are commercially important opportunities here. BindCraft 2 will likely include the ability to create antibody fragments; a fork called FoldCraft already enables this.

Antibodies are proteins, so why is antibody design not just the same problem as mini-binder design? In most ways they are the same. The main difference is that the CDR loops that drive antibody binding are highly variable and do not benefit directly from evolutionary information the way other binding motifs do. Folding long CDR loops correctly is especially difficult.

Here I'll review the latest antibody design tools. I'll also provide some biomodals code to run in case the reader wants to actually design their own antibodies!

RFantibody

While there were other antibody design tools before it, especially antibody language models, RFantibody was arguably the first successful de novo antibody design model. It is a fine-tuned variant of RFdiffusion, and like RFdiffusion it requires testing thousands of designs to have a good shot at producing a binder. The RFantibody paper dates back to March 2024, so as you'd expect, the performance, while remarkable for the time, has since been surpassed, and the Baker lab seems to have moved on to the next challenge. (Note, the preprint was first published in 2024 but the code was only released this year.)

The diffusion process as illustrated in the RFantibody paper

IgGM

It's pretty interesting how many Chinese protein models there are now. Many of these models are from random internet companies just flexing their AI muscles. IgGM is a brand new, comprehensive antibody design suite from Tencent (the giant internet conglomerate). It can do de novo design, affinity maturation, and more.

There are some troubling aspects to the IgGM paper. Diego del Alamo notes that the plots have unrealistically low variance (see the suspicious-looking plot below). When I run the code, I see what look like not-fully-folded structures. However, there is also strong empirical evidence it's a good model: a third place finish in the AIntibody competition (more information on that below).

Suspiciously tight distributions in plots from the IgGM paper. Sometimes this is due to plotting standard error vs standard deviation.

To run IgGM and generate a nanobody for PD-L1, run the following code:

# get the PD-L1 model from the Chai-2 technical report, only the A chain
curl -s https://files.rcsb.org/download/5O45.pdb | grep "^ATOM.\{17\}A" > 5O45_chainA.pdb
# get a nanobody sequence from 3EAK; replace CDR3 with Xs; tack on the sequence of 5O45 chain A
echo ">H\nQVQLVESGGGLVQPGGSLRLSCAASGGSEYSYSTFSLGWFRQAPGQGLEAVAAIASMGGLTYYADSVKGRFTISRDNSKNTLYLQMNSLRAEDTAVYYCXXXXXXXXXWGQGTLVTVSSRGRHHHHHH\n>A\nNAFTVTVPKDLYVVEYGSNMTIECKFPVEKQLDLAALIVYWEMEDKNIIQFVHGEEDLKVQHSSYRQRARLLKDQLSLGNAALQITDVKLQDAGVYRCMISYGGADYKRITVKVNAPYAAALEHHHHHH" > binder_X.fasta
# run IgGM; use the same hotspot from the Chai-2 technical report (add --relax for pyrosetta relaxation)
uvx modal run modal_iggm.py --input-fasta binder_X.fasta --antigen 5O45_chainA.pdb --epitope 56,115,123 --task design --run-name 5O45_r1

IgGM has one closed library dependency, PyRosetta, but this is only used for relaxing the final design, so it is optional. There are other ways to relax the structure, like using pr_alternative_utils.py from FreeBindCraft (a fork of BindCraft that does not depend on PyRosetta) or openmm via biomodals as shown below. FreeBindCraft's relax step has extra safeguards that likely make it work better than the code below.

uvx modal run modal_md_protein_ligand.py --pdb-id out/iggm/5O45_r1/input_0.pdb --num-steps 50000

PXDesign

Speaking of Chinese models, there is also a new mini-binder design tool called PXDesign from ByteDance, which is available for commercial use, but only via a server. It came out of beta just this week. The claimed performance is excellent, comparable to Chai-2. (The related Protenix protein structure model, "a trainable, open-source PyTorch reproduction of AlphaFold 3", is fully open.)

PXDesign claims impressive performance, comparable to Chai-2

Germinal

The Arc Institute has been on a tear for the past year or so, publishing all kinds of deep learning models, including the Evo 2 DNA language model and State virtual cell model.

Germinal is the latest model from the labs of Brian Hie and Xiaojing Gao, and this time they are joining in on the binder design fun. Installing this one was not easy, but eventually Claude and I got the right combination of jax, colabdesign, spackle and tape to make it run.

Unfortunately, there are also a couple of closed libraries required: IgLM, the antibody language model, and PyRosetta, both of which require a license. AlphaFold 3 weights, which are thankfully optional, require you to petition DeepMind, but don't even try if you are a filthy commercial entity!

At some point all these tools need to follow Boltz and become fully open, or the licensing requirements will keep creating unnecessary friction and slowing everything down.

The code below uses Germinal to attempt one design for PD-L1. It should take around 5 minutes and cost <$1 to run (using an H100). Note: I have never gotten Germinal to pass all its filters, which may be a bug, but it does still output designs with reasonable metrics. The code was only released this week and is still in flux, so I don't recommend any serious use of Germinal until it settles down a bit. My code below just barely works.

# Get the PD-L1 pdb from the Chai technical report
curl -O https://files.rcsb.org/download/5O45.pdb
# Make a yaml for Germinal
echo -e 'target_name: "5O45"\ntarget_pdb_path: "5O45.pdb"\ntarget_chain: "A"\nbinder_chain: "C"\ntarget_hotspots: "56,115,123"\ndimer: false\nlength: 129' > target_example.yaml
# Run Germinal; this is lightly tested, no guarantees of sensible output!
uvx --with PyYAML modal run modal_germinal.py --target-yaml target_example.yaml --max-trajectories 1 --max-passing-designs 1

Mosaic

Mosaic is a general protein design framework that is less plug-and-play than the others listed above, but enables the design of mini-binders, antibodies, or really any protein. It's essentially an interface to sequence optimization on top of three structure prediction models (AF2, Boltz, and Protenix.) You can construct an arbitrary loss function based on structural and sequence metrics, and let it optimize a sequence to that loss.

While mosaic is not specifically for antibodies, it can be configured to design only parts of proteins (e.g., CDRs), and it can easily incorporate antibody language models in its loss (AbLang is built in). The main author, Nick Boyd from Escalante Bio, wrote up a recent blog post on mosaic, and showed results comparable to the current state-of-the-art models like BindCraft. Unlike some other tools listed here, it is completely open.

Mosaic has performance comparable to BindCraft on a small benchmark set (8/10 designs bound PD-L1 and 7/10 bound IL7Ra)
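
To make the "arbitrary loss function" idea concrete, here is a minimal sketch of the general pattern: combine a structure-confidence term and a language-model term into one scalar to minimize. This is an illustration of the concept, not mosaic's actual API, and the specific metrics, weights, and normalizations are arbitrary choices.

# Illustrative only: composing a binder design loss from precomputed metrics.
# interface_plddt and ipae would come from a structure predictor (AF2/Boltz/Protenix);
# lm_log_likelihood from an antibody language model (e.g., AbLang). Weights are arbitrary.
def design_loss(interface_plddt, ipae, lm_log_likelihood, w_plddt=1.0, w_pae=0.5, w_lm=0.1):
    structure_term = 1.0 - interface_plddt / 100.0   # low when the interface is confidently predicted
    pae_term = ipae / 30.0                           # low when predicted aligned error at the interface is small
    lm_term = -lm_log_likelihood                     # low when the sequence looks antibody-like
    return w_plddt * structure_term + w_pae * pae_term + w_lm * lm_term

# A sequence optimizer proposes edits (e.g., only at CDR positions) and keeps those that lower this loss.
print(design_loss(interface_plddt=88.0, ipae=6.5, lm_log_likelihood=-0.4))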


Commercial efforts

Chai-2

Chai-2 was unveiled in June 2025, and the technical report included some very impressive results. They claim a "100-fold" improvement over previous methods (I think this is a reference to RFantibody, which advised testing thousands of designs, versus tens for Chai-2.)

Chai-2 successfully created binding antibodies for 50% of targets tested, and some of these were even sub-nanomolar (i.e., potencies comparable to approved antibodies). It is a bit dangerous to compare across approaches without a standardized benchmark—for example, some proteins like PD-L1 are easier to make binders for—but I think it's fair to say Chai-2 can claim the best performance of any model to date, mini-binder or antibody.

Diffuse Bio

Diffuse Bio's DSG2-mini model was also published in June 2025. There is not too much information on performance apart from a claim that it "outperforms RFantibody on key metrics". Like Chai-2, the Diffuse model is closed, though their sandbox is accessible so it's probably a bit easier to take for a test drive than Chai-2.

Screenshot from the Diffuse Bio sandbox

Tamarind, Ariax, Neurosnap, Rowan

Every year there are more online services that make running these tools easier for biologists.

Tamarind does not develop its own models, but allows anyone to easily run most of the open models. Tamarind has been impressively fast at getting models onboarded and available for use. They have a free tier, but realistically you need a subscription to do any real work, and I believe that costs tens of thousands per year. Neurosnap looks like it has similar capabilities to Tamarind, but the pricing may be more suitable for academics or more casual users. Ariax has done an incredible job making BindCraft (and FreeBindCraft) available and super easy to run. They don't generate antibodies yet, but they will once a suitably open model is released. Rowan is more small molecule- and MD-focused than antibody-focused—they even release their own MD models—so although it's a fantastic toolkit, it's less relevant to antibody design.

Tamarind has over one hundred models, including all the major structure prediction and design models

Xaira, Generate, Cradle, Profluent, Isomorphic, Nabla, BigHat, etc

There are a gaggle of other actual drug companies working on computational antibody design, but these models will likely stay internal to those companies. Cradle is the outlier in this list since it is a service business, but I believe they do partnerships with pharma/biotech, rather than licensing their models.

It will be interesting to see which of these companies figure out a unique approach to drug discovery, and which get overtaken by open source. Most people in biotech will tell you that if you want a highly optimized antibody and can wait a few months, companies like Adimab, Alloy, or Specifica can already reliably achieve that, and the price will be a small fraction of the total cost of the program anyway.


Benchmarks

AIntibody

The AIntibody competition, run by the antibody discovery company Specifica, is similar to last year's Adaptyv binder design competition, but focused on antibodies.

The competition includes three challenges, but unlike the Adaptyv competition, none of the challenges is a simple "design a novel antibody for this target". The techniques used in this competition ended up being quite complex workflows specific to the challenges: for example, a protein language model combined with a model fine-tuned on affinity data provided by Specifica.

Interestingly, the "AI Biotech" listed as coming third is—according to their github—IgGM. The Specifica team have given a webinar on the results with some interesting conclusions, but the full write-up is still to come.

Conclusions from the AIntibody webinar

BenchBB

BenchBB is Adaptyv Bio's new binder design benchmark. While it's not specifically for antibodies, if you did try to generate PD-L1 binders using the biomodals commands given above, you could test your designs here for $99 each.

We know we need a lot more affinity data to improve our antibody models, and $99 is a phenomenal deal, so some crypto science thing should fund this instead of the nonsense they normally fund!

There are seven currently available BenchBB targets

Conclusion

I often seem to end these posts by saying things are getting pretty exciting. I think that's true, especially over the past few weeks with IgGM and Germinal being released, but there are also some gaps. RFantibody was published quite a while ago, and we still only have a few successors, most of which are not fully open. The models are improving, but large companies like Google (Isomorphic) are no longer releasing models, so progress has slowed somewhat. Mirroring the LLM world, it's left to academic labs like those of Martin Pacesa, Sergey Ovchinnikov, Bruno Correia and Brian Hie, and Chinese companies like Tencent to push the open models forward.

I did not talk about antibody language models here even though there are a lot of interesting ones. It would be a big topic, and they are more applicable to downstream tasks, once you have a binder to improve upon.

As with protein folding (see SimpleFold from this week!), there is not a ton of magic here, and many of the methods are converging on the same performance, governed by the available data. To improve upon that, we/someone probably needs to spend a few million dollars generating consistent binding and affinity data. In my opinion, Adaptyv Bio's BenchBB is a good place to focus efforts.

Publicly available affinity data from the AbRank paper. Most of the data is from SARS-CoV-2 or HIV, so it's not nearly as much as it seems.

Running the code

If you want to run the biomodals code above and design some antibodies for PD-L1 (or any target) you'll need to do a couple of things.

 1. Sign up for modal. They give you $30 a month on the free tier, more than enough to generate a few binders.

 2. Install uv. If you use Python you should do this anyway!

 3. Clone my biomodals repo:

git clone https://github.com/hgbrian/biomodals # or gh repo clone hgbrian/biomodals
Brian Naughton | Mon 05 May 2025 | biotech | biotech ai ip

The new class of protein AI design tools is amazing, and could revolutionize many areas of science, including therapeutics, diagnostics, and biosensors. Surprisingly, one important area that I haven't seen discussed much is how these tools could impact patents. I am not a lawyer, so obviously this post is just my basic understanding, and I'd be happy to hear corrections. If there is a more expert critique out there, I did not find it.

Patents are wordy and convoluted by design. Protein patents have some common elements, because a protein is defined by its string of amino acids: they often include the sequence(s) being patented, and a threshold for how similar another sequence can be before it infringes. That means there is a target to hit, and AI is really good at hitting targets.

There are two major categories of protein patents: biologics (usually meaning antibodies) and enzymes.

Antibodies

According to the European Patent Office, there are two main ways to patent an antibody:

  • "functional" claims, usually meaning the antibody's associated antigen or epitope;
  • "structural" claims, usually meaning a sequence and sequence identity threshold, along with the epitope or some other support.

Over the past few years, the "functional" claim has been going away. In the US it was killed off by the 2023 Amgen vs Sanofi ruling, which essentially said you can't patent the concept of an antibody against PCSK9. That means antibodies are now almost exclusively patented based on their structure (more specifically, a sequence plus some supporting functional information like epitope affinity.)

For antibody sequences, it used to be common for claims to cover any sequence 80%+ identical in the heavy or light chains. These days it seems like you have to be more specific, with claims only covering 100% identity to all 6 CDRs.

To take some real examples:

  • Zanidatamab, a HER2 bispecific approved in 2024, claims sequences with 100% sequence identity to its CDRs;
  • Epcoritamab, a CD3/CD20 bispecific approved in 2024, also claims sequences with 100% sequence identity to its CDRs;
  • Trastuzumab, the famous HER2 antibody approved in 1998 (filed in 2013), claims sequences with 85%+ sequence identity to the heavy and light chains, and does not mention CDRs at all.

The EPO says: "the slightest modification of the CDRs can affect the recognition of the target." There is a nice breakdown of the differences between the USPTO vs EPO approach to antibody patents here.

Enzymes

For enzymes, the patent landscape is more complicated, or at least more varied. Unlike antibodies, where the patents are pretty uniformly focused on the sequence that binds an epitope, enzymes can perform any number of functions. Enzyme types include enzyme replacement therapies, industrial enzymes like detergents, and molecular biology tools like CRISPR-Cas9. It is still typical for these patents to include a sequence and supporting information.

Some examples:

  • this detergent patent, granted in 2018, claims sequences with 60%+ sequence identity to the reference;
  • this proteinase patent, granted in 2022, claims sequences with 90%+ sequence identity to the reference;
  • this novel Taq polymerase patent, granted in 2025, claims sequences with 95%+ sequence identity to the reference.

Cas9

The Cas9 patents are unusually diverse: there are hundreds of them and they mostly cover the many applications of the invention rather than the sequences. Since the 2013 ruling against Myriad Genetics, sequences from naturally occurring enzymes like Cas9 cannot be patented. Engineered sequences can be patented with other supporting functional information. You cannot take one of the thousands of unique Cas9 sequences in GenBank and use that to circumvent the CRISPR-Cas9 patents.

There are hundreds of Cas9 patents covering everything anyone could think of

AI

Given that the amino acid sequence is so important in protein patents, I am surprised that it is not bigger news that AI has effectively broken the direct connection between sequence and function.

For patents where protein sequence identity is protected, it is now relatively straightforward to generate new sequences that fold to the same structure but have 50% or lower sequence identity.
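
As a point of reference, the identity thresholds in these claims boil down to counting matching positions in an alignment. Below is a minimal sketch for two already-aligned, equal-length sequences; real claims are assessed over a full pairwise alignment, so this is only to make the arithmetic concrete.

# Simplified percent identity for two pre-aligned, equal-length sequences.
def percent_identity(seq_a: str, seq_b: str) -> float:
    assert len(seq_a) == len(seq_b)
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# One substitution in a ten-residue stretch gives 90% identity
print(percent_identity("QVQLVESGGG", "QVQLVESAGG"))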

For antibody patents where the CDR sequence is protected, I believe it is also relatively straightforward to introduce a mutation that does not disrupt binding. To be honest, I am not even sure AI is required here, since a mutation scan could perform the same function. Perhaps for this reason, a recent paper called for "comprehensive CDR scanning" to protect a panel of CDR sequences instead of just one.
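
To see why a plain mutational scan could do the job, here is a minimal sketch that enumerates every single-point mutant of a CDR; any one of these variants already fails a "100% identity to the CDRs" claim, and the subset that retains binding would then be identified experimentally or with a model. The CDR-H3 sequence here is just an illustrative example.

# Enumerate every single amino acid substitution of a CDR sequence.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def single_mutants(cdr: str):
    for i, wt in enumerate(cdr):
        for aa in AMINO_ACIDS:
            if aa != wt:
                yield f"{wt}{i + 1}{aa}", cdr[:i] + aa + cdr[i + 1:]

cdr_h3 = "ARDYYGSSYFDY"  # illustrative CDR-H3
variants = list(single_mutants(cdr_h3))
print(len(variants), "single mutants, e.g.,", variants[0])  # 12 positions x 19 substitutions = 228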

ProteinMPNN, published in 2022 by Baker lab, is the most prominent tool for producing a new sequence that folds to a known structure. ProteinMPNN is widely used as a step in many protein design workflows. For example, methods like RFdiffusion generate backbone coordinates only, and ProteinMPNN turns that into an amino acid sequence.

In a follow-up ProteinMPNN paper, the authors demonstrated that they could make a myoglobin and TEV protease with comparable or better function and greater stability than the natural versions, with sequence identities as low as 40%. This is below the sequence identity threshold in any patent I have seen.

ProteinMPNN can be used to produce a new sequence for a protein while maintaining its function

Sequence vs Structure

If this ability for AI to circumvent sequence-based patents is an issue, maybe the obvious change here would be to base patent protection on structure. This is a bit more complex than sequence identity, but one way to do this would be with TM-align or a similar tool. TM-align has >3k citations so it is arguably the standard in the field. A TM-score of above 0.8 indicates "the same topology"—in other words a very close structure. I think this would work well for many proteins, though it might need to be constrained to subdomains (akin to CDRs) in some cases.
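
As a sketch of what a structure-based threshold might look like in practice, the snippet below runs TM-align on two PDB files and parses the TM-score from its output; a check like "TM-score > 0.8" would be the structural analogue of a sequence identity threshold. It assumes a TMalign executable is installed and on the PATH.

# Compare two structures with TM-align and return the TM-score (normalized by the first structure).
import re
import subprocess

def tm_score(pdb_a: str, pdb_b: str) -> float:
    out = subprocess.run(["TMalign", pdb_a, pdb_b], capture_output=True, text=True, check=True).stdout
    scores = re.findall(r"TM-score=\s*([\d.]+)", out)
    return float(scores[0])

# e.g., does a redesigned protein keep "the same topology" as the original?
# print(tm_score("original.pdb", "redesign.pdb") > 0.8)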

Interestingly, the only literature I found on patenting 3D structure is from 20 years ago. Maybe this has been debated already and rejected for some reason. I suspect it was just easier to use sequence though.

OpenCRISPR-1

OpenCRISPR-1 was published in 2024 by the protein AI company Profluent. This is a de novo Cas9 enzyme that is substantially different in sequence to any known Cas9 (according to the abstract, "400 mutations away in sequence [from SpCas9]"—specifically 403/1380, or 71% identity).

Cas9 is a bilobed enzyme, with a REC lobe (nucleotide recognition) and a NUC lobe (DNA cleavage and PAM recognition.) Broadly speaking, the REC lobe is the first half of the enzyme (amino acids 50–700), and the NUC lobe is the second (1–50 and 700–1350.) These two lobes are connected by a "bridge helix".

Cartoon representation of Cas9 from addgene.

The OpenCRISPR-1 enzyme is not as novel as it might seem. In fact, I found it is actually 98% identical to a sequence constructed from three Cas9s spliced together from Streptococcus cristatus, Streptococcus pyogenes and Streptococcus sanguinis (24 amino acids are unique to OpenCRISPR-1).

This raises an interesting question, which is whether you could create a "novel" Cas9 by simply stitching together the REC lobe from one species' Cas9 and the NUC lobe from another. I believe this enzyme would work, and this sequence would meet any sequence identity threshold requirements.

The Profluent paper says the OpenCRISPR-1 enzyme was released for "research and commercial applications", but there is a big caveat here. Since CRISPR-Cas9 patents post-date the Myriad decision, almost all are functional / method of use, and naturally the most protected part is the use of Cas9 in "commercial applications" like therapeutics and diagnostics.

It is commendable that Profluent tried to broaden the availability of Cas9, so I appreciate the work behind this, but as I understand it, OpenCRISPR-1 is not really more available for commercial use than any Cas9.

There is actually another "royalty-free" Cas, a "Class 2 Type V" Cas nuclease called MAD7, released by Inscripta for commercial use in 2023. I do not know how this enzyme intersects with the many Cas9 patents.

Conclusion

One upshot of all this AI work is that me-too and biosimilar antibodies will be easier to make. That saves some time and money, but does not necessarily save on the major clinical trial costs, although the probability of success could go up a lot if the antibody is functionally identical.

While many enzyme patents will be affected, patents like CRISPR-Cas9 that rely on functional or method of use claims do not seem to be impacted as much. I don't know how many enzyme patents rely on sequence identity claims vs other claims these days. It would be interesting to (get an AI to) do a proper survey.

For internal research use, it's unclear to me that using AI to reproduce a patented protein does a whole lot, since at least in drug development, the research exemption seems to allow for the use of patented material quite broadly.

Brian Naughton | Sat 08 March 2025 | biotech | biotech ai

I have written about protein binder design a few times now (the Adaptyv competition; a follow up). Corin Wagen recently wrote a great piece about protein–ligand binding. The purpose of this post is to review how well protein binder design is working today, and point out some interesting differences in model performance that I do not understand.

Protein design

There are two major types of protein design:

  1. Design a sequence to perform some task: e.g., produce a sequence that improves upon some property of the protein
  2. Design a structure to perform some task: e.g., produce a protein structure that binds another protein

There is spillover between these two classes but I think it's useful to split this way.

Sequence models

Sequence models include open-source models like the original ESM2, ProSST, SaProt, and semi-open or fully proprietary models from EvolutionaryScale (ESM3), OpenProtein (PoET-2), and Cradle Bio. The ProteinGym benchmark puts ProSST, PoET-2 and SaProt up near the top.

Many of the recent sequence-based models now also include structure information, represented as a parallel sequence, with one "structure token" per amino acid. This addition seems to improve performance quite a lot, allows sequence models to make use of the PDB, and — analogously to Vision Transformers — blurs the line between sequence and structure models.

SaProt uses a FoldSeek-derived alphabet to encode structural information

The most basic use-case for sequence models is probably improving the stability of a protein. You can take a protein sequence, make whatever edits your model deems high likelihood, and this should produce a sequence that retains the same fold, but is more "canonical", and so may have improved stability too.
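
As a rough sketch of that workflow, the snippet below uses a small ESM2 checkpoint from Hugging Face to score every amino acid at one masked position; a real stability campaign would scan all positions and keep substitutions the model ranks above the wild type. The checkpoint, example sequence, and scoring scheme are just one reasonable set of choices.

# Mask one position and rank amino acids by ESM2 likelihood (Hugging Face transformers).
import torch
from transformers import AutoTokenizer, EsmForMaskedLM

model_name = "facebook/esm2_t6_8M_UR50D"  # small checkpoint for speed; larger ESM2 models score better
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = EsmForMaskedLM.from_pretrained(model_name).eval()

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQ"  # arbitrary example sequence
pos = 10  # 0-based position to scan

inputs = tokenizer(seq, return_tensors="pt")
inputs["input_ids"][0, pos + 1] = tokenizer.mask_token_id  # +1 skips the BOS token
with torch.no_grad():
    probs = model(**inputs).logits[0, pos + 1].softmax(-1)

# Rank the 20 amino acids by model likelihood at the masked position
ranked = sorted(((probs[tokenizer.convert_tokens_to_ids(aa)].item(), aa) for aa in "ACDEFGHIKLMNPQRSTVWY"), reverse=True)
print(f"wild type {seq[pos]} at position {pos + 1}; top suggestions: {ranked[:3]}")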

An elaboration of this experiment is to find some data, e.g., thermostability for a few thousand proteins, and fine-tune the original language model to be able to predict that property. SaProtHub makes this essentially push-button.

A further elaboration is doing active learning, where you propose edits using your model, generate empirical data for these edits (e.g., binding affinity), and go back and forth, hopefully improving performance each iteration. Examples include EVOLVEpro, Nabla Bio's JAM (which also uses structure), and Prescient's Lab-in-the-loop. These systems can be complex, but can also be as simple as running regressions on the output of the sequence models, as sketched below.

EvolvePro's learning loop
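
To make the "regression on sequence-model output" idea concrete, here is a minimal sketch: embed each assayed variant (e.g., mean-pooled ESM2 representations), fit a simple regressor on the measured affinities, and rank candidate variants by predicted value for the next round. The embeddings and labels below are synthetic stand-ins.

# One active-learning round as a simple regression on sequence-model embeddings.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_train = rng.normal(size=(96, 1280))            # embeddings of 96 assayed variants (synthetic stand-in)
y_train = rng.normal(size=96)                    # their measured log-affinities (synthetic stand-in)
X_candidates = rng.normal(size=(10_000, 1280))   # embeddings of newly proposed variants

model = Ridge(alpha=1.0).fit(X_train, y_train)
next_round = np.argsort(model.predict(X_candidates))[::-1][:96]  # top predictions to synthesize and assay next
print(next_round[:5])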

Sequence-based models are a natural fit to these kinds of problems, since you can easily edit the sequence but maintain the same fold and function. Profluent and other companies make use of this ability by producing patent-unencumbered sequences like OpenCRISPR.

This is especially enabling for the biosimilars industry. Many biologics patents protect the sequence by setting amino acid identity thresholds. For example, in the Herceptin/trastuzumab patent they protect any sequence >=85% identical to the heavy (SEQ ID NO: I) or light chain (SEQ ID NO: II).

Excerpt from the main trastuzumab patent

Patent attorneys will layer as many other protections on top of this as they can think of, but the sequence of the antibody is the primary IP. (Tangentially, it is insane how patents always give examples of numbers greater than X. Hopefully, the AIs that will soon be writing patents won't do this.)

For binder design, sequence models appear to have limits. Naively, since the model does not know the positions of the atoms, you would assume binder design should be difficult unless you are aping known interaction motifs.

Diego del Alamo points out apparent limits in the performance of sequence models for antibody design

Structural models

Structural models include the original RFdiffusion and the recently released antibody variant RFantibody from the Baker lab, RSO from the ColabDesign team, BindCraft, EvoBind2, foldingdiff from Microsoft, and models from startups like Generate Biomedicines (Chroma), Chai Discovery, and Diffuse Bio. (Some of these tools are available on my biomodals repo).

Structural models are trained on both sequence data (e.g., UniRef) and structure data (PDB), but they deal in atom co-ordinates instead of amino acid strings. That difference means diffusion-style models dominate here over the discrete-token–focused transformers.

There are two major classes of structural models:

  • Diffusion models like RFdiffusion and RFantibody
  • AlphaFold2-based models like BindCraft, RSO, and EvoBind2

The success rates of RFdiffusion and RFantibody are not great. For some targets they achieve a >1% success rate (if we define success as finding a <1µM binder), but in other cases they nominate thousands of designs and find no strong binder.

An example from the RFantibody paper showing a low success rate

BindCraft and RSO are two similar methods that produce minibinders (small-ish non-antibody–based proteins) and rely on inverting AlphaFold2 to turn structure into sequence. EvoBind2 produces cyclic or linear peptides, and also relies heavily on an AlphaFold confidence metric (pLDDT) as part of its loss.

BindCraft (top) and EvoBind2 (bottom) have similar loss functions that rely on AF2's pLDDT and intermolecular contacts

Even though these AF2-based models work very well, one non-obvious catch is that you cannot take a binding pose and get AlphaFold2 to evaluate it. These models can generate binders, but not discriminate binders from non-binders. In the EvoBind2 paper, they found that "No in silico metric separates true from false binders", which means the problem is a bit more complex than just "ask AF2 if it looks good".

According to the AF2Rank paper, AF2 has learned a good model of the physics of protein folding, but may not find the global minimum. The MSAs' job is to help focus that search. This was surprising to me! The protein folding/binding problem is more of a search problem than I realized, which means more compute should straightforwardly improve performance by simply doing more searching. This is also evidenced by the AlphaFold 3 paper, where re-folding antibodies 1000 times led to improved prediction quality.

Excerpt from the AF2Rank paper (top), and a tweet from Sergey Ovchinnikov (bottom) explaining the primacy of sequence data in structure prediction

RFdiffusion/RFantibody vs BindCraft/EvoBind2

The main comparison I wanted to make in this post is between RFdiffusion/RFantibody and BindCraft/EvoBind2.

These are all recently released, state-of-the-art models from top labs. However, the difference in claimed performance is pretty striking.

While the RFdiffusion and RFantibody papers caution that you may need to test hundreds or even thousands of proteins to find one good binder, the BindCraft and EvoBind2 papers appear to show very high success rates, perhaps even as high as 50%. (EvoBind2 only shows results for one ribonuclease target but BindCraft includes multiple).

Words of caution from the RFantibody github repo (top) and BindCraft's impressive results for 10 targets (bottom)

There is no true benchmark to reference here, but I think under reasonable assumptions, BindCraft (and arguably EvoBind2) achieve a >10X greater success rate than RFdiffusion or RFantibody. The Baker lab is the leading and best resourced lab in this domain, so what accounts for this large difference in performance? I can think of a few possibilities:

  • RoseTTAFold2 was not the best filter for RFantibody to use, and switching to AlphaFold3 would improve performance. This is plausible, but it is hard to believe that is a 10X improvement.
  • Antibodies are just harder than minibinders or cyclic peptides. Hypervariable regions are known to be difficult to fold, since they do not have the advantage of evolutionary conservation. However, RFdiffusion also produces minibinders, so this is not a satisfactory explanation.
  • BindCraft and EvoBind2 are testing on easier targets. There is likely some truth to this. Most (but not all) examples in the BindCraft paper are for proteins with known binders; EvoBind2 is only tested against a target with a known peptide binder. However, most of RFantibody's targets also have known antibodies in PDB.
  • Diffusion currently just does not work as well as AlphaFold-based methods. AlphaFold2 (and its descendants, AF3, Boltz, Chai-1, etc.) have learned enough physics to recognize binding, and by leaning on this ability heavily, and filtering carefully, you get much better performance.

What comes next?

RFdiffusion and RFantibody are arguably the first examples of successful de novo binder design and antibody design, and for that reason are important papers. BindCraft and EvoBind2 have proven they can produce one-shot nanomolar binders under certain circumstances, which is technically extremely impressive.

However, if we could get another 10X improvement in performance, then I think these tools would be used in every biotech and research lab. Some ideas for future directions:

  • More compute: One of the interesting things about BindCraft and EvoBind2 is how long they take to produce anything. In BindCraft's case, it generates a lot of candidates, but has a long list of criteria that must be met. One BindCraft run will screen hundreds or thousands of candidates and can easily cost $10+. Similarly, EvoBind2 can run for 5+ hours before producing anything, again easily costing $10+. This approach of throwing compute at the problem may be generally applicable, and may be analogous to the recently successful LLM reasoning approaches.
  • Combine diffusion and AlphaFold-based methods: I have no specific idea here, but since they are quite different approaches, maybe integrating some ideas from RFdiffusion into EvoBind2 or BindCraft could help.
  • Combine sequence models and structure models: There is already a lot of work happening here, both from the sequence side and structure side. In the simplest case, the output of a sequence model like ESM2 could be an independent contributor to the loss of a structure model. At the very least, this could help filter out structures that do not fold.
  • Neural Network Potentials: Neural Network Potentials are an exciting new tool for molecular dynamics (see Duignan, 2024 or Barnett, 2024). Achira just got funded to work on this, and has several of the pioneers of the field on board. Semi-open source models like orb-v2 from Orbital Materials are being actively developed too. The amount of compute required is prohibitive right now, but even a short trajectory could plausibly help with rank ordering binders, and would be independent of the AF2 metrics.

Tweet from Tim Duignan at Orbital Materials

Brian Naughton | Sun 09 February 2025 | health | health

A list of health-related products

Brian Naughton | Mon 30 December 2024 | ai | ai biotech proteindesign

What we learned about binder design from the Adaptyv competition

Brian Naughton | Sat 30 November 2024 | ai | ai biotech proteindesign

Comparing Alphafold 3, Boltz and Chai-1

Brian Naughton | Sat 07 September 2024 | biotech | biotech ai llm

Some notes on the Adaptyv binder design competition


A simulation of evolution and predator–prey dynamics


Using LLMs to search PubMed and summarize information on longevity drugs.

Brian Naughton | Sun 14 January 2024 | datascience | datascience ai llm

Using LLMs to search pubmed and summarize information.

