Brian Naughton | Mon 30 December 2024 | ai | ai biotech proteindesign

This article is a deeper look at Adaptyv's binder design competition, and some thoughts on what we learned. If you are unfamiliar with the competition, there is background information on the Adaptyv blog and my previous article.

The data

Adaptyv did a really nice job of packaging up the data from the competition (both round 1 and round 2). They also did a comprehensive analysis of which metrics predicted successful binding in this blogpost.

The data from round 2 is more comprehensive than round 1 — it even includes Alphafolded structures — so I downloaded the round 2 csv and did some analysis.

Regressions

Unlike the Adaptyv blogpost, which does a deep dive on each metric in turn, I just wanted to see how well I could predict binding affinity (Kd) using the following features provided in the csv: pae_interaction, esm_pll, iptm, plddt, design_models (converted to one-hot), seq_len (inferred from sequence). Three of these metrics (pae_interaction, esm_pll, iptm) were used to determine each entry's rank in the competition's virtual leaderboard, which was used to prioritize entries going into the binding assay.

I also added one more feature, prodigy_kd, which I generated by running prodigy on the provided PDB files. Prodigy is an old-ish tool for predicting binding affinity that identifies all the major contacts (polar–polar, charged–charged, etc.) and reports a predicted Kd (prodigy_Kd).

I used the typical regression tools: Random Forest, Kaggle favorite XGBoost, SVR, linear regression, as well as just using the mean Kd as a baseline. There is not a ton of data here for cross-validation, especially if you split by submitter, which I think is fairest. If you do not split by submitter, then you can end up with very similar proteins in different folds.
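To make the setup concrete, here is a minimal sketch of the split-by-submitter cross-validation before running the full script below (the csv filename and column names like kd, submitter, and sequence are assumptions; adjust them to the actual round 2 file):

# Sketch of split-by-submitter cross-validation. The csv filename and column
# names (kd, submitter, sequence, design_models, ...) are assumptions.
import numpy as np
import polars as pl
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold

df = pl.read_csv("round_2_results.csv").drop_nulls(["kd"])
df = df.with_columns(pl.col("sequence").str.len_chars().alias("seq_len"))
X = np.hstack([
    df.select(["pae_interaction", "esm_pll", "iptm", "plddt", "seq_len"]).to_numpy(),
    df.select("design_models").to_dummies().to_numpy(),   # one-hot design models
])
y = np.log10(df["kd"].to_numpy())                          # work in log units
groups = df["submitter"].to_numpy()                        # keep each submitter in one fold

rmses = []
for train, test in GroupKFold(n_splits=5).split(X, y, groups):
    model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X[train], y[train])
    rmses.append(np.sqrt(np.mean((model.predict(X[test]) - y[test]) ** 2)))
print(f"mean RMSE (log units): {np.mean(rmses):.3f}")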

# get data and script
git clone https://github.com/adaptyvbio/egfr_competition_2
cd egfr_competition_2/results
wget https://gist.githubusercontent.com/hgbrian/1262066e680fc82dcb98e60449899ff9/raw/regress_adaptyv_round_2.py
# run prodigy on all pdbs, munge into a tsv
find structure_predictions -name "*.pdb" | xargs -I{} uv run --with prodigy-prot prodigy {} > prodigy_kds.txt
(echo -e "name\tprodigy_kd"; rg "Read.+\.pdb|25.0˚C" prodigy_kds.txt | sed 's/.*\///' | sed 's/.*25.0˚C:  //' | paste - - | sed 's/\.pdb//') > prodigy_kds.tsv
# run regressions
uv run --with scikit-learn --with polars --with matplotlib --with seaborn --with pyarrow --with xgboost regress_adaptyv_round_2.py

The results are not great! There are a few ways to slice the data (including replicates or not; including similarity_check or not; including non-binders or not). There is a little signal, but I think it's fair to say nothing was strongly predictive.
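For reference, the kind of slicing I mean looks something like this (the column names and the similarity_check value are assumptions; check the actual csv headers):

# Possible data slices before regressing; column names are assumptions.
import polars as pl

df = pl.read_csv("round_2_results.csv")
df_sliced = (
    df
    .filter(pl.col("kd").is_not_null())                  # drop non-binders / unmeasured entries
    .filter(pl.col("similarity_check") != "failed")      # drop entries flagged by the similarity check
    .unique(subset=["sequence"])                         # collapse replicate measurements
)
print(df.shape, "->", df_sliced.shape)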


Model | R² | RMSE (log units) | Median fold error
Linear Regression | 0.150 | 0.729 | 1.8x
Random Forest Regression | 0.188 | 0.712 | 1.4x
SVM Regression | 0.022 | 0.781 | 1.2x
XGBoost | 0.061 | 0.766 | 1.2x
Mean Kd only | -0.009 | 0.794 | 1.9x

XGBoost performance looks ok here but is not much more predictive than just taking the mean Kd

Surprisingly, no one feature dominates in terms of predictive power

Virtual leaderboard rank vs competition rank

If there really is no predictive power in these computational metrics, there should be no correlation between rank in the virtual leaderboard and rank in the competition. In fact, there is a weak but significant correlation (Spearman correlation ~= 0.2). However, if you constrain to the top 200 (of 400 total), there is no correlation. My interpretation is that these metrics can discriminate no-hope-of-binding from some-hope-of-binding, but not more than that.
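As a sketch, the check is just a rank correlation on the two orderings (the arrays here are dummy placeholders for the real leaderboard and assay ranks):

# Rank-vs-rank check; virtual_rank and competition_rank are dummy placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
virtual_rank = np.arange(1, 401)                  # rank on the virtual leaderboard
competition_rank = rng.permutation(virtual_rank)  # rank by measured affinity

rho_all, p_all = spearmanr(virtual_rank, competition_rank)
top = virtual_rank <= 200
rho_top, p_top = spearmanr(virtual_rank[top], competition_rank[top])
print(f"all: rho={rho_all:.2f} (p={p_all:.2g}); top 200: rho={rho_top:.2f} (p={p_top:.2g})")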

It may be too much to ask one set of metrics to work for antibodies (poor PLL, poor PAE?), de novo binders (poor PLL), and EGF/TNFa-derived binders (natural, so excellent PLL). However, since I include design_models as a covariate, the regression models above can use different strategies for different design types, so at the very least we know there is not a trivial separation that can be made.

BindCraft's scoring heuristics

So how can BindCraft work if it's mostly using these same metrics as heuristics? I asked this on twitter and got an interesting response.

It is possible that PyRosetta's InterfaceAnalyzer is adding a lot of information. However, if this were the case, you might expect Prodigy's Kd prediction to also help, which it does not. It is also possible that by using AlphaFold2, the structures produced by BindCraft are inherently biased towards natural binding modes. Perhaps part of the binding heuristic is then implicit in the model's weights?

What did we learn?

I learned a couple of things:

  • Some tools, specifically BindCraft, can consistently generate decent binders, at least against targets and binding pockets present in its training set (PDB). (The BindCraft paper also shows success with at least one de novo protein not present in the PDB.)
  • We do not have a way to predict if a given protein will bind a given target.

I think this is pretty interesting, and a bit counterintuitive. More evidence that we cannot predict binding comes from the Dickinson lab's Prediction Challenges, where the goal is to match the binder to the target. Apparently no approach can (yet).

The Adaptyv blogpost ends by stating that binder design has not been solved yet. This is clearly true. So what comes next?

  • We could find computational metrics that work, based on the current sequence and structure data. For example, BindCraft includes "number of unsatisfied hydrogen bonds at the interface" in its heuristics. I am skeptical that we can do a lot better with this approach. For one thing, Adaptyv has already iterated once on its ranking metrics, with negligible improvement in prediction.
  • We could get better at Molecular Dynamics, which probably contains some useful information today (at exorbitant computational cost), and could soon be much better with deep learning approaches.
  • We could develop an "AlphaFold for Kd prediction". There are certainly attempts at this, e.g., ProAffinity-GNN and the PPB-Affinity dataset to pick two recent examples, but I don't know if anything works that well. The big problem here, as with many biology problems, is a lack of data; PDBbind is not that big (currently ~2800 protein–protein affinities.)

Luckily, progress in this field is bewilderingly fast so I'm sure we'll see a ton of developments in 2025. Kudos to Adaptyv for helping push things forward.

Brian Naughton | Sat 30 November 2024 | ai | ai biotech proteindesign

Alphafold 3 (AF3) came out in May 2024, and included several major advances over Alphafold 2 (AF2). In this post I will give a brief review of Alphafold 3, and compare the various open and less-open AF3-inspired models that have come out over the past six months. Finally, I will show some results from folding antibody complexes.

Alphafold 3

AF3 has many new capabilities compared to AF2: it can work with small molecules, nucleic acids, ions, and modified residues. It also has an arguably more streamlined architecture than AF2 (pairformer instead of evoformer, no rotation invariance).

310.ai did a nice review and small benchmark of AlphaFold3 that is worth reading.

The AF3 paper hardly shows any data comparing AF3 to AF2, and is mainly focused on its new capabilities with non-protein molecules. In all cases tested, it matched or exceeded the state of the art. For most regular protein folding problems, AF3 and AF2 work comparably well (more specifically, Alphafold-Multimer (AF2-M), the AF2 revision that allowed for multiple protein chains), though for antibodies there is a jump in performance.

Still, despite being an excellent model, AF3 gets relatively little discussion. This is because the parameters are not available so nobody outside DeepMind/Isomorphic Labs really uses it. The open source AF2-M still dominates, especially when used via the amazing colabfold project.

Alphafold-alikes

As soon as AF3 was published, the race was on to reimplement the core ideas. The chronology so far:

Date | Software | Code available? | Parameters available? | Lines of Python code
2024-05 | Alphafold 3 | ❌ (CC-BY-NC-SA 4.0) | ❌ (you must request access) | 32k
2024-08 | HelixFold3 | ❌ (CC-BY-NC-SA 4.0) | ❌ (CC-BY-NC-SA 4.0) | 17k
2024-10 | Chai-1 | ❌ (Apache 2.0, inference only) | ✅ (Apache 2.0) | 10k
2024-11 | Protenix | ❌ (CC-BY-NC-SA 4.0) | ❌ (CC-BY-NC-SA 4.0) | 36k
2024-11 | Boltz | ✅ (MIT) | ✅ (MIT) | 17k

There are a few other models that are not yet of interest: Ligo's AF3 implementation is not finished and perhaps not under active development; LucidRains' AF3 implementation is not finished but is still under active development.

It's been pretty incredible to see so many reimplementation attempts within the span of a few months, even if most are not usable due to license issues.

Code and parameter availability

As a scientist who works in industry, it's always annoying to try to figure out which tools are ok to use or not. It causes a lot of friction and wastes a lot of time. For example, I started using ChimeraX a while back, only to find out after sinking many hours into it that this was not allowed.

There are many definitions of "open" software. When I say open I really mean you can use it without checking with a lawyer. For example, even if you are in academia, if the license says the code is not free for commercial use, then what happens if you start a collaboration with someone in industry? What if you later want to commercialize? These are common occurrences.

In some cases (AF3, HelixFold3, Protenix, and Chai-1), there is a server available, which is nice for very perfunctory testing, but precludes testing anything proprietary or folding more than a few structures. If you have the code and the training set, it would cost around $100k to train one of these models (specifically, the Chai-1 and Protenix papers give numbers in this range, though that is just the final run). So in theory there is no huge blocker to retraining. In practice it does not seem to happen, perhaps because of license issues.

The specific license matters. Before today, I thought MIT was just a more open Apache 2.0, but apparently there is an advantage to Apache 2.0 around patents! My non-expert conclusion is that the Unlicense, MIT, and Apache are usable; GPL and CC-BY-NC-SA are not.

Which model to choose?

There are a few key considerations: availability; extensibility / support; performance.

1. Availability

In terms of availability, I think only Chai-1 and Boltz are in contention. The other models are not viable for any commercial work, and would only be worth considering if their capabilities were truly differentiated. As far as I know, they are not.

2. Extensibility and support

I think this one is maybe under-appreciated. If an open source project is truly open and gains enough mindshare, it can attract high quality bug reports, documentation, and improvements. Over time, this effect can compound. I think currently Boltz is the only model that can make this claim.

A big difference between Boltz and Chai-1 is that Boltz includes the training code and neural network architecture, whereas Chai-1 only includes inference code and uses pre-compiled models. I only realized this when I noticed the Chai-1 codebase is half the size of the Boltz codebase. Most users will not retrain or finetune the model, but the ability for others to improve the code is important.

To be clear, I am grateful to Chai for making their code and weights available for commercial purposes, and I intend to use the code, but from my perspective Boltz should be able to advance much quicker. There is maybe an analogy to Linux or Blender vs proprietary software.

3. Performance

It's quite hard to tell from the literature who has the edge in performance. You can squint at the graphs in each paper, but fundamentally all of these models are AF3-derivatives trained on the same data, so it's not surprising that performance is generally very similar.

Chai-1 and AF3 perform almost identically

Boltz and Chai-1 perform almost identically

Protenix and AF3 perform almost identically

Benchmarking performance

I decided to do my own mini-benchmark, by taking 10 recent (i.e., not in any training data) antibody-containing PDB entries and folding them using Boltz and Chai-1.

Both models took around 10 minutes per antibody fold on a single A100 (80GB for Boltz, 40GB for Chai-1). Chai-1 is a little faster, which is expected since it uses ESM embeddings instead of multiple sequence alignments (MSAs). (Note, I did not test Chai-1 in MSA mode, giving it a small disadvantage compared to Boltz.)

Tangentially, I was surprised I could not find a "pdb to fasta" tool that would output protein, nucleic acids, and ligands. Maybe we need a new file format? You can get protein and RNA/DNA from pdb, but it will be the complete sequence of the protein, not the sequence in the PDB file (this may or may not be what you want). Extracting ligands from PDB files is actually very painful since the necessary bond information is absent! The best code I know of to do this is a pretty buried old Pat Walters gist.

Most of the PDBs I tested were protein-only, one had RNA, and I skipped one glycoprotein. I evaluated performance using USalign, using either the average "local" subunit-by-subunit alignment (USalign -mm 1) or one "global" all-subunit alignment (USalign -mm 2). Both models do extremely well when judged on local subunit accuracy, but much worse on global accuracy, which is sadly quite relevant for an antibody model! It appears that these models understand how antibodies fold, but not how they bind.
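For reference, a rough sketch of how the two USalign modes can be run and scored (assumes the USalign binary is on the PATH; the file names are placeholders and the output parsing is approximate):

# Score a predicted complex against the deposited structure with USalign.
# File names are placeholders; output parsing is approximate.
import re
import subprocess

def usalign(model_pdb: str, native_pdb: str, mm: int) -> dict:
    out = subprocess.run(
        ["USalign", model_pdb, native_pdb, "-mm", str(mm), "-ter", "0"],
        capture_output=True, text=True, check=True,
    ).stdout
    tm_scores = [float(x) for x in re.findall(r"TM-score=\s*([0-9.]+)", out)]
    rmsds = [float(x) for x in re.findall(r"RMSD=\s*([0-9.]+)", out)]
    return {"tm_scores": tm_scores, "rmsd": rmsds[0] if rmsds else None}

for mm, label in [(1, "local"), (2, "global")]:
    print(label, usalign("boltz_9cia.pdb", "9cia.pdb", mm))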

Conclusions

On my antibody benchmark, Boltz and Chai-1 perform eerily similarly, with a couple of cases where Boltz wins out. That, combined with all the data from the literature, makes the conclusion straightforward, at least for me. Boltz performs as well as or better than any of the models, has a clean, complete codebase with relatively little code, is hackable, and is by far the most open model. I am excited to see how Boltz progresses in 2025!

Technical details

I ran Boltz and Chai-1 on modal using my biomodals repo.

modal run modal_boltz.py --input-faa 8zre.fasta --run-name 8zre
modal run modal_chai1.py --input-faa 8zre.fasta --run-name 8zre

Here is a folder with all the pdb files and images shown below.

Addendum

On BlueSky, Diego del Alamo notes that Chai-1 outperformed Boltz in a head-to-head of antibody–antigen modeling.

On LinkedIn, Joshua Meier (co-founder of Chai Discovery) recommended running Chai-1 with msa_server turned on, to make for a fairer comparison. I reran the benchmark with Chai-1 using MSAs, and it showed improvements in 8ZRE (matching Boltz) and 9E6K (exceeding Boltz).

I think it is still fair to say that the results are very close.



Per-complex USalign scores ("local" = -mm 1, "global" = -mm 2):

9CIA: T cell receptor complex
  Boltz:  local TM-score 0.9449, RMSD 1.5783; global TM-score 0.3928, RMSD 6.6600
  Chai-1: local TM-score 0.9411, RMSD 1.3858; global TM-score 0.3980, RMSD 7.3400
8ZRE: HBcAg-D4 Fab complex
  Boltz:  local TM-score 0.9216, RMSD 1.4688; global TM-score 0.3468, RMSD 6.6200
  Chai-1: local TM-score 0.9070, RMSD 1.4062; global TM-score 0.2856, RMSD 6.1000
9DF0: PDCoV S RBD bound to PD41 Fab (local refinement)
  Boltz:  local TM-score 0.8733, RMSD 1.1500; global TM-score 0.7020, RMSD 2.7900
  Chai-1: local TM-score 0.2957, RMSD 2.2400; global TM-score 0.7022, RMSD 2.7100
9CLP: Structure of ecarin from the venom of Kenyan saw-scaled viper in complex with the Fab of neutralizing antibody H11
  Boltz:  local TM-score 0.9762, RMSD 0.9667; global TM-score 0.6545, RMSD 2.3700
  Chai-1: local TM-score 0.9607, RMSD 1.2233; global TM-score 0.6675, RMSD 3.2100
9C45: SARS-CoV-2 S + S2L20 (local refinement of NTD and S2L20 Fab variable region)
  Boltz:  local TM-score 0.9903, RMSD 1.3600; global TM-score 0.5288, RMSD 4.0900
  Chai-1: local TM-score 0.9912, RMSD 2.7033; global TM-score 0.5141, RMSD 4.3500
9E6K: Fully human monoclonal antibody targeting the cysteine-rich substrate-interacting region of ADAM17 on cancer cells
  Boltz:  local TM-score 0.7462, RMSD 2.4400; global TM-score 0.7732, RMSD 4.2200
  Chai-1: local TM-score 0.9676, RMSD 1.3633; global TM-score 0.8015, RMSD 2.7300
9CMI: Cryo-EM structure of human claudin-4 complex with Clostridium perfringens enterotoxin, sFab COP-1, and Nanobody
  Boltz:  local TM-score 0.9307, RMSD 2.1680; global TM-score 0.4448, RMSD 5.6900
  Chai-1: local TM-score 0.9307, RMSD 2.3560; global TM-score 0.4464, RMSD 4.4400
9CX3: Structure of SH3 domain of Src in complex with beta-arrestin 1
  Boltz:  local TM-score 0.8978, RMSD 1.3867; global TM-score 0.5045, RMSD 2.3200
  Chai-1: local TM-score 0.8916, RMSD 1.2617; global TM-score 0.4487, RMSD 2.6700
9DX6: Crystal structure of Plasmodium vivax (Palo Alto) PvAMA1 in complex with human Fab 826827
  Boltz:  local TM-score 0.7870, RMSD 3.1400; global TM-score 0.5551, RMSD 5.6100
  Chai-1: local TM-score 0.2757, RMSD 2.3067; global TM-score 0.5861, RMSD 4.7600
9DN4: Crystal structure of a SARS-CoV-2 20-mer RNA in complex with FAB BL3-6S97N
  Boltz:  local TM-score 0.9726, RMSD 0.8500; global TM-score 0.9850, RMSD 0.9300
  Chai-1: local TM-score 0.9938, RMSD 0.4550; global TM-score 0.9957, RMSD 0.4900
Brian Naughton | Sat 07 September 2024 | biotech | biotech ai llm

Adaptyv is a newish startup that sells high-throughput protein assays. The major innovations are (a) they tell you the price (a big innovation for biotech services!) and (b) you only have to upload protein sequences, and you get results in a couple of weeks.

A typical Adaptyv workflow might look like the following:

  • Design N protein binders for a target of interest (Adaptyv has 50-100 pre-specified targets)
  • Submit your binder sequences to Adaptyv
  • Adaptyv synthesizes DNA, then protein, using your sequences
  • In ~3 weeks you get affinity measurements for each design at a cost of $149 per data-point

This is an exciting development since it decouples "design" and "evaluation" in a way that enables computation-only startups to get one more step towards a drug (or sensor, or tool). There are plenty of steps after this one, but it's still great progress!

The Adaptyv binder design competition

A couple of months ago, Adaptyv launched a binder design competition, where the goal was to design an EGFR binder. There was quite a lot of excitement about the competition on Twitter, and about 100 people ended up entering. At around the same time, Leash Bio launched a small molecule competition on Kaggle, so there was something in the air.

PAE and iPAE

For this competition, Adaptyv ranked designs based on the "PAE interaction" (iPAE) of the binder with EGFR.

PAE (Predicted Aligned Error) "indicates the expected positional error at residue x if the predicted and actual structures are aligned on residue y". iPAE is the average PAE for residues in the binder vs target. In other words, how accurate do we expect the relative positioning of binder and target to be? This is a metric that David Baker's lab seems to use quite a bit, at least for thresholding binders worth screening. It is straightforward to calculate using the PAE outputs from AlphaFold.
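In code, the calculation (as I understand it) is just an average over the two binder/target off-diagonal blocks of the PAE matrix; a sketch, assuming the binder is the first binder_len residues of the folded complex and pae is the L x L matrix from AlphaFold's output:

# iPAE sketch: mean PAE over the binder/target off-diagonal blocks.
# Assumes the binder is the first binder_len residues of the folded complex.
import numpy as np

def ipae(pae: np.ndarray, binder_len: int) -> float:
    binder_vs_target = pae[:binder_len, binder_len:]
    target_vs_binder = pae[binder_len:, :binder_len]
    return float((binder_vs_target.mean() + target_vs_binder.mean()) / 2)

# e.g. with a ColabFold-style scores json (key name may vary by pipeline):
# import json; pae = np.array(json.load(open("scores.json"))["pae"])
# print(ipae(pae, binder_len=53))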

Unusually, compared to, say, a Kaggle competition, in this competition there are no held-out data that your model is evaluated on. Instead, if you can calculate iPAE, you know your expected position on the leaderboard before submitting.

The original paper Adaptyv references is Improving de novo protein binder design with deep learning, and the associated github repo has an implementation of iPAE that I use (and, I assume, the code Adaptyv uses).

Confusingly, there is also a metric called "iPAE" mentioned in the paper Systematic discovery of protein interaction interfaces using AlphaFold and experimental validation. It is different, but could actually be a more appropriate metric for binders?

At the end of last month (August 2024), there was a new Baker lab paper on Ras binders that also used iPAE, in combination with a few other metrics like pLDDT.

Experiments

A week or so after the competition ended, I found some time to try a few experiments.

Throughout these experiments, I include modal commands to run the relevant software. If you clone the biomodals repo it should just work(?)

iPAE vs Kd

The consensus seems to be that <10 represents a decent iPAE, but in order for iPAE to be useful, it should correlate with some physical measurement. As a small experiment, I took 55 PDB entries from PDBbind (out of ~100 binders that were <100 aas long, had an associated Kd, and only two chains), ran AlphaFold, calculated iPAE, and correlated this to the known Kd. I don't know that I really expected iPAE to correlate strongly with Kd, but the correlation is pretty weak.

PDBbind Kd vs iPAE correlation

# download the PDBbind protein-protein dataset in a more convenient format and run AlphaFold on one example
wget https://gist.githubusercontent.com/hgbrian/413dbb33bd98d75cc5ee6054a9561c54/raw -O pdbbind_pp.tsv
tail -1 pdbbind_pp.tsv
wget https://www.rcsb.org/fasta/entry/6har/display -O 6har.fasta
echo ">6HAR\nYVDYKDDDDKEFEVCSEQAETGPCRACFSRWYFDVTEGKCAPFCYGGCGGNRNNFDTEEYCMAVCGSAIPRHHHHHHAAA:IVGGYTCEENSLPYQVSLNSGSHFCGGSLISEQWVVSAAHCYKTRIQVRLGEHNIKVLEGNEQFINAAKIIRHPKYNRDTLDNDIMLIKLSSPAVINARVSTISLPTAPPAAGTECLISGWGNTLSFGADYPDELKCLDAPVLTQAECKASYPGKITNSMFCVGFLEGGKDSCQRDAGGPVVCNGQLQGVVSWGHGCAWKNRPGVYTKVYNYVDWIKDTIAANS" > 6har_m.fasta
modal run modal_alphafold.py --input-fasta 6har_m.fasta --binder-len 80
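The correlation itself is a one-liner once the iPAE values are joined back to the PDBbind Kds; something like the following sketch (the tsv name and column names are placeholders):

# Correlate iPAE with measured Kd; file and column names are placeholders.
import numpy as np
import polars as pl
from scipy.stats import spearmanr

df = pl.read_csv("pdbbind_ipae.tsv", separator="\t").drop_nulls(["kd", "ipae"])
rho, p = spearmanr(np.log10(df["kd"].to_numpy()), df["ipae"].to_numpy())
print(f"Spearman rho={rho:.2f}, p={p:.2g}, n={len(df)}")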

Greedy search

This is about the simplest approach possible.

  • Start with EGF (53 amino acids)
  • Mask every amino acid, and have ESM propose the most likely amino acid
  • Fold and calculate iPAE for the top 30 options
  • Take the best scoring iPAE and iterate

Each round takes around 5-10 minutes and costs around $4 on an A10G on modal.
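Wired together, the loop looks something like the sketch below; esm_propose_masked and fold_and_score_ipae are hypothetical stand-ins for the two modal commands shown next.

# Greedy loop sketch. esm_propose_masked and fold_and_score_ipae are
# hypothetical wrappers around the two modal commands shown below.
def greedy_round(binder: str, target: str, top_n: int = 30) -> tuple[str, float]:
    candidates = []
    for i in range(len(binder)):
        masked = binder[:i] + "<mask>" + binder[i + 1:]
        aa, logit = esm_propose_masked(masked)               # most likely residue at position i
        candidates.append((logit, binder[:i] + aa + binder[i + 1:]))
    best_seq, best_ipae = binder, float("inf")
    for _, seq in sorted(candidates, reverse=True)[:top_n]:  # fold only the top-scoring proposals
        score = fold_and_score_ipae(f"{seq}:{target}")       # AlphaFold the complex, compute iPAE
        if score < best_ipae:
            best_seq, best_ipae = seq, score
    return best_seq, best_ipae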

# predict one masked position in EGF using esm2
echo ">EGF\nNSDSECPLSHDGYCL<mask>DGVCMYIEALDKYACNCVVGYIGERCQYRDLKWWELR" > esm_masked.fasta
modal run modal_esm2_predict_masked.py --input-fasta esm_masked.fasta
# run AlphaFold on the EGF/EGFR complex and calculate iPAE
echo ">EGF\nNSDSECPLSHDGYCLHDGVCMYIEALDKYACNCVVGYIGERCQYRDLKWWELR:LEEKKVCQGTSNKLTQLGTFEDHFLSLQRMFNNCEVVLGNLEITYVQRNYDLSFLKTIQEVAGYVLIALNTVERIPLENLQIIRGNMYYENSYALAVLSNYDANKTGLKELPMRNLQEILHGAVRFSNNPALCNVESIQWRDIVSSDFLSNMSMDFQNHLGSCQKCDPSCPNGSCWGAGEENCQKLTKIICAQQCSGRCRGKSPSDCCHNQCAAGCTGPRESDCLVCRKFRDEATCKDTCPPLMLYNPTTYQMDVNPEGKYSFGATCVKKCPRNYVVTDHGSCVRACGADSYEMEEDGVRKCKKCEGPCRKVCNGIGIGEFKDSLSINATNIKHFKNCTSISGDLHILPVAFRGDSFTHTPPLDPQELDILKTVKEITGFLLIQAWPENRTDLHAFENLEIIRGRTKQHGQFSLAVVSLNITSLGLRSLKEISDGDVIISGNKNLCYANTINWKKLFGTSGQKTKIISNRGENSCKATGQVCHALCSPEGCWGPEPRDCVSCRNVSRGRECVDKCNLLEGEPREFVENSECIQCHPECLPQAMNITCTGRGPDNCIQCAHYIDGPHCVKTCPAGVMGENNTLVWKYADAGHVCHLCHPNCTYGCTGPGLEGCPTNGPKIPSI" > egf_01.fasta
modal run modal_alphafold.py --input-fasta egf_01.fasta --binder-len 53

One of the stipulations of the competition is that your design must be at least 10 amino acids different to any known binder, so you must run the loop above 10 or more times. Of course, there is no guarantee that there is a single amino acid change that will improve the score, so you can easily get stuck.

After 12 iterations (at a cost of around $50 in Alphafold compute), the best score I got was 7.89, which would have been good enough to make the top 5. (I can't be sure, but I think my iPAE calculation is identical!) Still, this is really just brute-forcing EGF tweaks. I think the score was asymptoting, but there were also jumps in iPAE with certain substitutions, so who knows?

Unfortunately, though the spirit of the competition was to find novel binders, the way iPAE works means that the best scores are very likely to come from EGF-like sequences (or other sequences in AlphaFold's training set).

Adaptyv are attempting to mitigate this issue by (a) testing the top 200 and (b) taking the design process into account. It is a bit of an impossible situation, since the true wet lab evaluation happens only after the ranking step.

Bayesian optimization

Given an expensive black box like AlphaFold + iPAE, some samples, and a desire to find better samples, one appropriate method is Bayesian optimization.

Basically, this method allows you, in a principled way, to control how much "exploration" of new space is appropriate (looking for global minima) vs "exploitation" of variations on the current best solutions (optimizing local minima).

Bayesian optimization of a 1D function

The input to a Bayesian optimization is of course not amino acids, but numbers, so I thought reusing the ESM embeddings would be a decent, or at least convenient, idea here.
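A rough sketch of what one proposal step could look like with a Gaussian-process surrogate and expected improvement, minimizing iPAE over sequence embeddings (embed and the candidate list are hypothetical stand-ins):

# One Bayesian-optimization step: GP surrogate over embeddings + expected
# improvement. `embed` and `candidates` are hypothetical; lower iPAE is better.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def propose_next(scored: dict[str, float], candidates: list[str], xi: float = 0.01) -> str:
    X = np.array([embed(s) for s in scored])          # embeddings of evaluated sequences
    y = np.array(list(scored.values()))               # their iPAE scores
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    Xc = np.array([embed(s) for s in candidates])
    mu, sigma = gp.predict(Xc, return_std=True)
    imp = y.min() - mu - xi                           # xi trades exploration vs exploitation
    z = imp / np.maximum(sigma, 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)      # expected improvement (for minimization)
    return candidates[int(np.argmax(ei))]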

I tried both the Bayesian Optimization package and a numpyro Thompson sampling implementation. I saw some decent results at first (i.e., the first suggestions seemed reasonable and scored well), but I got stuck either proposing the same sequences over and over, or proposing sequences so diverged that testing them would be a waste of time. The total search space is gigantic, so testing random sequences will not help. I think probably the ESM embeddings were not helping me here, since there were a lot of near-zeros in there.

This is an interesting approach, and not too difficult to get started with, but I think it would work better with much deeper sampling of a smaller number of amino acids, or perhaps a cruder, less expensive, evaluation function.

ProteinMPNN

ProteinMPNN (now part of the LigandMPNN package) maps structure to sequence (i.e., the inverse of AlphaFold). For example, you can input an EGF PDB file, and it will return a sequence that should produce the same fold.

I found that for this task ProteinMPNN generally produced sequences with low confidence (as reported by ProteinMPNN), and as you'd expect, these resulted in poor iPAE scores. Some folds are difficult for ProteinMPNN, and I think EGF falls into this category. To run ProteinMPNN, I would recommend Simon Duerr's huggingface space, since it has a friendly interface and includes an AlphaFold validation step.

ProteinMPNN running on huggingface


# download an EGF/EGFR crystal structure and try to infer a new sequence that folds to chain C (EGF)
wget https://files.rcsb.org/download/1IVO.pdb
modal run modal_ligandmpnn.py --input-pdb 1IVO.pdb --extract-chains AC --params-str '--seed 1 --checkpoint_protein_mpnn "/LigandMPNN/model_params/proteinmpnn_v_48_020.pt" --chains_to_design "C" --save_stats 1 --batch_size 5 --number_of_batches 100'

RFdiffusion

RFdiffusion was the first protein diffusion method that showed really compelling results in generating de novo binders. I would recommend ColabDesign as a convenient interface to this and other protein design tools.

The input to RFdiffusion can be a protein fold to copy, or a target protein to bind to, and the output is a PDB file with the correct backbone co-ordinates, but with every amino acid labeled as Glycine. To turn this output into a sequence, this PDB file must then be fed into ProteinMPNN or similar. Finally, that ProteinMPNN output is typically folded with AlphaFold to see if the fold matches.
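Strung together with the modal scripts used elsewhere in this post, the pipeline is roughly the following (the intermediate file names and the designed chain label are assumptions; each script writes outputs to its own results folder):

# RFdiffusion -> ProteinMPNN/LigandMPNN -> AlphaFold, chained via the biomodals
# scripts. Intermediate file names and the designed chain label are assumptions.
import subprocess

def run(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# 1. diffuse a 50-mer binder backbone against EGFR (chain A of 1IVO)
run("modal run modal_rfdiffusion.py --contigs='A:50' --pdb=1IVO")
# 2. assign a sequence to the poly-glycine backbone
run("modal run modal_ligandmpnn.py --input-pdb rfdiffusion_design_0.pdb "
    "--params-str '--seed 1 --chains_to_design \"A\" --batch_size 5'")
# 3. re-fold the designed sequence with EGFR and compute iPAE
run("modal run modal_alphafold.py --input-fasta design_0_egfr.fasta --binder-len 50")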

Although RFdiffusion massively enriches for binders over random peptides, you still have to screen many samples to find the really strong binders. So, it's probably optimistic to think that a few RFdiffusion-derived binders will show strong binding, even if you can somehow get a decent iPAE.

In my brief tests with RFdiffusion here, I could not generate anything that looked reasonable. I think in practice, the process of using RFdiffusion successfully is quite a bit more elaborate and heuristic-driven than anything I was going to attempt.

Figure 1 from De novo design of Ras isoform selective binders, showing multiple methods for running RFdiffusion

# Run RFdiffusion on the EGF/EGFR crystal structure, and diffuse a 50-mer binder against chain A (EGFR)
modal run modal_rfdiffusion.py --contigs='A:50' --pdb="1IVO"

Other things

A few other strategies I thought might be interesting:

  • Search FoldSeek for folds similar to EGF. The idea here is that you might find a protein in another organism that wants to bind EGFR. I do find some interesting human-parasitic nematode proteins in here, but decided these were unlikely to be EGFR binders.
  • Search NCBI for EGF-like sequences with blastp. You can find mouse, rat, chimp, etc. but nothing too interesting. The iPAEs are worse than human EGF's, as expected.
  • Search the patent literature for EGFR binders. I did find some antibody-based binders, but as expected for folds that AlphaFold cannot solve, the iPAE was poor.
  • Delete regions of the protein that contribute poorly to the iPAE, to improve the average score. I really thought this would work for at least one or two amino acids, but it did not seem to. I did not do this comprehensively, but perhaps there are no truly redundant parts of this small binder?

Conclusion

All the top spots on the leaderboard went to Alex Naka, who helpfully detailed his methods in this thread. (A lot of this is similar to what I did above, including using modal!) Anthony Gitter also published an interesting thread on his attempts. I find these kinds of threads are very useful since they give a sense of the tools people are using in practice, including some I had never heard of, like pepmlm and Protrek.

Finally, I made a tree of the 200 designs that Adaptyv is screening (with iPAE <10 in green, <20 in orange, and >20 in red). All the top scoring sequences are EGF-like and cluster together. (Thanks to Andrew White for pointing me at the sequence data). We can look forward to seeing the wet lab results published in a couple of weeks.

Tree of Adaptyv binder designs


A simulation of evolution and predator–prey dynamics

Read More

Using LLMs to search PubMed and summarize information on longevity drugs.

Read More
Brian Naughton | Sun 14 January 2024 | datascience | datascience ai llm

Using LLMs to search pubmed and summarize information.

Read More

An example and examination of using modal for bioinformatics

Read More
Brian Naughton | Mon 04 September 2023 | biotech | biotech machine learning ai

Molecular dynamics code for protein–ligand interactions

Read More

Using colab to chain computational drug design tools

Read More
Brian Naughton | Sat 25 February 2023 | biotech | biotech machine learning ai

Using GPT-3 as a knowledge-base for a biotech

Read More
