Brian Naughton // Sun 20 January 2019 // Filed under deeplearning // Tags deeplearning datascience neuralnet

Jake Dangerback is an instagram celebrity and influencer (of sorts). He's unusual in that he does not exist; he was created using millions of matrix multiplications. This post is a look at some of the freely available state-of-the-art neural networks I used to create him.


Neural nets, specifically GANs, are getting really good at hallucinating realistic faces. Notably, NVIDIA had a paper in December 2018 that showed some pretty amazing results.


Since these faces do not belong to anyone, they are perfect for use as celebrities. They don't need to be paid, never complain, and can be tailored to appeal to any niche demographic.

Step One: Making A Face

I used this awesome tool on kaggle (github) to create a face. I did not know that kaggle supported this colab-like interface but it's quite advanced. The tool has 21 levers to pull so you can create the perfect face for your audience.


Using this tool I created country music star Jake Dangerback's face (his fans call him jdb). He is handsome but rugged and most importantly he has no legal personhood so I can use his likeness to endorse and sell products of all kinds.


Photoshopping jdb

It's fairly easy to create new images using this face. There are several sources of royalty-free images (e.g., pexels) where I can photoshop jdb's face in.

jdb_cowboy_hat

Photoshopping a face is not that hard — at least at this quality — but it would be easier if a neural net did the photoshopping for me. Luckily there are many papers and github repos that do face swapping since it produces funny pictures. Most face swapping tools are mobile apps, but I did find Reflect face swap online. It seems to do a good job generally, but the result below looks a bit weird. It seems to be trying to mix the photos for realism rather than just replace the face.

reagan reagan faceswap
Reagan wearing a cowboy hat; jdb wearing a cowboy hat


If we had enough photos, we might consider automatically optimizing the image for likes using some kind of selfie-rating neural net like @karpathy's. I think the photoshopped images would have to be autogenerated to make this worthwhile.

Image filters, which are popular on instagram, can also use neural nets. The most famous example is neural style, which maps the style of one image — usually a painting — onto the content of another. The deepart.io website does this as a service. These neural style filters are very cool but not that useful for instagram content.

jdb deepart jdb deepart2

What About DeepFake?

DeepFake is a powerful technique for swapping faces in video that can produce very realistic results. There's even a tool called FakeApp that automates some of the steps. There's a nice blogpost showing how to swap Jimmy Fallon's face for John Oliver's. It looks pretty convincing.

jdb deepfake

DeepFake creates video, which I do not really need, and to train it you need video of the subject, which I do not have. I suppose theoretically you could create a 3D model and use that to generate the source video...

Step Two: The Third Dimension

It is limiting if every photo has to have jdb's face looking head-on. Luckily, there is a very cool 3D face reconstruction neural net that works based on a single photo.

jdb in 3D

The results are great and it takes less than a minute to work. You can even load it into Blender.

jdb in blender

Taking it a step further, the free MakeHuman software will create a mesh of a body with various parameters.

jdb in blender

This could probably be made to work well but it's way beyond my Blender skills.

jdb in blender

There's also an interesting iOS app called mug life that will animate photos based on an inferred 3D mesh. The results are impressive, if creepy. jdb looks so alive! I don't think you can download the mesh though.

jdb muglife


Sometimes the resulting photoshopped image can be pretty blurry, partially because the 3D model's texture resolution is not that high. Luckily there is another deep net to help here, called neural-enhance. It really does enhance a photo by doubling the resolution, which is pretty slick. The author includes a docker container, which makes running it very simple.

The results are very impressive in general, and it only takes a few minutes even on a CPU. Since it's trained on real photos, I am guessing it might also remove artifacts and rogue pixels due to photoshopping.

jdb singing jdb singing enhanced
From blurry (left) to enhanced (right). The shirt buttons are the most obvious improvement.

Step Three: Captioning

Instagram posts have captions, which I assume are important for engagement. There have been many attempts to caption or describe images using neural nets. The oldest one I remember is an influential Stanford paper from 2015. There are a few tools online too. I first tried Microsoft's CaptionBot, assuming it used a neural net. The results were very meh, and it turns out it's not a neural net and is famously vague/bad.

Google's Cloud Vision API does much better though it's still not super-engaging content. For now, computers have not solved the instagram captioning problem.

jdb beach walk
Captionbot: I think it's a person standing on a beach and he seems 😐.
Google Cloud Vision: couple looking at each other on beach
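If you want to try the Cloud Vision API yourself, the request is just a JSON POST. Here's a minimal sketch of the v1 images:annotate body asking for label detection; the image bytes and the API key are placeholders you'd supply yourself.

```python
import base64
import json

def vision_label_request(image_bytes, max_results=5):
    """Build the JSON body for a POST to
    https://vision.googleapis.com/v1/images:annotate?key=API_KEY,
    requesting label detection on a single image."""
    return json.dumps({
        "requests": [{
            # image content is sent base64-encoded inline
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "LABEL_DETECTION", "maxResults": max_results}],
        }]
    })

body = vision_label_request(b"fake-image-bytes")
```

The response is a list of labels with confidence scores, which you'd still have to turn into an actual caption.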

Step Four: Songs

jdb is a country music celebrity, so he may need some songs. Thankfully there's a neural net for everything, including country music lyrics!

Some example lyrics:

No one with the danger in the world
I love my black fire as I know
But the short knees just around me
Fun the heart couldnes fall to back

It's not terrible ("couldnes"?), but it's also pretty dark for instagram... You can also generate music using RNNs but I did not find an easy way to generate country music.
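As an aside, the simplest possible version of this kind of text generator is a character-level Markov chain: nowhere near an RNN in quality, but the same basic loop of sampling one character at a time given recent context. A toy sketch (the corpus, context length, and seed are all just placeholders):

```python
import random
from collections import defaultdict

def build_model(text, order=3):
    """Map each `order`-character context to the characters observed after it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def sample(model, seed, length=80, order=3, rng=None):
    """Extend `seed` one character at a time by sampling from the model."""
    rng = rng or random.Random()
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # unseen context: stop early
            break
        out += rng.choice(choices)
    return out

model = build_model("the cat sat on the mat and the cat ran to the barn")
print(sample(model, "the", length=30, rng=random.Random(1)))
```

An RNN replaces the lookup table with a learned hidden state, which is why it can produce (slightly) more coherent lyrics.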


Creating a speaking/singing voice from lyrics was not as easy as I thought it would be. I tried a few iOS apps, including LyreBird, but got strange results. Macs also have the say command (just type say hello into the terminal), which works ok.

I ended up using Google Cloud text-to-speech, which uses WaveNet, to turn lyrics into speech. It works via a simple json upload. Sadly, none of the available voices sounded particularly country.
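For reference, the JSON upload is small. Below is a minimal sketch of the request body for the Cloud Text-to-Speech v1 text:synthesize endpoint; the WaveNet voice name and MP3 encoding here are just illustrative choices, and you still need credentials to actually POST it.

```python
import json

def tts_request_body(text, voice_name="en-US-Wavenet-D"):
    """Build the JSON body for a POST to
    https://texttospeech.googleapis.com/v1/text:synthesize
    (the response carries base64-encoded audio in `audioContent`)."""
    return json.dumps({
        "input": {"text": text},
        "voice": {"languageCode": "en-US", "name": voice_name},
        "audioConfig": {"audioEncoding": "MP3"},
    })

body = tts_request_body("No one with the danger in the world")
```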

To produce actual singing, there are several autotune apps, e.g., Voloco. However, the ones I tried sounded pretty autotuned, so perhaps more suitable for another genre.

I replaced the RNN lyrics with wikipedia-derived specs for a truck (a product jdb could endorse and a scalable lyric-generation system), added some royalty-free music, and the result is really something.

The Future

One of the most difficult aspects here is finding working neural nets and getting them to run. Even if the code is on github there are often missing files or onerous installation steps. Things will be easier when more neural nets get converted to javascript/tensorflow.js or appear on colab.

I don't know how many instagram celebrities are computer-generated today, though computer-generated celebrities are not a new thing, especially in Japan. Lil Miquela has >1M followers on instagram, though it's clear she is computer-generated.

It's pretty obvious there will be a lot of this kind of thing in the future. We can evolve new celebrities as ecological niches emerge, catering to new audiences. As people lose interest in one celebrity, we can just create others. They could even inhabit the same world and date each other. Eventually they will outnumber us, then maybe skynet us.

In the meantime, jdb is available to endorse products of all kinds.

Brian Naughton // Sun 11 November 2018 // Filed under sequencing // Tags biotech sequencing dna

I took a look at the data in Albert Vilella's very useful NGS specs spreadsheet using Google's slick colab notebook. (If you have yet to try colab it's worth a look.)

Doing this in colab was a bit trickier than normal, so I include the code here for reference.

First, I need the gspread lib to parse google sheets data, and the id of the sheet itself.

!pip install --upgrade -q gspread
sheet_id = "1GMMfhyLK0-q8XkIo3YxlWaZA5vVMuhU1kg41g4xLkXc"

Then I authorize myself with Google (a bit awkward but it works).

from google.colab import auth
auth.authenticate_user()

import gspread
from oauth2client.client import GoogleCredentials

gc = gspread.authorize(GoogleCredentials.get_application_default())

I parse the data into a pandas DataFrame.

sheet = gc.open_by_key(sheet_id)

import pandas as pd
rows = sheet.worksheet("T").get_all_values()
df = pd.DataFrame.from_records([r[:10] for r in rows if r[3] != ''])

I have to clean up the data a bit so that all the sequencing rates are Gb/day numbers.

import re
dfr = (df.rename(columns=df.iloc[0])
         .drop(index=0)
         .rename(columns={"Rate: (Gb/d) ": "Rate: (Gb/d)"})
         .set_index("Platform")["Rate: (Gb/d)"])
dfr = dfr[(dfr != "--") & (dfr != "TBC")]
for n, val in enumerate(dfr):
  if "-" in val:
    rg = re.search(r"(\d+).(\d+)", val).groups()
    val = (float(rg[0]) + float(rg[1])) / 2
    dfr.iloc[n] = val
dfr = pd.DataFrame(dfr.astype(float)).reset_index()

I tacked on some data I think is representative of Sanger throughput, if not 100% comparable to the NGS data.

A large ABI 3730XL can apparently output up to 1-2 Mb of data a day in total (across thousands of samples). A lower-throughput ABI SeqStudio can output 1-100kb (maybe more).
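The unit conversion behind those estimates is simple; the snippet below takes the low-end 3730xl figure and the high-end SeqStudio figure from above and expresses them in the Gb/day units the rest of the spreadsheet uses, matching the rates plugged into the DataFrame below.

```python
MB_PER_GB = 1000
KB_PER_GB = 1_000_000

abi_3730xl_mb_per_day = 1    # low end of the ~1-2 Mb/day estimate
seqstudio_kb_per_day = 100   # high end of the ~1-100 kb/day estimate

abi_3730xl_gb_per_day = abi_3730xl_mb_per_day / MB_PER_GB   # 0.001 Gb/day
seqstudio_gb_per_day = seqstudio_kb_per_day / KB_PER_GB     # 0.0001 Gb/day
```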

dfr_x = pd.concat([dfr, 
                   pd.DataFrame.from_records([{"Platform":"ABI 3730xl", "Rate: (Gb/d)":.001}, 
                                              {"Platform": "ABI SeqStudio", "Rate: (Gb/d)":.0001}])])

dfr_x["Rate: (Mb/d)"] = dfr_x["Rate: (Gb/d)"] * 1000

If I plot the data there's a pretty striking, three-orders-of-magnitude gap from 1Mb-1Gb. Maybe there's not enough demand for this range, but I think it's actually just an artifact of how these technologies evolved, and especially how quickly Illumina's technology scaled up.

import seaborn as sns
import matplotlib.pyplot as plt
f, ax = plt.subplots(figsize=(16,8))
fax = sns.stripplot(data=dfr_x, y="Platform", x="Rate: (Mb/d)", size=8, ax=ax);
fax.set(xlim=(.01, None));

sequencing gap plot

Getting a single 1kb sequencing reaction done by a service in a day for a couple of dollars is easy, so the very low throughput end is pretty well catered for.

However, if you are a small lab or biotech doing any of:

  • microbial genomics: low or high coverage WGS
  • synthetic biology: high coverage plasmid sequencing
  • disease surveillance: pathogen detection, assembly
  • human genetics: HLA sequencing, immune repertoire sequencing, PGx or other panels
  • CRISPR edits: validating your edit, checking for large deletions

you could probably use a few megabases of sequence now and then without having to multiplex 96X.

If it's cheap enough, I think this is an interesting market that Nanopore's new Flongle can take on, and for now there's no competition at all.

Brian Naughton // Thu 12 July 2018 // Filed under biotech // Tags biotech alzheimers antibody

There have been a lot of results coming out from Alzheimer's trials recently, and a lot of discussion about the "amyloid hypothesis" and its role in the disease. In this post I'll review some of the evidence, and see how it relates to data from recent AD trial results from Merck and Biogen/Eisai. I mainly reference three good reviews that cover most of the basic facts and arguments around the amyloid hypothesis. Much of my additional data is from AlzForum, a fantastic resource for Alzheimer's news.

The basics

A simplified model of the amyloid hypothesis is that the cell-surface protein APP (Amyloid Precursor Protein) gets cleaved by BACE1 and γ-secretase and released as a 42 amino acid peptide, Aβ42; Aβ42 forms oligomers, then extracellular plaques in the brain; these oligomers and/or plaques somehow lead to intracellular Tau tangles which cause neuronal death.

One big question here is whether it's the plaques or oligomers that are the main trigger:

Several similar studies suggest that Aβ — particularly soluble oligomers of Aβ42 (Shankar et al, 2008) — can trigger AD‐type tau alterations

This model is nicely summarized by a diagram from NRD:

amyloid primer

As the diagram shows, the obvious drug targets are γ-secretase and BACE1 (to stop Aβ42 production), Aβ42 monomers/oligomers/plaques (to reduce plaque formation), and Tau (to prevent Tau tangles).

There have been drugs targeting all of these processes. None have been successful:

The only approved drugs for Alzheimer's are fairly ineffectual cholinesterase inhibitors (and an accompanying NMDA receptor inhibitor). These drugs are usually thought of more as symptom relief than treatment.

Aβ42 Antibodies

Why do drug companies keep making Aβ42 antibodies after so many failures? In fact, there is quite a bit of variability in what these antibodies actually do. Ryan Watts, now CEO of Denali Therapeutics, gave an interview with AlzForum back in 2012 where he explained the difference between Genentech's crenezumab and other Aβ42 antibodies.

Q: How is crenezumab different from the other Aβ antibodies that are currently in Phase 2 and 3 trials?

A: We have a manuscript under review that describes its properties. Basically, crenezumab binds to oligomeric and fibrillar forms of Aβ with high affinity, and to monomeric Aβ with lower affinity. By comparison, solanezumab binds monomeric Aβ, and gantenerumab binds aggregated Aβ, as does bapineuzumab. Crenezumab binds all forms of the peptide. Crenezumab is engineered on an IgG4 backbone, which allows it to activate microglia just enough to promote engulfment of Aβ, but not so strongly as to induce inflammatory signaling through the p38 pathway and release of cytokines such as tumor necrosis factor α. Crenezumab is the only IgG4 anti-Aβ antibody in clinical development that I am aware of. We have not seen vasogenic edema in our Phase 1 trials, which was the first main hurdle for us to overcome.

Biogen describes the MOA of their aducanumab antibody like this:

Aducanumab is thought to target aggregated forms of beta amyloid including soluble oligomers and insoluble fibrils which can form into amyloid plaque in the brain of Alzheimer’s disease patients.

Denali Therapeutics

As an aside, Denali is not working on an Aβ42 inhibitor (perhaps for IP reasons, since Ryan Watts was heavily involved in the development of crenezumab). Apart from their novel RIPK1 program, they are still pursuing BACE1 and Tau.

Our lead RIPK1 product candidate, DNL747, is a potent, selective and brain penetrant small molecule inhibitor of RIPK1 for Alzheimer’s disease and ALS. Microglia are the resident immune cells of the brain and play a significant role in neurodegeneration. RIPK1 activation in microglia results in production of a number of pro-inflammatory cytokines that can cause tissue damage.

Our three antibody programs are against known targets including aSyn, TREM2 and a bi-specific therapeutic agent against both BACE1 and Tau. Our BACE1 and Tau program is an example of combination therapy, which we believe holds significant promise in developing effective therapies in neurodegenerative diseases.

How does amyloid cause disease?

By one definition, the amyloid hypothesis "posits that the deposition of the amyloid-β peptide in the brain is a central event in Alzheimer's disease pathology". There are several ways that amyloid could cause AD. This diagram from a 2011 NRD review shows three options:

amyloid hypothesis

  • Aβ trigger: Aβ triggers the disease once it reaches a threshold, and once it starts, reducing Aβ levels does not help
  • Aβ threshold: Aβ triggers the disease once it reaches a threshold, but reducing Aβ levels back below the threshold does help
  • Aβ driver: Aβ causes Alzheimer's, and reducing Aβ levels at any time should ameliorate disease

Simplifying, if the Aβ trigger model is correct, then we don't expect anti-Aβ42 antibodies to work, except perhaps preventatively. If the Aβ driver model is correct, then these antibodies should work, at least partially.

From the same review:

A strong case can be made that the deposition of amyloid-β in the brain parenchyma is crucial for initiating the disease process, but there are no compelling data to support the view that, once initiated, the disease process is continuously driven by or requires amyloid-β deposition.

For this reason, after Aβ42 antibody trials fail, the stock answer from pharma is that they need to begin treatment earlier. Of course, the earlier you treat, the longer the trial takes, and the more you need new amyloid detection technologies like Florbetavir/PET to see what's going on. So it's probably natural that there is a gradual transition to ever earlier interventions and longer trials, even though this can also seem like excuse-making.

Evidence for the amyloid hypothesis

Despite all the failed drugs and holes in our understanding, the amyloid hypothesis remains durable due to the weight of evidence in its corner.


Mutations in APP both cause and prevent Alzheimer's. Half of people with trisomy 21 (or any APP duplication, it seems) develop AD by the time they reach their fifties.

A protective variant found in APP also points to a causal relationship, and therapeutic potential (see Robert Plenge on allelic series).

We found a coding mutation (A673T) in the APP gene that protects against Alzheimer's disease and cognitive decline in the elderly without Alzheimer's disease. This substitution is adjacent to the aspartyl protease β-site in APP, and results in an approximately 40% reduction in the formation of amyloidogenic peptides in vitro. Carriers are about 7.5 times more likely than non-carriers to reach the age of 85 without suffering major cognitive decline

A cryoEM structure of Aβ42 fibril from 2017 gives us structural evidence for why APP mutations should be protective or damaging, suggesting that APP's effect on AD is via amyloid/Aβ42.

amyloid


The APOE e4 allele strongly predisposes people to Alzheimer's. It's one of the strongest genetic associations known, besides Mendelian diseases. In 2018, Yadong Huang's team at the Gladstone Institute used iPSCs to investigate the mechanism. Confusingly, they found that APOE is independently associated with both Aβ42 and Tau.

"ApoE4 in human neurons boosted production of Aβ40 and Aβ42"

"It does not do that in mouse neurons. Independent of its effect on Aβ, ApoE4 triggered phosphorylation and mislocalization of tau."

"Based on these data, we should lower ApoE4 to treat AD"

This research may also help explain why mouse models of Alzheimer's have often been misleading.

Other evidence

  • Mutations in PSEN1 and PSEN2 (components of gamma-secretase) cause Alzheimer's.
  • Other diseases are caused by mutations in amyloid-forming proteins. For example, a mutation in the ITM2B (BRI2) gene produces an amyloid-forming peptide, which causes Familial British Dementia. In ALS, the aggregated form of SOD1 may be protective and the soluble form disease-causing.

    The formation of large aggregates is in competition with trimer formation, suggesting that aggregation may be a protective mechanism against formation of toxic oligomeric intermediates.

Criticism of the amyloid hypothesis

The main criticism of the amyloid hypothesis is that we have been testing anti-amyloid drugs — especially antibodies against Aβ42 — for a long time now, and none of them have had any effect on disease progression.

Derek Lowe (and many of his commenters) has written especially skeptically on his blog:

Eli Lilly remains committed to plunging through this concrete wall headfirst. [...] our gamma-secretase inhibitor completely failed in 2010. Then we took our antibody, solanezumab into Phase III trials that failed in 2012. And found out in 2013 that our beta-secretase inhibitor failed.

Morgan Sheng, VP of Neuroscience at Genentech, is much more positive. In a recent interview in NRD he said:

Let me start by saying that I fully believe in the amyloid hypothesis, and I think it’s going to be vindicated completely within years. [...] phase III results from drugs like Eli Lilly’s solanezumab suggest these agents sort of work; they just don’t work very well

It seems like targeting Tau is an acceptable strategy to amyloid hypothesis skeptics because it's not targeting Aβ42, even though it's still part of the standard amyloid hypothesis model. Drugs that are based on the "amyloid hypothesis" and drugs that work by trying to reduce amyloid tend to get conflated in a confusing way.

Evidence against the amyloid hypothesis

Here I am mainly summarizing from a 2015 review. In this review, the author mainly disputes the "linear story" of the amyloid hypothesis and not the fact that Aβ plays some kind of role in AD.

  • Many people have plaque but no disease.

    The existence of this group of individuals (healthy, but amyloid positive) is a substantial challenge to the amyloid cascade hypothesis. It is clearly possible to have amyloid deposits without dementia; therefore amyloid is not sufficient to cause disease.

    Such individuals are not rare; rather, they account for a quarter to a third of all older individuals with normal or near-normal cognitive function.

  • Anti-Aβ42 antibodies can reduce plaque without alleviating the disease.

    The second test of the amyloid cascade hypothesis has also been done: amyloid has been removed from the brains of individuals with AD and from mice with engineered familial forms of the disease. Here the tests have been less definitive and the evidence is mixed.

  • Other drugs that should work (beta-secretase/BACE1 inhibitors, gamma-secretase inhibitors, Tau inhibitors) don't appear to work.

  • Mutations in the Tau gene can cause dementia without plaques forming, so amyloid is not a necessary step in the process.

  • We do not understand AD pathology well. For example, what are the toxic species of Aβ and Tau? What is the connection between Aβ and tangle pathology? Do Tau tangles spread between neurons like prions?

  • There are other possible causes of AD. For example, certain infections could be causative.


Recent work showing an association between herpes virus and Alzheimer's could be thought of as supporting or disputing the amyloid hypothesis. In this model, the virus "seeds" amyloid plaque formation, which then sequesters the virus. The idea that amyloid plaques are protective is not entirely new, beginning with the "bioflocculant hypothesis" for Aβ, published in 2002.

[Robinson and Bishop] posited that Aβ’s aggregative properties could make it ideal for surrounding and sequestering pathogenic agents in the brain

If herpes causes AD, then we'd expect to see evidence in epidemiological datasets. Both herpes infection and periodontitis appear to be associated with AD risk. Further, antiherpetic medications appear to reduce the risk of AD. A lot more could be done here with a large database of phenotypic information, like UK biobank...

Relatedly, a Bay Area company, Cortexyme, recently raised $76M to pursue an AD drug against a bacterial protease found in plaques.

Recent news

So what about the recent trial results? There were two major trials with new results this year: Merck's BACE1 inhibitor, verubecestat, and Biogen/Eisai's anti-Aβ42 antibody, BAN2401. Meanwhile the trial design for Biogen's aducanumab is being tweaked — not a good sign generally — and there should be new data on that later this year.


Merck's BACE1 inhibitor, verubecestat

After failing a Phase III trial in 2017, verubecestat had more bad news last month:

Treatment with verubecestat reduced the concentration of Aβ-40 and Aβ-42 in cerebrospinal fluid by 63 to 81%, which confirms that the drug had the intended action of reducing Aβ production. In the PET amyloid substudy, treatment with verubecestat reduced total brain amyloid load by a modest amount; the mean standardized uptake value ratio was reduced from 0.87 at baseline to 0.83 at week 78 in the 40-mg group. These results suggest that lowering Aβ in the cerebrospinal fluid is associated with some reduction in brain amyloid.

Notably, despite the drug working as intended, the reduction in brain amyloid was minimal. Hence, some people claim that amyloid removal has not been tested:

gc tweet

Biogen/Eisai's Aβ42 antibody, BAN2401

New Phase II results for Biogen/Eisai's soluble protofibril antibody, BAN2401, were just released in July 2018. The results were hotly disputed: the Bayesian analysis failed to show an effect, while an alternative p-value-based analysis (ANCOVA) showed positive results. I don't know exactly what the differences between the analyses were, but generally you would hope for agreement between the two, unless the effect was pretty marginal or just not real. The data pulled out in the tweet below shows how strange this situation is.

Merck tweet

Given the ambivalent nature of the result, naturally some saw it as positive news, since there was at least something, while skeptics saw the opposite.


Aducanumab often seems like the most promising anti-Aβ antibody, and maybe the last chance for anti-Aβ antibodies to prove themselves. Back in 2015, Aducanumab showed some promising Phase Ib results. (I even wrote about it).

“They’re the most striking data we have seen with anything, period,” says [an AD trialist]

However, since then the many related trial failures, plus Biogen changing the trial design due to "variability", have left many people pessimistic. Perhaps BAN2401's recent results, however unsatisfying, show that an Aβ inhibitor is not just doomed to show no effect.


There doesn't actually seem to be much controversy about whether amyloid has a role in Alzheimer's; the genetic evidence is especially hard to dispute. I think the disagreement is more whether reducing Aβ plaques (or oligomers) can treat or prevent Alzheimer's. If the plaque is protective, then it's possible that reducing plaque may even worsen the disease.

There are also still plenty of unanswered disease mechanism questions, like whether it's oligomers or plaques that are causative, how Tau tangles cause neuronal death, and how tangles spread from neuron to neuron. Also, a 2018 paper suggests that Tau's function is the opposite of what we thought: instead of stabilizing microtubules, it keeps them dynamic.

One obvious question is why are there not more Tau-based drugs? Tau pathology is not a new idea and Tau's causal relationship with dementia is one of the least controversial parts of the AD story. In fact, there are now at least five Phase I trials underway, so these drugs might just be lagging behind Aβ42 antibodies by a few years. Certainly, Tau tangles being intracellular and in the brain makes drug development more complicated.

"Anti-tau antibodies don’t enter neurons and they don’t bind intracellular tau. We’ve invested a lot of careful rigorous work to try and understand this and I hope that the field will agree that we can put to rest that question"

(Crossing the blood-brain barrier is a problem for almost all AD drugs and especially antibodies — an interesting rule of thumb is that about 0.1% of antibody gets into the brain.)

Despite all the failures, I think the story is coming together and I'm pretty optimistic. We haven't actually tried that many ways of attacking the disease. I think that reducing plaque and/or oligomers very early could still work — mainly because we have seen the "drug" APP A673T working — meanwhile, reducing Tau tangles is arguably the most promising avenue of intervention, and it is yet to be properly tested.

Brian Naughton // Tue 17 October 2017 // Filed under genomics // Tags bioinformatics genomics programming

Implementing Needleman-Wunsch a few different ways in Python, Nim, and JavaScript.

Read More
Brian Naughton // Fri 22 September 2017 // Filed under stats // Tags stats probability bayesianism maxent

Deriving the normal distribution and others using maximum entropy.

Read More
Brian Naughton // Mon 11 September 2017 // Filed under biotech // Tags biotech vc

A brief look at Y Combinator's biotech investments in 2017.

Read More
Brian Naughton // Tue 27 June 2017 // Filed under biotech // Tags biotech drug development

Some notes on drug development: classes of drug targets and therapies.

Read More
Brian Naughton // Mon 06 February 2017 // Filed under biotech // Tags biotech transcriptic snakemake synthetic biology

How to automate protein synthesis pipeline using transcriptic and snakemake

Read More
Brian Naughton // Thu 26 January 2017 // Filed under biotech // Tags biotech iolt nodemcu arduino

An internet-connected lab thermometer

Read More
Brian Naughton // Mon 10 October 2016 // Filed under genomics // Tags genomics nanopore

What's been going on with Oxford Nanopore in 2016

Read More

Boolean Biotech © Brian Naughton Powered by Pelican and Twitter Bootstrap. Icons by Font Awesome and Font Awesome More