Brian Naughton // Tue 17 October 2017 // Filed under genomics // Tags bioinformatics genomics programming

This small project started when I was looking for an implementation of Needleman-Wunsch (pairwise global DNA alignment) in javascript. I just wanted to align two sequences on a website and in a google sheet (using Google Apps Script).

I looked around for a simple javascript implementation (preferably one self-contained javascript file) but I was surprised that I couldn't find one. Needleman-Wunsch is a pretty simple algorithm so I decided to implement it. I did get a little bit side-tracked...

The first step was to find someone else's implementation to copy, so I started with some numpy code from @brent_p. Based on his other work, I think it's a safe assumption it's implemented correctly. (There is also a complete Cython version in this repo, which implements gap_extend and other parameters, and is obviously much faster. I really just need a very basic alignment so the simpler numpy version is fine for me).
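For reference, the core of the algorithm is just a dynamic-programming matrix fill. A minimal sketch with simplified scoring (my own illustrative code, not Brent's implementation):

```python
import numpy as np

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Fill the Needleman-Wunsch scoring matrix and return the optimal
    global alignment score for sequences a and b."""
    H = np.zeros((len(a) + 1, len(b) + 1))
    H[0, :] = np.arange(len(b) + 1) * gap   # first row: align a prefix of b to all gaps
    H[:, 0] = np.arange(len(a) + 1) * gap   # first column: align a prefix of a to all gaps
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(H[i - 1, j - 1] + s,   # match/mismatch
                          H[i - 1, j] + gap,     # gap in b
                          H[i, j - 1] + gap)     # gap in a
    return H[-1, -1]
```

The fill is O(nm) in time and memory, which is why the timings below grow roughly quadratically with sequence length.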

numpy and friends

There are lots of ways to tweak the numpy implementation of Needleman-Wunsch to try to make it faster. Here are the things I tried:

  1. orig: the original numpy implementation.
  2. orig3: the original numpy implementation run with python 3.
    This is just to test how much faster or slower Python 3.6 is than 2.7.
  3. numpy: my numpy implementation.
    This is like the original numpy code, but modified a bit to make it more like my code.
  4. numba: my numpy implementation, but with numba applied.
    Numba is a pretty amazing JIT compiler you can turn on by adding one line of code. It comes with anaconda, and it's always worth trying just in case.
  5. torch: my numpy implementation, but with numpy replaced with PyTorch.
    PyTorch seems like a friendly alternative to TensorFlow, especially with its numpy-like syntax. Without explicitly applying .cuda() to my arrays it just uses the CPU, so it should not be too different to regular numpy.
  6. torchcuda: my numpy implementation, but with numpy replaced with PyTorch, and .cuda() applied to each array.
    The same as torch except using the GPU.
  7. cupy: my numpy implementation, but with numpy replaced with CuPy.
    CuPy is a drop-in replacement for numpy and, like PyTorch, only requires changing a couple of lines.


nim

Nim is an interesting language that can compile to C (nimc) or javascript (nimjs). I thought this was a pretty good use-case for nim, since I need javascript but writing scientific code in javascript is not fun. I started with a numpy-like library called arraymancer, which worked well, but since it relies on BLAS it would not compile to javascript (I could have checked that earlier...). Luckily, changing the code to simple for loops was pretty easy. Nim's syntax is a lot like Python's, with some unnecessary differences like using echo instead of print. As someone used to Python, I didn't find it as friendly as I expected. The dream of Python-with-static-types is still a dream...


javascript

Finally, I just programmed the alignment in javascript (js). All of the implementations are almost line-for-line identical, so this did not take long.
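In Python, the shared line-for-line structure looks roughly like this: fill the matrix, then walk back from the bottom-right corner to recover the gapped strings. A self-contained sketch with simplified scoring (my own code, not the repo's):

```python
import numpy as np

def align(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment with traceback: returns the two gapped strings."""
    n, m = len(a), len(b)
    H = np.zeros((n + 1, m + 1))
    H[0, :] = np.arange(m + 1) * gap
    H[:, 0] = np.arange(n + 1) * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(H[i - 1, j - 1] + s, H[i - 1, j] + gap, H[i, j - 1] + gap)
    # walk back from the bottom-right corner, preferring diagonal moves
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        s = match if i and j and a[i - 1] == b[j - 1] else mismatch
        if i and j and H[i, j] == H[i - 1, j - 1] + s:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i and H[i, j] == H[i - 1, j] + gap:
            out_a.append(a[i - 1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j - 1]); j -= 1
    return "".join(reversed(out_a)), "".join(reversed(out_b))
```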

Speed comparison

I ran all the above implementations on some random DNA of various lengths; the timings (in seconds) are tabulated and plotted below.

import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
data = {'orig': {500: 2.23, 1000: 8.50, 1500: 18.96, 2000: 33.45, 2500: 52.70, 3000: 76.44, 5000: 209.90}, 
        'orig3': {500: 2.62, 1000: 10.34, 1500: 22.68, 2000: 40.22, 2500: 62.03, 3000: 90.15, 5000: 248.93}, 
        'numpy': {500: 1.54, 1000: 3.41, 1500: 6.27, 2000: 10.60, 2500: 16.11, 3000: 22.87, 5000: 67.45}, 
        'numba': {500: 5.54, 1000: 7.05, 1500: 9.28, 2000: 13.08, 2500: 17.40, 3000: 22.21, 5000: 56.69}, 
        'torch': {500: 1.67, 1000: 3.92, 1500: 7.61, 2000: 12.86, 2500: 19.55, 3000: 27.48, 5000: 82.90}, 
        'torchcuda': {500: 8.54, 1000: 22.47, 1500: 46.26, 2000: 80.12, 2500: 119.92, 3000: 169.95, 5000: 467.04}, 
        'cupy': {500: 35.71, 1000: 138.86, 1500: 951.97, 2000: 1713.57, 2500: 2660.11, 3000: 3798.51}, 
        'nimc': {500: 0.016, 1000: 0.041, 1500: 0.08, 2000: 0.13, 2500: 0.20, 3000: 0.31, 5000: 0.85}, 
        'nimjs': {500: 0.14, 1000: 0.28, 1500: 0.48, 2000: 0.75, 2500: 1.12, 3000: 1.53, 5000: 4.06}, 
        'js': {500: 0.09, 1000: 0.14, 1500: 0.20, 2000: 0.34, 2500: 0.41, 3000: 0.82, 5000: 1.64}}
vfast_ones = ["nimjs", "js", "nimc"]
fast_ones = ["torch", "numpy", "numba"] + vfast_ones
ok_ones = ["torchcuda", "orig3", "orig"] + fast_ones
df = pd.DataFrame(data)
cupy js nimc nimjs numba numpy orig orig3 torch torchcuda
500 35.71 0.09 0.01 0.13 5.54 1.54 2.23 2.62 1.67 8.54
1000 138.86 0.13 0.04 0.28 7.05 3.41 8.50 10.34 3.92 22.47
1500 951.97 0.20 0.08 0.48 9.28 6.27 18.96 22.68 7.61 46.26
2000 1713.57 0.34 0.13 0.75 13.08 10.60 33.45 40.22 12.86 80.12
2500 2660.11 0.41 0.20 1.12 17.40 16.11 52.70 62.03 19.55 119.92
3000 3798.51 0.82 0.31 1.53 22.21 22.87 76.44 90.15 27.48 169.95
5000 NaN 1.64 0.85 4.06 56.69 67.45 209.90 248.93 82.90 467.04

I'll skip cupy since it's much slower than everything else and throws the plots off. That doesn't imply anything negative about cupy and I'd use it again. It's extremely easy to replace numpy with cupy, and for properly vectorized code I'm sure it's much faster than numpy.

f,ax = plt.subplots(figsize=(16,12))
ax.set_title("fast: everything except cupy")
_ = df[ok_ones].plot(ax=ax)


f,ax = plt.subplots(figsize=(16,12))
ax.set_title("faster: numpy vs C vs js")
_ = df[fast_ones].plot(ax=ax)


f,ax = plt.subplots(figsize=(16,12))
ax.set_title("fastest: C vs js")
_ = df[vfast_ones].plot(ax=ax)



I learned some interesting things here...

  • numba is good. It didn't speed this code up very much, but it was a bit faster than numpy for large alignments and didn't cost anything. I expected this to be the fastest Python-based code because there are several Python for loops (i.e., unvectorized code), which is where numba can help a lot.

  • I'm not sure why my numpy is faster than the original numpy since my changes were minimal. The original version is not coded for speed anyway.

  • GPUs don't help unless your code is written for GPUs. That basically means one repetitive task handed off to the GPU along with the data (no back-and-forth). There are ways to implement Needleman-Wunsch in a GPU-friendly way, but it complicates the code a lot. On the one hand this is a very obvious result to anyone who has used GPUs for computation — on the other hand, maybe a really smart compiler could use the CPU where appropriate and GPU where appropriate...

  • Nim is a pretty interesting language. I have seen it described as either "an easier C" or a "statically typed Python". To me it's definitely more like the former. It's not all that friendly compared to Python, but I think I'd try it again as a C/Cython replacement. Don't forget to compile with -d:release.

  • Javascript is fast! If nim is not compiled with -d:release, javascript is even faster than nim's C code. Sadly, Google Apps Script's javascript is extremely slow for some reason. That was an unfortunate surprise, especially since it times out after about five minutes, so long alignments just fail! I can't explain why it's so slow...
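On the GPU point above: one GPU-friendly formulation fills the matrix along anti-diagonals, where every cell is independent of the others, so each diagonal becomes a single vectorized update instead of a Python-level loop over cells. A numpy sketch of the idea (my own illustrative code):

```python
import numpy as np

def nw_antidiagonal(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch filled one anti-diagonal at a time: cells on a
    diagonal only depend on the two previous diagonals, so each update
    is one vectorized operation (the structure a GPU wants)."""
    n, m = len(a), len(b)
    H = np.zeros((n + 1, m + 1))
    H[0, :] = np.arange(m + 1) * gap
    H[:, 0] = np.arange(n + 1) * gap
    A = np.frombuffer(a.encode(), dtype=np.uint8)
    B = np.frombuffer(b.encode(), dtype=np.uint8)
    for d in range(2, n + m + 1):                    # d = i + j indexes the diagonal
        i = np.arange(max(1, d - m), min(n, d - 1) + 1)
        j = d - i
        sub = np.where(A[i - 1] == B[j - 1], match, mismatch)
        H[i, j] = np.maximum(H[i - 1, j - 1] + sub,
                             np.maximum(H[i - 1, j] + gap, H[i, j - 1] + gap))
    return H[-1, -1]
```

This cuts the number of sequential steps from n·m to n+m, at the cost of noticeably less readable indexing, which is the complication mentioned above.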

Finally, I'll just note that this implementation is good enough for my purposes, but I haven't really spent any time making sure it works in all situations (apart from confirming that its output matches the original numpy code), so I wouldn't trust it too much. The code is available in this repo.

Brian Naughton // Mon 10 October 2016 // Filed under genomics // Tags genomics nanopore

Many genomics people, especially in the US, are still unfamiliar with Oxford Nanopore's MinION sequencer. I was lucky enough to join their early access program last year, so I've been using it for a while. In that time I've become more and more excited about its potential. In fact, I think it's the most exciting thing to happen in genomics in a long time. I'll try to explain why.

MinION vs Illumina

The MinION is a tiny little sequencer that has some serious advantages over Illumina sequencers:

  • it's very portable (see the photos!) and doesn't require any special equipment to run
  • it's simple to run: there's a 10 minute prep with just a couple of pipetting steps
  • the sequencer itself is essentially free, with a cost of $500-900 per flow-cell (which can be reused several times)
  • the reads are very long, about as long as the input DNA (100kb is not unusual)
  • it's a single molecule sequencer, so you can detect per molecule variation, including base modifications (this is still low accuracy though)
  • it can read RNA directly, giving you full-length transcripts
  • the turnaround time is very quick: you can generate tens to hundreds of megabases of data in an afternoon
  • data analysis is easier than for short-read sequencers, since alignment and assembly are simpler. You may not even really need any alignment if you are sequencing a plasmid or insert.
  • the data arrives per read instead of per base: so in one hour you can have thousands of long reads (as opposed to Illumina, where you'd have millions of partial reads, each only a few bases)
  • seeing reads appear in real-time is amazing and you can literally pull the USB plug when you have enough data

There are also two big disadvantages:

  • its accuracy is at least an order of magnitude worse than Illumina (~90% vs >99%)
  • its per base cost is at least an order of magnitude higher than an Illumina HiSeq ($0.5/Mbase vs <$0.02/Mbase) and 2–10X more expensive than a MiSeq. Of course, these numbers are rough and in flux. For example, a HiSeq or MiSeq will require a service contract that could be $20k/yr — the cost of an Illumina run is highly volume-dependent.

Something that's not often discussed is the error rate of short-read sequencers. On a per base level they are extremely accurate, but incorrectly determined structural variants are also errors. In a human genome, a miscalled 3Mb inversion could by itself be considered a 0.1% error rate, and there are lots of structural variants in humans. Unlike incorrect base-calls, it is often impossible to overcome this issue with greater read-depth.
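The arithmetic behind that 0.1% figure, spelled out:

```python
# one miscalled 3 Mb inversion in a ~3 Gb human genome
inversion_bp = 3_000_000
genome_bp = 3_000_000_000
error_rate = inversion_bp / genome_bp
print(f"{error_rate:.1%}")  # 0.1% of bases effectively miscalled by a single event
```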

Despite these advantages, many scientists remain skeptical of the MinION. There are probably two things going on here: (a) Oxford has consistently overpromised since announcing in 2012; (b) the MinION only started to be really competitive in the past few months, so there is a lag.

What changed?

About six months ago, you could expect to get about 500Mb of DNA from a flow-cell, with each pore reading at 70 bases/second and accuracy of 70-80% (at least in our novice hands).

Earlier this year, Oxford made two important changes that improved performance: they updated their pore from an unspecified pore ("R7", which was tangled up in a patent dispute with Illumina) to an E. coli pore ("R9"), which has both better throughput and better accuracy than R7. At the same time, they updated their base-calling algorithm to a deep learning-based method, further improving accuracy.

They are still incrementally improving R9, and are already on version R9.4. At the time of writing, this version is currently only in the hands of the inner circle of nanoporati, but luckily they are all on Twitter so we can get a pretty good sense of how well it works. People are reporting excellent results, with runs of over 5Gb at the new R9 speed of 240 bases/second (this should be 500 bases/second soon, apparently with no loss in accuracy). Accuracy is also up, with 1D reads perhaps even edging over 90% in experienced hands.

So, compared to six months ago, you are probably getting 5-10 times as much data with half the error rate.

OK, what can I do with one of these gizmos?

The stats are definitely exciting, but I don't think they really capture why I think the MinION is so interesting. The MinION has several key areas where it can do some damage, and other areas where it opens up new possibilities.

sequencing microbial genomes de novo

This is very doable. I wouldn't say it's easy yet, but long reads negate a lot of the computational problems of de novo assembly: finding overlapping 10,000mers is a very different problem to finding overlapping 100mers.

infectious agent detection

Once you have prepped DNA, which takes from 10 minutes (with the "rapid" kit) to two hours, the actual process of detecting a pathogen could be under ten minutes. In practice I don't think anybody is going from blood sample to diagnosis this quickly, but the potential is there.

There is even software (Mykrobe) that detects drug-resistance genes in bacteria, and recommends appropriate antibiotics. When this is done cheaply and routinely it should help a lot with drug resistance and overprescription of antibiotics.

Since the data comes in one read at a time, as soon as you get one read from the infectious agent you are done.

direct RNA sequencing

If you want to read full-length transcripts, and see base modifications too, then the MinION is the only option that I know of. This capability is new, and the base modification detection is not accurate, but there's still plenty of interesting research to do with this.


sequencing without barcodes

Sequencing often requires barcoding, which adds fiddly extra steps before and after sequencing. But, if your reads are long enough, then you may not need to barcode. For example, you can sequence 96 plasmids at the same time — simply throw away any reads that are not the full length of the plasmid.

other long-read problems

There are a few classic long-read problems like HLA sequencing, VDJ sequencing and structural variant detection (especially for cancer). These are reasonably good applications for MinION, though VDJ sequencing probably needs more accuracy, and structural variant detection might need more throughput. (10X + Illumina makes the most sense for anything like this)

MinION in the Field

Oxford is making an effort to eliminate the "cold chain" for the MinION. The flow-cell itself already seems to keep well at temperatures well above refrigeration, and they claim they can lyophilize the other reagents. Even before that happens, with basically just a cooler, a laptop, and a way to extract DNA, doctors, ecologists, and other scientists can go out into the field and do sequencing anywhere.

Earlier this year, as part of the Zibra project, scientists from the UK and Brazil drove a van through Brazil, sampling and sequencing Zika virus along the way.

Biology labs and Biotechs

The advantage of MinION for non-genomics–focused biology labs is not really widely discussed, but I think it's one of the most important.

Basically, if you want a few megabases sequenced and you have a MinION and a flow-cell, you can have the data in your hands today. When you're done you can put the flow-cell away and use it again tomorrow. Depending on your needs, you might get 4-10 uses out of the flow-cell, meaning each run costs $150-300 including sample prep.

In contrast, if you want to get some data from a MiSeq, you are probably signing up for a gigabase of sequence. That's overkill for most labs, and it produces many gigabytes of raw data to manage too... If you want reasonable length reads (2x150bp), then sequencing will take at least 24 hours. If you are lucky enough to have a core lab at your institute then that helps, but you may still have to wait your turn.

If you don't work at a university — perhaps you're at a small biotech — then the alternative is buying a MiSeq (or MiniSeq) at $50-150k plus service contract, or sending your samples out to a CRO for sequencing. A CRO will have a turnaround time of at least a week, and that's after you've explained to them what you need and agreed on the terms.

It's hard to imagine a one-off MiSeq run happening in under a week, so being able to just do it yourself is a huge increase in efficiency.

If you're sequencing a thousand of anything, then Illumina is much cheaper, but I wonder how many biology labs need megabase-scale sequencing occasionally, but don't do it because of the current barriers to entry, including the computational burden of aligning and assembling short reads. There are cases where I would not have bothered with the hassle of getting something sequenced except that we could just do it ourselves with the MinION.

Genomics for Everyone

I think the most exciting thing going on here is just taking sequencing and genomics out of the lab and into the real world. Admittedly, this does require some improvements and inventions from Oxford, like easier DNA preparation, so there are caveats here, but nothing too crazy.

Oxford's metrichor site spells out some of the use-cases too. I'll just give some scattered examples of things to sequence, some more realistic than others, but I think each plausibly represents something new that has real economic value:

  • hospital surfaces and employees for MRSA
  • food at factories (detecting E. coli etc)
  • the environment at airports, workplaces, etc for flu (flu is expensive!)
  • at crime scenes (also a big deal since the current methods of forensic DNA analysis are awful)
  • at home, to see if you have a cold or flu, the same cold or a new one, and even figure out where you picked up the virus
  • the air to detect mold in buildings
  • farm animals' microbiomes to monitor gut health and improve growth
  • at methane farms, wine fermenters, beer fermenters, to monitor and manage the process
  • various kinds of labs for bacterial contamination
  • the sewage system of a city to monitor the city's diet and health
  • for educational purposes, and at competitions like iGEM
  • fish and other foods to detect mislabeling (a surprisingly big problem)
  • animals out in the wild for conservation purposes
  • your own microbiome to monitor your gut health
  • soil, plants, droppings, insects at farms to monitor pests etc.
  • at the dentist's to detect decay-causing bacteria
  • at the dermatologist's (cosmetologist?) to detect and treat acne-causing bacteria

These applications (apps?) can potentially be run by anyone. Stick some DNA in, wait a bit; processing happens in the cloud and the answer appears on your phone in a few minutes to a few hours. You don't need to know anything about genetics or molecular biology; you'll just see a readout that says "E. coli detected" in food or "DNA from new rhino detected" in droppings.

There's already a teaser of this with Oxford's What's in my Pot app. It figures out which microbes are in a sample, and draws a nice cladogram for you.

To realize this potential, the sequencer still needs to be cheaper, but the lower bound on that seems good, since the number of molecules involved is really tiny. (That's another advantage of single-molecule sequencers.)

Finally, coming back to present-day reality a little bit, Oxford will need to execute on their plans to make sequencing easier and cheaper (reagent lyophilization, Zumbador, SmidgION, Voltrax, FPGAs, etc. — watch Oxford's latest tech update for more on that), but I think MinION is going to become a very big deal in the next few years.

Brian Naughton // Thu 14 May 2015 // Filed under genomics // Tags sequencing nanopore minion

The unveiling of the MinION MkII at the 2015 Oxford Nanopore Conference may be remembered as a very big deal in the history of genomics. A number of tweets even compared it to the 2007 iPhone unveiling. Of course, that's crazy — it's a much bigger deal than a simple Candy Crush vehicle.

I look forward to reading accounts from people who actually attended the conference, but I wanted to assemble what I learned from the #nanoporeconf tweets and this great genomeweb article.


The major news at the conference was the announcement of a new MinION sequencer, the MkII, replacing the MkI, itself only available as part of an early-access program.

The MkII has a new "fast mode" that's about 10X as fast as before and the output is now an impressively Illumina-like 5GB per hour. The error rates are also apparently improved.

Apart from its size, one of the most interesting things about the tiny MinION is that it streams sequence in real-time to your computer (I think it even needs USB 3). That means you can do things like sequence until you find what you are looking for, then stop, or, perhaps in the future, leave it on all the time, monitoring your sewer system etc.

Because of how the machine works, you are billed by the hour, like an AWS instance. The new price is $20 an hour, down from about $90 per hour for the MkI. At that price you really start to think about sequencing in a new way. This is the most mind-blowing aspect of this machine to me, and I think it will radically change how sequencing will be used, especially outside core human genomics applications.


Voltrax

Voltrax is a lab-on-a-chip that attaches directly to your MinION and does sample prep for you. That allows you to sequence from samples while you're out and about. It's programmable in Python (an excellent choice). Sadly, it's not out yet and they didn't give a timeline for it.

What's MinION for?

Although it's an amazing little machine, MinION doesn't compete with MiSeq/HiSeq for typical human genomics applications. This is mainly because the error rate is still very high. For the MkI the error rate is apparently up to 30%(!). For the MkII there is talk of getting it down to 5% — still very high compared to Illumina.

The major applications I've seen proposed so far are around: genome scaffolding (like PacBio — you just need long reads); pathogen/environmental sequencing (like @pathogenomenick sequencing Ebola in Guinea — here you need something fast and portable, and long reads help); sequencing messy parts of the genome (like HLA and CYP2D6, previously requiring Sanger sequencing or other difficult methods — here you need long and reasonably accurate reads).


PromethION

This is a bigger, benchtop instrument with higher throughput than the MinION (6.4 terabases per run). It's less obvious to me how this machine will be used, but its throughput is extremely impressive.


Boolean Biotech © Brian Naughton Powered by Pelican and Twitter Bootstrap. Icons by Font Awesome and Font Awesome More