Brian Naughton | Sun 03 October 2021 | datascience | statistics datascience

I really like resampling methods for data analysis, and I think they are under-utilized. The main thing these techniques have in common is that instead of an analytical solution, they make a computer repeatedly analyze variations of the data. Examples of resampling methods include permutation tests, the bootstrap, and cross-validation.

Cross-validation in particular seems to have really taken over model selection over the last ten years or so (e.g., compared to the more "statistical" AIC or BIC). There may be an argument here that this is due to non-statisticians needing statistical tests (e.g., in machine learning) so they naturally gravitate to using computational power instead of statistical expertise.

In biology, there are a few tests that come up a lot, but the main one is probably testing if distribution A is different to distribution B (e.g., experiment vs control). This is usually done with a t-test, but can also be done — with some advantages — by a permutation test. I looked for a post on this topic but could not find one that exactly covered what I was interested in, so this article shows some data on the question.


The t-test

I would guess most biologists use t-tests occasionally, as the simplest way to check if an experimental result differs from a control. I would also argue that most people who use the t-test don't really know the main assumptions:

  • the distributions are normal
  • the variances of the distributions are equal
  • the sample size is large

If you violate any of these assumptions, you are supposed to consider another test.

If distributions are not normal

We usually assume that if a distribution is not normal, it's some unknown distribution. In this case you could use a non-parametric test, like the Mann–Whitney U test. Of course, no real dataset is perfectly normally distributed, so it's a bit unclear where to draw the line here.

If variances are unequal

If the variances are unequal, Welch's t-test is recommended. Welch's t-test is "an adaptation of Student's t-test, and is more reliable when the two samples have unequal variances and/or unequal sample sizes." I'm not sure I've ever seen this test used in a biology paper, though variances are very often unequal. For example, any time you are counting things (colonies, etc) it's likely a Poisson distribution where the variance equals the mean.

If sample size is small

It's a bit of a grey area how few samples count as "small", but some sources say below 30. That is actually a pretty large number for many biological experiments!

For example, a quick review of the first four papers that pop up for the search "xenograft tumor treatment t-test" shows t-tests being run on sample sizes well below 30.

In order to publish, the authors of these papers need to be able to report a p-value, and the t-test is the most convenient and accepted tool.

Violations

It's possible that the above violations of assumptions are innocuous and the stats work out fine. Still, the lack of clarity around what counts as a violation seems to defeat the purpose of having the rules, and a permutation test might offer a safer approach.
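
For reference, all of the alternatives above are one-liners in scipy. A quick sketch with made-up data:

from scipy.stats import norm, ttest_ind, mannwhitneyu

a = norm(0, 1).rvs(30)  # made-up control
b = norm(1, 2).rvs(30)  # made-up experiment: shifted mean, larger variance

print(ttest_ind(a, b).pvalue)                              # Student's t-test
print(ttest_ind(a, b, equal_var=False).pvalue)             # Welch's t-test
print(mannwhitneyu(a, b, alternative="two-sided").pvalue)  # Mann-Whitney U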

Permutation Test

The general problem below is to determine whether two distributions, with means of 0 (red) and diff (yellow), are different.

from scipy.stats import norm, laplace, ttest_ind, median_test, mannwhitneyu, pearsonr, linregress
from matplotlib import pyplot as plt
from IPython.display import SVG
import seaborn as sns
import pandas as pd
import numpy as np

from tqdm.auto import tqdm
tqdm.pandas()
f, ax = plt.subplots(figsize=(12,4))
diff = 1
dist1 = norm(0, 1).rvs(100)
dist2 = norm(diff, 1).rvs(100)
sns.histplot(dist1, color='r', bins=40);
sns.histplot(dist2, color='y', bins=40);

[figure: histograms of the two distributions (red: mean 0, yellow: mean diff)]

This is the code for a permutation test, for use in place of a one- or two-sided t-test, Welch's t-test, etc. In all examples below I will use a two-sided test, since this is more commonly used than one-sided. (Although to me, one-sided makes more sense most of the time!)

def ptest_ind(dist1, dist2, fn=np.mean, N=30000, alternative="two-sided"):
    """permutation test, matching scipy.ttest_ind syntax"""
    from collections import namedtuple
    from warnings import warn

    PTResult = namedtuple("PTResult", ["stat", "pvalue"])

    new_dists = np.concatenate([dist1, dist2])
    test = fn(dist1) - fn(dist2)

    res = []
    for _ in range(N):
        # shuffle the pooled data and split into two new samples of the original sizes
        np.random.shuffle(new_dists)
        ndist1 = new_dists[:len(dist1)]
        ndist2 = new_dists[len(dist1):]
        res.append(test > fn(ndist1) - fn(ndist2))

    # fraction of shuffles where the observed difference exceeds the shuffled difference
    stat = sum(res) / N

    if alternative == "less":
        pvalue = stat
    elif alternative == "greater":
        pvalue = 1 - stat
    elif alternative == "two-sided":
        pvalue = 2*min(stat, 1-stat)
    else:
        raise ValueError(f"{alternative=}")

    if pvalue == 0:
        warn(f"permutation test sample limit reached: <1/{N}")
        pvalue = 1.0/N

    return PTResult(stat, pvalue)
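
Called on the two distributions above, it matches scipy's syntax:

tt = ttest_ind(dist1, dist2, alternative="two-sided")
pt = ptest_ind(dist1, dist2, alternative="two-sided")
print(f"t-test p={tt.pvalue:.4f}  permutation p={pt.pvalue:.4f}")
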
def plot_corr(df, x, y, min_xy):
    """Plot the correlation between two tests, and report Pearson's r"""
    lx, ly = f"log_{x}", f"log_{y}"
    _df = (df
             .replace([np.inf, -np.inf], np.nan)
             .dropna()
             .assign(**{lx: lambda df: df[x].apply(np.log10), ly: lambda df: df[y].apply(np.log10)}))

    res = linregress(_df[ly], _df[lx])

    f, ax = plt.subplots(figsize=(6,6))
    sns.scatterplot(data=_df, x=lx, y=ly, ax=ax);
    sns.lineplot(x=np.array([0, min_xy]), y=res.intercept + res.slope*np.array([0, min_xy]), color='r', linewidth=0.5, ax=ax)
    ax.set_xlim(min_xy, 0);
    ax.set_ylim(min_xy, 0);
    ax.axline((min_xy, min_xy), (0, 0), linewidth=0.5, color='g');

    return res.rvalue

t-test with normal distributions

This is the simplest case: comparing two equal variance, high sample size, normal distributions. Here we see an almost perfect correspondence between the results of the t-test and permutation test. This is good, since it shows the permutation test is not reporting biased (e.g., too low) p-values.

sample_size = 30

rows = []
for diff in tqdm(np.tile(np.arange(0, 0.4, 0.02), 1)):
    dist1 = norm(0, 1).rvs(sample_size)
    dist2 = norm(diff, 1).rvs(sample_size)

    tt = ttest_ind(dist1, dist2, alternative="two-sided").pvalue
    pt = ptest_ind(dist1, dist2, alternative="two-sided").pvalue

    rows.append((tt, pt))

df_rows = pd.DataFrame(rows, columns=["tt", "pt"])
print(f't-test vs permutation r: {plot_corr(df_rows, "tt", "pt", -3):.3f}')
t-test vs permutation r: 1.000

[figure: log p-value correlation, t-test vs permutation test]

t-test with normal distributions, but low sample size

What happens to the results of a t-test when the number of samples is low? Here, the low sample size results in some noise that decreases the correlation, but it still seems to work very similarly to the permutation test, even with just 3 samples (despite the regression line below).

sample_size = 3

rows = []
for diff in tqdm(np.tile(np.arange(0, 0.6, 0.03), 5)):
    dist1 = norm(0, 1).rvs(sample_size)
    dist2 = norm(diff, 1).rvs(sample_size)

    tt = ttest_ind(dist1, dist2, alternative="two-sided").pvalue
    pt = ptest_ind(dist1, dist2, alternative="two-sided").pvalue

    rows.append((tt, pt))

df_rows = pd.DataFrame(rows, columns=["tt", "pt"])
print(f't-test vs permutation r: {plot_corr(df_rows, "tt", "pt", -2):.3f}')
/tmp/ipykernel_2790/1233266942.py:30: UserWarning: permutation test sample limit reached: <1/30000
  warn(f"permutation test sample limit reached: <1/{N}")


t-test vs permutation r: 0.760

[figure: log p-value correlation, t-test vs permutation test, n=3]

t-test with normal distributions, but unequal variances

scipy's ttest_ind has an option (equal_var=False) to use Welch's t-test instead of Student's t-test. Again, the results are almost identical between Welch's t-test and the permutation test.

sample_size = 30

rows = []
for diff in tqdm(np.tile(np.arange(0, 1.5, 0.075), 5)):
    dist1 = norm(0, 1).rvs(sample_size)
    dist2 = norm(diff, 2).rvs(sample_size)

    tt = ttest_ind(dist1, dist2, alternative='two-sided', equal_var=False).pvalue
    pt = ptest_ind(dist1, dist2, alternative="two-sided").pvalue

    rows.append((tt, pt))

df_rows = pd.DataFrame(rows, columns=["tt", "pt"])
print(f't-test vs permutation r: {plot_corr(df_rows, "tt", "pt", -3):.3f}')
/tmp/ipykernel_2790/1233266942.py:30: UserWarning: permutation test sample limit reached: <1/30000
  warn(f"permutation test sample limit reached: <1/{N}")


t-test vs permutation r: 0.967

[figure: log p-value correlation, Welch's t-test vs permutation test]

t-test with non-normal distributions

In this case, the Mann-Whitney U test is usually recommended instead of the t-test. The Mann-Whitney U test "is a nonparametric test of the null hypothesis that, for randomly selected values X and Y from two populations, the probability of X being greater than Y is equal to the probability of Y being greater than X." However, two distributions can have equal medians but different shapes, and Mann-Whitney may still return a significant p-value.

Another option is to directly test for a difference in medians, with Mood's median test.

I'm not sure why the permutation test and median test results don't match more closely here, though the median test only produces a few possible p-values. Still, the results are quite close, and appear unbiased. The correlation between the permutation test and Mann-Whitney U test is also very high.

sample_size = 30

rows = []
for diff in tqdm(np.tile(np.arange(0, 0.5, 0.025), 5)):
    dist1 = laplace(0, 1).rvs(sample_size)
    dist2 = laplace(diff, 1).rvs(sample_size)

    mt = median_test(dist1, dist2, ties="ignore")[1] # ugh, inconsistent scipy syntax
    ut = mannwhitneyu(dist1, dist2, alternative="two-sided")[1] # ugh, inconsistent scipy syntax
    pt = ptest_ind(dist1, dist2, fn=np.median, alternative="two-sided").pvalue

    rows.append((mt, ut, pt))

df_rows = pd.DataFrame(rows, columns=["mt", "ut", "pt"])
print(f'median test vs permutation r:  {plot_corr(df_rows, "mt", "pt", -3):.3f}')
print(f'mann-whitney vs permutation r: {plot_corr(df_rows, "ut", "pt", -3):.3f}')
print(f'mann-whitney vs median test r: {plot_corr(df_rows, "ut", "mt", -3):.3f}')
median test vs permutation r:  0.863
mann-whitney vs permutation r: 0.920
mann-whitney vs median test r: 0.873

[figures: pairwise log p-value correlations: median test vs permutation, Mann-Whitney vs permutation, Mann-Whitney vs median test]

t-test with normal distributions, but one outlier sample

A common problem with real-world datasets is outlier datapoints that greatly skew the distribution and hence skew p-values. Removing or accounting for outliers increases the robustness of a test (i.e., how insensitive the result is to an erroneous reading).

It's interesting that the t-test is reasonably well correlated with the other two tests, but way off the diagonal.

sample_size = 30

rows = []
for diff in tqdm(np.tile(np.arange(0, 0.5, 0.025), 5)):
    dist1 = np.concatenate([norm(0, 1).rvs(sample_size), np.array([0])])
    dist2 = np.concatenate([norm(diff, 1).rvs(sample_size), np.array([100])])

    mt = mannwhitneyu(dist1, dist2, alternative='two-sided').pvalue
    tt = ttest_ind(dist1, dist2, alternative="two-sided").pvalue
    pt = ptest_ind(dist1, dist2, fn=np.median, alternative="two-sided").pvalue

    rows.append((mt, tt, pt))

df_rows = pd.DataFrame(rows, columns=["mt", "tt", "pt"])

print(f'mann-whitney vs permutation r: {plot_corr(df_rows, "mt", "pt", -3):.3f}')
print(f't-test vs mann-whitney r:      {plot_corr(df_rows, "tt", "mt", -3):.3f}')
print(f't-test vs permutation r:       {plot_corr(df_rows, "tt", "pt", -3):.3f}')
mann-whitney vs permutation r: 0.923
t-test vs mann-whitney r:      0.926
t-test vs permutation r:       0.835

[figures: pairwise log p-value correlations: Mann-Whitney vs permutation, t-test vs Mann-Whitney, t-test vs permutation]

Conclusion

In general, the permutation test gives you the same answer you would get if you chose the "right" test for your task, but you have to exert less effort figuring out what the right test is.

With a permutation test, there is also a very direct connection between the question you want answered and the calculation performed. With the Mann-Whitney U test, by contrast, it's confusing what is actually being tested, beyond it being "the non-parametric alternative to a t-test".

The permutation test is also robust in a way that serves most use-cases. Outliers are handled conservatively, and there are few or no assumptions to violate.

Disadvantages

There are some disadvantages too, though I think they are relatively minor.

  • Regular t-tests do pretty well in my tests above, even with only 3 samples! The only major problem is with outliers, which can imply extremely wide distributions if taken at face-value.
  • Permutation tests can be slow, so if you are doing thousands of tests, or you need precise, very low p-values, they will likely not be suitable (though see the vectorized sketch after this list).
  • You have to know precisely what you are testing. For example, "are the medians of these two distributions identical?". This is mostly an advantage in that it's more explicit.
  • You get more statistical power if you use the test that includes the correct assumptions. This seems to be a very minor effect in my tests above.
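
On the speed point above: the loop in ptest_ind can be vectorized in numpy by generating every shuffle up front (argsort of a random matrix). A sketch, without the warning handling:

def ptest_ind_fast(dist1, dist2, fn=np.mean, N=30000):
    """Vectorized two-sided permutation test (sketch; no warning handling)"""
    joined = np.concatenate([dist1, dist2])
    test = fn(dist1) - fn(dist2)
    # each row of idx is an independent shuffle of the pooled data
    idx = np.argsort(np.random.rand(N, len(joined)), axis=1)
    perms = joined[idx]
    ndist1, ndist2 = perms[:, :len(dist1)], perms[:, len(dist1):]
    stat = np.mean(test > fn(ndist1, axis=1) - fn(ndist2, axis=1))
    return 2 * min(stat, 1 - stat)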

Overall, you can't go too far wrong with permutation tests since they are so simple. If nothing else, they are a good way to sanity check results from other tests with more assumptions.


Oxford Nanopore (ONT) sells an amazing, inexpensive sequencer called the MinION. It's an unusual device in that the sequencing "flowcells" use protein pores. Unlike a silicon chip, the pores are sensitive to their environment (especially heat) and can get damaged or degraded and stop working.

When you receive a flowcell from ONT, only a fraction of the possible 2048 pores are active, perhaps 800–1500. Because of how the flowcell works, you start a run using zero or one pores from each of the 512 "channels" (each of which contains 4 pores).

As the pores interact with the DNA and other stuff in your solution, they can get gummed up and stop working. It's not unusual to start a run with 400 pores and end a few hours later with half that many still active. It's also not unusual to put a flowcell in the fridge and find it has 20% fewer pores when you take it out. (Conversely, pores can come back to life after a bit of a rest.)

All told, this means it's quite difficult to tell how much sequence you can expect from a run. If you want to sequence a plasmid at 100X, and you are starting with 400 pores, will you get enough data in two hours? That's the kind of question we want an answer to.

import pymc3 as pm
import numpy as np
import scipy as sp
import theano.tensor as T

import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

from functools import partial
from IPython.display import display, HTML, Image

For testing purposes — and because I don't have nearly enough real data — I will simulate some data.

In this toy example, DNA of length 1000+/-50 bases is sequenced for 1000, 20000 or 40000 seconds at 250bp/s. After 100 reads, a pore gets blocked and can no longer sequence. The device I am simulating has only 3 pores (where a real MinION would have 512).

datatypes = [
    ('total_num_reads', np.int),        ('total_dna_read', np.int),
    ('num_minutes', np.float),          ('num_active_pores_start', np.int),
    ('num_active_pores_end', np.int),   ('mean_input_dna_len_estimate', np.float)]

MAX_PORES = 3
AXIS_RUNS, AXIS_PORES = 0, 1

data = np.array([
  ( MAX_PORES*9,   MAX_PORES*  9*1000*.95,    1000.0/60,  MAX_PORES,  MAX_PORES,  1000.0),
  ( MAX_PORES*10,  MAX_PORES* 10*1000*1.05,   1000.0/60,  MAX_PORES,  MAX_PORES,  1000.0),
  ( MAX_PORES*10,  MAX_PORES* 10*1000*.95,    1000.0/60,  MAX_PORES,  MAX_PORES,  1000.0),
  ( MAX_PORES*11,  MAX_PORES* 11*1000*1.05,   1000.0/60,  MAX_PORES,  MAX_PORES,  1000.0),
  ( MAX_PORES*100, MAX_PORES*100*1000*.95,   20000.0/60,  MAX_PORES,          0,  1000.0),
  ( MAX_PORES*100, MAX_PORES*100*1000*1.05,  20000.0/60,  MAX_PORES,          0,  1000.0),
  ( MAX_PORES*100, MAX_PORES*100*1000*.95,   40000.0/60,  MAX_PORES,          0,  1000.0),
  ( MAX_PORES*100, MAX_PORES*100*1000*1.05,  40000.0/60,  MAX_PORES,          0,  1000.0),
], dtype=datatypes)

MinION Simulator

To help figure out how long to leave our sequencer running, I made a MinION simulator in a jupyter notebook. I then turned the notebook into an app using dappled.io, an awesome new service that can turn jupyter notebooks into apps running on the cloud (including AWS lambda). Here is the dappled nanopore simulator app / notebook.

The model is simple: a pore reads DNA until it's completely read, then with low probability it gets blocked and can no longer sequence. I've been fiddling with the parameters to give a reasonable result for our runs, but we don't have too much data yet, so it's not terribly accurate. (The model itself may also be inaccurate, since how the pore gets blocked is not really discussed by ONT.)

Nevertheless, the simulator is a useful tool for ballparking how much sequence we should expect from a run.

[figure: nanopore simulation]

Hierarchical Bayesian Model

Here's where things get more complicated...

In theory, given some data from real MinION runs, we should be able to learn the parameters for a model that would enable us to predict how much data we would get from a new run. Like many problems, this is a good fit for Bayesian analysis, since we have data and we want to learn the most appropriate model given the data.

For each run, I need to know:

  • what length DNA I think I started with
  • how long the run was in minutes
  • how many reads I got
  • how much sequence I got
  • the number of active pores at the start and at the end of a run

I'll use PyMC3 for this problem. First, I need to specify the model.

Input DNA

The input DNA depends a lot on its preparation. I believe the distribution of input DNA lengths could be:

  • exponential if it is genomic DNA breaking randomly
  • normal with a small variance if it is from a plasmid
  • normal but sharply truncated if it is cut out of a gel

The length of input DNA is different to the length of reads produced by the sequencer, which is affected by breakage, capture biases, secondary structure, etc. The relationship between input DNA length and read length could be learned. We could get arbitrarily complex here: for example, given enough data, we could include a mixture model for different DNA types (genomic vs plasmid).

A simple distribution with small variance but fat tails seems reasonable here. I don't have much idea what the standard deviation should be, so I'll make it a fraction of the mean.

with pm.Model() as model:
    mean_input_dna_len = pm.StudentT('mean_input_dna_len', nu=3, mu=data['mean_input_dna_len_estimate'],
                                     sd=data['mean_input_dna_len_estimate']/5, shape=data.shape[0])

This is the first PyMC3 code, so here are some notes on what's going on:

  • the context must always specify the pymc3 model
  • Absent better ideas, I'll generally be using a T distribution instead of a normal, as MacKay recommends. This distribution should be truncated at zero, but I can't do this for multidimensional data because of what I think is a bug in pymc3.
  • I am not modeling the input_dna_len here, just the mean of that distribution. As I mention above, the distribution of DNA lengths could be modeled several ways, but I currently only need the mean in my model.

The shape parameter

In this experiment, we will encounter three different kinds of distribution shape (illustrated in code after this list):

  • shape=1: this is a scalar value. For example, we might think the speed of the nanopore machine is the same for all runs.
  • shape=data.shape[0]: data.shape[0] is the number of runs. For example, the length of DNA in solution varies from run to run.
  • shape=(data.shape[0],MAX_PORES): we estimate a value for every pore in every run. For example, how many reads until a pore gets blocked? This varies by run, but needs to be modeled per pore too.
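
As a toy illustration of the three shapes (not part of the real model):

with pm.Model() as shape_demo:
    read_speed = pm.Normal('read_speed', mu=250, sd=10)          # shape=1 (scalar)
    dna_len = pm.Normal('dna_len', mu=1000, sd=50,
                        shape=data.shape[0])                     # one value per run
    reads_per_pore = pm.Normal('reads_per_pore', mu=100, sd=10,
                               shape=(data.shape[0], MAX_PORES)) # one value per pore per run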

R9 Read Speed

I know that the R9 pore is supposed to read at 250 bases per second. I believe the mean could be a little bit more or less than that, and I believe that all flowcells and devices should be about the same speed, therefore all runs will sample from the same distribution of mean_read_speed.

Because this is a scalar, pymc3 lets me set a lower bound of 0 bases per second. (Interestingly, unlike DNA length, the speed could technically be negative, since the voltage across the pore can be positive or negative, as exploited by Read Until...)

Truncated0T1D = pm.Bound(pm.StudentT, lower=0)
with pm.Model() as model:
    mean_read_speed = Truncated0T1D('mean_read_speed', nu=3, mu=250, sd=10, shape=1)  # one speed shared across all runs

Capturing DNA

DNA is flopping around in solution, randomly entering pores now and then. How long will a pore have to wait for DNA? I have a very vague idea that it should take around a minute but this is basically an unknown that the sampler should figure out.

Note again that I am using the mean time only, not the distribution of times. The actual distribution of times to capture DNA would likely be distributed by an exponential distribution (waiting time for a Poisson process).

Hierarchical model

This is the first time I am using a hierarchical/multilevel model. For more on what this means, see this example from pymc3 or Andrew Gelman's books.

There are three options for modeling mean_time_to_capture_dna:

  • it's the same for each run (e.g., everybody is using the same recommended DNA concentration)
  • it's independent for each run
  • each run has a different mean, but the means are drawn from the same distribution (and probably should not be too different)

Image("hierarchical_pymc3.png")
[figure: hierarchical model diagram, from PyMC3 dev twiecki's blog: http://twiecki.github.io/blog/2014/03/17/bayesian-glms-3/]

with pm.Model() as model:
    prior_mean_time_to_capture_dna = Truncated0T1D('prior_mean_time_to_capture_dna', nu=3, mu=60, sd=30)
    mean_time_to_capture_dna = pm.StudentT('mean_time_to_capture_dna', nu=3, mu=prior_mean_time_to_capture_dna,
                                           sd=prior_mean_time_to_capture_dna/10, shape=data.shape)

Reading DNA

I can use pymc3's Deterministic type to calculate how long a pore spends reading a chunk of DNA, on average. That's just a division, but note it uses theano's true_div function instead of a regular division. This is because neither value is a number; they are random variables. Theano will calculate this as it's being sampled. (A simple "a/b" also works, but I like to keep in mind what's being done by theano if possible.)

with pm.Model() as model:
    mean_time_to_read_dna = pm.Deterministic('mean_time_to_read_dna', T.true_div(mean_input_dna_len, mean_read_speed))

Then, how many reads can each pore do in this run? I have to be a bit careful to specify that I mean the number of reads possible per pore.

with pm.Model() as model:
    num_seconds_run = data['num_minutes'] * 60  # the run length in seconds, from the data
    num_reads_possible_in_time_per_pore = pm.Deterministic('num_reads_possible_in_time_per_pore',
        T.true_div(num_seconds_run, T.add(mean_time_to_capture_dna, mean_time_to_read_dna)))

Blocking Pores

In my model, after each read ends, a pore can get blocked, and once it's blocked it does not become unblocked. I believe that about one in a hundred times, a pore will be blocked after it finishes sequencing DNA. If it were more than that, sequencing would be difficult, but it could be much less.

We can think of this Beta(1, 99) distribution as a prior over the coin-flip probability of heads (a pore getting blocked), with mean 1/100.

with pm.Model() as model:
    prior_p_pore_blocked_a, prior_p_pore_blocked_b, = 1, 99
    p_pore_blocked = pm.Beta('p_pore_blocked', alpha=prior_p_pore_blocked_a, beta=prior_p_pore_blocked_b)

Then, per pore, the number of reads before blockage is distributed as a geometric distribution (the distribution of the number of tails you will flip before you flip a head). I have to approximate this with an exponential distribution — the continuous version of a geometric — because ADVI (see below) requires continuous distributions. I don't think it makes an appreciable difference.
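
As a quick numeric check that the approximation is reasonable (just numpy sampling, not part of the model):

p = 1/100
geom = np.random.geometric(p, size=100000)      # reads until blockage (discrete)
expo = np.random.exponential(1/p, size=100000)  # continuous approximation
print(geom.mean(), expo.mean())                 # both ~100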

Here the shape parameter is the number of runs x number of pores since here I need a value for every pore.

with pm.Model() as model:
    num_reads_before_blockage = pm.Exponential('num_reads_before_blockage', lam=p_pore_blocked,
        shape=(data.shape[0], MAX_PORES))

Constraints

Here things get a little more complicated. It is not possible to have two random variables x and y, set x + y = data, and sample values of x and y.

testdata = np.random.normal(loc=10,size=100) + np.random.normal(loc=1,size=100)
with pm.Model() as testmodel:
    x = pm.Normal("x", mu=0, sd=10)
    y = pm.Normal("y", mu=0, sd=1)
    # This does not work
    #z = pm.Deterministic(x + y, observed=testdata)
    # This does work
    z = pm.Normal('x+y', mu=x+y, sd=1, observed=testdata)
    trace = pm.sample(1000, pm.Metropolis())

I was surprised by this at first but it makes sense. This process is more like a regression where you minimize error than a perfectly fixed constraint. The smaller the standard deviation, the more you penalize deviations from the constraint. However, you need some slack so that it's always possible to estimate a logp for the data given the model. If the standard deviation goes too low, you end up with numerical problems (e.g., nans). Unfortunately, I do not know of a reasonable way to set this value.

I encode constraints in my model in a similar way: First I define a Laplace distribution in theano to act as my likelihood. Why Laplace instead of Normal? It returned nans less often...

Using your own DensityDist allows you to use any function as a likelihood. I use DensityDist just to have more control over the likelihood, and to differentiate from a true Normal/Laplacian distribution. This can be finicky, so I've had to spend a bunch of time tweaking this.

def T_Laplace(val, obs, b=1):
    return T.log(T.mul(1/(2*b), T.exp(T.neg(T.true_div(T.abs_(T.sub(val, obs)), b)))))
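
To convince myself this is right: it should match scipy's Laplace logpdf. A quick check:

from scipy.stats import laplace

# both should print log(0.5 * exp(-2)) ≈ -2.693
print(T_Laplace(T.constant(5.0), 3.0, b=1).eval())
print(laplace(loc=3.0, scale=1.0).logpdf(5.0))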

Constraint #1

The total DNA I have read must be the product of mean_input_dna_len and total_num_reads. mean_input_dna_len can be different to my estimate.

This is a bit redundant since I know total_num_reads and total_dna_read exactly. In a slightly more complex model we would have different mean_read_length and mean_input_dna_len. Here it amounts to just calculating mean_input_dna_len as total_dna_read / total_num_reads.

def apply_mul_constraint(f1, f2, observed):
    b = 1 # 1 fails with NUTS (logp=-inf), 10/100 fails with NUTS too (not positive definite)
    return T_Laplace(T.mul(f1,f2), observed, b)

with pm.Model() as model:
    total_dna_read_constraint = partial(apply_mul_constraint, mean_input_dna_len, data['total_num_reads'])
    constrain_total_dna_read = pm.DensityDist('constrain_total_dna_read', total_dna_read_constraint, observed=data['total_dna_read'])

Constraint #2

The number of reads per pore is whichever is lower:

  • the number of reads the pore could manage in the length of the run
  • the number of reads it manages before getting blocked

To calculate this, I compare num_reads_before_blockage and num_reads_possible_in_time_per_pore.

First, I use T.tile to replicate num_reads_possible_in_time_per_pore for all pores. That turns it from an array of length #runs to a matrix of shape #runs x #pores (this should use broadcasting but I wasn't sure it was working properly...)

Then I compare the two arrays elementwise with T.lt: if the smaller value is the number of reads before blockage (T.switch(T.lt(f1,f2),0,1)), then that pore got blocked; otherwise it is still active. I sum these 0/1s over all pores (axis=AXIS_PORES) to get a count of the number of active pores for each run.

The value of this count is constrained to be equal to data['num_active_pores_end'].

def apply_count_constraint(num_reads_before_blockage, num_reads_possible_in_time_broadcast, observed):
    b = 1
    num_active_pores = T.sum(T.switch(T.lt(num_reads_before_blockage, num_reads_possible_in_time_broadcast),0,1),axis=AXIS_PORES)
    return T_Laplace(num_active_pores, observed, b)

with pm.Model() as model:
    num_reads_possible_in_time_broadcast = T.tile(num_reads_possible_in_time_per_pore, (MAX_PORES,1)).T
    num_active_pores_end_constraint = partial(apply_count_constraint, num_reads_before_blockage, num_reads_possible_in_time_broadcast)
    constrain_num_active_pores_end = pm.DensityDist('constrain_num_active_pores_end', num_active_pores_end_constraint, observed=data['num_active_pores_end'])

Constraint #3

Using the same matrix num_reads_possible_in_time_broadcast this time I sum the total number of reads from each pore in a run. I simply sum the minimum value from each pore: either the number before blockage occurs or the total number possible.

def apply_minsum_constraint(num_reads_before_blockage, num_reads_possible_in_time_broadcast, observed):
    b = 1 # b=1 fails with ADVI and >100 pores (nan)
    min_reads_per_run = T.sum(T.minimum(num_reads_before_blockage, num_reads_possible_in_time_broadcast),axis=AXIS_PORES)
    return T_Laplace(min_reads_per_run, observed, b)

with pm.Model() as model:
    total_num_reads_constraint = partial(apply_minsum_constraint, num_reads_before_blockage, num_reads_possible_in_time_broadcast)
    constrain_total_num_reads = pm.DensityDist('constrain_total_num_reads', total_num_reads_constraint, observed=data['total_num_reads'])

Sampling

There are three principal ways to sample in PyMC3:

  • Metropolis-Hastings: the simplest sampler, generally works fine on simpler problems
  • NUTS: more efficient than Metropolis, but I've found it to be slow and tricky to get to work
  • ADVI: this is "variational inference", the new, fast way to estimate a posterior. This seems to work great, though as I mention above, it needs continuous distributions only.

I used ADVI most of the time in this project, since NUTS was too slow and had more numerical issues than ADVI, and the ADVI results seemed more sensible than Metropolis.

with pm.Model() as model:
    v_params = pm.variational.advi(n=500000, random_seed=1)
    trace = pm.variational.sample_vp(v_params, draws=5000)

Results

In my toy model: there are 8 runs in total; each device has 3 pores; it takes about 4 seconds to sequence a DNA molecule (~1000 bases / 250 bases per second) and 96 seconds to capture a DNA molecule (so 100 seconds in total to read one DNA molecule).

In my 8 runs, I expect that [10, 10, 10, 10, 200, 200, 400, 400] reads are possible. However, I expect that runs 4, 5, 6, 7 will have a blockage at 100 reads.

Finally, I expect there to be 3 pores remaining in the first four runs, and 0 pores remaining in the last four.
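
These expectations are just arithmetic on the simulation parameters:

seconds_per_read = 1000/250 + 96  # ~4s sequencing + ~96s capture = ~100s per read
run_seconds = np.array([1000]*4 + [20000]*2 + [40000]*2)
print(run_seconds / seconds_per_read)  # -> [10 10 10 10 200 200 400 400] reads per pore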

ADVI Results

mean_input_dna_len__0                    950.0 (+/- 0.1) # All the input_dna_lens are ok.
mean_input_dna_len__1                   1050.0 (+/- 0.1) # This is a simple calculation since I know
mean_input_dna_len__2                    950.0 (+/- 0.1) # how many reads there are and how long the reads are
mean_input_dna_len__3                   1050.0 (+/- 0.1)
mean_input_dna_len__4                    950.0 (+/- 0.1)
mean_input_dna_len__5                   1049.9 (+/- 0.1)
mean_input_dna_len__6                    950.0 (+/- 0.1)
mean_input_dna_len__7                   1050.0 (+/- 0.1)
num_reads_possible_in_time_per_pore__0    10.1 (+/- 1) # The number of reads possible is also a simple calculation
num_reads_possible_in_time_per_pore__1    10.3 (+/- 1) # It depends on the speed of the sequencer and the length of DNA
num_reads_possible_in_time_per_pore__2    10.2 (+/- 1)
num_reads_possible_in_time_per_pore__3    10.5 (+/- 1)
num_reads_possible_in_time_per_pore__4   210.9 (+/- 36)
num_reads_possible_in_time_per_pore__5   207.4 (+/- 35)
num_reads_possible_in_time_per_pore__6   419.8 (+/- 67)
num_reads_possible_in_time_per_pore__7   413.2 (+/- 66)
num_reads_before_blockage_per_run__0     501.3 (+/- 557) # The total number of reads before blockage per run
num_reads_before_blockage_per_run__1     509.8 (+/- 543) # is highly variable when there is no blockage (runs 0-3).
num_reads_before_blockage_per_run__2     501.9 (+/- 512)
num_reads_before_blockage_per_run__3     502.4 (+/- 591)
num_reads_before_blockage_per_run__4     297.2 (+/- 39)  # When there is blockage (runs 4-7), then it's much less variable
num_reads_before_blockage_per_run__5     298.7 (+/- 38)
num_reads_before_blockage_per_run__6     299.6 (+/- 38)
num_reads_before_blockage_per_run__7     301.2 (+/- 38)
num_active_pores_per_run__0                2.8 (+/- 0.4) # The number of active pores per run is estimated correctly
num_active_pores_per_run__1                2.8 (+/- 0.4) # as we expect for a value we've inputted
num_active_pores_per_run__2                2.8 (+/- 0.4)
num_active_pores_per_run__3                2.8 (+/- 0.3)
num_active_pores_per_run__4                0.0 (+/- 0.1)
num_active_pores_per_run__5                0.0 (+/- 0.1)
num_active_pores_per_run__6                0.0 (+/- 0.0)
num_active_pores_per_run__7                0.0 (+/- 0.0)

ADVI Plots

We can plot the values above too. No real surprises here, though num_active_pores_per_run doesn't show the full distribution for some reason.

with pm.Model() as model:
    pm.traceplot(trace[-1000:],
                 figsize=(12,len(trace.varnames)*1.5),
                 lines={k: v['mean'] for k, v in pm.df_summary(trace[-1000:]).iterrows()})
[figure: ADVI traceplots]

Conclusions

It's a pretty complex model. Is it actually useful? I think it's almost useful, but in its current form it suffers from a lack of robustness.

The results I get are sensitive to changes in several independent areas:

  • the sampler/inference method: for example, Metropolis returns different answers to ADVI
  • the constraint parameters: changing the "b" parameter can lead to numerical problems
  • the priors I selected: changes in my priors that I do not perceive as important can lead to different results (more data would help here)

A big problem with my current model is that scaling it up to 512 pores seems to be difficult numerically. Metropolis just fails, I think because it can't sample efficiently; NUTS fails for reasons I don't understand (it throws an error about a positive definite matrix); ADVI works best, but starts to get nans as the number of pores grows, unless I loosen the constraints. Perhaps I need to begin with loose constraints and tighten them over time.

Finally, the model currently expects the same number of pores to be available for every run. I haven't addressed that yet, though I think it should be pretty straightforward. There may be a nice theano trick I am overlooking.

More Conclusions

Without ADVI, I think I would have failed to get an answer here. I'm pretty ignorant of how it works, but it seems like a significant advance and I'll definitely use it again. In fact, it would be interesting to try applying Edward, a variational inference toolkit with PyMC3 support, to this problem (this would mean swapping theano for tensorflow).

The process of modeling your data with a PyMC3/Stan/Edward model forces you to think a lot about what is really going on with your data and model (and even after spending quite a bit of time on it, my model still needs quite a bit of work to be more than a toy...) When your model has computational problems, as I had several times, it often means the model wasn't described correctly (Gelman's folk theorem).

Although it is still a difficult, technical process, I'm excited about this method of doing science. It seems like the right way to tackle a lot of problems. Maybe with advances like ADVI and theano/tensorflow; groups like the Blei and Gelman labs developing modeling tools like PyMC3, Stan and Edward; and experts like Kruschke and the Stan team creating robust models for us to copy, it will become more common.


This is a short list of interesting projects, companies and technologies I've been following.

Biotech

Transcriptic

Transcriptic is still the only programmable cloud lab currently open for business. My previous blogpost has a lot more on this company. They're continuing to add capabilities, including magnetic bead operations, which enables a lot of new kinds of experiments.

Vium

After several years in stealth, Vium (formerly Mousera) just launched this week.

Essentially, Vium have "smart home" mouse cages that stream data directly to you. You can control the mouse's environment remotely and get video, humidity and temperature readings in real-time. Perhaps most importantly, you can start to analyze the data almost immediately, whereas with a regular CRO, you might have to wait months for the trial to finish. That just speeds everything up.

The advantages of this model are potentially enormous. I'm especially excited about the potential for adaptive trials: imagine administering 10 candidate drugs to 50 mice on day one, and discontinuing ineffectual drugs when your Bayes Factor drops below some threshold. The speed and cost advantage could be great.

IndieBio

IndieBio is an accelerator/VC fund that funds a lot of different kinds of lean biotech. A lot of it is hard-to-categorize stuff that neither typical biotech VCs nor tech VCs fund (probably most have no idea what you are talking about).

IndieBio especially focuses on sectors like food tech, agriculture and cosmetics (or perhaps more broadly, biotech that's not a drug or a diagnostic). These industries are enormous though, and the opportunities are huge.

Their demo day videos (2015, 2016) have an extremely varied group of pitches, from 3D printed rhino horns to neuron-based computers, and are definitely worth checking out.

Oxford Nanopore

There's really too much going on with Oxford Nanopore to effectively summarize here. I might have to write a whole post or something.

Oxford's CTO, Clive Brown, gave a talk at London Calling that describes their roadmap, and it includes incredible things like sequencers that attach to your iPhone, portable all-in-one sample prep, and DNA synthesis.

The next year or two is going to be very interesting for Oxford.

[image: the MinION is very portable]

Data Analysis

Continuum Analytics

Continuum Analytics is the new company from Travis Oliphant (of numpy fame), and it produces some of the most useful software for scientific Python.

The products I use actively are:

  • anaconda Python distribution: this saves so many headaches it's unbelievable, and it has MKL support now
  • conda: this has replaced virtualenv for me entirely, and pip most of the time
  • numba: this comes up less often, but sometimes replaces Cython or numpy, and can even automatically use the GPU for you via CUDA (see the sketch after this list)
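
As an example of the kind of thing numba is good at, a sketch with a made-up function:

from numba import jit
import numpy as np

@jit(nopython=True)
def sum_pairwise_dists(x):
    # plain Python loops, compiled to machine code by numba
    total = 0.0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            total += abs(x[i] - x[j])
    return total

sum_pairwise_dists(np.random.rand(1000))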

Continuum also have plotting libraries (bokeh), big data libraries (dask) and more. I haven't used these much, but I'd presume they are all high quality.

Tensorflow

Tensorflow is a library that is hard to explain briefly: it takes numerical expressions that you define in Python and does the matrix math for you, including things like calculating gradients. Because of its design, it can distribute the work across computers, and across CPUs and GPUs.

Theano and Tensorflow are very similar: you can even use Tensorfuse as a common interface to both. Theano was first, but Tensorflow seems to be winning for the moment. Tensorflow has some advantages like integration with mobile devices (I don't think that really exists yet, but it's coming).

Although Theano and Tensorflow are best known as deep learning libraries, they do a lot more than that. One nice feature of building on Theano/Tensorflow is that you can write in pure Python, and let the library figure out the best way to compute everything.
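
For example, here is a tiny theano snippet (Tensorflow is very similar in spirit): you define an expression in Python, and the library derives the gradient and compiles a function for you:

import theano
import theano.tensor as T

x = T.dscalar('x')
y = x ** 2 + 3 * x
dy_dx = theano.grad(y, x)            # symbolic gradient: 2x + 3
f = theano.function([x], [y, dy_dx])
print(f(2.0))                        # [10.0, 7.0]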

Keras

Deep learning requires a lot of matrix multiplications and gradient calculations. Hence there are lots of deep learning libraries built on Theano/Tensorflow, and new ones pop up all the time. My favorite is keras, because it's in Python, high-level, well documented, and works with both Theano and Tensorflow.
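
For a flavor of how high-level keras is, a minimal (hypothetical) binary classifier:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(32, input_dim=100, activation='relu'),  # hidden layer
    Dense(1, activation='sigmoid'),               # binary output
])
model.compile(optimizer='adam', loss='binary_crossentropy')
# model.fit(X, y) would train it, given a 100-feature X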

PyMC3

PyMC3 is a "probabilistic programming" library similar to Stan (an MCMC workhorse from Andrew Gelman's lab), but in Python. Frankly, it's not nearly as polished or popular as Stan, but because it's built on Theano and scipy, the code is very short and readable Python, which is a big plus for me. It's extensible in ways Stan just can't be.

Like Stan, PyMC3 now supports faster-than-MCMC Variational Inference (ADVI). This blogpost, from @twiecki, one of PyMC3's core developers, shows the power of building on Theano. In just a few dozen lines of Python, he builds a Bayesian neural net and solves it with ADVI. It's not super-practical, but a very interesting result.

[image: ADVI Bayesian neural net]

Edward

Edward is a very interesting project that I won't claim to fully understand. It's a probabilistic modeling library built on Tensorflow that somehow manages to include the ability to use models defined in PyMC3, Stan, or keras.

It's a pretty amazing project, and I like their explanation of Box's loop (due to George Box, after whom Edward is named) of modeling, reasoning, and criticism. It's also notable for how it shows the convergence of multiple related inference tools into simple, high-level code that sits on top of industrial-strength libraries like Tensorflow.

