
There are a lot of bioinformatics workflow languages and systems these days. Albert Vilella maintains a comprehensive spreadsheet and it's massive. A little confusingly, some of these are actual languages (CWL), some are engines for running workflows (Cromwell), and some are self-contained systems (reflow, nextflow, snakemake).

The point of these tools is to take a multi-part process — usually a combination of scripts and binaries — and run it many times, keeping a log of what succeeded and what failed, and managing the resources required.

Among the most popular (or at least ones I've heard of people using) are snakemake, toil, nextflow, luigi, bpipe, scipipe, and Cromwell/WDL.

For a comprehensive review, see this 2017 paper by Jeremy Leipzig. Here, I'm just going to compare an example script from the nextflow website to the equivalent in snakemake and reflow. These are the three workflow systems I am most familiar with, and I think the differences between them are pretty interesting.

Nextflow

Nextflow is probably the most popular workflow system right now. It is powerful, simple to get started with, and very well supported by Paolo Di Tommaso, with plenty of example scripts. It supports running pipelines locally, on a local cluster, or on the cloud. It can use containers (Docker or Singularity) for reproducibility. Nextflow uses AWS Batch as its main cloud backend (as apparently everyone does these days).

The basic script below, from the nextflow site, is pretty self-explanatory.

script

#!/usr/bin/env nextflow

params.in = "$baseDir/data/sample.fa"
sequences = file(params.in)

/*
 * split a fasta file in multiple files
 */
process splitSequences {
    input:
    file 'input.fa' from sequences

    output:
    file 'seq_*' into records

    """
    awk '/^>/{f="seq_"++d} {print > f}' < input.fa
    """
}

/*
 * Simply reverse the sequences
 */
process reverse {
    input:
    file x from records

    output:
    stdout result

    """
    cat $x | rev
    """
}

/*
 * print the channel content
 */
result.subscribe { println it }

output

N E X T F L O W  ~  version 19.04.1
Launching basic.rf [hungry_celsius] - revision: c3e3b933c7
[warm up] executor > local
executor >  local (1)
[ea/b4bcb4] process > splitSequences [  0%] 0 of 1
1FDSA>
ACCAAACACAGTA
2FDSA>
ACCAAACACAGTA
3FDSA>
ACCAAACACAGTA
executor >  local (2)
[ea/b4bcb4] process > splitSequences [100%] 1 of 1 
[ef/0377ba] process > reverse        [100%] 1 of 1 
Completed at: 19-May-2019 10:17:39
Duration    : 951ms
CPU hours   : (a few seconds)
Succeeded   : 2

The output is quite comprehensive and easy to understand.

Snakemake

Snakemake, developed by Johannes Köster (also the founder of the bioconda project!), is the oldest of the three workflow systems here. As the name implies, it's sort of like make, but in Python.

Snakemake existed before containers and cloud computing were common. It is filename-focused, which makes it simple to get started with, and pretty easy to debug. In contrast, nextflow and reflow try to abstract away the names of files, sometimes at the cost of simplicity.

Even though I have used snakemake quite a bit, I found it pretty difficult to port the nextflow script above, mainly because it involves splitting one file into multiple files. I think there are at least three ways to handle file splitting in snakemake: dynamic files (soon to be deprecated), the new checkpoints (I did not look into this much), and just using directories.

The directory option is the simplest to me, but the result is probably far from ideal snakemake. In previous snakemake pipelines I often relied on something similar: when one step of the pipeline is done, instead of relying on the appearance of a specific filename, I would just output a "sentinel" file, e.g., echo done > {output.sentinel}, and use that as the trigger for the next step.

Since Snakemake is based on Python, it is easy to extend and integrate with other scripts. To mirror the nextflow script, though, here I just use bash.

script

rule all:
    input: "rev"

rule split_fasta:
    input: "input.fa"
    output: directory("split")
    run: shell("""awk '/^>/{{f="{output}/seq_"++d}} {{print > f}}' < {input}""")

rule reverse_fasta:
    input: "split"
    output: directory("rev")
    run: shell("""for f in {input}/*; do cat $f |rev > {output}/rev_$(basename $f); done""")

output

Snakemake outputs a directory, rev, containing the files rev_seq_1, rev_seq_2, rev_seq_3.

Reflow

Reflow is a relatively new workflow language developed at Grail by @marius and colleagues. I've been using Reflow for a year or so, and it's the system I am most familiar with.

One big difference between reflow and the other systems is how constrained it is: simplifying a bit, every function takes (a) a Docker container, (b) a file or directory stored in S3, and (c) some bash; and outputs another file or directory stored in S3.

Everything gets run on AWS. You cannot run it on a regular cluster. You can run a reflow pipeline locally but it's mostly for debugging, and feels more like emulating the real AWS pipeline.

script

val files = make("$/files")
val dirs = make("$/dirs")

input := file("s3://booleanbiotech/input.fa")

func Split(input file) dir =
    exec(image := "ubuntu") (out dir) {"
        echo Split {{input}};
        awk '/^>/{f="{{out}}/seq_"++d} {print > f}' < {{input}}
    "}

func Reverse(input file) file =
    exec(image := "ubuntu") (out file) {"
        echo Reverse {{input}};
        cat {{input}} | rev > {{out}}
    "}

split_dir := Split(input)
rev_files := [Reverse(f) | f <- dirs.Files(split_dir)]

val Main = rev_files

output

[file(sha256=sha256:15a48aae8b19682a84dcf39f9260901e0b8331e8f9f974efd11a9a15c950320f, size=16),
 file(sha256=sha256:adad5843ce952755cd5b5b71fd92c7a67be3063d047350cf7fffdd6031ee5c2e, size=16),
 file(sha256=sha256:441ff0551203c350251abc5782698371116cca12a735fa2dbed66de4820b9201, size=16)]

There is logging too, but overall the output of reflow is pretty opaque!

To see the contents of the generated files, you have to use reflow cat 15a48aae8b19682a84dcf39f9260901e0b8331e8f9f974efd11a9a15c950320f. Generally, the last step in your pipeline should copy the outputs to S3, but debugging this can be tough.
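
For example, to inspect the first output above, or save it locally, the commands look something like this (the local filename is just a placeholder):

# print the contents to the terminal
reflow cat 15a48aae8b19682a84dcf39f9260901e0b8331e8f9f974efd11a9a15c950320f
# or save it to a local file
reflow cat 15a48aae8b19682a84dcf39f9260901e0b8331e8f9f974efd11a9a15c950320f > seq_1_rev.fa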

Comparing the Three

Snakemake has been great for local pipelines. Its "reports" — html output from stages of the pipeline — are especially nice. It also now supports containers (and conda), so it is far from stagnant. However, I haven't tried to run snakemake pipelines on the cloud, and I suspect that, since it is so file-focused, it might be less suited to this.

I only seriously experimented with nextflow before it had AWS Batch support, which made it unsuitable for my purposes: at that time, running pipelines on the cloud required quite a lot of setup. That issue now seems to be solved. Nextflow development also seems to be the most active of the three, and there are even periodic Nextflow meetings.

Reflow is the least popular, least flexible, and least documented of the three. Nevertheless, as a user who exclusively runs pipelines on the cloud, I still like the reflow model best. It gets closest to my ideal for a workflow system, which is basically just annotating code with the resources required to run it.

This is because to reflow, everything is a hash: containers, code, inputs, outputs. For example, if you have genome A (sha256(genome_sequence_bytes)=<hash1>), and you run adapter trimming function B (sha256(script_bytes)=<hash2>), using container C (sha256(docker_container_bytes)=<hash3>), then you "know" the output is the hash of sha256(<hash1>+<hash2>+<hash3>)=<hash4>. If the file s3://my-reflow-cache/<hash4> exists, even if it's from an old workflow, you don't need to recalculate! (The above is the gist of how it works, anyway. As usual with caches, this system can also result in tricky bugs...)
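
Here is a rough bash sketch of that idea (this is not how reflow actually implements it, and the input filenames are made up; the cache bucket is the one from above):

h_data=$(sha256sum genome.fa | cut -d' ' -f1)                      # hash1: the input data
h_code=$(sha256sum trim_adapters.sh | cut -d' ' -f1)               # hash2: the code
h_image=$(docker image inspect ubuntu:18.04 --format '{{.Id}}')    # hash3: the container
cache_key=$(printf '%s%s%s' "$h_data" "$h_code" "$h_image" | sha256sum | cut -d' ' -f1)   # hash4
# if the result is already in the cache, reuse it instead of recomputing
aws s3 ls "s3://my-reflow-cache/${cache_key}" || echo "cache miss: run the step"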

For some technical details on differences between nextflow and reflow, this Twitter thread may also be of interest, since it includes some discussion from the authors of the tools.

Conclusions

If I were starting today, I would probably choose nextflow as the safe option, mainly due to its new AWS Batch support. Snakemake still works great, though it may be showing its age a bit, while reflow is excellent if you are AWS-exclusive and can deal with its lack of documentation and support.

I think my ideal workflow language would have reflow-style decorators on Python functions (or a subset of python). I'll be interested to see if the reflow model inspires some new workflow systems like this.

Working on the Cloud

These days almost everything is on the cloud. However, probably the most common mode of working is still to develop locally on a laptop, then deploy to the cloud when necessary. Instead, I like to run everything remotely on a cloud instance.

Why?

  • All your files are together in one place.
  • You can back up your cloud instance very easily (especially on GCP), and even spawn clone machines as necessary with more CPU etc.
  • You can work with large datasets. Laptops usually max out at 16GB RAM, but on a cloud instance you can get 50GB+. You can also expand the disk size on your cloud instance as necessary.
  • You can do a lot of computational work without making your laptop fans explode.
  • Your laptop becomes more like a dumb terminal. When your laptop dies, or is being repaired, it's easy to continue work without interruption.

The big caveats here are that this requires an always-on cloud instance, which is relatively expensive, and that you cannot work without an internet connection. As someone who spends a lot of time in jupyter notebooks munging large dataframes, I find the trade-offs are worth it.

Here are some tools I use that help make working on the cloud easier.

Mosh

Mosh is the most important tool here for working on the cloud. Instead of sshing into a machine one or more times a day, my laptop is now continuously connected to a cloud instance, essentially until the instance needs to reboot (yearly?). Mosh also works better than regular ssh on weak connections, so it's handy for working on the train, etc. It makes working on a remote machine feel like working locally. If you use ssh a lot, try mosh instead.
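
Getting started looks something like this (the IP, key path, and username are placeholders; see the appendix script below for the full gcloud-derived version):

brew install mosh        # the cloud instance also needs mosh-server installed, e.g. apt install mosh
mosh --ssh="ssh -i $HOME/.ssh/google_compute_engine" brian@12.345.67.890 -- tmux a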

Tmux

Since I just have one mosh connection at a time, I need to have some tabs. Most people probably already use screen or tmux anyway. I have a basic tmux setup, with ten or so tabs, each with a two-letter name just to keep things a bit neater.

tmux resurrect is the only tmux add-on I use. It works ok: if your server needs to restart, tmux resurrect will at least remember the names of your tabs.
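
My setup amounts to roughly the following (the two-letter window names are arbitrary):

tmux new-session -d -s main      # one long-lived, detached session on the cloud instance
tmux rename-window -t main:0 nb  # e.g. nb for notebooks
tmux new-window -t main -n wf    # e.g. wf for workflows
tmux attach -t main              # reattach after a disconnect or reboot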

Autossh

Mosh works amazingly well for a primary ssh connection, but to run everything on the cloud I also need a few ssh tunnels. Mosh cannot do this, so instead I need to use autossh. Like mosh, autossh tries to keep a connection open indefinitely. It seems to be slightly less reliable and fiddlier to set up than mosh, but has been working great for me recently.

Here's the command I use, based on this article and others. It took a while to get working via trial and error, so there may well be better ways. The ssh part of this autossh command comes from running gcloud compute ssh --dry-run.

autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -f -t -i $HOME/.ssh/google_compute_engine -o CheckHostIP=no -o HostKeyAlias=compute.1234567890 -o IdentitiesOnly=yes -o StrictHostKeyChecking=yes -o UserKnownHostsFile=/Users/briann/.ssh/google_compute_known_hosts brian@12.345.67.890 -N -L 2288:localhost:8888 -L 2280:localhost:8880 -L 8385:localhost:8384 -L 2222:localhost:22 -L 8443:localhost:8443

The tunnels I set up:

  • 2288->8888 : to access jupyter running on my cloud instance (I keep 8888 for local jupyter)
  • 2280->8880 : to access a remote webserver (e.g., if I run python -m http.server on my cloud instance)
  • 8385->8384 : syncthing (see below)
  • 2222->22 : sshfs (see below)
  • 8443->8443 : coder (see below)

So to access jupyter, I just run jupyter notebook in a tmux tab on my cloud box, and go to http://localhost:2288.

To view a file on my cloud box, I run python -m http.server there and go to http://localhost:2280.
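
Concretely, the commands on the cloud box are just (with ports chosen to match the tunnels above):

jupyter notebook --no-browser --port 8888    # then open http://localhost:2288 on the laptop
python -m http.server 8880                   # then open http://localhost:2280 on the laptop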

Syncthing

Syncthing is a dropbox-like tool that syncs files across a group of computers. Unlike dropbox, the connections between machines are direct (i.e., there is no centralized server). It's pretty simple: you run syncthing on your laptop and on your cloud instance, they find each other and start syncing. Since 8384 is the default syncthing port, I can see syncthing's local and remote dashboards on https://localhost:8384 and https://localhost:8385 respectively. In my experience, syncthing works pretty well, but I recently stopped using it because I've found it unnecessary to have files synced to my laptop.
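
The setup really is minimal; roughly:

# run on both the laptop and the cloud instance; they discover each other and sync
syncthing
# laptop dashboard: localhost:8384; cloud-instance dashboard via the tunnel: localhost:8385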

Sshfs

sshfs is a tool that lets you mount a filesystem over ssh. Like syncthing, I also don't use sshfs much any more since it's pretty slow and can fail on occasion. It is handy if you want to browse PDFs or similar files stored on your cloud instance though.
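
Through the 2222 tunnel above, mounting looks something like this (the mount point is arbitrary; the full command I use is in the appendix script):

mkdir -p $HOME/gcp
sshfs -p 2222 -o reconnect,ServerAliveInterval=30 brian@localhost:. $HOME/gcp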

Coder

I recently started using Coder, which is Visual Studio Code (my preferred editor), but running in a browser. Amazingly, it's almost impossible to tell the difference between "native" VS Code (an Electron app) and the browser version, especially if it's running in full-screen mode.

It's very fast to get started. You run this on your cloud instance: docker run -t -p 127.0.0.1:8443:8443 -v "${PWD}:/root/project" codercom/code-server code-server --allow-http --no-auth then go to http://localhost:8443 and that's it!

Coder is new and has had some glitches and limitations for me. For example, I don't know how you are supposed to install extensions without also updating the Docker image, which is less than ideal, and the documentation is minimal. Still, the VS Code team seems to execute very quickly, so I am sticking with it for now. I think it will improve and stabilize soon.

Annoyances

One annoyance with having everything on the cloud is viewing files. X11 is the typical way to solve this problem, but I've never had much success with X11. Even at its best, it's ugly and slow. Most of my graphing, etc. happens in jupyter, so this is usually not a big issue.

However, for infrequent file viewing, this python code has come in handy.

def view(filename, port=8880):
    # serve a single file from the cloud box, e.g. view("plot.png"), then open
    # http://localhost:2280 on the laptop (port 8880 matches the 2280->8880 tunnel above)
    from pathlib import Path
    from flask import Flask, send_file
    app = Flask(__name__)
    def get_view_func(_filename):
        def fn():
            return send_file(str(Path(_filename).resolve()))
        return fn
    print('python -m webbrowser -t "http://localhost:2280"')
    app.add_url_rule(rule='/', view_func=get_view_func(filename))
    app.run("127.0.0.1", port, debug=False)

Appendix: GCP activation script

This is the bash script I use to set up the above tools from my mac for my GCP instance. People using GCP might find something useful in here.

_gcp_activate () {
    # example full command: _gcp_activate myuserid myuserid@mydomain.com my-instance my-gcp-project us-central1-c $HOME/gcp/

    clear
    printf "#\n#    [[ gcp_activate script ]]   \n#\n"
    printf "# mosh: on mac, 'brew install mosh'\n"
    printf "# autossh: on mac, 'brew install autossh'\n"
    printf "# sshfs: on mac, download osxfuse and sshfs from https://osxfuse.github.io/\n"
    printf "#     https://www.everythingcli.org/ssh-tunnelling-for-fun-and-profit-autossh/\n"
    printf "#     sshfs may need a new entry in $HOME/.ssh/known_hosts if logging in to a new host\n"
    printf "#     The error is \"remote host has disconnected\"\n"
    printf "#     to achieve that, delete the localhost:2222 entry from $HOME/.ssh/known_hosts\n"
    printf "#\n"

    [ $# -eq 0 ] && printf "No arguments supplied\n" && return 1

    user=$1
    account=$2
    instance=$3
    gcpproject=$4
    zone=$5
    mountpoint=$6

    printf "#\n# 1. set gcp project, if it's not set \n#\n"

    # automatically set project
    echo gcloud config set account ${account};
    echo gcloud config set project ${gcpproject};

    # unmount sshfs
    printf "#\n# 2. unmount sshfs (this command fails if it's already unmounted, which is ok)\n#\n"
    echo umount -f ${mountpoint};

    # commands
    ssh_cmd=$(gcloud compute ssh ${user}@${instance} --zone=${zone} --dry-run) && \
    external_ip=$(printf "${ssh_cmd}" | sed -E 's/.+@([0-9\.]+)/\1/') && \
    autossh_cmd=$(printf "${ssh_cmd}" | sed s/'\/usr\/bin\/ssh'/'autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -f'/) && \
    fullssh_cmd=$(printf "${autossh_cmd} -N -L 2288:localhost:8888 -L 2222:localhost:22 -L 2280:localhost:8000 -L 8385:localhost:8384 -L 8443:localhost:8443") && \
    printf "#\n# 3. run autossh to set up ssh tunnels for jupyter (2288), web (2280), and sshfs (2222)\n#\n" && \
    echo "${fullssh_cmd}" && \
    printf "#\n# 4. run sshfs to mount to ${mountpoint}\n#\n" && \
    echo sshfs -p 2222 -o reconnect,compression=yes,transform_symlinks,defer_permissions,IdentityFile=$HOME/.ssh/google_compute_engine,ServerAliveInterval=30,ServerAliveCountMax=0 -f \
    ${user}@localhost:. ${mountpoint} && \
    printf "#\n# 5. run mosh\n#\n" && \
    echo mosh -p 60000 --ssh=\"ssh -i $HOME/.ssh/google_compute_engine\" ${user}@${external_ip} -- tmux a
    printf "#\n# 6. if mosh fails run this\n#\n"
    echo gcloud compute ssh ${user}@${instance} -- killall mosh-server
}

gcp_activate () {
  _gcp_activate myuserid myuserid@mydomain.com my-instance my-project us-central1-c $HOME/gcp/
}
