ipywidgets demo

This example has bitrotted – see the Bokeh slider instead.

My partner Nadiah has developed
an eco-evolutionary model describing the response of a migratory
bird’s arrival time and prelaying period to climate change. The Octave code is here:

The final plots look like this:

A natural question is: how does the shape of the arrival time change as the main model parameter is varied? A nice way to visualise this is using ipywidgets and nbviewer. Here is an ipython notebook with a slider for the main model parameter:

Getting this to work was surprisingly straightforward. In short:

1. Make a notebook using ipython. I used oct2py to call the
Octave code from Python.
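A sketch of step 1 (the Octave function name and the path here are placeholders for the real model code; oct2py exposes Octave functions as methods on its `octave` object, and ipywidgets' `interact` turns a keyword argument into a slider):

```python
from oct2py import octave
from ipywidgets import interact

octave.addpath('octave-code')  # placeholder: directory containing the .m files

def show(param=1.0):
    # placeholder: call the actual Octave model function and plot the result
    arrival_times = octave.arrival_model(param)
    print(arrival_times)

interact(show, param=(0.0, 2.0, 0.05))  # slider from 0 to 2 in steps of 0.05
```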

2. Install nbviewer on a publicly accessible host.

3. Run nbviewer like so:

cd $HOME/nbviewer # this is the nbviewer repository from github
python -m nbviewer --debug --no-cache --localfiles=$HOME/phenology-two-trait-migratory-bird

This runs the server on port 5000. You could run it in a screen or tmux session, or use
supervisord or angel to keep the process alive.
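For example, a minimal supervisord program entry to keep nbviewer alive might look like this (the paths are placeholders for wherever you checked out the two repositories):

```
[program:nbviewer]
directory=/home/ubuntu/nbviewer
command=python -m nbviewer --no-cache --localfiles=/home/ubuntu/phenology-two-trait-migratory-bird
autostart=true
autorestart=true
```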

4. Point nginx at the nbviewer process:

# /etc/nginx/sites-available/amazonaws.com

server {

        listen   80;

        server_name ec2-xyz.amazonaws.com;

        access_log  /var/log/nginx/amazonaws.com.access.log;

        location / {
            proxy_pass http://127.0.0.1:5000; # Reverse proxy to nbviewer
        }
}

5. The notebook doesn’t appear on nbviewer’s front page, so just navigate to a URL of the form

http://your-host/localfile/foo.ipynb

to see the notebook foo.ipynb.

Here is an HTML iframe containing the nbviewer view of the phenology-two-trait-migratory-bird notebook
(the iframe source is http://ec2-107-22-54-51.compute-1.amazonaws.com/localfile/arrival_times_notebook.ipynb).
Try out the slider at the bottom.

IPython development has really taken off recently; check out the SciPy 2013 keynote for more info:

volgenmodel-nipype v1.0

Here is my latest project: https://github.com/carlohamalainen/volgenmodel-nipype. It is a port of the Perl script volgenmodel to Python, using the functionality of Nipype.

A lot of scientific workflow code has a common pattern, something like this: collect some input files, run something to produce intermediate results, and then combine the results into a final result. One way to implement the workflow is to glob the files and set up arrays or dictionaries to keep track of the outputs.

import glob

files = glob.glob('/tmp/blah*.dat')

intermediate_result = [None] * len(files)

for (i, f) in enumerate(files):
    intermediate_result[i] = fn1(f, param=0.3)

final_result = fn2(intermediate_result)
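The pattern above can be made concrete with a runnable toy version; fn1 and fn2 here are stand-ins for real processing steps:

```python
import glob
import os
import tempfile

def fn1(path, param):
    # Stand-in intermediate step: scale the number stored in a file.
    with open(path) as f:
        return float(f.read()) * param

def fn2(results):
    # Stand-in final step: combine the intermediate results.
    return sum(results)

# Create some input files to glob over.
tmpdir = tempfile.mkdtemp()
for i in range(3):
    with open(os.path.join(tmpdir, 'blah%d.dat' % i), 'w') as f:
        f.write(str(i))

files = sorted(glob.glob(os.path.join(tmpdir, 'blah*.dat')))

intermediate_result = [None] * len(files)
for (i, f) in enumerate(files):
    intermediate_result[i] = fn1(f, param=0.3)

final_result = fn2(intermediate_result)  # 0.3 * (0 + 1 + 2), up to float rounding
```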

The problem with this approach is that it doesn’t scale well, nor is it easy to reason about. The equivalent in Nipype is:

import nipype.pipeline.engine as pe
import nipype.interfaces.io as nio

datasource = pe.Node(interface=nio.DataGrabber(sort_filelist=True), name='datasource_dat')
datasource.inputs.base_directory = '/scratch/data'
datasource.inputs.template = 'blah*.dat'

datasink = pe.Node(interface=nio.DataSink(), name="datasink")
datasink.inputs.base_directory = '/scratch/output'

intermediate = pe.MapNode(
        interface=fn1_interface,  # placeholder: an interface wrapping fn1
        name='intermediate',
        iterfield=['input_file'])

final = pe.Node(
        interface=fn2_interface,  # placeholder: an interface wrapping fn2
        name='final')
workflow = pe.Workflow(name="workflow")

# Apply the fn1 interface to each file in the datasource:
workflow.connect(datasource, 'outfiles', intermediate, 'input_file')

# Apply the fn2 interface to the list of outputs from the intermediate map node:
workflow.connect(intermediate, 'output_file', final, 'input_file')

# Save the final output:
workflow.connect(final, 'output_file', datasink, 'final')

This code is much closer to the actual problem that we are trying to solve, and as a bonus we don’t have to take care of arrays of input and output files, which is pure agony and prone to errors.

Nipype lets us run the workflow using a single core like this:

workflow.run()

or we can fire it up on 4 cores with:

workflow.run(plugin='MultiProc', plugin_args={'n_procs' : 4})

Nipype also has plugins for SGE, PBS, HTCondor, LSF, SLURM, and others.
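For example, dispatching the same workflow to an SGE cluster looks like this (the queue name is a placeholder, and the qsub arguments are site-specific):

```
workflow.run(plugin='SGE', plugin_args={'qsub_args': '-q short.q'})
```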

Here is volgenmodel-nipype’s workflow graph (generating this graph is a one-liner, workflow.write_graph(), on the workflow object). Click the image for the full size version.