

Python is a dynamic language, filling a similar role to Perl or other scripting languages.

Python is well suited to do preparation and post-processing for jobs, but is not as suited for high-performance computing as a compiled language like C or Fortran. We do not recommend that users use it for a major component of their HPC jobs. We do recommend using it for preparation, post-processing, and proof-of-concept work for defining algorithms.

Available Versions

Using modules, we provide several full-featured versions of Python. A current list of Python versions (and other software) can be found by running this command:

module avail python

To load Python using modules:

# load Python 3.7.x
module load python/3.7

# switch to Python 3.6.x
module swap python/3.6
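As a quick sanity check after loading a module, you can confirm which interpreter is now on your PATH (a minimal sketch; it assumes the module provides a python3 executable):

```shell
# Confirm which interpreter is active and its version
which python3
python3 --version
```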

TensorFlow and Keras

At the time of this writing, TensorFlow 1.2 and Keras 2.0 are installed. In order to use TensorFlow or Keras you will also need these three modules (loading them in this order is important):
  1. compiler_gnu/5.3
  2. cuda/8.0
  3. cudnn/5.1_cuda-8.0

A GPU is also required. Add --gres=gpu:n to your sbatch or salloc command, where n is the number of GPUs you need per node. Our nodes currently have up to 4 GPUs each.
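Putting the pieces together, a batch script for a GPU job might look like the sketch below. The module names come from the list above; the job name, resource values, Python version, and script name are placeholders to adapt to your own work:

```shell
#!/bin/bash
#SBATCH --job-name=tf-example    # placeholder job name
#SBATCH --gres=gpu:1             # request 1 GPU on the node
#SBATCH --ntasks=1
#SBATCH --nodes=1
#SBATCH --mem-per-cpu=8G         # placeholder memory request
#SBATCH --time=01:00:00          # placeholder time limit

# Load the required modules, in order
module load compiler_gnu/5.3
module load cuda/8.0
module load cudnn/5.1_cuda-8.0
module load python/3.6           # any installed Python version

python my_training_script.py     # placeholder script name
```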


If you anticipate being the only user of a library, we recommend installing it somewhere under your own directory.

With pip you can do this by running the following command after you have loaded the relevant environment module for Python (e.g., python/3.6):

pip install --user package

You can also install everything relative to an alternate prefix directory, such as one within a group directory:

pip install --prefix=directory-to-install-to package

For example, if you have a top-level directory ~/fsl_groups/fslg_somegroup/.local, you can use pip to install packages into ~/fsl_groups/fslg_somegroup/.local/bin and ~/fsl_groups/fslg_somegroup/.local/lib. You will then need to add the following two lines to your ~/.bash_profile (or ~/.bashrc, ~/.profile, etc.):

export PATH="$HOME/fsl_groups/fslg_somegroup/.local/bin:$PATH"
export PYTHONPATH="$HOME/fsl_groups/fslg_somegroup/.local/lib/python3.7/site-packages"

Note: replace python3.7 with the appropriate version number for Python, like python3.6.
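If you are unsure which pythonX.Y directory name applies, the interpreter itself can tell you. This small snippet makes no assumptions beyond having a Python 3 module loaded:

```python
import sys

# Print the "pythonX.Y" directory name matching this interpreter,
# e.g. "python3.7" for a Python 3.7 module
print(f"python{sys.version_info.major}.{sys.version_info.minor}")
```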

You can run pip help install for more options and help.

If the project does not use pip, you can tell its setup script to install into your home directory (run this from the project's source directory):

python setup.py install --user

If it is a library that a large set of users will use or if it requires some specialized compiling, please open a support ticket.


Jupyter Notebooks

Before continuing, we suggest setting up SSH multiplexing and setting up public-key authentication within the cluster.

To use Jupyter notebooks on our systems, you'll want to install Jupyter via pip, as detailed above:

pip3 install --user jupyter     # Base Jupyter system
pip3 install --user jupyterlab  # Includes updated interface

You will also need to make sure that you've adjusted your PATH environment variable as detailed above.

Jupyter works by connecting to your browser. To facilitate this, you'll need to run some commands on both your computer and our systems. We'll use $ to represent commands that should be executed on our systems, and > to indicate commands that should be run on your computer.

In terminal 1:

# Connect to our systems
> ssh

# Request 1 CPU and 10 GB of memory for an hour for an interactive job
$ salloc --mem-per-cpu=10240M --ntasks=1 --nodes=1 --time=1:00:00

# Get the hostname of the compute node you've been allocated, and note it for later commands
$ echo $HOSTNAME

# Start up Jupyter on a random port to avoid possible collisions
$ jupyter notebook --no-browser --port $(($RANDOM+32767))

# Note the port Jupyter reports in its output and use it in place of $PORT in subsequent commands
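For reference, bash's $RANDOM expands to an integer between 0 and 32767, so the port expression above always yields a value between 32767 and 65534, safely above the privileged and commonly used port ranges:

```shell
# $RANDOM is 0..32767, so PORT falls in 32767..65534
PORT=$((RANDOM + 32767))
echo "$PORT"
```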

In terminal 2:

# Set up port forwarding to Jupyter; replace <login node> with the address
# you normally ssh to, and <compute node> with the hostname you noted earlier
> ssh -N -J <login node> -L $PORT:localhost:$PORT <compute node>

From here, open the URL shown in Jupyter's output in your browser.

Voila! Jupyter should be working. When you're done, be sure to kill Jupyter and log out of both terminals to relinquish resources back to Slurm.