Container Images
This documentation is deprecated. We now recommend using Apptainer on the supercomputer.
If you have software that requires root access to build or run (which should be rare), or which is difficult or impossible to compile on our systems, using a container is an option for getting it to work on the supercomputer. Container images can sometimes be built directly on the supercomputer; if not, they can be built on your own system with tools such as Docker and Podman. They can then be saved as a tarball, transferred to Office of Research Computing systems, unpacked, and run.
Docker
Docker is the most widely used containerization software. It essentially uses the same kernel as the host operating system to run an instance of another operating system, making it lighter than a virtual machine while maintaining most of the same advantages.
Docker requires escalated privileges to run, thus it is not and will not be installed on the supercomputer. However, containers created by Docker can be run using other, safer container systems that are installed. So Docker (the software) cannot be used, but Docker containers can indeed be run.
The reason for disallowing the Docker software is that being a member of the docker group is equivalent to having root access, which we emphatically do not give to users. We instead suggest that users install Docker on their own machines to create images, then use Charliecloud to run those images on our systems.
Charliecloud
Charliecloud is the container runtime that works the best on our systems. It allows one to run a container without any privilege escalation and is simple to use. To run it on the supercomputer, load the latest Charliecloud module:
module load charliecloud
In its basic form, getting a shell inside of the container image looks like this:
ch-convert "imagedir/imagename.tar.gz" "/tmp/imagename" # unpack the container
ch-run --no-home "/tmp/imagename" -- bash # run bash within the container
Alternately, one can use our custom wrapper script, ch-run-orc (recommended):
ch-run-orc "imagedir/imagename.tar.gz" -- bash
Any command that exists in the container image can be run, not just bash.
Creating a Container Image
You will first need to create a Charliecloud image, which is essentially a tarball containing a Linux filesystem. With some recent updates to Charliecloud, it is often possible to build the image without root--you can create the container completely on the supercomputer, without having to use a system where you have administrative access. Sometimes root is genuinely required, though, in which case you will need to build the image on your own Linux machine.
There are two main ways of creating an image: pulling an already-existing image from Docker Hub, or building one using a Dockerfile. If a container with the software that you intend to use already exists on Docker Hub, your job is much easier--we strongly recommend looking around to see if a pre-built image exists before trying to create one on your own.
The details of crafting a Dockerfile are beyond the scope of this document; in brief, you'll create a script of sorts that uses a vanilla Linux distro as a starting point and runs commands to install and configure your software. Don't worry if you need several iterations to get the Dockerfile just right--when the creation of an image fails, the state is preserved so that when you fix your Dockerfile and rebuild, Docker doesn't have to start from scratch.
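For example, a minimal Dockerfile might look something like the following sketch (written here with a shell heredoc); the base image and installed package are placeholders for whatever your software actually needs:
cat > Dockerfile << 'EOF'
# start from a vanilla Linux distro (placeholder base image)
FROM ubuntu:22.04
# install and configure your software (python3 is just an example package)
RUN apt-get update && apt-get install -y --no-install-recommends python3 && rm -rf /var/lib/apt/lists/*
EOF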
On the Supercomputer
Unless you already have an established Charliecloud image creation workflow, it's worth trying to build an image on the supercomputer before resorting to using your own machine.
To do so, you'll first need to load the Charliecloud module:
module load charliecloud
With it loaded, you can create an image. You will use ch-image whether pulling from Docker Hub or building from a Dockerfile:
# Pull from Docker Hub
ch-image pull imageref
# Build from a Dockerfile
ch-image build -t mytag /directory/containing/thedockerfile/
ch-image won't store images persistently--if you log out of and back into the supercomputer, your images will most likely be gone.
Once an image has been built or pulled, you'll need to turn it into a tarball to store it:
ch-convert refortag /directory/to/store/imagename.tar.gz
...where refortag corresponds to the name/tag of the image, which can be found with ch-image list.
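For example, assuming you built an image tagged mytag and want the tarball in your home directory (the destination path is just an example):
ch-image list # note the ref/tag of your image, e.g. mytag
ch-convert mytag ~/mytag.tar.gz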
On your Machine
If your image can't be built on the supercomputer, you will need a Linux machine where you have root access with both Docker and Charliecloud installed.
Unless you already have such a system set up, it's probably best to use a virtual machine. The easiest way to do so is to install VirtualBox and use our custom OVA file with Charliecloud and Docker installed. The OVA file is located at /apps/charliecloud/$ver/*/share/images/charliecloud_orc.ova, where $ver is the latest version of Charliecloud (0.23 as of this writing); copy it to your machine, import it into VirtualBox, start it up, and you are ready to create your container.
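One way to copy it is with scp from your own machine; the wildcards below are expanded on the supercomputer side, and yournetid stands in for your NetID:
scp "yournetid@ssh.rc.byu.edu:/apps/charliecloud/*/*/share/images/charliecloud_orc.ova" .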
If you can find a suitable container on Docker Hub, creating a Charliecloud image from it is simple:
ch-image pull imagename
ch-convert imagename imagename.tar.gz
If no such container exists, you'll have to create one yourself. This involves creating a Dockerfile on your system and running the following commands:
docker build -t imagename /directory/with/the/dockerfile/ # build and tag the image
ch-convert imagename /directory/to/store/imagename.tar.gz # package up the just-built image
Whether you pull an image or create it with a Dockerfile, you'll end up with a tarball (e.g. imagename.tar.gz). Once it's created, your local machine's work is done and you can copy it from the VM to your machine, then from your machine to the supercomputer.
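For example, something like the following (run from wherever the tarball ended up, with yournetid standing in for your NetID) would place it in your home directory on the supercomputer:
scp imagename.tar.gz yournetid@ssh.rc.byu.edu:~/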
Running a Container Image
ch-run-orc
ch-run-orc was designed to make running Charliecloud images on the supercomputer easier, and covers common container use cases. It automatically bind-mounts your home and group directories within the container so that you have access to your storage, and provides a few other niceties such as automatic image extraction. Most of the time, all you need to do to run your container is:
ch-run-orc imagename.tar.gz
Check out the help message (ch-run-orc --help) for its arguments and how it operates.
Manually
If your use case isn't covered by ch-run-orc, you'll need to extract and run the container manually. On the supercomputer, you'll unpack the image tarball into a directory (usually /tmp) with ch-convert imagename.tar.gz /tmp/imagename/; this results in an entire operating system residing in /tmp/imagename/. Once you've unpacked the image, you'll probably want to make sure that you can access your home and compute directories from within the container. This is as simple as creating the corresponding directories in /tmp/imagename (and bind mounting them when you run):
# By default, your compute symlink is root-owned and indirect, which causes problems within the container
# You can fix this by relinking your compute directory so that you own the direct symlink
# Note that you only need to do this once
COMPUTE="$(readlink -f $HOME/compute)"
unlink "$HOME/compute"
ln -s "$COMPUTE" "$HOME/compute"
# Now create your home and compute directories within the container (do this each time)
mkdir -p "/tmp/imagename/$HOME"
mkdir -p "/tmp/imagename/$COMPUTE"
If you don't need to access your home or compute directories, you can safely skip this step and omit the --bind
flags when you run ch-run
.
To run a command (we'll call it foo) in the container, you'll want to use something along the lines of:
COMPUTE="$(readlink $HOME/compute)"
ch-run --no-home \
--unset-env='*' \
--set-env='/tmp/imagename/ch/environment' \
--private-tmp \
--bind="$HOME:$HOME" \
--bind="$COMPUTE:$COMPUTE" \
"/tmp/imagename" \
foo # if you just want a shell, replace foo with bash
# Other options you might want:
# --uid=0
# --gid=0
# --write
Here's a brief explanation of what each of these flags does (also see ch-run --help):

| Flag | Purpose |
|---|---|
| --no-home | Don't create and mount /home/username (which doesn't exist on the supercomputer anyway) |
| --unset-env | Don't bring the current environment into the container (see Environment) |
| --set-env | Set the environment to that delineated in the specified file (see Environment) |
| --private-tmp | Create a private /tmp directory within the container (recommended) |
| --bind | Bind mount the specified directory in the container (see Writing Within a Container) |
| --uid | Run with the specified UID in the container; specify 0 to run as root |
| --gid | Run with the specified GID in the container; if you use --uid=0, you should probably use --gid=0 as well |
| --write | Write to system directories within the container (see Writing Within a Container) |
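For instance, to get a root-like shell that can modify system directories inside an unpacked image (a sketch; /tmp/imagename is an example path), you might combine several of these flags:
ch-run --no-home --uid=0 --gid=0 --write "/tmp/imagename" -- bash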
Environment
Charliecloud preserves one's environment by default. Most of the time environment variables like PATH shouldn't be preserved, so it's usually best to unset most environment variables and bring only what you need into the container. ch-run-orc does this with its --env flag, unsetting all other environment variables automatically:
ch-run-orc --env VAR1=something --env MYKEY=myval
If using ch-convert and ch-run manually, you will need to modify the environment file then run ch-run. After running ch-convert, edit imagename/ch/environment and enter key-value pairs with equals signs, e.g.:
cat >> /tmp/imagename/ch/environment << EOF
VAR1=something
MYKEY=myval
EOF
With this done, use ch-run roughly like this:
ch-run --no-home --unset-env='*' --set-env="/tmp/imagename/ch/environment" "/tmp/imagename" -- bash
Writing Within a Container
To enable access to your home and compute directories from within the container, the corresponding directories need to be created therein, then bind-mounted when the container is run (this is the purpose of the --bind flag). A similar sequence can be done for group directories, as sketched below. If you use ch-run-orc, this process happens automatically.
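As a sketch, making a group directory available looks much like the home/compute case; the path below is a placeholder for your actual group storage path:
GROUP_DIR="/path/to/your/group/directory" # placeholder: substitute your real group directory
mkdir -p "/tmp/imagename$GROUP_DIR" # create the mount point inside the unpacked image
ch-run --no-home --bind="$GROUP_DIR:$GROUP_DIR" "/tmp/imagename" -- bash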
If you want to write to system directories within the container, adding the -w flag to ch-run may allow you to. Note that since you don't truly have root in the container even with --uid=0 --gid=0, you may not be able to write to every system directory within the container. This means that tools requiring root, such as apt-get, may not work therein. It's best to do everything that needs to be done as root while building the image with Docker.
A Worked Example
The Easy Way
Downloading and running sage within the official SageMath container is extremely simple using ch-run-orc:
# Pull from Docker Hub
ch-image pull sagemath/sagemath:latest
# Flatten into an image directory in /tmp
ch-convert sagemath/sagemath:latest /tmp/sagemath
# Run sage in the container
ch-run-orc /tmp/sagemath sage
With the image now downloaded and flattened, you can run just the last command on subsequent uses.
The Other Way
Alternately, you can build on your machine and run manually. This is useful if you have unsuccessfully tried building the image on the supercomputer, or if your use case is not covered by ch-run-orc.
On the machine where you have Docker and Charliecloud installed (and are either root or a member of the docker group), run ch-image pull then ch-convert to download the container:
ch-image pull sagemath/sagemath:latest
ch-convert sagemath/sagemath:latest sagemath.tar.gz
Once this is finished, copy the resulting file to the supercomputer:
scp /path/to/images/sagemath.tar.gz yournetid@ssh.rc.byu.edu:~/
Now, log in to the supercomputer, load Charliecloud with module load charliecloud, and extract the image to /tmp with ch-convert:
ch-convert ~/sagemath.tar.gz /tmp/sagemath
In order to access one's files from within the container, one needs to bind mount home and compute directories, so create the equivalent directories within the container:
mkdir -p "/tmp/sagemath/$HOME"
mkdir -p "/tmp/sagemath/$(readlink $HOME/compute)"
Finally, run sage in the container:
ch-run --no-home \
-b "$HOME":"$HOME" \
-b "$(readlink $HOME/compute)":"$(readlink $HOME/compute)" \
/tmp/sagemath/ \
sage