Basics of Singularity
Overview
Teaching: 20 min
Exercises: 10 min
Questions
Objectives
Download container images
Run commands from inside a container
Discuss the most popular image registries
Get ready for the hands-on
Before we start, let us make sure we have the required files to run the tutorials.
If you haven't already, clone the following GitHub repo. Then cd
into it, and save the current directory in a variable named TUTO
for later use.
$ cd ~
$ git clone https://github.com/qcif-training/singularity-containers.git
$ cd singularity-containers
$ export TUTO=$(pwd)
The workshop files are also available as a tar archive, in case downloading the GitHub repo causes any problems.
$ cd ~
$ wget -O singularity-containers.tgz https://drive.google.com/uc?id=1Vr6aXClyo2y4IBP0cxj0c3abldCPHwhF
$ tar -xf singularity-containers.tgz
$ cd singularity-containers
$ export TUTO=$(pwd)
Singularity: a container engine for HPC
As of June 2021, Singularity exists as two distinct projects:
- Singularity, maintained by HPCng on their GitHub;
- SingularityCE, maintained by Sylabs on their GitHub.
As of November 2021, the HPCng Singularity project has been renamed Apptainer.
The two variants are equivalent up to version 3.7.4, released in May 2021. This tutorial was developed with Singularity 3.5.x, so both variants, as well as Apptainer, can be used for the hands-on.
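Because of the rename, a given system may provide either the singularity or the apptainer command. Since the two are command-line compatible for our purposes, a script can simply pick whichever is installed. This is a minimal sketch of our own, not part of the original tutorial setup:

```shell
# Pick whichever engine binary is on PATH; the two variants are
# command-line compatible for the purposes of this tutorial.
if command -v apptainer >/dev/null 2>&1; then
    CONTAINER_CMD=apptainer
elif command -v singularity >/dev/null 2>&1; then
    CONTAINER_CMD=singularity
else
    CONTAINER_CMD=""
    echo "Neither apptainer nor singularity found on PATH" >&2
fi
echo "Using: ${CONTAINER_CMD:-none}"
```

You can then run, e.g., $CONTAINER_CMD --version in the rest of your script.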
Singularity was designed from scratch as a container engine for HPC applications, which is clearly reflected in some of its main features:
- unprivileged runtime: Singularity containers do not require the user to hold root privileges to run (the Singularity executable itself needs to be installed and owned by root, though);
- integration, rather than isolation, by default: same user as the host, shell variables inherited from the host, current directory bind mounted, communication ports available; as a result, launching a container requires a much simpler syntax than Docker;
- interface with job schedulers, such as PBS or Slurm;
- ability to run MPI enabled containers using host libraries;
- native execution of GPU enabled containers;
- unfortunately, root privileges are required to build container images: users can build images on a personal laptop or workstation where they have root access, on the cloud, or via a Remote Build service.
This tutorial assumes Singularity version 3.0 or higher. Version 3.5.0 or higher is recommended, as it offers a smoother, less buggy experience.
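If you want a script to warn about older installations, you can parse the reported version with plain shell string operations. A sketch, assuming the usual "singularity version X.Y.Z" output format; the sample line is hard-coded here for illustration:

```shell
# In a real script you would capture the line with:
#   ver_line=$(singularity --version)
ver_line="singularity version 3.8.7"    # hard-coded sample

ver=${ver_line##* }          # text after the last space -> "3.8.7"
major=${ver%%.*}             # "3"
minor_patch=${ver#*.}        # "8.7"
minor=${minor_patch%%.*}     # "8"

if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 5 ]; }; then
    echo "Singularity $ver meets the recommended minimum (3.5.0)"
else
    echo "Singularity $ver works, but 3.5.0 or higher is recommended"
fi
```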
Loading Singularity on your HPC
Is Singularity automatically available?
On some HPC systems Singularity is available out of the box, while others require a module load command.
Check that you have Singularity available by running:
$ singularity --version
What you will see if Singularity is NOT available:
-bash: singularity: command not found
Finding Singularity Module
If you need to load Singularity via a module:
# first find the versions available
module avail singularity
# then load
module load singularity/3.5.0
Extra Setup for Bunya Users
Bunya users need to run additional commands to ensure Apptainer is configured:
apptainer remote add --no-login SylabsCloud cloud.sycloud.io
apptainer remote use SylabsCloud
apptainer remote list
Our First Container
$ singularity run library://sylabsed/examples/lolcow
Singularity downloads the image and runs the default command.
INFO: Downloading library image
79.9MiB / 79.9MiB [=======================================================================================] 100 % 7.6 MiB/s 0s
_____________________________________
/ Q: What is purple and conquered the \
\ world? A: Alexander the Grape. /
-------------------------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
The output shows that the container image is downloaded first; then the default command runs, with the cow delivering a quote.
Executing a simple command in a Singularity container
For these first exercises, we're going to use a plain Ubuntu container image. It's small and quick to download, and it will allow us to get to know how containers work using common Linux commands.
Within the tutorial directory, let us cd into demos/singularity:
$ cd $TUTO/demos/singularity
Running a command is done by means of singularity exec:
$ singularity exec library://ubuntu:20.04 cat /etc/os-release
INFO: Downloading library image
NAME="Ubuntu"
VERSION="20.04 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
Here is what Singularity has just done:
- downloaded an Ubuntu image from the Sylabs Cloud Library (this step would be skipped if the image had been downloaded previously);
- stored it into the default cache directory;
- launched a container from that image;
- executed the command cat /etc/os-release.
Container images have a name and a tag, in this case ubuntu and 20.04. The tag can be omitted, in which case Singularity defaults to a tag named latest.
Using the latest tag
The practice of using the latest tag can be handy for quick typing, but it is dangerous for the reproducibility of your workflow, as under the hood the latest tag can point to different images over time. If you don't specify a tag, latest is assumed:
$ singularity exec library://ubuntu cat /etc/os-release
INFO: Downloading library image
PRETTY_NAME="Ubuntu 22.04 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
The output would be the same if we specified library://ubuntu:latest.
Singularity full URL
Singularity pulled the image from an online registry, as indicated by the prefix library://, which corresponds to the Sylabs Cloud Library. Images there are organised as: <user>/<project>/<name>:<tag>.
In the example above we specified neither the user, library, nor the project, default. Why? Because the specific case of library/default/ can be omitted. The full specification is used in the next example:
$ singularity exec library://library/default/ubuntu:20.04 echo "Hello World"
INFO: Using cached image
Hello World
Here we also see image caching in action: the output no longer mentions the image being downloaded.
Executing a command in a Docker container
Singularity is able to download and run Docker images as well. It does this by downloading the Docker image and converting it to the Singularity image format (SIF).
Let's try to download an Ubuntu container from Docker Hub, i.e. the main registry for Docker containers:
$ singularity exec docker://ubuntu:20.04 cat /etc/os-release
INFO: Converting OCI blobs to SIF format
INFO: Starting build...
Getting image source signatures
Copying blob sha256:22e816666fd6516bccd19765947232debc14a5baf2418b2202fd67b3807b6b91
25.45 MiB / 25.45 MiB [====================================================] 1s
Copying blob sha256:079b6d2a1e53c648abc48222c63809de745146c2ee8322a1b9e93703318290d6
34.54 KiB / 34.54 KiB [====================================================] 0s
Copying blob sha256:11048ebae90883c19c9b20f003d5dd2f5bbf5b48556dabf06c8ea5c871c8debe
849 B / 849 B [============================================================] 0s
Copying blob sha256:c58094023a2e61ef9388e283026c5d6a4b6ff6d10d4f626e866d38f061e79bb9
162 B / 162 B [============================================================] 0s
Copying config sha256:6cd71496ca4e0cb2f834ca21c9b2110b258e9cdf09be47b54172ebbcf8232d3d
2.42 KiB / 2.42 KiB [======================================================] 0s
Writing manifest to image destination
Storing signatures
INFO: Creating SIF file...
NAME="Ubuntu"
VERSION="20.04.6 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.6 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
Rather than simply downloading a SIF file, Singularity now has more work to do, as it must both:
- download the various layers making up the image, and
- assemble them into a single SIF image file.
Note that, to point Singularity to Docker Hub, the prefix docker:// is required.
Docker Hub organises images by user only (users are also called repositories), not by project: <repository>/<name>:<tag>. In the case of the Ubuntu image, the repository is library and can be omitted.
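As a sketch of this naming scheme, the hypothetical helper below expands a short Docker Hub image name into its full form, applying the library repository and latest tag defaults described above (the docker_url function is our own invention, not a Singularity feature):

```shell
# Hypothetical helper: expand a short Docker Hub image name into
# docker://<repository>/<name>:<tag>, filling in the defaults
# "library" (repository) and "latest" (tag) when omitted.
docker_url() {
    img=$1
    case "$img" in
        */*) : ;;                  # repository given, keep as-is
        *)   img="library/$img" ;; # default repository
    esac
    case "$img" in
        *:*) : ;;                  # tag given
        *)   img="$img:latest" ;;  # default tag
    esac
    echo "docker://$img"
}

docker_url ubuntu          # -> docker://library/ubuntu:latest
docker_url ubuntu:20.04    # -> docker://library/ubuntu:20.04
```

Note that the helper only reconstructs the URL; Singularity itself applies the same defaults when you omit the repository or tag.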
What is the latest Ubuntu image from Docker Hub?
Write down a Singularity command that prints the OS version through the latest Ubuntu image from Docker Hub.
Solution
$ singularity exec docker://ubuntu cat /etc/os-release
[..]
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04 LTS (Noble Numbat)"
[..]
It’s version 24.04.
Open up an interactive shell
Sometimes it can be useful to open a shell inside a container, rather than to execute commands, e.g. to inspect its contents.
Achieve this by using singularity shell:
$ singularity shell docker://ubuntu:20.04
Singularity>
Remember to type exit, or hit Ctrl-D, when you're done!
Download and use images via SIF file names
All examples so far have identified container images by their registry name, e.g. docker://ubuntu:20.04 or similar.
An alternative option to handle images is to download them to a known location, and then refer to their full directory path and file name.
Let's use singularity pull to save the image to a specified path (the output might differ depending on the Singularity version you use):
$ singularity pull docker://ubuntu:20.04
By default, the image is saved in the current directory:
$ ls
ubuntu_20.04.sif
Then you can use this image file by:
$ singularity exec ./ubuntu_20.04.sif echo "Hello World"
Hello World
You can specify the storage location with the --dir flag:
$ mkdir -p $TUTO/sif_lib
$ singularity pull --dir $TUTO/sif_lib docker://library/ubuntu:20.04
Being able to specify the download location allows you to keep your local set of images organised and tidy, by making use of a directory tree. It also makes it easy to share images within your team on a shared resource. In general, you will need to specify the location of the image upon execution, e.g. by defining a dedicated variable:
$ export image="$TUTO/sif_lib/ubuntu_20.04.sif"
$ singularity exec $image echo "Hello Again"
Hello Again
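By default, singularity pull names the file <name>_<tag>.sif, so the path of any image in your library directory can be derived from its name and tag. A small hypothetical helper (the sif_path function is our own invention), reusing the $TUTO/sif_lib directory created above:

```shell
# Hypothetical helper: map an image name and tag to the SIF file
# that "singularity pull" produces by default (<name>_<tag>.sif),
# inside the library directory set up earlier ($TUTO/sif_lib).
SIF_LIB="$TUTO/sif_lib"

sif_path() {
    echo "$SIF_LIB/${1}_${2}.sif"
}

image=$(sif_path ubuntu 20.04)
echo "$image"
```

You could then run, e.g., singularity exec "$(sif_path ubuntu 20.04)" echo "Hello Again".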
Manage the image cache
When pulling images, Singularity stores images and blobs in a cache directory.
The default location for the image cache is $HOME/.singularity/cache (or $HOME/.apptainer/cache for Apptainer). This location can be inconvenient on shared resources such as HPC centres, where the disk quota for the home directory is often limited. You can redefine the cache path by setting the variable SINGULARITY_CACHEDIR.
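To check where your cache will actually end up, you can resolve the directory the same way Singularity does: the environment variable wins if it is set, otherwise the default under your home directory applies. A minimal sketch, using the Singularity variable name (the scratch path shown is just an example):

```shell
# Resolve the effective cache directory: SINGULARITY_CACHEDIR wins
# if set, otherwise fall back to the documented default.
cache_dir=${SINGULARITY_CACHEDIR:-$HOME/.singularity/cache}
echo "Cache directory: $cache_dir"

# On an HPC system you might redirect the cache to scratch space, e.g.:
# export SINGULARITY_CACHEDIR=/scratch/$USER/singularity_cache
```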
If you are running out of disk space, you can inspect the cache with this command (omit -v before Singularity version 3.4):
$ singularity cache list -v
NAME DATE CREATED SIZE TYPE
2abc4dfd83182546da40df 2024-05-22 12:15:24 2.24 KiB blob
49b384cc7b4aa0dfd16ff7 2024-05-22 12:12:31 27.53 MiB blob
bf3dc08bfed03118282788 2024-05-22 12:12:31 2.24 KiB blob
cc9cc8169c9517ae035cf2 2024-05-22 12:15:24 0.41 KiB blob
d21429c4635332e96a4baa 2024-05-22 12:12:31 0.41 KiB blob
d4c3c94e5e10ed15503bda 2024-05-22 12:10:44 26.24 MiB blob
sha256.7a63c14842a5c9b 2024-05-22 12:07:50 28.44 MiB library
sha256.cfb23cc09dd3b45 2024-05-22 12:06:55 26.47 MiB library
sha256.e37e11f101a9db8 2024-05-22 12:04:38 79.91 MiB library
165cd1641b6b38b827d13c 2024-05-22 12:11:01 26.47 MiB oci-tmp
57007e861979788d15e95d 2024-05-22 12:12:49 27.78 MiB oci-tmp
6de4e3f1b72fba6dc0cd1d 2024-05-22 12:15:42 26.47 MiB oci-tmp
There are 6 container file(s) using 215.53 MiB and 6 oci blob file(s) using 53.77 MiB of space
Total space used: 269.31 MiB
We are not going to clean the cache in this tutorial, as the cached images will come in useful later on. Let us just perform a dry run using the -n option:
$ singularity cache clean -n
User requested a dry run. Not actually deleting any data!
INFO: Removing blob cache entry: blobs
INFO: Removing blob cache entry: index.json
INFO: Removing blob cache entry: oci-layout
INFO: Removing library cache entry: sha256.7a63c14842a5c9b9c0567c1530af87afbb82187444ea45fd7473726ca31a598b
INFO: Removing library cache entry: sha256.cfb23cc09dd3b4570110dd6f13886fe95cc31ef2d81f7abab6796751f5700fa0
INFO: Removing library cache entry: sha256.e37e11f101a9db82a08bf63f816219da0d4da0e19f5323761d92731213c9e751
INFO: Removing oci-tmp cache entry: 165cd1641b6b38b827d13c7a80757f91dc4b7d0d2f443ee699133a901fa44e78
INFO: Removing oci-tmp cache entry: 57007e861979788d15e95d5010ddc104806a009e3e04c312fa6a789597d67303
INFO: Removing oci-tmp cache entry: 6de4e3f1b72fba6dc0cd1d2cfc4021a9ba5961fda97fa69c5d1b79f704d06c60
INFO: No cached files to remove at /home/ubuntu/.apptainer/cache/shub
INFO: No cached files to remove at /home/ubuntu/.apptainer/cache/oras
INFO: No cached files to remove at /home/ubuntu/.apptainer/cache/net
If we really wanted to wipe the cache, we would use the -f flag instead.
Contextual help on Singularity commands
Use singularity help, optionally followed by a command name, to print help information on features and options.
Popular registries (aka image libraries)
At the time of writing, Docker Hub hosts a much wider selection of container images than Sylabs Cloud. This includes Linux distributions, Python and R deployments, as well as a wide variety of applications.
Bioinformaticians should keep in mind another container registry, Quay, by Red Hat, which hosts thousands of applications in this domain of science. Most of these come out of the BioContainers project, which aims to provide automated container builds of all the packages made available through Bioconda.
Nvidia maintains the Nvidia NGC Cloud, hosting an increasing number of containerised applications optimised to run on Nvidia GPUs.
AMD has recently created AMD Infinity Hub, to host containerised applications optimised for AMD GPUs.
Right now, the Sylabs Cloud Library does not host a large number of images. Still, it can be useful for storing container images that require features specific to Singularity.
Pull and run a Python container
How would you pull the following container image from Docker Hub: python:3-slim?
Once you've pulled it, check the Python version inside the container by running python --version.
Solution
Pull:
$ singularity pull docker://python:3-slim
Get Python version:
$ singularity exec ./python_3-slim.sif python --version
Python 3.12.3
Key Points
Singularity can run both Singularity and Docker container images
Execute commands in containers with singularity exec
Open a shell in a container with singularity shell
Download a container image to a selected location with singularity pull
You should not use the latest tag, as it may limit the reproducibility of your workflow
The most commonly used registries are Docker Hub, Red Hat Quay and BioContainers