Singularity Container¶
Singularity is a container solution for compute-driven workloads. It allows you to package your application with all its dependencies so that you can run it out of the box on various HPC clusters and computers.
- More information on Singularity:
- Singularity web site: https://www.sylabs.io/singularity/
- CECI formation on Singularity: https://indico.cism.ucl.ac.be/event/41/
MPI¶
The portability of MPI applications is not trivial. The code compiled within the container needs to be linked against the same Slurm library and the same OpenMPI version as the ones installed on the cluster. For this reason we have prepared Singularity base images that you can use to create your own Singularity container.
Those base images can be found in /CECI/soft/src/singularity.
Here is an example definition file that packages an MPI hello-world application in a container built on top of one of those base images:
# Build on top of the CECI-provided OpenMPI base image
BootStrap: localimage
From: openmpi_2.1.1.simg

%runscript
    # command executed by "singularity run"
    /usr/bin/mytest-mpi

%files
    test-mpi.c /opt/test-mpi.c

%post
    # compile the MPI program inside the container
    echo "Hello from inside the container"
    mpicc -o /usr/bin/mytest-mpi /opt/test-mpi.c
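For reference, here is a minimal MPI hello-world source that could serve as the test-mpi.c file copied by the %files section above (an illustrative sketch; any MPI program compiled with mpicc would work):

/* test-mpi.c -- minimal MPI hello-world (illustrative example) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* initialise the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    printf("Hello from rank %d out of %d processes\n", rank, size);

    MPI_Finalize();                       /* shut down the MPI runtime */
    return 0;
}

With the definition file saved as, for instance, mpi-user.def, the image can be built on a machine where you have root privileges with "sudo singularity build mpi-user.simg mpi-user.def".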
You can then run it on our clusters via:
srun -n 4 -p debug bash -c "singularity run -B \$LOCALSCRATCH:/localscratch ./mpi-user.simg"
Note the “-B \$LOCALSCRATCH:/localscratch” option, which is needed to correctly set up the bind mount point inside the Singularity container. The use of “bash -c” postpones the evaluation of $LOCALSCRATCH until the environment variable is actually defined, i.e. on the compute node.
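The same command can also be placed in a regular submission script. Below is a minimal sketch; the job name, task count and time limit are illustrative values to adapt to your needs, and the image is assumed to be named mpi-user.simg as above:

#!/bin/bash
#SBATCH --job-name=mpi-singularity
#SBATCH --ntasks=4
#SBATCH --time=00:10:00

# "bash -c" delays the expansion of $LOCALSCRATCH until the command runs on the compute node
srun bash -c "singularity run -B \$LOCALSCRATCH:/localscratch ./mpi-user.simg"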
Note
The fakeroot feature of Singularity is not available on the CECI clusters because it requires several thousand UIDs to be allocated for every user, which is intractable given the number of CECI users.