Singularity OpenFOAM containers
A common issue with OpenFOAM is that, due to the significant number of forks and of releases within each fork, it is difficult for HPC administrators to provide every flavour of OpenFOAM out there. Typically, only a few versions are installed, which can hinder adoption of new releases of the code.
One way to solve this problem is to adopt container technology, so that OpenFOAM does not actually have to be installed on the cluster at all. OpenFOAM is already provided as Docker images, but unfortunately Docker is not suitable for running wide MPI jobs. Singularity is a different type of container, developed specifically for HPC environments. Fortunately, one can build a Singularity container directly from a Docker container, and even pull images straight from Docker Hub, so building Singularity containers of OpenFOAM is very easy.
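As a rough sketch, converting a Docker Hub image into a Singularity image looks like this (the image name and tag below are only examples; substitute the fork and version you actually need):

```shell
# Build a Singularity image directly from a Docker Hub image
# (image name/tag are illustrative, not a specific recommendation):
singularity build openfoam.sif docker://opencfd/openfoam-default:latest

# Alternatively, pull without building from a definition file:
singularity pull openfoam.sif docker://opencfd/openfoam-default:latest
```

The resulting `.sif` file is a single portable image that can be copied to the cluster and run without root privileges.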
At my request, the use of OpenFOAM Singularity containers has been investigated at the National Supercomputer Centre at Linköping University, which hosts a large cluster called Tetralith. What follows is copied from an email I received from the system administrator:
> I have been able to get OpenFOAM to run within a Singularity container, including MPI. I have documented how to do it on our webpage: https://www.nsc.liu.se/software/installed/tetralith/OpenFOAM/ under the section "How to run OpenFOAM within Singularity containers".
>
> Unfortunately, it is not so trivial to get Singularity images to run in parallel. The MPI version within the image must be exactly the same as the version installed on Tetralith. Therefore you have to identify which version is used in the image and then use the same version on Tetralith. I installed two extra Open MPI versions on Tetralith that I found in two different OpenFOAM versions on Docker Hub. But there is a good chance that other images use an Open MPI version that is not available yet. In this case, we have to install this Open MPI version ... Even if one uses the same Open MPI version, it still may not work if the MPI in the image was configured/compiled in a very different way than the version we have on Tetralith. E.g. the version OpenFOAM 1812 on Docker Hub works, but 1806 does not, since the MPI in the 1806 image differs from the 1812 image: even though they are using the same MPI version, some components seem to be missing.
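The hybrid host/container launch pattern described in the email typically looks like the sketch below. The module name and process count are hypothetical examples, not Tetralith-specific settings; the key point is that the host `mpirun` must come from the same Open MPI version as the one inside the image:

```shell
# Load a host MPI that matches the Open MPI inside the image
# (module name is hypothetical):
module load OpenMPI/1.10.4

# The host mpirun launches one container instance per MPI rank;
# the containerised solver then communicates via the host's MPI:
mpirun -np 64 singularity exec openfoam.sif simpleFoam -parallel
```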
So, in serial the containers work out of the box (which is nice!), but for parallel computation the MPI version, and even more minor configuration settings within a single version, must be matched. What could perhaps be improved on the part of the OpenFOAM developers is consistency of the build environment and Open MPI versions used within the containers for each release, and careful documentation of the configuration settings used. Also, at least version 2.1 of Open MPI is desirable, since older versions are not fully supported by Singularity. In the containers from OpenCFD, the versions used seem to be consistent: GCC 4.8.5 and Open MPI 1.10.4; however, see the remark in the quote about v1806 vs v1812. Also, both the GCC and Open MPI versions are quite old.
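To identify which compiler and Open MPI version an image was built with, one can query the image directly; a minimal sketch, assuming the usual Open MPI and GCC tools are on the image's default `PATH`:

```shell
# Report the Open MPI version bundled in the image:
singularity exec openfoam.sif mpirun --version

# Report the compiler version:
singularity exec openfoam.sif gcc --version

# ompi_info additionally shows how Open MPI was configured/compiled,
# which is what distinguished the 1806 and 1812 images:
singularity exec openfoam.sif ompi_info
```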
- I wanted to make the devs aware of this effort, which I hope can, upon success, make life easier for quite a lot of people.
- Consistency of settings in the images is very important. Any comment on v1806 vs v1812 regarding OpenMPI settings?
- Would it be possible to use a newer environment with OpenMPI >= 2.1 instead?
## Reattaching the author to the issue ticket: @timofeymukha ##