Problem with the v1806 Docker image on Linux
The problem is briefly described in the following thread: https://www.cfd-online.com/Forums/openfoam-solving/204881-unable-launch-parafoam.html
There seems to be something wrong with mpirun in the Docker image: both mpirun --help and paraview abort during Open MPI initialisation.
bash-4.2$ mpirun --help
--------------------------------------------------------------------------
It looks like opal_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  opal_shmem_base_select failed
  --> Returned value -1 instead of OPAL_SUCCESS
--------------------------------------------------------------------------

bash-4.2$ paraview
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-linzh'
--------------------------------------------------------------------------
It looks like opal_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  opal_shmem_base_select failed
  --> Returned value -1 instead of OPAL_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  opal_init failed
  --> Returned value Error (-1) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "Error" (-1) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[ce8b1d83bd53:2271] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed!
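
For anyone triaging this: when opal_shmem_base_select fails inside a container, one common culprit is Docker's default 64 MB /dev/shm, which can be too small for Open MPI's POSIX shared-memory component. Below is a minimal sketch of a check and two possible workarounds; it is untested against this particular image, and <v1806-image> is a placeholder for whatever image name you actually run.

  # Inside the container: is /dev/shm the 64 MB Docker default tmpfs?
  df -h /dev/shm

  # Possible workaround 1 (on the host): start the container with a larger
  # shared-memory segment. <v1806-image> is a placeholder, not the real name.
  docker run -it --shm-size=1g <v1806-image>

  # Possible workaround 2 (inside the container): ask Open MPI to use its
  # mmap shmem component instead of POSIX shared memory.
  export OMPI_MCA_shmem=mmap
  mpirun --help

If /dev/shm already looks large enough, the mount options (e.g. whether it is writable for the container user) would be the next thing to check.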