OpenFOAM-plus · Issue #971 · Closed
Issue created Aug 13, 2018 by Hua Zen (@lin)

problem with v1806 docker image on Linux OS

The problem is briefly described in the following thread: https://www.cfd-online.com/Forums/openfoam-solving/204881-unable-launch-parafoam.html

There seems to be something wrong with mpirun in the Docker image: even mpirun --help and paraview abort during Open MPI initialization:

    bash-4.2$ mpirun --help
    It looks like opal_init failed for some reason; your parallel process is
    likely to abort. There are many reasons that a parallel process can
    fail during opal_init; some of which are due to configuration or
    environment problems. This failure appears to be an internal failure;
    here's some additional information (which may only be relevant to an
    Open MPI developer):

      opal_shmem_base_select failed
      --> Returned value -1 instead of OPAL_SUCCESS

    bash-4.2$ paraview
    QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-linzh'
    It looks like opal_init failed for some reason; your parallel process is
    likely to abort. There are many reasons that a parallel process can
    fail during opal_init; some of which are due to configuration or
    environment problems. This failure appears to be an internal failure;
    here's some additional information (which may only be relevant to an
    Open MPI developer):

      opal_shmem_base_select failed
      --> Returned value -1 instead of OPAL_SUCCESS

    It looks like orte_init failed for some reason; your parallel process is
    likely to abort. There are many reasons that a parallel process can
    fail during orte_init; some of which are due to configuration or
    environment problems. This failure appears to be an internal failure;
    here's some additional information (which may only be relevant to an
    Open MPI developer):

      opal_init failed
      --> Returned value Error (-1) instead of ORTE_SUCCESS

    It looks like MPI_INIT failed for some reason; your parallel process is
    likely to abort. There are many reasons that a parallel process can
    fail during MPI_INIT; some of which are due to configuration or environment
    problems. This failure appears to be an internal failure; here's some
    additional information (which may only be relevant to an Open MPI
    developer):

      ompi_mpi_init: ompi_rte_init failed
      --> Returned "Error" (-1) instead of "Success" (0)

    *** An error occurred in MPI_Init
    *** on a NULL communicator
    *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
    *** and potentially your MPI job)
    [ce8b1d83bd53:2271] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed!
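A possible (but unconfirmed) culprit: opal_shmem_base_select fails when none of Open MPI's shared-memory backends can initialize, and inside Docker a common trigger is the default 64 MB /dev/shm. A minimal check and workaround sketch, assuming that is the cause here; the image name below is illustrative, not necessarily the actual v1806 tag:

    # Inside the running container: check the size of /dev/shm.
    # Docker's default is 64 MB, which the shared-memory component
    # may reject (hypothetical cause for this report).
    df -h /dev/shm

    # Ask Open MPI to report why shmem component selection fails.
    mpirun --mca shmem_base_verbose 100 --help

    # On the host: relaunch the container with a larger /dev/shm.
    # (Image name is illustrative; substitute the real v1806 image.)
    docker run -it --shm-size=1g openfoam/openfoam-v1806

If /dev/shm is already large enough, the verbose output from the second command should narrow down which shmem component (mmap, posix, sysv) is being rejected and why.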

Edited Aug 13, 2018 by Hua Zen