redistributePar + dyM + lagrangian leads to crash
Summary
In a parallel run, the cells can be redistributed across the processors via
mpirun -np 4 redistributePar -parallel -overwrite
However, for a case with a dynamic mesh and Lagrangian particles, this leads to a crash at the very first time step after redistribution.
Steps to reproduce
Start any Lagrangian solver with a dynamic mesh in parallel. Stop at the first write-out. Run redistributePar in parallel, then restart the solver from the latest time step (see the sketch below).
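A minimal sketch of this sequence, assuming 4 MPI ranks and the sprayDyMFoam solver from the example case below; the initial decomposePar and the log-file names are illustrative:

```sh
#!/bin/sh
# Reproduction sketch: run -> redistribute -> restart (4 ranks assumed)
decomposePar                                            # initial decomposition
mpirun -np 4 sprayDyMFoam -parallel > log.run1 2>&1     # run until the first write-out
mpirun -np 4 redistributePar -parallel -overwrite > log.redistributePar 2>&1
mpirun -np 4 sprayDyMFoam -parallel > log.run2 2>&1     # restart: crashes at first time step
```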
Example case
This bug can easily be reproduced using sprayDyMFoam in combination with the aachenBomb tutorial case. The attached aachenBomb.zip contains the tutorial with minimal changes to include a dynamic mesh; just run the Allrun script. A sketch of the kind of change involved follows below.
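For orientation, the minimal change amounts to adding a constant/dynamicMeshDict along the following lines. This is an illustrative sketch only, not the contents of the attachment; the motion solver, motion function, and coefficient values are assumptions:

```
// Illustrative constant/dynamicMeshDict sketch (FoamFile header omitted).
// All entries are assumptions; the attached case contains the actual setup.
dynamicFvMesh       dynamicMotionSolverFvMesh;

motionSolverLibs    ("libfvMotionSolvers.so");

motionSolver        solidBody;

solidBodyMotionFunction oscillatingLinearMotion;

amplitude           (0 1e-3 0);     // m
omega               10;             // rad/s
```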
What is the current bug behaviour?
The solver crashes at the first time step after executing redistributePar.
What is the expected correct behavior?
The solver should run.
Relevant logs and/or images
The error message looks as follows. Ranks [2] and [3] print identical, interleaved stack traces; shown de-interleaved for readability:

```
#0  Foam::error::printStack(Foam::Ostream&) at ??:?
#1  Foam::sigSegv::sigHandler(int) at ??:?
#2  ? in /usr/lib64/libc.so.6
#3  Foam::tetIndices::faceTriIs(Foam::polyMesh const&, bool) const at ??:?
#4  Foam::particle::position() const at ??:?
#5  ?? at ??:?
#6  ?? at ??:?
#7  __libc_start_main in /usr/lib64/libc.so.6
#8  ?? at ??:?
[tauruslogin3:22316:0] Caught signal 11 (Segmentation fault)
[tauruslogin3:22317:0] Caught signal 11 (Segmentation fault)
```

The segmentation fault occurs in Foam::tetIndices::faceTriIs, called from Foam::particle::position.
Environment information
- OpenFOAM version : v1906
- Operating system : Ubuntu
- Hardware info : Intel CPU, Open MPI
- Compiler : gcc