v1912 over*DyMFoam tutorials fail on multi-node parallel

Do the v1912 tutorial cases overInterDyMFoam/floatingBody and overPimpleDyMFoam/simpleRotor work for you guys for a multi-node parallel run?

They both work fine for me in serial and on a single node in parallel. As soon as I try multi-node parallel runs, I get either scrambled garbage in the field variables (floatingBody) or a segFault/sigFpe on a random iteration (simpleRotor). It appears that data is not being properly transferred across compute nodes.

I am using Scotch decomposition and OpenMPI 4.0.0.
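For reference, the decomposition setup follows the stock tutorial layout. A minimal sketch of the system/decomposeParDict I am using (the subdomain count of 8 and the host file name are illustrative examples, not the exact values from my runs):

```
/* system/decomposeParDict -- sketch; subdomain count is an example */
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains  8;
method              scotch;

// Typical multi-node invocation (hypothetical host file):
//   decomposePar -overwrite
//   mpirun -np 8 --hostfile hosts overPimpleDyMFoam -parallel
```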

PIMPLE: iteration 1
[2] #0  Foam::error::printStack(Foam::Ostream&) at ??:?
[2] #1  Foam::sigFpe::sigHandler(int) at ??:?
[2] #2  ? in /lib64/libc.so.6
[2] #3  Foam::divide(Foam::Field<double>&, double const&, Foam::UList<double> const&) at ??:?
[2] #4  void Foam::divide<Foam::fvPatchField, Foam::volMesh>(Foam::GeometricField<double, Foam::fvPatchField, Foam::volMesh>&, Foam::dimensioned<double> const&, Foam::GeometricField<double, Foam::fvPatchField, Foam::volMesh> const&) at ??:?
[5] #0  Foam::error::printStack(Foam::Ostream&) at ??:?
[5] #1  Foam::sigFpe::sigHandler(int) at ??:?
[5] #2  ? in /lib64/libc.so.6
[5] #3  Foam::divide(Foam::Field<double>&, double const&, Foam::UList<double> const&) at ??:?
[5] #4   at ??:?
Edited by Greg Burgreen