Adjoint Smooth Sensitivities tutorial fails under WM_LABEL_SIZE=64
Summary
Running the tutorial $FOAM_TUTORIALS/incompressible/adjointOptimisationFoam/sensitivityMaps/motorBike with OpenFOAM compiled with label size 64 (instead of the default 32) leads to an MPI error.
The only modification to the tutorial is running on 2 processors rather than 20.
Steps to reproduce
Using OpenFOAM compiled with label size 64 (export WM_LABEL_SIZE=64), run the following tutorial:
$FOAM_TUTORIALS/incompressible/adjointOptimisationFoam/sensitivityMaps/motorBike
I used only a 2-processor hierarchical decomposition:
numberOfSubdomains 2;
method hierarchical;
coeffs
{
    n (2 1 1);
}
What is the current bug behaviour?
An MPI error occurs and the job does not finish.
What is the expected correct behavior?
The job completes successfully with a 64-bit label size.
Relevant logs and/or images
I have attached the full log file: log.adjointOptimisationFoam
Environment information
- OpenFOAM version : v2112
- Operating system : CentOS 8
- Compiler : GCC 8.4.0
Possible fixes
The issue appears to be related to the inter-processor communication in processorFaPatch.C.
Changing the sender call
void Foam::processorFaPatch::initGeometry()
{
    if (Pstream::parRun())
    {
        OPstream toNeighbProc
        (
            Pstream::commsTypes::blocking,
            neighbProcNo() // ,
            // 3*(sizeof(label) + size()*sizeof(vector)) <- Use automatically computed message size
        );
        ...
        ...
}
and the recipient call
void Foam::processorFaPatch::calcGeometry()
{
    if (Pstream::parRun())
    {
        {
            IPstream fromNeighbProc
            (
                Pstream::commsTypes::blocking,
                neighbProcNo() //,
                // 3*(sizeof(label) + size()*sizeof(vector)) <- Use automatically computed message size
            );
            ...
            ...
    }
appears to fix the issue. However, I think this only hides the underlying problem rather than fixing it.
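For what it's worth, the failure mode looks consistent with a receive buffer sized from the hard-coded estimate rather than from the actual incoming message. Below is a minimal plain-MPI sketch of the two strategies; this is illustrative only, not OpenFOAM's Pstream code (whose internals may differ), and the payload size and tag are arbitrary:

// Illustrative only: plain MPI, not OpenFOAM's Pstream. Contrasts a receive
// sized by a hard-coded estimate with one sized by probing the actual
// message, which is presumably what the "automatically computed message
// size" in the comments above refers to.
// Build and run: mpicxx sketch.cpp && mpirun -np 2 ./a.out
#include <mpi.h>
#include <vector>
#include <cstdio>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
    {
        // Sender: the receiver has no reliable way to know this size a priori
        std::vector<double> payload(1000, 1.0);
        MPI_Send
        (
            payload.data(),
            int(payload.size()),
            MPI_DOUBLE,
            1,              // destination rank
            0,              // message tag (arbitrary)
            MPI_COMM_WORLD
        );
    }
    else if (rank == 1)
    {
        // With a hard-coded buffer size, the receive buffer is fixed up
        // front; if the estimate is too small, MPI_Recv reports a truncation
        // error, which matches the observed failure. Probing first removes
        // the need for an estimate:
        MPI_Status status;
        MPI_Probe(0, 0, MPI_COMM_WORLD, &status);

        int count = 0;
        MPI_Get_count(&status, MPI_DOUBLE, &count);

        std::vector<double> buf(count);
        MPI_Recv
        (
            buf.data(),
            count,
            MPI_DOUBLE,
            0,
            0,
            MPI_COMM_WORLD,
            MPI_STATUS_IGNORE
        );

        std::printf("received %d doubles without a size estimate\n", count);
    }

    MPI_Finalize();
    return 0;
}

If something like this happens inside Pstream, then under 64-bit labels the estimate 3*(sizeof(label) + size()*sizeof(vector)) would simply be too small for the stream's actual on-wire layout, and removing it only sidesteps the wrong estimate rather than correcting it.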