Development issues - https://develop.openfoam.com/groups/Development/-/issues

---

https://develop.openfoam.com/Development/openfoam/-/issues/3109
**Odd use of ListListOps inplaceRenumber** (2024-02-29, Mark OLESEN)

Sighted in EnsightCellsIO.C - not clear what the compiler has even found.

---

https://develop.openfoam.com/Development/openfoam/-/issues/3108
**Add globalIndex info to globalMeshData** (2024-03-09, Mark OLESEN, milestone v2406)

Requested by @Mattijs - replace the total point/face/cell information with globalIndex to allow reuse in various other places without requiring communication each subsequent time.

---

https://develop.openfoam.com/Development/openfoam/-/issues/3107
**ENH: replace raw pointers with unique_ptr in faMatrix and fvMatrix** (2024-03-05, Kutalmış Berçin)

Placeholder to replace the raw pointers residing in faMatrix and fvMatrix with `std::unique_ptr`. The motivation is to gain the benefits that `std::unique_ptr` offers over raw pointers: automatic memory management, unshared ownership, resource safety, and expressiveness.

---

https://develop.openfoam.com/Development/openfoam/-/issues/3106
**snappyHexMesh parallel feature attraction** (2024-02-22, Mattijs Janssens)
### Summary
snappyHexMesh is not parallel consistent
### Steps to reproduce
Not sure. Visual inspection of code.
### What is the expected *correct* behavior?
Ideally the same behaviour in parallel and non-parallel runs.
### Environment information
- OpenFOAM version : v2312
- Operating system :
- Hardware info :
- Compiler :
### Possible fixes
Use mesh faces to access mesh face information.

---

https://develop.openfoam.com/Development/openfoam/-/issues/3105
**redistributePar (with cyclicAMI) to more processors** (2024-03-18, Mattijs Janssens)
### Summary
Changing decomposeParDict to use more processors and running `redistributePar -parallel` hangs.
### Steps to reproduce
```
cd tutorials/incompressible/pimpleFoam/laminar/mixerVesselAMI2D/mixerVesselAMI2D
decomposePar (into e.g. 4 processors)
```
In `system/controlDict` change to `startFrom latestTime` and `stopAt writeNow`. Change decomposeParDict to use more processors (e.g. 8) and run `mpirun -np 8 redistributePar -parallel`. This hangs - processors that already have a mesh get stuck in `Foam::fvMesh::init`; new processors get stuck in earlier code:
```
#15 0x00007f2c1eecb930 in Foam::IOobject::readAndCheckHeader(bool, Foam::word const&, bool, bool, bool) () from develop/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so
#16 0x00007f2c215fae69 in Foam::fvMesh::init(bool) () from develop/platforms/linux64GccDPInt32Opt/lib/libfiniteV
```
### Example case
See above.
### What is the current *bug* behaviour?
See above.
### What is the expected *correct* behavior?
Write redistributed mesh to new time directory, including on the new processors.
### Relevant logs and/or images
### Environment information
- OpenFOAM version : v2312
- Operating system :
- Hardware info :
- Compiler :
### Possible fixes
@mark

---

https://develop.openfoam.com/Development/openfoam/-/issues/3104
**Molecular diffusion in icoReactingMultiphaseInterFoam** (2024-02-21, Phil Namesnik)

### Summary
I'm working on evaporation of water and diffusion of water vapour in air and encountered a problem when using the icoReactingMultiphaseInterFoam solver with laminar flows. As a testcase I use a cuboidal 1D rod with dimensions 500 mm x 1 mm x 1 mm and a discretization of 2000 x 1 x 1 (simpleGrading 1,1,1). To keep it simple this rod is completely filled with air (at first no water phase is initialized). On the left end of the rod a fixedValue boundary condition is applied with a vapour mass fraction of 0.01, whereas on the right end a fixedValue of 0 is applied. All other boundaries are of type empty.
The expected behaviour is a diffusive transport of vapour species through air governed by Fick's second law. What actually happens is no transport at all.
### Steps to reproduce
Use a domain without any inflows or outflows and zero velocity leading to a dominant molecular diffusion. Set one vapour mass fraction boundary condition to a higher value than the rest of the domain. Switch to laminar model in turbulenceProperties and use `addDiffusion true;` in thermophysicalProperties.gas.
### Example case
[1D_diffusion.zip](/uploads/192c6df0d7b2a42a60e620bd89746a54/1D_diffusion.zip)
### What is the current _bug_ behaviour?
Molecular diffusion is not working in laminar cases in icoReactingMultiphaseInterFoam.
### What is the expected _correct_ behavior?
Water vapour diffuses from regions/boundaries with high vapour mass fraction to regions of lower mass fraction according to Fick's second law.
### Environment information
- OpenFOAM version : v2212
- Operating system : ubuntu
- Compiler : gcc
### Possible fixes
In `OpenFOAM-v2212/src/phaseSystemModels/multiphaseInter/phasesSystem/phaseModel/MultiComponentPhaseModel/MultiComponentPhaseModel.C:418` the mass diffusivity for the diffusion equation is calculated using only the turbulent viscosity `nut()`. For laminar cases `nut()` is set to 0, which deactivates diffusion. Changing `nut()` to `nuEff()` solves the problem by calculating the diffusion coefficient from both the molecular and turbulent viscosity.

---

https://develop.openfoam.com/Development/openfoam/-/issues/3103
**unable to use mpirun in precompiled windows version** (2024-03-04, 加成 唐)

Hi, it's maybe a stupid question and I'm new to OpenFOAM, but I can't use MPI to run a job in parallel with the precompiled OpenFOAM on the Windows platform.
I'm using `mpirundebug -normal -np 6 potentialFoam -parallel`
and I'm getting
```
Error encountered:
Unsupported WM_MPLIB setting : MSMPI
```
and I have already installed MS-MPI, as shown:
![image](/uploads/108b30df08e09bf572264bd473138d4e/image.png)
Thanks for helping!

---

https://develop.openfoam.com/Development/openfoam/-/issues/3102
**BUG: wallHeatFlux: inconsistent handling of 'useNamePrefix'** (2024-02-14, Kutalmış Berçin)

When the `useNamePrefix` option is enabled for the `wallHeatFlux` function object, the expected naming convention for the registered `volScalarField` is 'function-object-name:wallHeatFlux' (applicable for Linux).
During initialization (`ctor`), the `volScalarField` is registered using the name `scopedName(typeName)`. However, because `read(dict)` is invoked after the field registration, the default name for `scopedName(typeName)` becomes simply `typeName`.
Consequently, downstream processes, such as within the `execute()` function, encounter a lookup error when attempting to find a field named `scopedName(typeName)`.
To address this issue, the `read(dict)` function should be executed prior to the field registration.

---

https://develop.openfoam.com/Development/openfoam/-/issues/3101
**AOCC link errors for utilities depending on CGAL in OpenFOAM 2306/2312** (2024-02-27, Ning Li)
### Summary
When compiling OpenFOAM 2306 or 2312 with the AMD AOCC 4.1 compiler, there are link errors with a few OpenFOAM utilities that depend on CGAL. Specifically, paths to the dependent gmp and mpfr libraries are somehow not properly injected into the build system resulting in failure at link time.
### Steps to reproduce
We are seeing this from a Spack build:
> spack install -v openfoam@2306%aocc@4.1.0
I am undecided if this is an OpenFOAM issue or if this is an issue with its Spack recipe (so forgive me if this turns out to be the wrong place to file an issue). However, what I can say for sure is this problem did not exist in v2212 and earlier, and it was likely to be introduced by code changes in https://develop.openfoam.com/Development/openfoam/-/commit/74d65ed018b067065beb9353cc06cc35e52572ee. I'd like to have the developers' opinions on this.
### Relevant logs and/or images
```
ld.lld: error: undefined symbol: __gmpq_clear
>>> referenced by surfaceBooleanFeatures.C
>>> /tmp/root/spack-stage/spack-stage-openfoam-2306-temgrqjteus5nqfko72pqyrpz7hu5nre/spack-src/build/li
nux64AmdDPInt32-spack/applications/utilities/surface/surfaceBooleanFeatures/surfaceBooleanFeatures.o:(CGAL::Cartesian
KernelFunctors::Construct_point_3>::operator()(CGAL::Return_base_tag, CGAL::Gmpq c
onst&, CGAL::Gmpq const&, CGAL::Gmpq const&) const)
>>> referenced by surfaceBooleanFeatures.C
>>> /tmp/root/spack-stage/spack-stage-openfoam-2306-temgrqjteus5nqfko72pqyrpz7hu5nre/spack-src/build/li
nux64AmdDPInt32-spack/applications/utilities/surface/surfaceBooleanFeatures/surfaceBooleanFeatures.o:(CGAL::Cartesian
KernelFunctors::Construct_point_3>::operator()(CGAL::Return_base_tag, CGAL::Gmpq c
onst&, CGAL::Gmpq const&, CGAL::Gmpq const&) const)
>>> referenced by surfaceBooleanFeatures.C
>>> /tmp/root/spack-stage/spack-stage-openfoam-2306-temgrqjteus5nqfko72pqyrpz7hu5nre/spack-src/build/li
nux64AmdDPInt32-spack/applications/utilities/surface/surfaceBooleanFeatures/surfaceBooleanFeatures.o:(CGAL::PointC3>::~PointC3())
>>> referenced 549 more times
```
This is repeated many times for other gmp/mpfr symbols.
### Environment information
- OpenFOAM version : v2312|v2306
- Operating system : tested on rocky 8 linux but should apply to other OS
- Hardware info : not relevant
- Compiler : AOCC 4.1 (based on clang 16.0)
### Possible fixes
I don't have a proper fix yet. My workaround is to patch the affected OpenFOAM utilities' `Make/options` files, replacing the `$(CGAL_LIBS)` there with an actual path containing the proper -L and -l options pointing to the cgal/gmp/mpfr installations. Ideally the fix should be done at the locations where the value of `$(CGAL_LIBS)` gets populated, but I don't know how.

---

https://develop.openfoam.com/Development/openfoam/-/issues/3100
**misleading doc typo** (2024-02-21, zah p)

On this page:
https://www.openfoam.com/documentation/user-guide/3-running-applications/3.2-running-applications-in-parallel
it gives an example of coeffs `n (2 2 1);`, and then the next line for the simple method explains "Simple geometric decomposition in which the domain is split into pieces by direction, e.g. 2 pieces in the x direction, 1 in y etc."
but it's actually 2 pieces in x, 2 in y, and 1 in z.
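For that example, a consistent decomposeParDict fragment would look like this (a sketch following the user-guide form):

```
numberOfSubdomains  4;          // must equal 2 * 2 * 1

method              simple;

coeffs
{
    n               (2 2 1);    // 2 pieces in x, 2 in y, 1 in z
}
```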
Also, it would be nice to mention that the numbers should match the total number of subdomains: 2 x 2 x 1 = 4.

---

https://develop.openfoam.com/Development/openfoam/-/issues/3099
**Allow time-dependent diskDir in actuationDiskSource** (2024-02-13, Pete Bachant)

Wanted to throw this out there because I am going to start working on it, in case anyone can point me to a quick way to do this. My best guess thus far is to convert the `diskDir_` member from a `vector` to a `Function1`.

---

https://develop.openfoam.com/Development/openfoam/-/issues/3098
**Dynamic linker environment variables setup on Darwin** (2024-03-09, Alexey Matveichev)

### Summary
Folder existence check was introduced on Darwin in `_foamAddLib` function. The function fails, when multiple folders separated by colon are passed as an argument. Due to this bug `FOAM_USER_LIBBIN` and `FOAM_SITE_LIBBIN` are not added to `DYLD_LIBRARY_PATH`.
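The failure mode can be demonstrated in any POSIX shell (directory names are illustrative):

```shell
# Sketch of the failure: '-e' treats the colon-joined value as a single
# path name, which never exists, so the export branch is skipped.
# Directory names are illustrative.
dir1=/usr
dir2=/tmp

[ -e "$dir1" ] && echo "dir1 found"
[ -e "$dir2" ] && echo "dir2 found"
[ -e "$dir1:$dir2" ] || echo "colon-joined path not found"
```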
### Steps to reproduce
1. Setup OpenFOAM(R) environment.
2. Look at `DYLD_LIBRARY_PATH`: neither `FOAM_USER_LIBBIN`, nor `FOAM_SITE_LIBBIN` are there.
### What is the current *bug* behaviour?
`FOAM_USER_LIBBIN` and `FOAM_SITE_LIBBIN` are missing from `DYLD_LIBRARY_PATH`, so user-compiled libraries are not found by dynamic linker.
### What is the expected *correct* behavior?
Libraries located in `FOAM_USER_LIBBIN` or `FOAM_SITE_LIBBIN` should be found by dynamic linker.
### Environment information
- OpenFOAM version : v2312
- Operating system : macOS
- Compiler : clang
### Possible fixes
There are two possible approaches to fix the issue.
1. Remove `-e` check from `_foamAddLib` function. I.e. change this (`etc/config.sh/functions:92`)
```sh
_foamAddLib()
{
case "$1" in (/?*)
if [ -e "$1" ]
then
export FOAM_LD_LIBRARY_PATH="${1}${FOAM_LD_LIBRARY_PATH:+:}${FOAM_LD_LIBRARY_PATH}"
export DYLD_LIBRARY_PATH="$FOAM_LD_LIBRARY_PATH"
fi
esac
}
```
to
```sh
_foamAddLib()
{
case "$1" in (/?*)
export FOAM_LD_LIBRARY_PATH="${1}${FOAM_LD_LIBRARY_PATH:+:}${FOAM_LD_LIBRARY_PATH}"
export DYLD_LIBRARY_PATH="$FOAM_LD_LIBRARY_PATH"
esac
}
```
The same should be applied to `csh` functions.
2. Do not pass multiple folders to the `_foamAddLib` function, effectively splitting the single function call into multiple calls. I.e. instead of this (`etc/config.sh/setup:233`):
```sh
_foamAddLib "$FOAM_USER_LIBBIN:$FOAM_SITE_LIBBIN"
```
use this
```sh
_foamAddLib "$FOAM_SITE_LIBBIN"
_foamAddLib "$FOAM_USER_LIBBIN" # User library folder has higher priority
```
`grep -r` in the `etc` folder shows that this line is the single invocation of `_foamAddLib` with a multi-folder argument, so this approach may be easier.

---

https://develop.openfoam.com/Development/openfoam/-/issues/3097
**dubious cumulative area calculation in triangulatedPatch** (2024-02-07, Mark OLESEN)

By code inspection this appears incorrect:
```c
const scalar offset = procSumWght[Pstream::myProcNo()];
forAll(triWght, i)
{
if (i)
{
// Convert to cumulative
triWght[i] += triWght[i-1];
}
// Apply processor offset
triWght[i] += offset;
}
// Normalise
const scalar sumWght = procSumWght.back();
for (scalar& w : triWght)
{
w /= sumWght;
}
```
Although the proc offset is added _after_ the local accumulation, that value is added again by the subsequent `triWght[i] += triWght[i-1]` steps.
I think the correct code would look like this:
```c
// Convert to cumulative
for (label i = 1; i < triWght.size(); ++i)
{
triWght[i] += triWght[i-1];
}
const scalar offset = procSumWght[Pstream::myProcNo()];
const scalar totalArea = procSumWght.back();
// Apply processor offset and normalise
for (scalar& w : triWght)
{
w = (w + offset) / totalArea;
}
```

---

https://develop.openfoam.com/Development/openfoam/-/issues/3096
**replace listCombineReduce with allGatherList etc.** (2024-03-12, Mark OLESEN)

In places such as CloudIO.C, patchInjectionBase.C, turbulentDFSEMInletFvPatchVectorField.C (probably more), there is use of listCombineReduce to collect values from each processor into a list. This particular usage is directly equivalent to using allGatherList, which also corresponds to an MPI intrinsic for primitive types.

---

https://develop.openfoam.com/Development/openfoam/-/issues/3095
**ensightCloud function object** (2024-02-02, Mark OLESEN)

Additional function-object to generate EnSight output for lagrangian. Cross-ref EP2316.

---

https://develop.openfoam.com/Development/openfoam/-/issues/3094
**include cloudFunction results in vtkCloud** (2024-02-02, Mark OLESEN)

Issue raised in [cfd-online](https://www.cfd-online.com/Forums/openfoam-post-processing/253820-write-cloudfunction-fields-vtk-during-runtime.html) - makes sense to also write things like Reynolds number etc.

---

https://develop.openfoam.com/Development/openfoam/-/issues/3093
**Restart of lagrangian particles fails for reactingParcelFoam dependent on domain decomposition** (2024-01-29, Uwe Janoske)

**Case particles impinging on cube:**
Particles are impinging on a cube and form a liquid film with the solver reactingParcelFoam (V22/12).
The duration of the spray is 1000 s. The simulation (on a single processor) shows the impingement, the formation of a film, and after approx. 1.5 s the re-entrainment of particles back into the flow. The simulation is split into two runs, 0-1 s and 1-2 s, where the second is a restart from the results at 1 s (starting from latestTime).
The results (see massfilm.jpg) differ depending on single/parallel simulation:
- single processor: the mass of the film increases after the restart (as expected)
- 4 proc. (simple 4/1/1), a distribution where the particle positions for the injection in reactingCloudProperties are on one processor: shows the same results as on a single processor; the mass of the film increases after the restart
- 4 proc. (simple 1/2/2), a distribution where the particle injection positions are not on one processor: fails. After the restart no particles are injected and the mass drops.
![massfilm](/uploads/842fe65009eb81231dfca4e2b4599866/massfilm.jpg)
This behaviour is reproducible with the following steps:
1. Use endtime = 1 s in controlDict. Run reactingParcelFoam > log1
2. Change endtime = 2 s in controlDict. Run reactingParcelFoam > log2
3. Evaluate result: cat log1 log2 > log
4. ./auswert_particles and look at the mass in the system
This can be run for three different cases, single, parallel (1/2/2) and parallel (4/1/1) distribution in simple decomposition.
Summary:
I.e. depending on the domain decomposition, the restart either works or does not.
### Testcase
[testcase_cube.tar.gz](/uploads/51ff3d806fe99e599e02b7407b53a230/testcase_cube.tar.gz)
### Environment information
OpenFOAM version : v2212
Operating system : ubuntu
Hardware info : different systems
Compiler : gcc
### Possible fixes
If one removes the uniform directory in processor?/1 in all directories and sets the start of injection (SOI) in reactingCloudProperties equal to 1, the particles seem to be calculated again, independent of the decomposition.

---

https://develop.openfoam.com/Development/openfoam/-/issues/3092
**MPI_Send MPI_ERR_COUNT: invalid count argument 140 million cells (bug)** (2024-01-29, Ilya Elizarov)

### Summary
I have encountered a problem with MPI_Send routine while running a 140 million cells case on the cluster: Virgo cluster at GSI Helmholtz Centre for Heavy Ion Research https://hpc.gsi.de/virgo/
I'm using chtMultiRegionSimpleFoam to solve a heat transfer problem for a multilayer PCB with vias in great detail. The solver goes through a few regions with no problem, but when it proceeds to a very big region with 141 211 296 cells, it crashes with the error below.
I have tried to increase the number of subdomains, e.g. 1024, 2048, 4096 CPUs (all hierarchical method) and 1024 (with ptscotch), but the error persists. This makes me think that the error is caused by the absolute number of cells in the region, independent of the decomposition. A few more things were tried as a solution, but with no success: https://www.cfd-online.com/Forums/openfoam-solving/253681-mpi_send-mpi_err_count-invalid-count-argument.html
### Steps to reproduce
Basically, any solver with a case of comparable mesh size should fail. Surely, to run such a case, a lot of computational resources are required, which is why it's difficult to reproduce.
Feel free to contact me if you need more details on the Virgo cluster that I'm using. Meanwhile, I will ask OpenFOAM community if anybody could test it on a cluster with comparable size.
### Example case
My case can be found at https://sf.gsi.de/f/4db522c9b39b4125855f/?dl=1 (24,2 Mb)
_Requirements: 1024 CPUs (it's with multithreading), 4 Gb RAM per processor, Slurm workload manager, OpenFOAM installed with WM_LABEL_SIZE=64_
Simply run ./Allrun script
The case uses collated file format.
### What is the current _bug_ behaviour?
It seems that MPI_Send is called with a negative count argument. The count argument is a signed int https://www.open-mpi.org/doc/v4.1/man3/MPI_Send.3.php of 32-bit size, so it is likely overflowing (MPI_Send with a count > INT_MAX).
### What is the expected _correct_ behavior?
Basically, the solver should proceed with the very big region if there's no hardware limitations like in this situation.
### Relevant logs and/or images
chtMultiRegionSimpleFoam fails with the following error message:
```plaintext
<...>
[lxbk1164:3797445] *** An error occurred in MPI_Send
[lxbk1164:3797445] *** reported by process [710282087,0]
[lxbk1164:3797445] *** on communicator MPI_COMM_WORLD
[lxbk1164:3797445] *** MPI_ERR_COUNT: invalid count argument
[lxbk1164:3797445] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[lxbk1164:3797445] *** and potentially your MPI job)
slurmstepd: error: *** STEP 19265579.0 ON lxbk1164 CANCELLED AT 2024-01-19T18:02:26 ***
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
<...>
```
Full logs that were generated by the case can be downloaded at https://sf.gsi.de/f/66935ac60645422da948/?dl=1 log.\* files are ordinary logs that OpenFOAM generates, Slurm-\*.out are logs from the workload manager.
### Environment information
- OpenFOAM version : v2306
- Operating system : CentOS-based
- Hardware info : https://hpc.gsi.de/virgo/user-guide/overview/hardware.html
- OpenMPI : 3.1.6, 4.1.2 (from ThirdParty-v2306), 5.0.0 (tried with three of them)
- Slurm: 21.08.8-2
- Compiler : gcc-toolset-13, gcc 10.2.0 (tried with the two)
### Possible fixes
The PStream interface accepts a std::streamsize and implicitly casts it to the int argument of the MPI interfaces, performing a narrowing conversion: https://develop.openfoam.com/Development/openfoam/-/blob/master/src/Pstream/mpi/UOPstreamWrite.C#L56
```plaintext
<source>:3:27: error: static assertion failed
3 | static_assert(sizeof(int) == sizeof(std::streamsize));
| ~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~
<source>:3:27: note: the comparison reduces to '(4 == 8)'
```
(from https://godbolt.org/)
OpenFOAM would have plenty of options to deal with this situation by e.g. issuing multiple MPI_Send or choose a larger MPI_Datatype.
P.S. It seems that there's a problem adding a bug label: /label ~bug

---

https://develop.openfoam.com/Development/openfoam/-/issues/3091
**OpenFOAM v2306 Sigma Turbulence Model Wrong Behavior** (2024-02-14, Jan Gärtner)
### Summary
In OpenFOAM v2306 a new turbulence model, the Sigma model, has been introduced. However, using OpenFOAM v2306, compiled on Ubuntu 22.04 LTS with GCC 11.4, results in wrong turbulent viscosity fields. At our institute, we had been using the Sigma model before it was introduced in the main OpenFOAM branch and had already experienced problems in prior OpenFOAM versions if the compiler GCC 11.4 was used. This could be fixed in two ways:
1. Compile with Gcc 9.5
2. Modify the calculation of the Ssigma() term
We would encourage modifying the Ssigma() function to avoid this bug with the Gcc 11.3 compiler. Also, it would be great to know why this error appears in newer OpenFOAM versions.
![nutComparison](/uploads/f28570b5b79d5824787953df0392819f/nutComparison.png)
### Steps to reproduce
We can provide a simple jet case where you can clearly see the issue in the turbulent viscosity.
Alternatively, use a case with a shear layer or jet flow and use the provided program, which calculates the turbulent viscosity and writes out the result: [src.tar.gz](/uploads/fdef1eaf5e3b77c9dc64d04a35da9468/src.tar.gz)
### What is the current *bug* behaviour?
As visible in the attached image, the turbulent viscosity is not smooth and shows significantly large outliers.
### What is the expected *correct* behavior?
The turbulent viscosity should look like it is displayed on the right hand side of the attached image.
### Environment information
- OpenFOAM version : v2306
- Operating system : Ubuntu 22.04 LTS
- Hardware info : AMD processor
- Compiler : GCC 11.4
### Possible fixes
In the Ssigma() function located in `DESModel::Ssigma()` (DESModel.C line 88), split the calculation of G at line 110 into two parts:
```c++
const volTensorField gradUT = gradU.T();
const volTensorField G = gradUT & gradU;
```
For some reason, it then works correctly.
---

https://develop.openfoam.com/Development/openfoam/-/issues/3090
**mapped boundary conditions in combination with partial mesh motion** (2024-01-31, Mattijs Janssens)

### Functionality to add/problem to solve
When there is any mesh motion (even if the points are kept the same) all the inter-region mapping information is thrown away. Instead keep the mapping information if the actual coupling locations stay the same.
### Target audience
Moving-mesh cases with mapping where the mapping is inside a static portion of the mesh.
### Proposal
Check actual locations
### What does success look like, and how can we measure that?
Speed up.