Potential degradation/issue/bug in snappyHexMesh when creating cyclic patches
I came across this (potential) change in behavior in snappyHexMesh when trying to mesh a pipe with some components inside, where the inlet and outlet patches have to be periodic.
The original mesh was created some years ago with the 2.1.1 release (no longer available on our machine, so I can't recreate it or double-check), by specifying the inlet and outlet patches of the pipe as cyclic in blockMeshDict. The mesh was then created successfully by snappy, which forced the inlet and outlet patches to have the same mesh (see Fig. 1).
If the same procedure is used with 2.3.0 or with the latest v1912, the mesh comes out distorted (see Fig. 2).
If the inlet and outlet patches in blockMeshDict are set back to "type patch", the grid also goes back to being nice and smooth (see Fig. 3), but then the two faces do not match and "type cyclic" can't be used (cyclicAMI works, but it is not an option here for solver reasons).
I've then tried to mesh a longer pipe, with the inlet and outlet patches moved away from the components inside the pipe and set to "type patch", so as to end up with a regular surface mesh at both ends of the grid (see Figs. 4 and 5). However, when I then try to use the createPatch utility to switch from "type patch" to "type cyclic", OF complains about matching the two patches, even if the separation vector is provided and regardless of the matchTolerance.
Any idea?
(input dictionaries are all the same in the different tests performed).
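For reference, a minimal sketch of the kind of blockMeshDict boundary entry in question; the patch names, face labels and separation vector below are placeholders, not the actual case values:

```
// blockMeshDict boundary excerpt (sketch; names, faces and vector are placeholders)
boundary
(
    inlet
    {
        type             cyclic;
        neighbourPatch   outlet;
        transform        translational;
        separationVector (0 0 12.5);    // inlet + separationVector = outlet
        faces            ((0 3 2 1));
    }
    outlet
    {
        type             cyclic;
        neighbourPatch   inlet;
        transform        translational;
        separationVector (0 0 -12.5);
        faces            ((4 5 6 7));
    }
);
```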
Activity
- Riccardo Rossi changed the description
- Riccardo Rossi changed the description
- Author
Just found out the separation vector has to be negative in createPatchDict for the outlet patch.
The issue with specifying the cyclic type directly in blockMesh (the preferred and more versatile method) still remains.
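For anyone hitting the same thing, a minimal createPatchDict sketch of the setup (the old and new patch names and the separation vector are placeholders; note the sign flip on the outlet side):

```
// system/createPatchDict excerpt (sketch; names and vector are placeholders)
pointSync false;

patches
(
    {
        name        cyclic_half0;       // rebuilt from the existing inlet
        patchInfo
        {
            type             cyclic;
            neighbourPatch   cyclic_half1;
            transform        translational;
            separationVector (0 0 12.5);
        }
        constructFrom patches;
        patches       (inlet);
    }
    {
        name        cyclic_half1;       // rebuilt from the existing outlet
        patchInfo
        {
            type             cyclic;
            neighbourPatch   cyclic_half0;
            transform        translational;
            separationVector (0 0 -12.5);   // negative on the outlet side
        }
        constructFrom patches;
        patches       (outlet);
    }
);
```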
- Maintainer
This probably has to do with some of the mesh motion (for the surface and feature snapping) not using boundary conditions, so the point motion is not synchronised on both sides. Ideally it would construct 'cyclicPointPatchField' boundary conditions on the cyclic patches and those would guarantee consistent motion on either side. If you run without -overwrite (so the castellated mesh is kept separate from the snapped mesh), does the castellated mesh obey the cyclics? Run e.g. checkMesh. Probably the topological information (refinement level of cells, faces and points) is correctly synchronised and it is just the geometric information.
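In case it helps, a sketch of the two-step check (standard utilities; the exact time indices depend on which phases are enabled):

```bash
# Run snappy without -overwrite so each phase is kept as its own time directory,
# then check the castellated and snapped meshes separately.
snappyHexMesh          # writes time 1 (castellated), time 2 (snapped), ...
checkMesh -time 1      # do the cyclics still match on the castellated mesh?
checkMesh -time 2      # and on the snapped mesh?
```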
- Author
Yes, in two steps the castellated mesh looks fine (i.e. it obeys the cyclics) and the snapped one is distorted as in the picture.
checkMesh for Time=2 is as follows:
```
Time = 2

Mesh stats
    points:           1206548
    faces:            3049482
    internal faces:   2860990
    cells:            933429
    faces per cell:   6.332
    boundary patches: 5
    point zones:      0
    face zones:       0
    cell zones:       0

Overall number of cells of each type:
    hexahedra:     757713
    prisms:        16880
    wedges:        0
    pyramids:      0
    tet wedges:    199
    tetrahedra:    0
    polyhedra:     158637
    Breakdown of polyhedra by number of faces:
        faces   number of cells
            4   18663
            5   18518
            6   42769
            7   338
            8   193
            9   43527
           10   49
           11   244
           12   20441
           13   28
           14   95
           15   12850
           16   3
           17   29
           18   884
           21   6

Checking topology...
    Boundary definition OK.
    Cell to face addressing OK.
    Point usage OK.
    Upper triangular ordering OK.
    Face vertices OK.
    Number of regions: 1 (OK).

Checking patch topology for multiply connected surfaces...
    Patch         Faces    Points   Surface topology
    z0            14546    15813    ok (non-closed singly connected)
    z1            14546    15813    ok (non-closed singly connected)
    pipe          22168    26883    ok (non-closed singly connected)
    support       7429     8823     ok (non-closed singly connected)
    turbulator    129803   154641   ok (non-closed singly connected)

Checking faceZone topology for multiply connected surfaces...
    No faceZones found.

Checking basic cellZone addressing...
    No cellZones found.

Checking geometry...
    Overall domain bounding box (-11.05 -11.05 -0.625003) (11.0878 11.05 11.9512)
    Mesh has 3 geometric (non-empty/wedge) directions (1 1 1)
    Mesh has 3 solution (non-empty) directions (1 1 1)
    Boundary openness (2.3083e-16 -1.46874e-16 -8.96945e-16) OK.
    Max cell openness = 4.19123e-16 OK.
    Max aspect ratio = 11.4067 OK.
    Minimum face area = 0.000155659. Maximum face area = 0.790458. Face area magnitudes OK.
    Min volume = 4.13284e-05. Max volume = 0.260785. Total volume = 4534.6. Cell volumes OK.
    Mesh non-orthogonality Max: 64.965 average: 11.5921
    Non-orthogonality check OK.
    Face pyramids OK.
 ***Max skewness = 6.20935, 1 highly skew faces detected which may impair the quality of the results
  <<Writing 1 skew faces to set skewFaces
    Coupled point location match (average 5.97917e-10) OK.

Failed 1 mesh checks.

End
```
So there is no chance to get the mesh back as in 2.1.1?
- Author
As a workaround to using cyclic directly in blockMesh, and as mentioned at the beginning of the post, I'm keeping some separation between the inlet and outlet sections of the pipe and the object within, so as to end up with identical grids, and then using createPatch to make the mesh cyclic between inlet and outlet.
The same case I'm running used to work with a coarser mesh, but after refining it OF crashes while creating the mesh, with the error:
```
[1] --> FOAM FATAL ERROR:
[1] face 0 area does not match neighbour by 0.00800801% -- possible face ordering problem.
patch:procBoundary1to16throughcyclic0 my area:6.02228e-08 neighbour area:6.0218e-08 matching tolerance:3.1005e-12
Mesh face:904083 vertices:4((-0.00223007 0.0096668 -0.0224) (-0.00222837 0.00991044 -0.0224) (-0.0019824 0.00991684 -0.0224) (-0.00198335 0.00967149 -0.0224))
If you are certain your matching is correct you can increase the 'matchTolerance' setting in the patch dictionary in the boundary file.
Rerun with processor debug flag set for more information.
[1]
[1]     From function void Foam::processorPolyPatch::calcGeometry(Foam::PstreamBuffers &)
[1]     in file meshes/polyMesh/polyPatches/constraint/processor/processorPolyPatch.C at line 284.
[1]
FOAM parallel run exiting
[1]
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 1
```
However, as you can see in the picture, the grid overlaps almost perfectly between the two sections, and even increasing the matchTolerance to 10 didn't work.
Also, it appears the matchTolerance reported in the error is not the one specified in the boundary file.
Last but not least, when running in serial the solver starts with no issues.
Note I'm using v1606 here, for the sake of a customized old solver.
- Maintainer
- Workaround: use cyclicAMI (one-to-many, area-weighted interpolation; it should revert to cyclic behaviour for a 1:1 matching). A minimal patch entry is sketched at the end of this comment.
- The message comes from the 'processorCyclic', so probably that is where the matchTolerance should be adapted (in the decomposed polyMesh/boundary dictionaries). See also #1704 (fixed in 7a063c58 very recently).

But still - the distortion is not as it should be, irrespective of whether the cyclics work. Is there a simple testcase (the simpler the better)?
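A sketch of the cyclicAMI workaround's patch entry (for instance via createPatchDict's patchInfo; names and vector are placeholders):

```
// patchInfo sketch for the cyclicAMI workaround (names/vector are placeholders)
inlet
{
    type             cyclicAMI;
    neighbourPatch   outlet;
    transform        translational;
    separationVector (0 0 12.5);
}
```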
- Author
Thanks for the update.
I'll try the workaround and I'll also put together the simplest possible test case.
Meanwhile, I've recompiled the 2.1.1 on the cluster and:
1 - Everything works in serial (see resulting mesh at inlet/outlet patches in the picture)
2 - In parallel, with the same setup (i.e. cyclic specified in blockMeshDict) I get:
```
Surface refinement iteration 0
------------------------------

Marked for refinement due to surface intersection : 3058 cells.
Determined cells to refine in = 0 s
Selected for refinement : 3058 cells (out of 33880)
[1] Cannot find point in pts1 matching point 0 coord:(0.257289 0.257289 58.05) in pts0 when using tolerance 3.62299e-05
[1] Searching started from:16 in pts1
[1] Compared coord:(0.513473 10.7608 -12.4) with difference to point 71.2292
[1] Cannot find point in pts1 matching point 11 coord:(0.769658 0.257289 58.05) in pts0 when using tolerance 3.62299e-05
```
- Author
Concerning the workarounds you suggested, @Mattijs:
1 - Not an option, for the sake of the customized solver, as mentioned at the beginning of the post.
2 - I'm a little confused. I've just checked, and it looks like the matchTolerance isn't inherited by the processor* folders from the constant/polyMesh/boundary file, and that's why the solver is trying to use the default 1e-4 tolerance instead (set by default, I guess, for each processor folder). Why is it so? And do I have to change it manually, processor by processor? I've tried to include the desired tolerance in the decomposeParDict file, but it seems not to be passed to the processors.
- Maintainer
processor patches and processorCyclic patches get created out of nothing (since they depend on the actual decomposition). There is unfortunately no template. You'll have to change it manually in -all- processor*/constant/polyMesh/boundary files. Or maybe there is some clever use of foamDictionary possible.
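For example, something along these lines might do it (a sketch only; it assumes the processorCyclic patch names contain 'through', as in the error above, and that foamDictionary's entry0 access to the boundary list works on your version):

```bash
#!/bin/bash
# Sketch: raise matchTolerance on every processorCyclic patch after decomposePar.
# Assumes processorCyclic patch names contain 'through'
# (e.g. procBoundary1to16throughcyclic0); adjust the pattern and tolerance.
tol=1e-2

for b in processor*/constant/polyMesh/boundary
do
    # list the patch names in this boundary file, keep the processorCyclic ones
    for p in $(foamDictionary -entry entry0 -keywords "$b" | grep through)
    do
        foamDictionary -entry "entry0/$p/matchTolerance" -set "$tol" "$b"
    done
done
```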
- Author
Yes, it shouldn't be a hassle to come up with a bash script for it, but is it correct after all?
If the mesh is created with a certain tolerance, shouldn't the parallel run use the same tolerance as the serial execution does?
Am I missing something?
- Author
While waiting for the debugging of how the new releases of snappy handle the cyclic patch from blockMesh, I managed to change the matchTolerance in all processor* folders on the fly when running snappy in parallel with 2.1.1.
However, I still get:
```
[0] Cannot find point in pts1 matching point 0 coord:(-0.511263 9.7361 57.05) in pts0 when using tolerance 0.724598
[0] Searching started from:4 in pts1
[0] Compared coord:(-0.767448 11.017 -11.4) with difference to point 68.4625
```
Running in serial works like a charm instead.
- Author
Since the error above occurs with the cyclic patch handled correctly in 2.1.1, I'm assuming it will show up in the latest releases as well, once they are fixed, when running in parallel.
Therefore, may I ask if you have any suggestions on how to solve it?
- Mattijs Janssens assigned to @Mattijs
- Maintainer
The problem seems to be related to the parallel operation of the code. This requires any face/edge/point data to be correctly synchronised (faces are shared between two processors/patches, edges and points between two or more). In adding this synchronisation it does not account for the separation vector of parallel cyclics (the separation vector can only be applied to positions, not to vectors).
Looking into synchronising the positions taking the separation vectors into account. Hopefully I can push something which works better soon.
- Mattijs Janssens mentioned in commit c06c63f4
- Maintainer
Above commit makes your case work (though tested only in serial). Now all the smoothing & attraction is done using correctly transformed locations. In parallel there should be no problem as long as the owner and neighbour of a cyclic are on the same processor (this can be enforced using the decomposition constraints). If not, it introduces special 'processorCyclic' patches which might not have the correct behaviour (they should use the communication of a 'processor' patch but the transformation of a 'cyclic' patch). Feedback welcome.
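To keep both halves of the cyclic on the same processor, presumably something like this in decomposeParDict (patch names are placeholders):

```
// system/decomposeParDict excerpt (sketch; patch names are placeholders)
constraints
{
    patches
    {
        type    preservePatches;
        patches (inlet outlet);     // both halves of the cyclic
    }
}
```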
- Author
Thanks a lot for the effort and commit.
If it's ok, I'd like to ask you for a bit of guidance to test it: the OF installation I'm using is on a cluster where I don't have root access.
Would it work to recompile snappy in a local folder, using the 7 modified files from the commit, also referenced locally?
- Maintainer
Sure, they should easily fit onto an existing installation. Just make a copy of snappyHexMesh (found in $FOAM_UTILITIES) and add the changed files. Add them to Make/files and change the resulting executable name (EXE =). The only problem with these local fixes is that they generally do not exit cleanly (the static destructor of the debug switches gets into problems). If this is a problem, make sure not to include any local file that defines a runTime selection table or debug switch. Or go one step further and include the rest of the src/mesh/snappyHexMesh library in your executable, so you don't need to link to the original at all.
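Roughly (a sketch of the steps above; the paths follow the standard layout, adapt to your installation):

```bash
# Copy the stock snappyHexMesh app and overlay the changed files from the commit
cp -r $FOAM_UTILITIES/mesh/generation/snappyHexMesh ~/mySnappyHexMesh
cd ~/mySnappyHexMesh
# ...copy the modified files from the commit over the originals...

# In Make/files: add any new source files and rename the target, e.g.
#   EXE = $(FOAM_USER_APPBIN)/mySnappyHexMesh
wmake
```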
- Mattijs Janssens mentioned in commit 8594fb43
- Mattijs Janssens mentioned in issue #1784 (closed)
- Author
I've managed to compile snappyHexMesh with the modified files under the new v2006, using all sources to avoid the local-files problem you've mentioned.
In serial mode, everything goes fine up to the end of the layer addition phase, but I then get the first error below. The mesh is written after the snap phase and the cyclic patches look ok.
If I turn the layer phase off, the same error occurs, but this time while the mesh is snapped, and only the refined mesh is written to disk.
In parallel mode, the modified snappy crashes with the second error.
```
#0  Foam::error::printStack(Foam::Ostream&) at ??:?
#1  Foam::sigSegv::sigHandler(int) at ??:?
#2  ? in /lib64/libpthread.so.0
#3  Foam::globalPoints::reverseMeshPoints(Foam::cyclicPolyPatch const&) at ??:?
#4  Foam::globalPoints::receivePatchPoints(bool, Foam::Map<int> const&, Foam::List<int> const&, Foam::PstreamBuffers&, Foam::HashSet<int, Foam::Hash<int> >&) at ??:?
#5  Foam::globalPoints::calculateSharedPoints(Foam::Map<int> const&, Foam::List<int> const&, bool, bool) at ??:?
#6  Foam::globalPoints::globalPoints(Foam::polyMesh const&, Foam::PrimitivePatch<Foam::IndirectList<Foam::face>, Foam::Field<Foam::Vector<double> > const&> const&, bool, bool) at ??:?
#7  Foam::globalMeshData::calcGlobalPointSlaves() const at ??:?
#8  Foam::globalMeshData::globalPointSlaves() const at ??:?
#9  Foam::syncTools::getMasterPoints(Foam::polyMesh const&) at ??:?
#10 Foam::meshRefinement::printMeshInfo(bool, Foam::string const&) const at ??:?
#11 ? at ??:?
#12 ? at ??:?
#13 __libc_start_main in /lib64/libc.so.6
#14 ? at ??:?
```
```
[12]
[12]
[12]     From const T &Foam::HashTable<T, Key, Hash>::at(const Key &) const [with T = int, Key = int, Hash = Foam::Hash<int>]
[12]     in file lnInclude/HashTableI.H at line 89.
[12]
FOAM parallel run exiting
[12]
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 12

#0  Foam::error::printStack(Foam::Ostream&) at ??:?
#1  Foam::sigSegv::sigHandler(int) at ??:?
#2  ? in /lib64/libpthread.so.0
#3  Foam::UOPstream::write(int) at ??:?
#4  Foam::operator<<(Foam::Ostream&, int) at ??:?
#5  Foam::UList<Foam::Vector<double> >::writeList(Foam::Ostream&, int) const at ??:?
#6  Foam::processorCyclicPolyPatch::initOrder(Foam::PstreamBuffers&, Foam::PrimitivePatch<Foam::SubList<Foam::face>, Foam::Field<Foam::Vector<double> > const&> const&) const at ??:?
#7  Foam::polyTopoChange::reorderCoupledFaces(bool, Foam::polyBoundaryMesh const&, Foam::UList<int> const&, Foam::UList<int> const&, Foam::Field<Foam::Vector<double> > const&) at ??:?
#8  Foam::polyTopoChange::compactAndReorder(Foam::polyMesh const&, bool, bool, bool, int&, Foam::Field<Foam::Vector<double> >&, Foam::List<int>&, Foam::List<int>&, Foam::List<Foam::objectMap>&, Foam::List<Foam::objectMap>&, Foam::List<Foam::objectMap>&, Foam::List<Foam::objectMap>&, Foam::List<Foam::objectMap>&, Foam::List<Foam::objectMap>&, Foam::List<Foam::objectMap>&, Foam::List<Foam::objectMap>&, Foam::List<Foam::Map<int> >&, Foam::List<int>&, Foam::List<int>&, Foam::List<Foam::Map<int> >&) at ??:?
#9  Foam::polyTopoChange::changeMesh(Foam::polyMesh&, bool, bool, bool, bool) at ??:?
#10 ? at ??:?
#11 __libc_start_main in /lib64/libc.so.6
#12 ? at ??:?
```
- Maintainer
It might have to do with extruding an edge into a (boundary) face when generating layers. It seems the boundary face gets put in the 'wrong' patch, since the code assumes there is only one patch communicating with a remote processor. In your case there are two: a normal processor patch and then the additional processorCyclic.
See #1796
- Author
Any chance to fix it?
- Author
Do you see any direction to move forward with this topic, @Mattijs?
- Maintainer
There is no easy fix. The locations of points on the processor boundaries already seem to differ after the "Writing patch smoothed mesh to time" message (when running with the debug flag). Somewhere in the patch smoothing + undoing it does not use the correct transformation. This requires more digging.