- 12 Jul, 2019 1 commit
-
-
Mark Olesen authored
- now catch these and emit a warning. Still need to investigate the root cause in the caller(s) or regionSplit.
-
- 15 Feb, 2019 1 commit
-
-
Mark Olesen authored
- reduced clutter when iterating over containers
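  Presumably this refers to range-for style loops; a small hypothetical
  illustration (not taken from the commit itself) of the kind of clutter
  reduction meant here:

      // index-based iteration with the forAll macro:
      forAll(patchIDs, i)
      {
          const label patchi = patchIDs[i];
          // ...
      }

      // iterating over the values directly:
      for (const label patchi : patchIDs)
      {
          // ...
      }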
-
- 13 Feb, 2019 2 commits
-
-
Mark Olesen authored
- reduced clutter when iterating over containers
-
Mark Olesen authored
-
- 06 Feb, 2019 3 commits
-
-
OpenFOAM bot authored
-
OpenFOAM bot authored
-
OpenFOAM bot authored
-
- 21 Jan, 2019 1 commit
-
-
Mark Olesen authored
-
- 27 Sep, 2018 1 commit
-
-
Mark Olesen authored
- nBoundaryFaces() is often used and is identical to
  (nFaces() - nInternalFaces()).

- forward the mesh nInternalFaces() and nBoundaryFaces() to
  polyBoundaryMesh as start() and nFaces() respectively, for use when
  operating on a polyBoundaryMesh.

STYLE:
- use identity() function with a starting offset when creating
  boundary maps.

      labelList map
      (
          identity(mesh.nBoundaryFaces(), mesh.nInternalFaces())
      );

  vs.

      labelList map(mesh.nBoundaryFaces());
      forAll(map, i)
      {
          map[i] = mesh.nInternalFaces() + i;
      }
-
- 09 Aug, 2018 1 commit
-
-
Mark Olesen authored
STYLE: use initial hash size 128 instead of 100 in a few places
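  As a hypothetical call site (illustrative name only), the change
  amounts to picking a power-of-two starting capacity:

      // before:
      labelHashSet usedPoints(100);

      // after: power-of-two initial size
      labelHashSet usedPoints(128);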
-
- 26 Jul, 2018 1 commit
-
-
mattijs authored
-
- 30 Apr, 2018 1 commit
-
-
Mark Olesen authored
-
- 27 Apr, 2018 1 commit
-
-
Mark Olesen authored
- the algorithm was last used in OpenFOAM-2.4, after which it was
  replaced with a FaceCellWave version.

  Whereas the original (2.4.x) version exhibited performance
  degradation on very large meshes (with explicit constraints), the
  FaceCellWave version exhibited performance issues with large numbers
  of blocked faces. With large numbers of blocked faces, the
  FaceCellWave regionSplit could take between 10 and 100 times longer
  due to the slow propagation speed through blocked faces.

  The 2.4 regionSplit has been revamped to avoid local memory
  allocations, which appear to have been the source of the original
  performance issues on large meshes.

  For additional performance, intermediate renumbering is also avoided
  during the consolidation of regions over processor domains.
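  A minimal, hypothetical sketch of what "avoid local memory
  allocations" typically means inside such a loop (illustrative code
  only, not the actual regionSplit implementation): the scratch buffer
  is hoisted out of the loop and reused rather than reallocated per
  cell.

      #include <vector>

      // Hypothetical cell loop: 'scratch' is allocated once and its
      // capacity is reused; clear() does not release the memory.
      void processCells(const std::vector<std::vector<int>>& cellFaces)
      {
          std::vector<int> scratch;    // allocated once, outside the loop
          scratch.reserve(64);         // illustrative starting capacity

          for (const auto& faces : cellFaces)
          {
              scratch.clear();         // keeps the previously grown capacity
              scratch.insert(scratch.end(), faces.begin(), faces.end());
              // ... work on 'scratch' for this cell ...
          }
      }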
-
- 24 Apr, 2018 1 commit
-
-
Mark Olesen authored
- The bitSet class replaces the old PackedBoolList class.
  The redesign provides better block-wise access and reduced method
  calls. This helps both in cases where the bitSet may be relatively
  sparse, and in cases where advantage of contiguous operations can be
  made. This makes it easier to work with a bitSet as a top-level
  object.

  In addition to the previously available count() method for the
  number of set bits, there are now simpler queries:

    - all()  - true if all bits in the addressable range are set.
    - any()  - true if any bits are set at all.
    - none() - true if no bits are set.

  These are faster than count() and allow early termination.

  The new test() method tests the value of a single bit position and
  returns a bool without any ambiguity caused by the return type (like
  the get() method), nor the const/non-const access (like operator[]
  has). The name corresponds to what std::bitset uses.

  The new find_first(), find_last(), find_next() methods provide a
  faster means of searching for bits that are set.

  This can be especially useful when using a bitSet to control a
  conditional:

    OLD (with macro):

        forAll(selected, celli)
        {
            if (selected[celli])
            {
                sumVol += mesh_.cellVolumes()[celli];
            }
        }

    NEW (with const_iterator):

        for (const label celli : selected)
        {
            sumVol += mesh_.cellVolumes()[celli];
        }

    or manually:

        for
        (
            label celli = selected.find_first();
            celli != -1;
            celli = selected.find_next(celli)
        )
        {
            sumVol += mesh_.cellVolumes()[celli];
        }

- When marking up contiguous parts of a bitSet, an interval can be
  represented more efficiently as a labelRange of start/size.
  For example,

    OLD:

        if (isA<processorPolyPatch>(pp))
        {
            forAll(pp, i)
            {
                ignoreFaces.set(pp.start() + i);
            }
        }

    NEW:

        if (isA<processorPolyPatch>(pp))
        {
            ignoreFaces.set(pp.range());
        }
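  A short illustration of the new queries (a hedged sketch using only
  the methods named above; values are hypothetical):

      bitSet selected(10);     // 10 addressable bits, all unset
      selected.set(3);
      selected.set(7);

      selected.any();          // true  : at least one bit is set
      selected.all();          // false : not every bit in 0-9 is set
      selected.none();         // false : some bits are set
      selected.test(3);        // true  : bit 3 is set
      selected.test(4);        // false : bit 4 is unset
      selected.count();        // 2     : number of set bits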
-
- 19 Apr, 2018 1 commit
-
-
Mark Olesen authored
- in debug, also report the first 10 cell ids
- format header documentation
-
- 17 Apr, 2018 1 commit
-
-
Andrew Heather authored
-
- 14 Mar, 2018 1 commit
-
-
- now only seed boundary faces and an internal face of a cell that itself has a blocked face.
-
- 08 Mar, 2018 1 commit
-
-
mattijs authored
-
- 07 Mar, 2018 2 commits
-
-
Mark Olesen authored
-
Mark Olesen authored
Improve alignment of its behaviour with std::unique_ptr

  - element_type typedef
  - release() method - identical to ptr() method
  - get() method to get the pointer without checking and without
    releasing it.
  - operator*() for dereferencing

Method name changes

  - renamed rawPtr() to get()
  - renamed rawRef() to ref(), removed unused const version.

Removed methods/operators

  - assignment from a raw pointer was deleted (was rarely used).
    Can be convenient, but uncontrolled and potentially unsafe.
    Do allow assignment from a literal nullptr though, since this can
    never leak (and also corresponds to the unique_ptr API).

Additional methods

  - clone() method: forwards to the clone() method of the underlying
    data object, with argument forwarding.
  - reset(autoPtr&&) as an alternative to operator=(autoPtr&&)

STYLE: avoid implicit conversion from autoPtr to object type in many
places

- the existing implementation has the following:

      operator const T&() const { return operator*(); }

  which means that the following code works:

      autoPtr<mapPolyMesh> map = ...;
      updateMesh(*map);     // OK: explicit dereferencing
      updateMesh(map());    // OK: explicit dereferencing
      updateMesh(map);      // OK: implicit dereferencing

  for clarity it may be preferable to avoid the implicit dereferencing

- prefer operator* to operator() when dereferencing a return value, so
  it is clearer that a pointer is involved and not a function call etc.

      Eg,
          return *meshPtr_;
      vs.
          return meshPtr_();
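A brief, hedged sketch of the adjusted API in use (hypothetical
snippet; makeMesh() is an assumed factory returning autoPtr<fvMesh>,
and only the methods listed above are used):

      autoPtr<fvMesh> meshPtr;              // empty

      meshPtr.reset(makeMesh());            // take ownership, as with unique_ptr
      fvMesh* observed = meshPtr.get();     // pointer without checking or releasing
      fvMesh& mesh = *meshPtr;              // operator*() dereferencing

      fvMesh* owned = meshPtr.release();    // caller now owns the pointer
      meshPtr = nullptr;                    // allowed: assignment from a literal nullptr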
-
- 22 Feb, 2018 2 commits
-
-
Mark Olesen authored
- in many places can use move constructors or rely on RVO
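  As a small, hypothetical illustration (not from the commit itself):
  returning a local list by value lets the move constructor or RVO
  avoid a deep copy.

      // Hypothetical helper: 'result' is moved (or elided via RVO)
      // into the caller instead of being deep-copied.
      labelList selectedCells(const labelList& candidates)
      {
          labelList result(candidates.size());
          // ... fill result ...
          return result;
      }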
-
Mark Olesen authored
- subsetList, inplaceSubsetList with optional inverted logic.
- use moveable elements where possible.
- allow optional starting offset for the identity global function.
  Eg, 'identity(10, start)' vs 'identity(10) + start'
-
- 28 Oct, 2017 1 commit
-
-
Mark Olesen authored
-
- 17 Jul, 2017 1 commit
-
-
Mark Olesen authored
-
- 22 Jul, 2016 1 commit
-
-
Henry Weller authored
Patch contributed by Mattijs Janssens.
Resolves bug report http://bugs.openfoam.org/view.php?id=2159
-
- 18 May, 2016 1 commit
-
-
Henry Weller authored
-
- 25 Apr, 2016 4 commits
-
-
Andrew Heather authored
-
Andrew Heather authored
-
Henry Weller authored
-
Henry Weller authored
-
- 02 Apr, 2016 1 commit
-
-
Henry Weller authored
Contributed by Mattijs Janssens.

1. Any non-blocking data exchange needs to know in advance the sizes
   to receive so it can size the buffer. For "halo" exchanges this is
   not a problem since the sizes are known in advance, but for all
   other data exchanges these sizes need to be exchanged beforehand.

   This was previously done by having all processors send the sizes of
   the data to send to the master, which sent them back, so that all
   processors
   - had the same information
   - could work out who was sending what to where and hence what
     needed to be received.

   This is now changed such that the size is only sent to the
   destination processor (instead of to all, as previously). This
   means that
   - the list of sizes to send is now of size nProcs instead of
     nProcs*nProcs
   - the route via the master and back is cut out by using a native
     MPI call.

   It causes a small change to the API of exchange and PstreamBuffers:
   they now return the sizes of the local buffers only (a labelList)
   and not the sizes of the buffers on all processors (labelListList).

2. The order in which sending is done when scattering information from
   the master processor to the other processors is reversed. The
   scatter is done in a tree-like fashion: each processor has a set of
   processors to receive from / send to. When receiving, it first
   receives from the processors with the fewest sub-processors (i.e.
   the ones which return first). When sending, it needs to do the
   opposite: start by sending to the processor with the largest
   sub-tree, since this is the critical path.
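To illustrate point 1, a minimal hedged sketch in plain MPI (not the
actual Pstream code) of sending each per-destination size directly to
its destination with one native call, so every rank ends up with just
its own nProcs receive sizes:

      #include <mpi.h>
      #include <vector>

      // sendSizes[proci] = amount this rank will send to proci.
      // After MPI_Alltoall, recvSizes[proci] = amount this rank will
      // receive from proci - no detour via the master, and only nProcs
      // entries per rank instead of an nProcs*nProcs matrix.
      std::vector<int> exchangeSizes
      (
          const std::vector<int>& sendSizes,
          MPI_Comm comm
      )
      {
          int nProcs = 0;
          MPI_Comm_size(comm, &nProcs);

          std::vector<int> recvSizes(nProcs, 0);
          MPI_Alltoall
          (
              sendSizes.data(), 1, MPI_INT,
              recvSizes.data(), 1, MPI_INT,
              comm
          );
          return recvSizes;
      }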
-
- 04 Jan, 2016 1 commit
-
-
mattijs authored
Determine sparse receive sizes instead of full matrix
-
- 19 Dec, 2015 1 commit
-
-
mattijs authored
-
- 08 Dec, 2015 2 commits
-
-
Andrew Heather authored
-
mattijs authored
-
- 26 Nov, 2015 1 commit
-
-
mattijs authored
The old version of regionSplit would hand out regions one by one. This
is a big problem when there are lots of regions - the extreme case
being in the decompositionMethods, where it is used to cluster cells
and most clusters contain only a single cell.

This rewrite uses a mesh wave to determine the disconnected regions in
one go. This produces a non-compact numbering, which is then compacted
in a second phase.

On a 14M cell case with cyclic constraints this reduced the decompose
time from 40 minutes down to 5.
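A small, hedged sketch of the compaction phase described above
(standalone illustrative code, not the actual regionSplit
implementation): non-compact region labels are renumbered to
0..nRegions-1 in order of first appearance.

      #include <unordered_map>
      #include <vector>

      // Renumber arbitrary (non-compact) region labels so that the
      // result uses 0..nRegions-1, numbering regions in the order in
      // which they are first encountered.
      std::vector<int> compactRegions(const std::vector<int>& regionOfCell)
      {
          std::unordered_map<int, int> oldToNew;
          std::vector<int> compact(regionOfCell.size());

          for (std::size_t celli = 0; celli < regionOfCell.size(); ++celli)
          {
              const auto iter =
                  oldToNew.emplace(regionOfCell[celli], int(oldToNew.size()));
              compact[celli] = iter.first->second;
          }
          return compact;
      }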
-
- 23 Nov, 2015 1 commit
-
-
mattijs authored
-
- 17 Nov, 2015 3 commits
-
-
mattijs authored
- redistributePar to have almost complete functionality of
  decomposePar + reconstructPar
- low-level distributed Field mapping
- support for mapping surfaceFields (including flipping faces)
- support for decomposing/reconstructing refinement data
-
mattijs authored
-
mattijs authored
- redistributePar to have almost complete functionality of
  decomposePar + reconstructPar
- low-level distributed Field mapping
- support for mapping surfaceFields (including flipping faces)
- support for decomposing/reconstructing refinement data
-