    ENH: Improvements to the fileHandler and collated IO · 8959b8e0
    Henry Weller authored
    Improvements to existing functionality
    --------------------------------------
      - MPI is initialised without thread support if it is not needed, e.g. when
        running uncollated.
      - Use native C++11 threading; this avoids problems with static destruction
        order.
      - etc/cellModels now only read if needed.
      - etc/controlDict can now be read from the environment variable FOAM_CONTROLDICT
      - Uniform files (e.g. '0/uniform/time') are now read only once on the master only
        (with the masterUncollated or collated file handlers)
      - collated format writes to 'processorsNNN' instead of 'processors'.  The file
        format is unchanged.
      - Thread buffer and file buffer size are no longer limited to 2GB.
    
    The global controlDict file contains parameters for file handling.  Under some
    circumstances, e.g. running in parallel on a system without NFS, the user may
    need to set some parameters, e.g. fileHandler, before the global controlDict
    file is read from file.  To support this, OpenFOAM now allows the global
    controlDict to be read as a string set to the FOAM_CONTROLDICT environment
    variable.
    
    The FOAM_CONTROLDICT environment variable can be set to the content of the
    global controlDict file, e.g. from a sh/bash shell:
    
        export FOAM_CONTROLDICT=$(foamDictionary $FOAM_ETC/controlDict)
    
    FOAM_CONTROLDICT can then be passed to mpirun using the -x option, e.g.:
    
        mpirun -np 2 -x FOAM_CONTROLDICT simpleFoam -parallel
    
    Note that while this avoids the need for NFS to read the OpenFOAM configuration
    the executable still needs to load shared libraries which must either be copied
    locally or available via NFS or equivalent.
    
    New: Multiple IO ranks
    ----------------------
    The masterUncollated and collated fileHandlers can now use multiple ranks for
    writing e.g.:
    
        mpirun -np 6 simpleFoam -parallel -ioRanks '(0 3)'
    
    In this example ranks 0 ('processor0') and 3 ('processor3') handle all the
    I/O: rank 0 handles processors 0,1,2 and rank 3 handles processors 3,4,5.
    The set of IO ranks must always include 0 as its first element and be
    sorted in increasing order.
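The assignment rule above (a rank's data is written by the largest IO rank not exceeding it) can be sketched as a small shell function; `io_rank_for` is a hypothetical helper for illustration, not part of OpenFOAM:

```shell
# Hypothetical helper, not part of OpenFOAM: given a rank followed by
# the sorted ioRanks list, print the IO rank that writes its data
# (the largest IO rank not exceeding the given rank).
io_rank_for() {
    rank=$1; shift
    io=0
    for r in "$@"; do
        [ "$r" -le "$rank" ] && io=$r
    done
    echo "$io"
}

io_rank_for 4 0 3   # prints 3: rank 4 is handled by IO rank 3
io_rank_for 2 0 3   # prints 0: rank 2 is handled by IO rank 0
```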
    
    The collated fileHandler uses the directory naming processorsNNN_XXX-YYY,
    where NNN is the total number of processors and XXX and YYY are the first
    and last processor in the range, e.g. in the above example the directories
    would be
    
        processors6_0-2
        processors6_3-5
    
    and each of the collated files in these contains data of the local ranks
    only. The same naming also applies when e.g. running decomposePar:
    
        decomposePar -fileHandler collated -ioRanks '(0 3)'
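The directory names can be derived mechanically from the IO ranks; a minimal sketch (bash), assuming nProcs=6 and ioRanks '(0 3)' as in the example above, where each IO rank's range ends just before the next IO rank or at the last processor:

```shell
# Sketch: derive the collated directory names for nprocs=6 and
# ioranks=(0 3); each range ends just before the next IO rank,
# or at the last processor.
nprocs=6
ioranks=(0 3)
names=()
n=${#ioranks[@]}
for ((i = 0; i < n; i++)); do
    first=${ioranks[i]}
    if ((i + 1 < n)); then
        last=$((ioranks[i + 1] - 1))
    else
        last=$((nprocs - 1))
    fi
    names+=("processors${nprocs}_${first}-${last}")
done
printf '%s\n' "${names[@]}"
```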
    
    New: Distributed data
    ---------------------
    
    The individual root directories can be placed on different hosts with
    different paths if necessary.  Previously it was necessary to specify the
    root per slave process; this has been simplified by the -hostRoots command
    line option, which specifies the root per host:
    
        mpirun -np 6 simpleFoam -parallel -ioRanks '(0 3)' \
            -hostRoots '("machineA" "/tmp/" "machineB" "/tmp")'
    
    The hostRoots option is followed by a list of machine name + root directory
    pairs; the machine name can contain regular expressions.
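    Since the machine names may be regular expressions, hosts with similar
    names can share a single entry; an illustrative variant of the command
    above (the pattern "machine.*" is an assumption for this example):

        mpirun -np 6 simpleFoam -parallel -ioRanks '(0 3)' \
            -hostRoots '("machine.*" "/tmp")'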
    
    New: hostCollated
    -----------------
    
    The new hostCollated fileHandler automatically sets the 'ioRanks' according
    to the host name with the lowest rank, e.g. to run simpleFoam on 6
    processors with ranks 0-2 on machineA and ranks 3-5 on machineB with the
    machines specified in the hostfile:
    
        mpirun -np 6 --hostfile hostfile simpleFoam -parallel -fileHandler hostCollated
    
    This is equivalent to
    
        mpirun -np 6 --hostfile hostfile simpleFoam -parallel -fileHandler collated -ioRanks '(0 3)'
    
    This example will write directories:
    
        processors6_0-2/
        processors6_3-5/
    
    A typical example would use distributed data, e.g. on two nodes, machineA
    and machineB, each with three processes:
    
        decomposePar -fileHandler collated -case cavity
    
        # Copy case (constant/*, system/*, processors6/) to master:
        rsync -a cavity machineA:/tmp/
    
        # Create root on slave:
        ssh machineB mkdir -p /tmp/cavity
    
        # Run
        mpirun --hostfile hostfile icoFoam \
            -case /tmp/cavity -parallel -fileHandler hostCollated \
            -hostRoots '("machineA" "/tmp" "machineB" "/tmp")'
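    Assuming the naming scheme described above, after the run each host should
    hold only its own collated data, which can be checked, e.g.:

        ssh machineA ls /tmp/cavity/processors6_0-2
        ssh machineB ls /tmp/cavity/processors6_3-5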
    
    Contributed by Mattijs Janssens