Getting started with parallelism using MPI

Installing dependencies

The following dependencies are required for parallel support in Antares:

  • mpi4py >= 2.0.0

  • METIS >= 5.1.0

  • HDF5 >= 1.8.15 parallel

  • h5py >= 2.7.1 parallel

METIS is required when reading a base with unstructured zones. Parallel support in HDF5 and h5py is required for writers.

Installing parallel h5py (mpi4py via conda, h5py via pip):

$ conda install mpi4py
$ export HDF5_DIR=/path/to/hdf5/parallel
$ CC=mpicc HDF5_MPI=ON pip install --no-binary=h5py h5py

Check the HDF5 build configuration with:

h5cc -showconfig
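
You can also verify from Python that h5py was built against a parallel HDF5 library (a minimal sketch, assuming h5py and mpi4py are installed):

# check that h5py exposes MPI support
import h5py

print(h5py.get_config().mpi)  # True if h5py was built with parallel HDF5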

Writing an Antares script

Going parallel is straightforward

Any regular Antares script can be executed both sequentially and in parallel.

Example:

# file script.py
from antares import Reader, Writer, Treatment

# Read a base:
# - structured zones are distributed between cores
# - unstructured zones are split, each proc gets 1 partition
r = Reader('hdf_cgns')
r['filename'] = 'input_base.cgns'
base = r.read()

# Apply a treatment:
# - cores internally synchronise with each other if needed
cut = Treatment('cut')
cut['base'] = base
cut['type'] = 'plane'
cut['origin'] = [0.001, -0.01, 0.2]
cut['normal'] = [0.145, -0.97, -0.2]
new_base = cut.execute()

# Write the result:
# - cores collectively build a single output file
w = Writer('hdf_cgns')
w['base'] = new_base
w['filename'] = 'output_base.cgns'
w.dump()

Sequential execution:

$ python script.py

Parallel execution on NPROC cores:

$ mpirun -n NPROC python script.py

Work In Progress: support of parallelism in Antares is still limited.

  • supported I/O format: `hdf_cgns`

  • supported treatments: `cut`, `integration`, `thermo1`

Parallelism is based on MPI

[MPI](https://en.wikipedia.org/wiki/Message_Passing_Interface) (Message Passing Interface) is a programming standard that allows different processes to communicate with each other by sending data.

One does not need to know the MPI programming model to use Antares in a parallel script. Just write your code normally, then execute it as shown above.

Antares uses the Python library mpi4py, which implements the MPI standard. mpi4py’s functions always return a valid result, even in sequential execution. Antares wraps a small subset of the MPI interface in the antares.parallel.Controller.PARA object. PARA’s wrappers act as stubs when mpi4py is not installed (for sequential execution only!).

# accessing MPI interface
# -----------------------

import antares

para = antares.controller.PARA

# get antares' MPI communicator:
# this is the main entry point for handling communications between processes
comm = para.comm

# get processor rank
# this is the identifier of the process
# ranks take their values in the interval [0, NPROC-1]
rank = para.rank
# or
rank = comm.rank

# playing with MPI
# ----------------

# add all ranks and broadcast results to all cores
result = para.allreduce(rank, para.SUM)
# or
from mpi4py import MPI
result = comm.allreduce(rank, MPI.SUM)

# get number of cores
nproc = para.size
# or
nproc = comm.size

# display result, one core after the other
for p in range(nproc):
    if p == rank:
        print(rank, result)
    comm.Barrier()

# display something by process rank 0 only
if rank == 0:
    print('done')

An internal function is provided to print a message from each process in turn, using a ring communication.

from antares.parallel.controller import ring_print
ring_print("my message")
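
For example, you can tag each message with the rank of the calling process (reusing the PARA object as accessed in the snippet above):

import antares
from antares.parallel.controller import ring_print

# each process prints its own line, one core after the other
para = antares.controller.PARA
ring_print("rank {}: done".format(para.rank))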

Troubleshooting

Dumping a base to HDF-CGNS format is too slow!

Collectively creating an HDF file can be very slow if the file is structured with a large number of groups and datasets. HDF-based writers take an optional ‘link’ argument for selecting a different mode for writing the file (see the example after the list below).

Available modes are:

  • ‘single’: Default mode. All cores collectively build a single file.

  • ‘merged’: Cores build their own files, then merge them into a single one.

  • ‘proc’: Cores build their own files, then link them into a main index file.

  • ‘zone’: Cores build one file per zone, then link them into a main index file.

The auxiliary files built by the ‘proc’ and ‘zone’ modes store valid Antares bases, but only the main file gives access to every zone and stores shared information.

‘single’ is optimal for sequential execution. ‘proc’ is optimal for parallel execution.
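
For example, a sketch selecting the ‘proc’ mode on the writer from the script above (the ‘link’ option is described in the list):

from antares import Writer

# one file per process, linked from a main index file
w = Writer('hdf_cgns')
w['base'] = new_base
w['filename'] = 'output_base.cgns'
w['link'] = 'proc'
w.dump()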

The output file contains many zones with odd names instead of a single one!

By default, readers split every unstructured zone so that every processor gets a partition. Each processor has to handle zones with different names, so the original name is suffixed with the processor rank.

Merging back zones is not yet implemented in writers.

I already have multiple unstructured zones and don’t want to have them split.

Readers take an optional ‘split’ argument that you can set to ‘never’. In that case, both structured and unstructured zones are distributed to cores without being split.
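
For example, a sketch using the ‘split’ option described above:

from antares import Reader

# distribute whole zones to cores instead of partitioning them
r = Reader('hdf_cgns')
r['filename'] = 'input_base.cgns'
r['split'] = 'never'
base = r.read()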