Advanced MPI — Spawn & RMA
An advantage of the MPI approach to parallel computing is the flexibility it offers to create program structures that match the physical requirements of a problem. This section deals with advanced MPI routines that spawn processes and perform one-sided communication.
References: Documentation in MPI Forum
Download
Download the demonstration codes Advanced_MPI.tar.gz.
Expand with
tar -xzvf Advanced_MPI.tar.gz
Spawning Processes
MPI-2 allows an MPI program to spawn other programs and establish communication between the spawning program and the spawned programs. This capability resembles the mode of operation of the Parallel Virtual Machine (PVM). It is well suited to a client-server paradigm.
The sub-directory spawn contains manager.c and launched_code.c. The code manager.c spawns launched_code.c. The function MPI_Comm_spawn() spawns the processes; MPI_Comm_get_parent() lets a spawned process find its parent. The syntax of the compilation and the running of the code is described in an enclosed README file. Note that only one instance of the manager.c code can be launched via mpiexec.
Shakespeare
The codes in the sub-directory Shakespeare demonstrate a client-server code in which the master (client) manager.c spawns search_text_daemon.c, the slave (server). The README file gives details about the compilation and running of the codes. The codes implement a rudimentary word search and statistical analysis of Shakespeare's plays. The plays are stored as .html files in the sub-directory plays. Each spawned process searches a different play (file). search_text.c is a serial code that implements the same word analysis for a single play at a time.
Remote Memory Access
Remote Memory Access (RMA) is implemented in MPI-2. It allows one-sided communication, in which a process is given direct access to a chunk of memory of another process within the same communicator (e.g., MPI_COMM_WORLD). This process can then "get" information from, or "put" information into, the memory chunk of the other process.
The concepts (exemplified by the MPI calls) are:
- MPI_Win_create() (called on all processes) creates a memory window for data access and transfer.
- MPI_Put() or MPI_Get() puts (or gets) data from an "origin" process to a "target" process (or vice-versa).
- The timing of these calls is crucial. MPI_Put() and MPI_Get() are non-blocking, i.e., they return without waiting for the data transfer to be finished.
- MPI_Win_fence() is the simplest synchronization routine. When called (on all processes) before and after the MPI_Put() or MPI_Get() calls, it defines a communication "epoch" during which the transfer occurs. That is, the second call to MPI_Win_fence() blocks until the data transfer has completed.
- The functions MPI_Win_lock() and MPI_Win_unlock() provide an alternative way to establish an epoch during which the RMA data transfer completes. As their names suggest, these functions lock the RMA window for the duration of the RMA data transfer: the target process is denied access to any memory location within the window until the transfer finishes. MPI_Win_unlock() does not return until the data transfer has completed at the origin. These functions are called only on the process issuing the MPI_Put() or MPI_Get() calls, not on the target process.
The codes rma_1.c and rma_2.c in the sub-directory RemoteMemoryAccess illustrate the use of Remote Memory Access.