Message Passing



Message Passing under MPI

The advantage of using parallel computers to perform large tasks stems from the division of these tasks into smaller sub-tasks to be handled simultaneously on different nodes. On distributed memory systems, this invariably requires node-to-node communication. MPI provides these services.

Message Content

The user must pass to the MPI library routines enough information for MPI to know what to send and where to send it, and, on the receiving end, what to expect, where it comes from, and where to store the incoming data. These requirements are summarized in the two lists below and illustrated in the sketch that follows them.

The sender process must specify:
- Memory location of the data to send
- Kind of data to be sent
- How much data there is to send
- To which processor(s) to send data
- An identifying tag

The receiver process must specify:
- Memory location where to receive the data
- Kind of data to receive
- How much data there is to receive
- From which processor(s) to expect data
- Where to store information about the transaction (the message status)
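
A minimal sketch in C of how these items map onto the basic MPI send and receive calls; the tag value 99 and the use of ranks 0 and 1 are arbitrary choices for illustration:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, data[4] = {1, 2, 3, 4};
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* buffer, count, datatype, destination rank, tag, communicator */
            MPI_Send(data, 4, MPI_INT, 1, 99, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* buffer, count, datatype, source rank, tag, communicator, status */
            MPI_Recv(data, 4, MPI_INT, 0, 99, MPI_COMM_WORLD, &status);
            printf("Received %d ... %d from rank 0\n", data[0], data[3]);
        }

        MPI_Finalize();
        return 0;
    }

The first three arguments of MPI_Send and MPI_Recv describe the data (location, count, type); the remaining arguments carry the destination or source rank, the tag, the communicator and, on the receiving side, the status.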

The model is that the sending process must initiate a send operation and that the receiving process must initiate a matching receive operation. The MPI routines bundle some of the information listed above into the message as it is sent from process to process, and MPI can report that information about any message in transit upon query. The user's tasks talk to the local MPI daemon, which itself talks to the MPI daemons on the other nodes within a specified communicator; those daemons in turn deliver the message to the user's tasks on their nodes.
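
A hedged sketch of such a query: the receiver can probe an incoming message and read its source, tag, and element count from the status object before posting the actual receive (ranks, tag, and buffer size are arbitrary):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            double buf[3] = {1.0, 2.0, 3.0};
            MPI_Send(buf, 3, MPI_DOUBLE, 1, 7, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Status status;
            double buf[3];
            int count;
            /* Inspect the message envelope before receiving it */
            MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            MPI_Get_count(&status, MPI_DOUBLE, &count);
            printf("Incoming: %d doubles from rank %d, tag %d\n",
                   count, status.MPI_SOURCE, status.MPI_TAG);
            MPI_Recv(buf, count, MPI_DOUBLE, status.MPI_SOURCE,
                     status.MPI_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }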

The content of a message is an array of elements of an MPI datatype. All messages contain data elements of a specified type, and the number of such elements is specified in the call to the MPI routine. All messages must be of a single datatype, which can be an MPI pre-defined datatype or a user-defined derived datatype.
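
A minimal sketch of a user-defined derived datatype, here built with MPI_Type_contiguous so that a block of five integers travels as a single element of the new type (ranks and tag are again arbitrary):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, block[5] = {10, 20, 30, 40, 50};
        MPI_Datatype five_ints;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Define and commit a new type: 5 contiguous MPI_INTs */
        MPI_Type_contiguous(5, MPI_INT, &five_ints);
        MPI_Type_commit(&five_ints);

        if (rank == 0)
            MPI_Send(block, 1, five_ints, 1, 0, MPI_COMM_WORLD);  /* count is 1 element of the new type */
        else if (rank == 1)
            MPI_Recv(block, 1, five_ints, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        MPI_Type_free(&five_ints);
        MPI_Finalize();
        return 0;
    }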

The datatype specified in the receiving call must be the same as that of the sending call. MPI supports heterogeneous parallel systems, in which computers with different internal data representations can talk to each other; conversion between representations is done automatically. Two processes may still send and receive data of mismatched types, say int paired with char, but the results are then unpredictable.



MPI Fortran and C Datatypes

Basic Fortran MPI datatypes (from the EPCC course).
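
The correspondence, as defined by the MPI standard, is:

    MPI datatype           Fortran datatype
    --------------------   ----------------
    MPI_INTEGER            INTEGER
    MPI_REAL               REAL
    MPI_DOUBLE_PRECISION   DOUBLE PRECISION
    MPI_COMPLEX            COMPLEX
    MPI_LOGICAL            LOGICAL
    MPI_CHARACTER          CHARACTER(1)
    MPI_BYTE               (no correspondence)
    MPI_PACKED             (no correspondence)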

Basic C MPI datatypes (from the EPCC course).
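
Likewise, the MPI standard defines:

    MPI datatype         C datatype
    ------------------   ------------------
    MPI_CHAR             signed char
    MPI_SHORT            signed short int
    MPI_INT              signed int
    MPI_LONG             signed long int
    MPI_UNSIGNED_CHAR    unsigned char
    MPI_UNSIGNED_SHORT   unsigned short int
    MPI_UNSIGNED         unsigned int
    MPI_UNSIGNED_LONG    unsigned long int
    MPI_FLOAT            float
    MPI_DOUBLE           double
    MPI_LONG_DOUBLE      long double
    MPI_BYTE             (no correspondence)
    MPI_PACKED           (no correspondence)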



Communication Protocols

E-mail: drop a message on the network and hope that it will be relayed by the various computers on the network to the proper address.

Ethernet: drop packets of information on the network and hope that they will be relayed by the various computers on the network to the proper address.

These two models assume dynamic routing, namely that different routes among the nodes of the network can lead to the destination address. This makes these communication protocols quite robust.

Point-to-Point Communication: only the sending and receiving processes need to know about the message content. The MPI parallel implementation uses this protocol.

The sending process establishes a direct communication channel with the receiving process. The analogy is with the phone system. The direct link between the processes is accomplished through the MPI daemons on the nodes where the processes are running.
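
A minimal sketch of this point-to-point model, a "ping-pong" in which ranks 0 and 1 exchange a single integer directly, with no other process involved (the token value and tag are arbitrary):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, token;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            token = 42;
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* ping */
            MPI_Recv(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);                           /* pong */
            printf("Rank 0 got the token back: %d\n", token);
        } else if (rank == 1) {
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }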


