Message Passing Interface - Example Program

Here is a "Hello World" program in MPI written in C. In this example, rank 0 sends a "hello" message to each of the other processes; each one manipulates it trivially, returns the result to rank 0, and rank 0 prints the messages.

/* "Hello World" MPI Test Program */ #include #include #include #define BUFSIZE 128 #define TAG 0 int main(int argc, char *argv) { char idstr; char buff; int numprocs; int myid; int i; MPI_Status stat; /* MPI programs start with MPI_Init; all 'N' processes exist thereafter */ MPI_Init(&argc,&argv); /* find out how big the SPMD world is */ MPI_Comm_size(MPI_COMM_WORLD,&numprocs); /* and this processes' rank is */ MPI_Comm_rank(MPI_COMM_WORLD,&myid); /* At this point, all programs are running equivalently, the rank distinguishes the roles of the programs in the SPMD model, with rank 0 often used specially... */ if(myid == 0) { printf("%d: We have %d processors\n", myid, numprocs); for(i=1;iWhen run with two processors this gives the following output.

0: We have 2 processors
0: Hello 1! Processor 1 reporting for duty

The runtime environment for the MPI implementation used (often called mpirun or mpiexec) spawns multiple copies of the program, with the total number of copies determining the number of process ranks in MPI_COMM_WORLD, which is an opaque descriptor for communication among the set of processes. A single program, multiple data (SPMD) programming model is thereby facilitated, but not required; many MPI implementations allow multiple, different executables to be started in the same MPI job. Each process knows its own rank and the total number of processes in the world, and can communicate with the others either by point-to-point (send/receive) communication or by collective communication among the group. It is enough for MPI to provide an SPMD-style program with MPI_COMM_WORLD, its own rank, and the size of the world for its algorithms to decide what to do. In more realistic situations, I/O is managed more carefully than in this example. MPI does not guarantee how POSIX I/O will actually behave on a given system, but it commonly does work, at least from rank 0.
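As a contrast to the point-to-point send/receive pairs used above, the sketch below (not part of the original example; the broadcast value and per-rank contribution are arbitrary choices for illustration) shows the collective style on the same MPI_COMM_WORLD: rank 0 broadcasts an integer to every process with MPI_Bcast, and MPI_Reduce then sums one contribution per rank back onto rank 0.

/* Collective-communication sketch: broadcast from rank 0, then a global sum */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
  int numprocs;
  int myid;
  int value = 0;
  int sum = 0;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);

  if (myid == 0)
    value = 42; /* arbitrary payload chosen for illustration */

  /* every rank receives rank 0's value; no explicit send/receive loop */
  MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

  /* each rank contributes value + myid; rank 0 receives the total */
  value += myid;
  MPI_Reduce(&value, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

  if (myid == 0)
    printf("%d: global sum is %d\n", myid, sum);

  MPI_Finalize();
  return 0;
}

Collectives such as these let the implementation choose the communication pattern (for example, a broadcast tree), which is generally preferable to hand-written send/receive loops when every rank in the group participates.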

MPI uses the notion of a process rather than a processor. Program copies are mapped to processors by the MPI runtime, so the parallel machine may map to 1 physical processor, to N where N is the total number of processors available, or to something in between. (For maximum parallel speedup, more physical processors are used.) Because this example adjusts its behavior to the size of the world N, it scales with the runtime configuration without recompilation for each size variation, although its runtime decisions may vary with the amount of concurrency actually available.
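As a concrete illustration of such a runtime decision, the sketch below (an assumption for illustration, not taken from the example above; the problem size N and the per-element work are hypothetical) shows the common block-decomposition idiom: each rank derives its own slice of the work from its rank and the size of the world, so the same binary divides a fixed workload among however many processes the runtime provides.

/* Block-decomposition sketch: split N items across however many ranks exist */
#include <mpi.h>
#include <stdio.h>

#define N 1000 /* hypothetical problem size */

int main(int argc, char *argv[])
{
  int numprocs;
  int myid;
  int chunk, begin, end, i;
  long local = 0;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);

  /* ceiling division, so trailing ranks simply get a shorter (or empty) slice */
  chunk = (N + numprocs - 1) / numprocs;
  begin = myid * chunk;
  end = begin + chunk;
  if (begin > N)
    begin = N;
  if (end > N)
    end = N;

  for (i = begin; i < end; i++)
    local += i; /* stand-in for the real per-element work */

  printf("%d: handling items [%d, %d)\n", myid, begin, end);

  MPI_Finalize();
  return 0;
}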
