MSc in Network Centered Computing

Module: PVM and MPI

additional information for the assignment

a possible basic program design in MPI

1. read matrices A and B from a file, or use matrices defined in the source code
2. decision: is the calculation possible with the given number of processors?
3. set the number of slaves
4. broadcast the dimensions of A and B to all slaves
5. scatter parts of matrices A and B to each slave
6. compute the local part of matrix C
7. shift the local part of matrix A to the next slave (not needed for matrix-by-vector multiplication)
8. gather the calculated parts of matrix C from all slaves
9. save matrix C to a file, or just print it out

examples

Here is a simple MPI example:
cpi.c

useful functions

It can be useful to use the following MPI functions:
- MPI_Init Initialize the MPI execution environment
- MPI_Comm_rank Determines the rank of the calling process in the communicator
- MPI_Comm_size Determines the size of the group associated with a communicator
- MPI_Get_processor_name Gets the name of the processor
- MPI_Bcast Broadcasts a message from the process with rank "root" to all other processes of the group.
- MPI_Scatter Sends data from one task to all other tasks in a group
- MPI_Send Performs a basic send
- MPI_Recv Basic receive
- MPI_Sendrecv_replace Sends and receives using a single buffer. Use this when shifting large amounts of data: separate send and receive calls can end in a deadlock.
- MPI_Gather Gathers together values from a group of processes
- MPI_Finalize Terminates MPI execution environment


Matrix-Matrix Multiplication: a graphical overview

For parallel matrix multiplication it is necessary to split the matrices A and B into blocks and distribute them to the slaves. The following picture should help you to understand how this process works.

[figure: parallel matrix multiplication]
Here you can find an even more detailed description of how matrix-matrix multiplication works on parallel machines.


© Oliver Otto - The University of Reading / Salford