Lab05 - Using MPI (Message Passing Interface)
CRAY SV1 Version
Hello Parallelism

  • Lab 05 Report form, .doc

    Objective
    To run an MPI program. Topics include printf, MPI_Init, MPI_Finalize, MPI_Comm_rank, MPI_Comm_size, and MPI_COMM_WORLD.

    Background
    Check the page on running MPI programs (this is for the Cray SV1).

    1. On the Cray SV1: cc filename.c, where filename is the name of the .c file you are going to run.
      For Fortran use f90 filename.f90. These both generate a.out as the executable.
      If you want to specify the name of the executable, use cc -o filename filename.c or f90 -o filename filename.f90.
      For this lab, the filename can be lab05.c.
    2. mpirun -np 15 filename (-np 15 runs 15 processes)
      ON A WORKSTATION USE: "mpirun -all-local -np 5 filename"
    3. Try timing the run: time mpirun -np 15 filename. Repeat with 2 to 16 processes and see whether the elapsed time changes.

    Assignment
    Create the file lab05.c or lab05.f90 (or both for extra credit!) containing the corresponding program (C or Fortran) shown below. Run the program with multiple parallel processes on the Cray SV1.

    NOTE THAT WE'RE cheating a little by printing from all the processes. Technically, only the root process should print to the screen; output from the other processes should go to a file instead.
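
    For reference, here is a minimal sketch of the root-only printing pattern (an illustration only, not part of the required program; the if (rank == 0) guard is the only idea being shown):

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char** argv)
    {
        int rank, size;

        MPI_Init( &argc, &argv );
        MPI_Comm_size( MPI_COMM_WORLD, &size );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );

        if (rank == 0) {
            // only the root process writes to the screen
            printf( "Running with %d processes.\n", size );
        }

        MPI_Finalize();
        return 0;
    }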

    Print the following according to the process ID number:
    If the process number is 0, print 0.
    If the process number is 1 or higher, use a loop to print the numbers from 1 to 30, counting by the process number.
    Here's an example of my output for 5 processes:
    NOTE that the output of the loops will not be in any particular order; they are done "in parallel". A sketch of one possible loop structure follows the example output.

     cc helloWorldMPI.c
     mpirun -np 5 a.out 
     Hello from 0.
     0
     Hello from 1.
       1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20  21  22  23  24  25  26  27  28  29  30
     Hello from 2.
       1   3   5   7   9  11  13  15  17  19  21  23  25  27  29
     Hello from 3.
     Hello from 4.
       1   5   9  13  17  21  25  29
       1   4   7  10  13  16  19  22  25  28
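
    Here is one way the per-process printing described above might be structured. This is only a sketch, not the graded solution; details such as the %4d field width are assumptions based on the sample output:

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char** argv)
    {
        int rank, i;

        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );

        printf( "Hello from %d.\n", rank );
        if (rank == 0) {
            printf( "%4d\n", 0 );                 // process 0 just prints 0
        } else {
            for (i = 1; i <= 30; i += rank)       // count by the process number
                printf( "%4d", i );
            printf( "\n" );
        }

        MPI_Finalize();
        return 0;
    }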
    

    lab05.c

    #include <stdio.h>
    #include "mpi.h"
    
    int main(int argc, char** argv)
    {
        int rank, size;
        char name[80]; // character array to hold the name of each processor
        int len;       // length of the name of the processor

        MPI_Init( &argc, &argv );
        MPI_Comm_size( MPI_COMM_WORLD, &size );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );
        MPI_Get_processor_name( name, &len );
        printf( "Hello from process %d of %d. Name=%s\n", rank, size, name );
        MPI_Finalize();
        return 0;
    }

    Sample Output

    cc -o lab05 lab05.c
    mpirun -np 5 lab05
    
    Hello from process 1 of 5. Name=sn3313
    Hello from process 2 of 5. Name=sn3313
    Hello from process 4 of 5. Name=sn3313
    Hello from process 3 of 5. Name=sn3313
    Hello from process 0 of 5. Name=sn3313
    
    mpirun -np 16 lab05
    Hello from process 0 of 16. Name=sn3313
    Hello from process 2 of 16. Name=sn3313
    Hello from process 3 of 16. Name=sn3313
    Hello from process 4 of 16. Name=sn3313
    Hello from process 5 of 16. Name=sn3313
    Hello from process 6 of 16. Name=sn3313
    Hello from process 7 of 16. Name=sn3313
    Hello from process 8 of 16. Name=sn3313
    Hello from process 9 of 16. Name=sn3313
    Hello from process 10 of 16. Name=sn3313
    Hello from process 14 of 16. Name=sn3313
    Hello from process 1 of 16. Name=sn3313
    Hello from process 11 of 16. Name=sn3313
    Hello from process 13 of 16. Name=sn3313
    Hello from process 12 of 16. Name=sn3313
    

    Fortran 90 version

    ! lab05F.f90
    
            program main
            include "mpif.h"
    
            character(len=80) :: name
            integer myid, size, length
            integer ierr
    
            call MPI_INIT(ierr)
            call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
            call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierr)
            call MPI_GET_PROCESSOR_NAME(name, length, ierr)
            print *, 'Hello world.  I am process ', myid, ' out of ', size, &
                            ' processor name: ', name
    
            call MPI_FINALIZE(ierr)
    
            end program