Introduction to MPI Programming

A hands-on guide to getting started with MPI (Message Passing Interface) for parallel programming on HPC systems.

What is MPI?

MPI (Message Passing Interface) is a standardized message-passing library interface specification: processes with separate address spaces cooperate by sending and receiving messages explicitly. It is the de facto standard for distributed-memory parallel programming in HPC, with widely used implementations including Open MPI and MPICH.

Key Concepts

1. Basic MPI Operations

  • Process ranks and size
  • Point-to-point communication (a minimal send/receive sketch follows this list)
  • Collective operations
  • Communicators and groups
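
As a first taste of point-to-point communication, here is a minimal sketch in which rank 0 sends a single integer to rank 1 with a blocking MPI_Send/MPI_Recv pair. It assumes the job runs with at least two processes; the tag value 0 and the payload are arbitrary choices for illustration.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int payload = 42;
        /* Blocking send: returns once the buffer is safe to reuse */
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int payload;
        /* Blocking receive: matches the sender's rank (0) and tag (0) */
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 received %d\n", payload);
    }

    MPI_Finalize();
    return 0;
}

For one-to-all or all-to-one patterns, collective operations such as MPI_Bcast and MPI_Reduce express the same intent in a single call and are usually faster than hand-written send/receive loops.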

2. Common Patterns

  • Master-worker (see the sketch after this list)
  • Domain decomposition
  • Pipeline parallelism
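
To make the master-worker pattern concrete, the sketch below has rank 0 hand one integer task to each worker and collect one result back. The squaring stand-in for real work and the one-task-per-worker shape are illustrative assumptions, not a complete scheduler.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Master: send task w to worker w, then collect the results */
        for (int w = 1; w < size; w++)
            MPI_Send(&w, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
        for (int w = 1; w < size; w++) {
            int result;
            MPI_Recv(&result, 1, MPI_INT, w, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Worker %d returned %d\n", w, result);
        }
    } else {
        /* Worker: receive a task, do the work, send the result back */
        int task, result;
        MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        result = task * task;  /* stand-in for real computation */
        MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}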

Getting Started

Environment Setup

# Load an MPI module (the name is site-specific; list options with: module avail)
module load mpi/openmpi-4.1

# Compile MPI program
mpicc -o hello_world hello_world.c

# Run with 4 processes (some clusters use srun or mpiexec instead)
mpirun -np 4 ./hello_world

Hello World Example

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    /* Initialize the MPI runtime; must precede any other MPI call */
    MPI_Init(&argc, &argv);

    int world_size, world_rank;
    /* Total number of processes in the job */
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    /* This process's rank, from 0 to world_size - 1 */
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    printf("Hello from process %d of %d\n", world_rank, world_size);

    /* Shut down the MPI runtime; no MPI calls are allowed after this */
    MPI_Finalize();
    return 0;
}
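
Built and launched as shown above with four processes, the program prints one line per rank. The interleaving is nondeterministic because the ranks run concurrently, so an actual run may order the lines differently:

Hello from process 2 of 4
Hello from process 0 of 4
Hello from process 3 of 4
Hello from process 1 of 4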

Best Practices

  1. Always check MPI return codes (the default error handler aborts the job on failure; install MPI_ERRORS_RETURN to handle errors yourself)
  2. Use non-blocking communication to overlap transfers with computation where possible (see the sketch after this list)
  3. Minimize communication overhead, e.g. by aggregating many small messages into fewer large ones
  4. Balance the workload evenly across processes
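
The sketch below combines practices 1 and 2: two processes exchange an integer with MPI_Isend/MPI_Irecv so that useful work can overlap the transfer, and return codes are checked after switching MPI_COMM_WORLD's error handler to MPI_ERRORS_RETURN (under the default handler a failed call simply aborts). It is a minimal illustration, not production error handling.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    /* Make MPI calls return error codes instead of aborting */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2 && rank < 2) {
        int peer = 1 - rank;  /* ranks 0 and 1 exchange with each other */
        int send_val = rank, recv_val = -1;
        MPI_Request reqs[2];

        /* Start the send and receive without blocking */
        int err = MPI_Isend(&send_val, 1, MPI_INT, peer, 0,
                            MPI_COMM_WORLD, &reqs[0]);
        if (err != MPI_SUCCESS) { /* handle or abort */ }
        err = MPI_Irecv(&recv_val, 1, MPI_INT, peer, 0,
                        MPI_COMM_WORLD, &reqs[1]);
        if (err != MPI_SUCCESS) { /* handle or abort */ }

        /* ... useful computation can overlap the transfer here ... */

        /* Block until both operations complete */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        printf("Rank %d received %d\n", rank, recv_val);
    }

    MPI_Finalize();
    return 0;
}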

Next Steps

  • Learn about advanced MPI topics
  • Explore hybrid MPI+OpenMP programming
  • Study performance optimization techniques

Further Reading

  • MPI Forum: https://www.mpi-forum.org
  • Open MPI: https://www.open-mpi.org
  • MPICH: https://www.mpich.org