Getting Started with OpenFOAM on HPC Systems

Learn how to set up, configure, and run OpenFOAM simulations effectively on HPC systems.

Published: April 8, 2025
Author: EPICURE Team
Tags: CFD, OpenFOAM, Tutorial

OpenFOAM (Open-source Field Operation and Manipulation) is a powerful CFD toolkit that is widely used in academia and industry. In this guide, we'll walk through setting up and running your first OpenFOAM simulation on an HPC system.

Introduction

When I first started using OpenFOAM on HPC systems, I encountered several challenges that weren't well documented. This guide aims to share the knowledge I've gained and help you avoid common pitfalls.

Prerequisites

Before we begin, make sure you have:

  - Access to an HPC system
  - Basic knowledge of Linux commands
  - An understanding of CFD basics

Step 1: Loading the Right Modules

# First, check available OpenFOAM versions
module avail OpenFOAM

# Load OpenFOAM and its dependencies
module load OpenFOAM/10
module load mpi/openmpi-4.1.5

Module Selection

Choose the OpenFOAM version that matches your case requirements: module names and available versions differ between sites, and newer major releases sometimes rename solvers or change dictionary syntax, so a case prepared for one version may need small adjustments for another.
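
Because module names vary from site to site, it is worth confirming what the loaded environment actually provides. A minimal sanity check, assuming a standard OpenFOAM module that sets the usual environment variables:

# $WM_PROJECT_VERSION is set by the OpenFOAM environment scripts
echo $WM_PROJECT_VERSION

# The solver you plan to use should now be on the PATH
which icoFoam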

Step 2: Setting Up Your Case

Let's start with the classic cavity tutorial case. I've found this to be an excellent starting point for understanding OpenFOAM's structure.

# Copy the tutorial case to your work directory (on recent OpenFOAM
# versions the case may be nested one level deeper, e.g. .../icoFoam/cavity/cavity)
cp -r $FOAM_TUTORIALS/incompressible/icoFoam/cavity ~/work/cavity
cd ~/work/cavity

Case Structure

The cavity case has three essential directories:

cavity/
├── 0/          # Initial conditions
├── constant/   # Physical properties
└── system/     # Solution parameters
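
One step that is easy to miss: the tutorial only ships a blockMeshDict, so the mesh has to be generated before decomposition. The stock cavity mesh is also very coarse (20x20x1 cells), so for the parallel runs below you will want to increase the resolution in system/blockMeshDict first.

# Generate the mesh defined in system/blockMeshDict
blockMesh

# Check mesh quality before going parallel
checkMesh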

Step 3: Parallel Decomposition

Here's where HPC comes into play. Let's prepare our case for parallel execution:

# Edit system/decomposeParDict
cat << EOF > system/decomposeParDict
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      decomposeParDict;
}

numberOfSubdomains 64;  // Adjust based on your allocation

method          scotch;
EOF

# Decompose the case
decomposePar

Domain Decomposition

The number of subdomains must equal the number of MPI ranks in your job allocation (here 64, matching the 2 nodes × 32 tasks requested in the script below). Decomposing too finely drives up communication relative to computation and can cancel out the gain from extra cores.
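
A quick way to check the balance is to keep decomposePar's report, which lists the number of cells assigned to each processor. Many versions can also write the decomposition as a field for inspection in ParaView; check decomposePar -help on your installation, since the options vary slightly between releases.

# Re-run the decomposition and keep the report; uneven per-processor
# cell counts show up directly in this output
decomposePar -force > log.decomposePar
grep -A1 "^Processor" log.decomposePar

# Optionally write the decomposition as a cellDist field for ParaView
decomposePar -cellDist -force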

Step 4: Job Submission

Here's a sample SLURM script I use for OpenFOAM jobs:

#!/bin/bash
#SBATCH --job-name=cavity
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH --time=02:00:00
#SBATCH --partition=compute

module load OpenFOAM/10
module load mpi/openmpi-4.1.5

# Run in parallel on all allocated ranks (2 nodes x 32 tasks = 64,
# matching numberOfSubdomains); capture stderr in the log as well
mpirun icoFoam -parallel > log.icoFoam 2>&1
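
Once the job has finished, the results sitting in the processor*/ directories need to be reassembled before serial post-processing; this step can also be appended to the end of the batch script:

# Merge the processor*/ time directories back into the parent case
reconstructPar

# Or reconstruct only the most recent time step
reconstructPar -latestTime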

Performance Analysis

After running several tests, I found that scaling stays close to linear up to 64 cores (roughly 88% parallel efficiency) and tails off at 128 cores (roughly 77%), which is what you would expect for a relatively small case:

import matplotlib.pyplot as plt

# Measured scaling data for the cavity runs above
cores = [16, 32, 64, 128]
speedup = [15.8, 30.2, 56.4, 98.7]

plt.plot(cores, speedup, 'bo-')
plt.xlabel('Number of Cores')
plt.ylabel('Speedup')
plt.title('OpenFOAM Cavity Case Scaling')
plt.grid(True)
plt.show()  # or plt.savefig('scaling.png')

Common Issues and Solutions

Poor Scaling

If you're seeing poor scaling, check:

  1. Domain decomposition balance
  2. I/O patterns (use collated file I/O; see the example below)
  3. Network interconnect (InfiniBand vs. Ethernet)
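
For the I/O point in particular, the collated file handler writes one file per field instead of one file per processor, which matters more and more as the core count grows. On recent versions it can be enabled per command or via an environment variable; the exact switches vary between releases, so treat the following as a sketch and check the documentation for your installation:

# Decompose and run with collated file I/O (one file per field rather
# than one file per processor rank)
decomposePar -fileHandler collated
mpirun icoFoam -parallel -fileHandler collated > log.icoFoam 2>&1

# Alternatively, set the file handler for the whole session
export FOAM_FILEHANDLER=collated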

Convergence Issues

In my experience, these steps often help:

  1. Adjust the relaxation factors in system/fvSolution
  2. Start with a coarser mesh
  3. Monitor residuals carefully (see the foamLog example below)
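
For residual monitoring, the foamLog script shipped with OpenFOAM parses a solver log into per-quantity files under logs/ that are easy to grep or plot (file names such as p_0 depend on the solver and the fields in your case):

# Extract residuals and other quantities from the solver log into logs/
foamLog log.icoFoam

# Each file is a plain "time value" table, e.g. the initial pressure residual
head logs/p_0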

Advanced Tips

After running hundreds of simulations, I've learned to:

  1. Use pyFoam for automation:

    # PyFoam is a third-party Python package (pip install PyFoam)
    from PyFoam.RunDictionary.ParsedParameterFile import ParsedParameterFile
    
    # Modify control dict programmatically
    control = ParsedParameterFile('system/controlDict')
    control['endTime'] = 1.0
    control.writeFile()
    

  2. Implement proper checkpointing:

    // In system/controlDict
    startFrom       latestTime;   // resume from the last saved time on restart
    writeControl    adjustableRunTime;
    writeInterval   0.1;
    purgeWrite      5;            // keep only the 5 most recent time directories
    

Conclusion

OpenFOAM on HPC systems can be challenging, but with proper setup and understanding, you can achieve excellent performance. Start small, validate your results, and scale up gradually.
