2025 Gordon Bell Prize Finalist - MFC simulates compressible multi-phase flows at exascale, using Fypp metaprogramming in ~40K lines of Fortran. It performed the largest known public CFD simulation, with 200 trillion grid points and 1 quadrillion degrees of freedom, scaling ideally to >43K AMD APUs on El Capitan and >33K AMD GPUs on Frontier.
- Exascale GPU performance - Ideal weak scaling to 43K+ GPUs. Near compute-roofline behavior. Compile-time case optimization for up to 10x speedup.
- Compact codebase - ~40K lines of Fortran with Fypp metaprogramming. Small enough to read and modify; powerful enough for Gordon Bell.
- Native multi-phase - 4, 5, and 6-equation models, phase change, surface tension, bubble dynamics, and Euler-Lagrange particle tracking, all built in.
- Portable - NVIDIA and AMD GPUs, CPUs, laptops to exascale. Docker, Codespaces, Homebrew, and 16+ HPC system templates.
- Tested - 500+ regression tests per PR with line-level coverage across GNU, Intel, Cray, and NVIDIA compilers.
- Truly open - MIT license, active Slack, and responsive development team.
If MFC is useful to your work, please ⭐ star the repo and cite it!
| Path | Command |
|---|---|
| Codespaces (fastest) 💨 | Open a Codespace - pre-built, zero install |
| Docker 🐳 | `docker run -it --rm --entrypoint bash sbryngelson/mfc:latest-cpu` |
| Homebrew (macOS) 🍺 | `brew install mflowcode/mfc/mfc` |
| From source 💻 | `git clone https://github.com/MFlowCode/MFC && cd MFC && ./mfc.sh build -j $(nproc)` |
Your first simulation:
`./mfc.sh run examples/3D_shockdroplet/case.py -n $(nproc)`

Visualize the output in `examples/3D_shockdroplet/silo_hdf5/` with ParaView, VisIt, or your favorite tool.
For detailed build instructions (Linux, macOS, Windows/WSL, HPC clusters), see the Getting Started guide.
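A case is just a Python script that prints a JSON dictionary of solver parameters. The sketch below shows the general shape only; the handful of keys it lists is an illustrative, incomplete subset, so start from a bundled case such as `examples/3D_shockdroplet/case.py` rather than this snippet.

```python
#!/usr/bin/env python3
# Shape of an MFC case file: print a JSON dictionary of solver parameters.
# The keys below are an illustrative subset, not a complete case.
import json

print(json.dumps({
    # 1D domain with 200 cells (m, n, p are cell counts in x, y, z)
    "x_domain%beg": 0.0,
    "x_domain%end": 1.0,
    "m": 199,
    "n": 0,
    "p": 0,
    # Time stepping and output cadence
    "dt": 1.0e-6,
    "t_step_start": 0,
    "t_step_stop": 1000,
    "t_step_save": 100,
}))
```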
Get in touch with Spencer if you have questions! We have an active Slack channel and development team. MFC has high- and low-level documentation, visualizations, and more on its website.
MFC ships with 137+ example cases.
Here is a high-Mach flow over an airfoil (see examples/2D_ibm_airfoil/):
And a high-amplitude acoustic wave reflecting and emerging through a circular orifice:
| Command | Description |
|---|---|
| `./mfc.sh build` | Build MFC and its dependencies |
| `./mfc.sh run case.py` | Run a simulation case (interactive or batch: SLURM/PBS/LSF) |
| `./mfc.sh test` | Run the test suite |
| `./mfc.sh validate case.py` | Check a case file for errors before running |
| `./mfc.sh new my_case` | Create a new case from a template |
| `./mfc.sh clean` | Remove build artifacts |
| `./mfc.sh interactive` | Launch the interactive menu-driven interface |
Run `./mfc.sh <command> --help` for detailed options, or see the full documentation. Tab completion for bash and zsh is auto-installed after you have run `./mfc.sh generate` (or any non-`new` command) at least once. Play with the examples in `examples/` (showcased here).
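These commands also script well. A minimal sketch, assuming it is run from the MFC repository root and that the two example paths (both referenced in this README) exist:

```python
#!/usr/bin/env python3
# Batch-check a few bundled cases with `./mfc.sh validate` before running anything.
import subprocess

cases = [
    "examples/3D_shockdroplet/case.py",
    "examples/2D_ibm_airfoil/case.py",
]

for case in cases:
    print(f"validating {case} ...")
    subprocess.run(["./mfc.sh", "validate", case], check=True)
```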
MFC weak-scales with near-ideal efficiency to the full size of El Capitan (MI300A), Frontier (MI250X), and Alps (GH200). MFC is a SPEChpc benchmark candidate, part of the JSC JUPITER Early Access Program, and has used OLCF Frontier and LLNL El Capitan early-access systems.
- 1-3D
- Compressible
- Low Mach number treatment available
- Multi- and single-component
- 4-, 5-, and 6-equation models for multi-component and multi-phase flow
- Kapila and Allaire 5-equation models
- Multi- and single-phase
- Phase change via p, pT, and pTg schemes
- Grids
- 1-3D Cartesian, cylindrical, axisymmetric
- Arbitrary grid stretching for multiple domain regions
- Complex/arbitrary geometries via immersed boundary method
- STL geometry files supported
- Surface tension for multiphase cases
- Sub-grid bubble dynamics
- Euler-Euler volume-averaged bubble models
- Euler-Lagrange particle tracking
- Quadrature-based moment methods (QBMM)
- Viscous effects (high-order accurate representations)
- Hypoelastic and hyperelastic material models
- Ideal and stiffened gas equations of state
- Body forces
- Acoustic wave generation (one- and two-way sound sources)
- Chemistry and multi-species transport via Pyrometheus
- Magnetohydrodynamics (MHD)
- Relativistic Magnetohydrodynamics (RMHD)
- Shock and interface capturing schemes
- First-order upwinding
- MUSCL (order 2)
- Slope limiters: minmod, monotonized central, Van Albada, Van Leer, superbee
- WENO reconstructions (orders 3, 5, and 7); a simplified sketch of the fifth-order reconstruction follows this feature list
- WENO variants: WENO-JS, WENO-M, WENO-Z, TENO
- Monotonicity-preserving reconstructions
- Reliable handling of large density ratios
- Exact and approximate (e.g., HLL, HLLC, HLLD) Riemann solvers
- Boundary conditions
- Periodic, reflective, extrapolation/Neumann
- Slip and no-slip
- Thompson-based characteristic BCs: non-reflecting sub/supersonic buffers, inflows, outflows
- Generalized characteristic relaxation boundary conditions
- Runge-Kutta orders 1-3 (SSP TVD), adaptive time stepping
- RK4-5 operator splitting for Euler-Lagrange modeling
- Interface sharpening (THINC-like)
- Information geometric regularization (IGR)
- Shock capturing without WENO and Riemann solvers
- GPU-compatible with NVIDIA ([P/V/A/H]100, GH200, etc.) and AMD (MI[1/2/3]00+) GPU and APU hardware
- Ideal weak scaling to 100% of the largest GPU and superchip supercomputers
- >43K AMD APUs (MI300A) on LLNL El Capitan
- >3K AMD APUs (MI300A) on LLNL Tuolumne
- >33K AMD GPUs (MI250X) on OLCF Frontier
- >10K NVIDIA GPUs (V100) on OLCF Summit
- Near compute roofline behavior
- Compile-time case optimization (hard-codes parameters for significant speedup)
- RDMA (remote direct memory access; GPU-GPU direct communication) via GPU-aware MPI on NVIDIA (CUDA-aware MPI) and AMD GPU systems
- Built-in profiling support (NVIDIA Nsight Compute/Systems, AMD rocprof)
- Optional single-precision computation and storage
- Fypp metaprogramming for code readability, performance, and portability
- Continuous Integration (CI)
- >500 regression tests with each PR
- Performed with GNU (GCC), Intel (oneAPI), Cray (CCE), and NVIDIA (NVHPC) compilers on NVIDIA and AMD GPUs.
- Line-level test coverage reports via Codecov and gcov
- Benchmarking to avoid performance regressions and identify speed-ups
- Continuous Deployment (CD) of website and API documentation
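For orientation on the reconstruction schemes named in the numerics list above, here is a compact NumPy sketch of a fifth-order WENO-JS reconstruction of the left-biased interface state. It is a simplified, scalar, boundary-free illustration, not MFC's Fortran/Fypp implementation; the function name, epsilon value, and array layout are arbitrary choices made for this sketch.

```python
# Simplified fifth-order WENO-JS reconstruction (illustration only).
import numpy as np

def weno5_js_left(v, eps=1e-6):
    """Left-biased state at interface i+1/2 from cell averages v[i-2..i+2].
    Interior cells only (no boundary treatment)."""
    vm2, vm1, v0, vp1, vp2 = v[0:-4], v[1:-3], v[2:-2], v[3:-1], v[4:]

    # Third-order candidate reconstructions on the three sub-stencils
    p0 = (2.0 * vm2 - 7.0 * vm1 + 11.0 * v0) / 6.0
    p1 = (-vm1 + 5.0 * v0 + 2.0 * vp1) / 6.0
    p2 = (2.0 * v0 + 5.0 * vp1 - vp2) / 6.0

    # Jiang-Shu smoothness indicators
    b0 = 13.0 / 12.0 * (vm2 - 2.0 * vm1 + v0) ** 2 + 0.25 * (vm2 - 4.0 * vm1 + 3.0 * v0) ** 2
    b1 = 13.0 / 12.0 * (vm1 - 2.0 * v0 + vp1) ** 2 + 0.25 * (vm1 - vp1) ** 2
    b2 = 13.0 / 12.0 * (v0 - 2.0 * vp1 + vp2) ** 2 + 0.25 * (3.0 * v0 - 4.0 * vp1 + vp2) ** 2

    # Nonlinear weights built from the ideal linear weights (1/10, 6/10, 3/10)
    a0, a1, a2 = 0.1 / (eps + b0) ** 2, 0.6 / (eps + b1) ** 2, 0.3 / (eps + b2) ** 2
    return (a0 * p0 + a1 * p1 + a2 * p2) / (a0 + a1 + a2)

# A step profile: the weights shut off the stencils that cross the jump
v = np.where(np.arange(20) < 10, 1.0, 0.0)
print(weno5_js_left(v))
```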
If you use MFC, consider citing it as below. Ref. 1 covers all modern MFC features, including GPU acceleration and many new physics features. If referencing MFC's (GPU) performance, consider citing refs. 1 and 2, which describe the solver and its design. The original open-source release of MFC is ref. 3, which should be cited for provenance as appropriate.
@article{wilfong26,
title = {{MFC 5.0: A}n exascale many-physics flow solver},
author = {Benjamin Wilfong and Henry {Le Berre} and Anand Radhakrishnan and Ansh Gupta and Daniel J. Vickers and Diego Vaca-Revelo and Dimitrios Adam and Haocheng Yu and Hyeoksu Lee and Jose Rodolfo Chreim and Mirelys {Carcana Barbosa} and Yanjun Zhang and Esteban Cisneros-Garibay and Aswin Gnanaskandan and Mauro {Rodriguez Jr.} and Reuben D. Budiardja and Stephen Abbott and Tim Colonius and Spencer H. Bryngelson},
journal = {Computer Physics Communications},
year = {2026},
volume = {322},
pages = {110055},
doi = {10.1016/j.cpc.2026.110055}
}
@article{Radhakrishnan_2024,
title = {Method for portable, scalable, and performant {GPU}-accelerated simulation of multiphase compressible flow},
author = {A. Radhakrishnan and H. {Le Berre} and B. Wilfong and J.-S. Spratt and M. {Rodriguez Jr.} and T. Colonius and S. H. Bryngelson},
journal = {Computer Physics Communications},
year = {2024},
volume = {302},
pages = {109238},
doi = {10.1016/j.cpc.2024.109238}
}
@article{Bryngelson_2021,
title = {{MFC: A}n open-source high-order multi-component, multi-phase, and multi-scale compressible flow solver},
author = {S. H. Bryngelson and K. Schmidmayer and V. Coralic and J. C. Meng and K. Maeda and T. Colonius},
journal = {Computer Physics Communications},
year = {2021},
volume = {266},
pages = {107396},
doi = {10.1016/j.cpc.2020.107396}
}

Copyright 2021 Spencer Bryngelson and Tim Colonius. MFC is under the MIT license (see LICENSE for full text).
Federal sponsors have supported MFC development, including the US Department of Defense (DOD), the National Institutes of Health (NIH), the Department of Energy (DOE) and National Nuclear Security Administration (NNSA), and the National Science Foundation (NSF).
MFC computations have used many supercomputing systems. A partial list is below:
- OLCF Frontier and Summit, and testbeds Wombat, Crusher, and Spock (allocation CFD154, PI Bryngelson).
- LLNL El Capitan, Tuolumne, and Lassen; El Capitan early access system Tioga.
- NCSA Delta and DeltaAI, PSC Bridges(1/2), SDSC Comet and Expanse, Purdue Anvil, TACC Stampede(1-3), and TAMU ACES via ACCESS-CI allocations from Bryngelson, Colonius, Rodriguez, and more.
- DOD systems Blueback, Onyx, Carpenter, Nautilus, and Narwhal via the DOD HPCMP program.
- Sandia National Labs systems Doom and Attaway, and testbed systems Weaver and Vortex.

