Boltzmannomics: A Framework for Agent-Based Economic Simulations Inspired by Statistical Physics
Boltzmannomics is an agent-based economic simulation framework that leverages CUDA Fortran and multicore processing for GPU and CPU parallelism. Its models are based on analogies with statistical physics, such as kinetic exchange, spin flips, and magnetization.
- Start right away: Running included models and changing their parameters is as simple as editing two small text files.
- Get results fast: Boltzmannomics uses GPU & CPU parallelized Fortran & Python wherever possible to support millions of agents, during runtime and post-processing.
- Create custom models: This library is great for researchers developing new models in modern Fortran since there are clear examples & instructions for how to slot new ABMs into the existing object-oriented Boltzmannomics framework.
- Implemented in modern Fortran (F2003+ OOP, F95+ dynamic arrays, etc.)
- Uses the F2008 language feature `do concurrent` and OpenMP for GPU/CPU parallelism
- Distributed computing support in a cluster environment via:
  - MPI for very large populations and/or complex agents¹
  - A cluster workload manager such as Slurm for parallelized studies
- Simplified build system using fpm
- Easy for users to build their own custom models with the framework using existing models as a guide
- Python post-processing tools with optional CPU parallelism
- Histograms for population properties with optional probability distribution fitting
- Line plots of summary stats for population properties over time
- Line plots tracking custom population metrics (Gini coefficient, etc.) over time
- Available simulation models (see MODELS.md for more details):
  - `SimpleExchange` - Simple random exchange with configurable debt limit (a minimal sketch of this exchange rule follows the feature list below)
  - `KineticIsing` - Extension of `SimpleExchange` where agents have buy/sell 'spins'
  - `CCMExchange` - Extension of `SimpleExchange` where agents have individual propensities to save
  - `ConservativeExchangeMarket` - Proximity-based wealth redistribution model - reference (4)
  - `StochasticPreferences` - Exchange market where agents have stochastic preferences for two goods - reference (5)
- Builds with `gfortran` 15.1.0+ and `nvfortran`
¹ MPI is only implemented for `SimpleExchange` so far, to showcase the capability
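
For readers new to these models, the sketch below shows the kind of random pairwise exchange rule that `SimpleExchange`-style models are built around (see references (1) and (2) for the underlying physics). It is a minimal, self-contained illustration; the program and variable names are not the framework's actual implementation.

```fortran
! Minimal sketch of a random pairwise exchange step (kinetic exchange).
! Names and the particular exchange-rule variant are illustrative, not the
! SimpleExchange implementation itself.
program random_exchange_sketch
    implicit none
    integer, parameter :: n_agents = 1000, n_steps = 100000
    real :: wealth(n_agents), r(3), eps, total
    integer :: step, i, j

    wealth = 1.0                      ! everyone starts with the same wealth

    do step = 1, n_steps
        call random_number(r)
        i = 1 + int(r(1) * n_agents)  ! pick two agents at random
        j = 1 + int(r(2) * n_agents)
        if (i == j) cycle
        eps   = r(3)                  ! random split fraction in [0, 1)
        total = wealth(i) + wealth(j) ! the pair's total wealth is conserved
        wealth(i) = eps * total
        wealth(j) = (1.0 - eps) * total
    end do

    print *, 'mean wealth =', sum(wealth) / n_agents
    print *, 'max  wealth =', maxval(wealth)
end program random_exchange_sketch
```

This sketch conserves total wealth and never lets an agent go into debt; the configurable debt limit, spins, and saving propensities listed above are modifications of this basic rule.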
The fortran package manager (v0.12.0+) is assumed to be installed at ~/fpm
Tested compiler configurations:
| | gfortran 15.1.0 | nvfortran 25.5-0 |
|---|---|---|
| x64 Ubuntu 24.04 | ✅CPU | 🔵GPU, ✅CPU |
| arm64 Ubuntu 24.04 | ✅CPU | ❓GPU, ➕CPU |
Key:
- ✅ `do concurrent` and MPI parallelism
- 🔵 `do concurrent` parallelism
- ➕ MPI parallelism
- ❓ Untested configuration
- ❌ Unsupported configuration
`nvfortran` is recommended; however, compatibility is maintained with the open-source `gfortran` compiler, in line with JOSS recommendations.
- To get an updated gfortran, create a `conda` environment and run:
  - `conda install -c conda-forge gfortran=15.1.0`
  - `conda config --add channels conda-forge`
  - `conda config --set channel_priority strict`
  - `conda install openmpi openmpi-mpifort`
Compilation notes:
- To build with CPU std parallelism instead of GPU with `nvfortran`, swap the commented-out lines in nvfortran-environment-vars.sh
- To build with `gfortran` with CPU std parallelism, use `source gfortran-environment-vars.sh && ~/fpm build`
  - Set the number of cores used with GFORT_N_CORES in gfortran-environment-vars.sh
  - Set the gfortran 15.1+ compiler path in gfortran-environment-vars.sh
- View `nvfortran` optimization printouts: `source nvfortran-environment-vars.sh && ~/fpm clean && rm -f compile.txt && ~/fpm build --verbose &> compile.txt`
(Optional) For a better development experience, use the Modern Fortran VS Code extension. On the main development machine, the Fortran language server is located in a conda env at /home/linuxuser/miniconda3/envs/fortran-fortls, along with fprettify for code formatting.
Using MPI
Building with MPI:
`source gfortran-environment-vars.sh MPI && ~/fpm build` or `source nvfortran-environment-vars.sh MPI && ~/fpm build`
Running with MPI on a single host with 4 cores:
`rm out/*; $MPIRUN -n 4 ~/fpm run`
Running with MPI on a cluster of nodes with 4 cores: TODO - this will be its own tutorial in DISTRIBUTED.md once I run on my arm64 MPI cluster
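
For reference, the sketch below shows one generic way a large agent population can be block-partitioned across MPI ranks. It is illustrative only, not the framework's MPI code, and the program and variable names are assumptions.

```fortran
! Generic sketch of block-partitioning an agent population across MPI ranks.
! Not the framework's MPI code; shown only to illustrate the idea.
program mpi_partition_sketch
    use mpi_f08
    implicit none
    integer, parameter :: n_agents_total = 1000000
    integer :: rank, nranks, n_local, offset
    real, allocatable :: wealth(:)

    call MPI_Init()
    call MPI_Comm_rank(MPI_COMM_WORLD, rank)
    call MPI_Comm_size(MPI_COMM_WORLD, nranks)

    ! Each rank owns a contiguous slice of agents; the last rank
    ! picks up the remainder when the split is uneven.
    n_local = n_agents_total / nranks
    offset  = rank * n_local
    if (rank == nranks - 1) n_local = n_agents_total - offset

    allocate(wealth(n_local))
    wealth = 1.0

    print *, 'rank', rank, 'owns agents', offset + 1, 'through', offset + n_local

    call MPI_Finalize()
end program mpi_partition_sketch
```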
Terraform and the AWS CLI are assumed to be installed, with AWS CLI credentials sourced in the current shell.
Configs are located in the terraform/ directory.
Modify the instance type in main.tf as needed for more CPUs and/or to enable GPUs. Changing the disk size is possible too.
A Docker image containing the dependencies needed for running Boltzmannomics is deployed on the EC2 machine for use there. Note that only nvfortran is supported and not gfortran since I ran into issues with building gfortran 15.1 from source.
- `cd` to the terraform/ folder
- (One-Time) Generate an SSH key pair with `ssh-keygen -t rsa -b 4096 -f ~/.ssh/my_aws_key`
- Set the environment variables that hold your AWS CLI secret key credentials
- (One-Time) Initialize with `terraform init`
- Check with `terraform validate`
- Look over the plan with `terraform plan`
- Deploy with `terraform apply`
  - Pulling the Boltzmannomics Docker container takes a few minutes because of the large NVIDIA HPC toolkit
- Obtain the public IP address of the instance with `terraform show | grep ip`
- Connect via SSH to the instance with `ssh -i ~/.ssh/my_aws_key ubuntu@THE_PUBLIC_IP`
- Launch the Docker container with the Boltzmannomics framework mounted to /src: `cd && sudo docker run --rm -it -v ${PWD}/economic-simulation:/src -w /src --gpus all ifriedri/economic-simulation-runtime:latest`
Note that the `--gpus all` option does no harm if the EC2 machine has no GPU, and `nvfortran` also works CPU-only.
From the top level of the repo:
- Copy a pair of .nml files from the templates folder into the top-level directory (a hedged namelist sketch follows this list)
- Build: `source nvfortran-environment-vars.sh && ~/fpm build`
- Run: `rm out/*; ~/fpm run`
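
The .nml files copied in the first step are Fortran namelist files. As a hedged illustration of how that mechanism works (the group and variable names below are hypothetical, not the actual parameters from the templates folder):

```fortran
! Hypothetical example of the namelist mechanism used by the .nml config files.
! Group and variable names here are made up; see the templates folder for the
! real parameters expected by each model.
program read_config_sketch
    implicit none
    integer :: n_agents = 1000, n_steps = 10000
    real    :: debt_limit = 0.0
    integer :: unit_nr

    namelist /sim_config/ n_agents, n_steps, debt_limit

    ! A matching sim_config.nml file would contain (note the trailing
    ! newline that gfortran requires):
    !   &sim_config
    !       n_agents   = 100000
    !       n_steps    = 50000
    !       debt_limit = -1.0
    !   /
    open(newunit=unit_nr, file='sim_config.nml', status='old', action='read')
    read(unit_nr, nml=sim_config)
    close(unit_nr)

    print *, 'n_agents =', n_agents, 'n_steps =', n_steps, 'debt_limit =', debt_limit
end program read_config_sketch
```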
See postprocess/README_analyze.md for more details!
From inside the postprocess folder, with the Python env activated (use python-3.12.11-requirements.txt to create env):
- Unified Analysis: `rm histograms/* stats/*; python analyze.py`
  - Creates histograms with advanced distribution fitting (exponential, log-normal, gamma, etc.)
  - Generates statistics plots over time for agent populations
  - Produces simulation metrics visualization
  - Uses JSON configuration for flexible analysis settings
- Configuration:
  - First time: `python analyze.py --create-config` to generate a default config
  - Edit `analyze_config.json` to customize analysis (distributions to fit, data splitting, etc.)
- Selective Analysis:
  - Histograms only: `python analyze.py --histograms-only`
  - Statistics only: `python analyze.py --stats-only`
  - Metrics only: `python analyze.py --metrics-only`
- Profiling (basic), adapted from this book:
  - `nsys profile -o nsys.log ~/fpm run`
  - `nsys stats nsys.log.nsys-rep`
  - Look at the "CUDA GPU Kernel Summary", which lists the `do concurrent` loops
Due to the abstraction in the code, the process for adding new models is straightforward:
- Subclass the AbstractSimulation class in a new module / .f90 file prefixed with "model_"
- Add the new class to the factory in sim_factory.f90
- Subclass the AbstractConfig class within kinds_config.f90
- Add the new class to the create_config() factory function in kinds_config.f90
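
A rough, self-contained skeleton of the subclass-and-override pattern behind the first two steps is shown below. The type, binding, and module names are hypothetical stand-ins; match them to the real AbstractSimulation interface and the factory in sim_factory.f90.

```fortran
! Self-contained sketch of the subclass-and-override pattern (steps 1-2).
! Type, binding, and module names are hypothetical stand-ins for the real
! AbstractSimulation interface defined by the framework.
module model_my_exchange_m
    implicit none
    private
    public :: base_sim_t, my_exchange_t

    ! Stand-in for the framework's abstract simulation class.
    type, abstract :: base_sim_t
    contains
        procedure(step_iface), deferred :: step
    end type base_sim_t

    abstract interface
        subroutine step_iface(self)
            import :: base_sim_t
            class(base_sim_t), intent(inout) :: self
        end subroutine step_iface
    end interface

    ! A new model extends the abstract class and overrides its deferred hooks.
    type, extends(base_sim_t) :: my_exchange_t
        real, allocatable :: wealth(:)
    contains
        procedure :: step => my_exchange_step
    end type my_exchange_t

contains

    subroutine my_exchange_step(self)
        class(my_exchange_t), intent(inout) :: self
        ! Model-specific update of self%wealth goes here.
    end subroutine my_exchange_step

end module model_my_exchange_m
```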
True OOP is not possible inside `do concurrent` loops (at the moment) due to the CUDA Fortran restriction on the statements allowed there. However, steps have still been taken to use abstraction and encapsulation where possible. See this link for more information about the restrictions on `do concurrent`.
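
As a generic illustration of the array-based style these restrictions encourage (a sketch, not code from the framework), per-agent state is updated through plain arrays inside `do concurrent` rather than through polymorphic type-bound calls:

```fortran
! Generic sketch: per-agent state lives in plain arrays and is updated with
! independent iterations inside do concurrent, avoiding polymorphic
! type-bound calls that are restricted in that context.
program dc_array_sketch
    implicit none
    integer, parameter :: n = 1000000
    real, allocatable  :: wealth(:), tax_rate(:)
    integer :: i

    allocate(wealth(n), tax_rate(n))
    wealth   = 1.0
    tax_rate = 0.01

    ! Eligible for GPU offload with nvfortran -stdpar=gpu
    ! or multicore CPU execution with -stdpar=multicore.
    do concurrent (i = 1:n)
        wealth(i) = wealth(i) * (1.0 - tax_rate(i))
    end do

    print *, 'total wealth after the flat tax:', sum(wealth)
end program dc_array_sketch
```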
As we move beyond classic exchange-style models with trivial agent structs to true agent objects (strategy-following, history-aware, neural-network-based, etc.), it may prove necessary to rely on OpenMP loops for parallelism. See:
- nvfortran: https://docs.nvidia.com/hpc-sdk/archive/25.5/compilers/hpc-compilers-user-guide/index.html#using-openmp
- gfortran:
Classic models:
- CPT model - section II.A of reference (3)
- BM model (possibly discrete version) - section II.B of reference (3)
Continue going through reference (3) and identify more prebuilt models to include in the library
- DISTRIBUTED.md docs on running Boltzmannomics on an MPI cluster
- Parallel 'study' runner using MPI that can run on clusters
- ABMs for financial markets & diverse pool of agents
- Create a development Dockerfile & accompanying devcontainer.json
- Benchmarks: gfortran cpu (serial) vs nvidia cpu (parallel) vs nvidia gpu
- Update the sim_factory_m module with another routine to construct simulators without file I/O
- Add some initial tests. Will likely need to expand the `AbstractSimulation` interface with some "test_" helper routines
- Custom HDF5-like binary format instead of CSV for population data (already experimented with and looks promising; a hedged sketch of the idea follows this list)
  - Folder containing binary files
  - Each binary file is directly an unformatted array that was formerly a CSV column
  - Should be readable with numpy `np.fromfile`
- Add support for the lfortran compiler once namelist support is added. It supports `do concurrent`!
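
For the binary output format item above, here is a hedged sketch of the idea (the file name, layout, and precision are assumptions, not the format from the experiment): each former CSV column is written as a flat unformatted stream that numpy can read back directly.

```fortran
! Sketch of writing one population property as a raw binary column.
! File name, layout, and precision are assumptions: a flat array of 64-bit
! reals with no header, readable in Python via
!   np.fromfile('wealth.bin', dtype=np.float64)
program write_binary_column_sketch
    use iso_fortran_env, only: real64
    implicit none
    integer, parameter :: n = 100000
    real(real64), allocatable :: wealth(:)
    integer :: unit_nr

    allocate(wealth(n))
    call random_number(wealth)   ! placeholder population data

    open(newunit=unit_nr, file='wealth.bin', access='stream', &
         form='unformatted', status='replace', action='write')
    write(unit_nr) wealth
    close(unit_nr)
end program write_binary_column_sketch
```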
1. Colloquium: Statistical mechanics of money, wealth, and income (2009). https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.81.1703
2. Statistical mechanics of money (2000). https://doi.org/10.1007/s100510070114
3. Twenty-five years of random asset exchange modeling (2024). https://link.springer.com/article/10.1140/epjb/s10051-024-00695-3
4. Wealth redistribution in our small world (2003). https://doi.org/10.1016/S0378-4371(03)00430-8. https://www.if.ufrgs.br/~iglesias/Wealth%20small%20world.pdf
5. Statistical Equilibrium Wealth Distributions in an Exchange Economy with Stochastic Preferences (2002). https://doi.org/10.1006/jeth.2001.2897
- Apparently gfortran needs .nml files to end in a newline character... link
- gfortran 15.1, released in April 2025, finally supports the locality spec for `do concurrent`






