ExaPF is an HPC package implementing a vectorized modeler for power systems. It primarily targets GPU architectures and provides a portable abstraction to model power systems on upcoming HPC architectures.
Its main features are:
- Portable approach: All expressions (PowerFlowBalance, CostFunction, PowerGenerationBounds, ...) are evaluated fully on the GPU, without data transfers to the host.
- Differentiable kernels: All the expressions are differentiable with ForwardDiff.jl. ExaPF uses matrix coloring to generate the Jacobian and the Hessian efficiently in sparse format (see the sketch below).
- Power flow solver: ExaPF implements a power flow solver working fully on the GPU, based on a Newton-Raphson algorithm.
- Iterative linear algebra: ExaPF uses Krylov.jl to solve sparse linear systems entirely on the GPU, together with an overlapping Schwarz preconditioner.
ExaPF leverages KernelAbstractions.jl to generate portable kernels that run on different backends. Right now, we support NVIDIA's CUDA and AMD's ROCm backends, with Intel oneAPI support in development.
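As a concrete illustration of the first two features, here is a minimal sketch that evaluates the power flow balance and assembles its sparse Jacobian with ExaPF's AutoDiff layer. The names PolarBasis, Jacobian, jacobian! and State follow ExaPF's documentation, but exact signatures may vary between releases, so treat this as a sketch rather than a reference.

```julia
using ExaPF

polar = ExaPF.PolarForm("case57.m", CPU())
stack = ExaPF.NetworkStack(polar)

# Compose the power flow balance with the polar basis
# (composition with ∘ follows the documented expression interface).
basis = ExaPF.PolarBasis(polar)
pflow = ExaPF.PowerFlowBalance(polar) ∘ basis

# Evaluate the expression in place, directly on the device hosting `stack`.
cons = similar(stack.input, length(pflow))
pflow(cons, stack)

# Build a sparse Jacobian w.r.t. the state variables using ForwardDiff
# and matrix coloring, then evaluate it at the current point.
jac = ExaPF.Jacobian(polar, pflow, ExaPF.State())
J = ExaPF.jacobian!(jac, stack)
```

The same code runs on a GPU by instantiating the PolarForm on a GPU device, as in the quick-start example below.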
ExaPF can be installed and tested from the Julia package manager:

```
pkg> add ExaPF
pkg> test ExaPF
```
ExaPF solves the power flow equations of a power network with a Newton-Raphson algorithm:
```julia
using ExaPF

# Input file
case = "case57.m"
# Instantiate a PolarForm object on the CPU.
# (Replace CPU() by CUDADevice() to offload the computation to a CUDA GPU.)
polar = ExaPF.PolarForm(case, CPU())
# Initial variables
stack = ExaPF.NetworkStack(polar)
# Solve power flow
conv = run_pf(polar, stack; verbose=1)
```
This returns:

```
#it 0: 6.18195e-01
#it 1: 8.19603e-03
#it 2: 7.24135e-06
#it 3: 4.68355e-12
Power flow has converged: true
* #iterations: 3
* Time Jacobian (s) ........: 0.0004
* Time linear solver (s) ...: 0.0010
* Time total (s) ...........: 0.0014
```
For more information on how to solve power flow on the GPU, please refer to the quickstart guide.
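As a hedged sketch of what the GPU path looks like, the CPU example above can be ported by swapping the device argument. The required backend package (CUDA.jl, possibly together with CUDAKernels.jl on older releases) and the exact device constructor depend on the ExaPF version; CUDADevice() is used here as in the comment above.

```julia
using ExaPF
using CUDA   # CUDA backend; older ExaPF releases may also need CUDAKernels.jl

case = "case57.m"
# Same workflow as on the CPU: only the device argument changes,
# so the model, its data, and the Newton-Raphson solve all stay on the GPU.
polar_gpu = ExaPF.PolarForm(case, CUDADevice())
stack_gpu = ExaPF.NetworkStack(polar_gpu)
conv = run_pf(polar_gpu, stack_gpu; verbose=1)
```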
- Argos.jl uses ExaPF as a modeler to accelerate the solution of OPF problems on CUDA GPUs.
We welcome any contribution to ExaPF! Bugs and feature requests can be reported on the issue tracker, and new contributions can be made by opening a pull request on the develop branch. For more information about development guidelines, please refer to CONTRIBUTING.md.
This research was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the U.S. Department of Energy’s Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation’s exascale computing imperative.