Constraint Boundary Wandering Framework: Enhancing Constrained Optimization with Deep Neural Networks
This repository is by Shuang Wu, Shixiang Chen, and Leifei Zhang, and contains the PyTorch source code to reproduce the experiments in our paper "[Constraint Boundary Wandering Framework: Enhancing Constrained Optimization with Deep Neural Networks]".
If you find this repository helpful in your publications, please consider citing our paper.
Constrained optimization problems are pervasive in various fields, and while conventional techniques offer solutions, they often struggle with scalability. Leveraging the power of deep neural networks (DNNs) in optimization, we present a novel learning-based approach, the Constraint Boundary Wandering Framework (CBWF), to address these challenges. Our contributions include introducing a Boundary Wandering Strategy (BWS) inspired by the active-set method, enhancing equality constraint feasibility, and treating the Lipschitz constant as a learnable parameter. Additionally, we evaluate the regularization term, finding that the L2 norm yields superior results. Extensive testing on synthetic datasets and the ACOPF dataset demonstrates CBWF's superiority, outperforming existing deep learning-based solvers in terms of both objective and constraint loss.
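To give a concrete picture of the kind of equality-constraint boundary step an active-set-inspired strategy involves, the sketch below projects a candidate point onto the affine set `A x = b` via a minimum-norm correction. This is an illustrative NumPy sketch under our own assumptions, not the paper's BWS implementation.

```python
import numpy as np

def project_onto_equality(x, A, b):
    """Project x onto the affine set {x : A @ x = b}.

    Illustrative only: a minimal stand-in for the kind of boundary step
    an active-set-inspired method might take; not the repository's code.
    """
    residual = A @ x - b
    # Minimum-norm correction: x <- x - A^T (A A^T)^{-1} (A x - b)
    correction = A.T @ np.linalg.solve(A @ A.T, residual)
    return x - correction

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 10))   # 3 equality constraints, 10 variables
b = rng.standard_normal(3)
x = rng.standard_normal(10)        # arbitrary infeasible candidate

x_feasible = project_onto_equality(x, A, b)
print(np.abs(A @ x_feasible - b).max())  # essentially zero: constraints satisfied
```

After such a projection, a solver is free to "wander" along the constraint boundary while improving the objective, which is the intuition behind keeping equality-constraint feasibility tight.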
- Python 3.x
- PyTorch >= 1.8
- numpy/scipy/pandas
- osqp: State-of-the-art QP solver
- qpth: Differentiable QP solver for PyTorch
- ipopt: Interior point solver
- pypower: Power flow and optimal power flow solvers
- argparse: Input argument parsing
- pickle: Object serialization
- hashlib: Hash functions (used to generate folder names)
- setproctitle: Set process titles
- waitGPU (optional): Intelligently set `CUDA_VISIBLE_DEVICES`
Datasets for the experiments presented in our paper are available in the `datasets` folder. These datasets can be generated by running the Python script `make_dataset.py` within each subfolder (`simple`, `nonconvex`, and `acopf`) corresponding to the different problem types we test. We use `make_dataset_high_ioopt.py` to generate the high-order objective case.
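For intuition, a generator for the convex (`simple`) case might look roughly like the sketch below, which samples a random QP family whose instances differ in the equality right-hand side. The problem form, dimensions, and function name here are assumptions for illustration, not the repository's actual `make_dataset.py`.

```python
import numpy as np

def make_simple_dataset(num_var=100, num_ineq=50, num_eq=50,
                        num_examples=10, seed=0):
    """Hypothetical sketch of a convex-QP dataset generator.

    min_x  0.5 x^T Q x + p^T x   s.t.  A x = b,  G x <= h
    Each example varies the equality right-hand side b.
    """
    rng = np.random.default_rng(seed)
    Q = np.diag(rng.random(num_var))              # PSD objective matrix
    p = rng.standard_normal(num_var)
    A = rng.standard_normal((num_eq, num_var))    # equality constraints
    G = rng.standard_normal((num_ineq, num_var))  # inequality constraints
    h = np.abs(G @ rng.standard_normal(num_var))
    B = rng.uniform(-1, 1, size=(num_examples, num_eq))  # per-example b
    return {"Q": Q, "p": p, "A": A, "G": G, "h": h, "B": B}

data = make_simple_dataset()
print(data["B"].shape)  # (10, 50)
```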
Our method and baselines can be run using the following Python files:
- `bws_main.py`: Our modified DC3 method
- `cbwf_method.py`: Our proposed method
- `bws_second_main.py`: After training DC3, we use BWS to find a better solution
See each file for relevant flags to set the problem type and method parameters. Notably:
- `--probType`: Problem setting to test (`cbwf_method` provides `simple`, `nonconvex`, `acopf57`, and `high_o`; `bws_main` only supports `simple`, `nonconvex`, and `acopf57`)
- `--simpleVar`, `--simpleIneq`, `--simpleEq`, `--simpleEx`: If the problem setting is `simple`, the number of decision variables, inequalities, equalities, and datapoints, respectively.
- `--nonconvexVar`, `--nonconvexIneq`, `--nonconvexEq`, `--nonconvexEx`: If the problem setting is `nonconvex`, the number of decision variables, inequalities, equalities, and datapoints, respectively.

Alternatively, you can alter these hyperparameters in `default_args.py`.
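As a rough sketch of how such flags and file-level defaults typically interact: defaults come from a dictionary and the command line overrides them. The flag names below follow the list above, but the internal structure of `default_args.py` is an assumption, not the repository's actual code.

```python
import argparse

# Hypothetical stand-in for default_args.py-style defaults.
DEFAULTS = {"probType": "simple", "simpleVar": 100, "simpleIneq": 50,
            "simpleEq": 50, "simpleEx": 10000}

def build_parser(defaults):
    parser = argparse.ArgumentParser()
    parser.add_argument("--probType", type=str, default=defaults["probType"],
                        choices=["simple", "nonconvex", "acopf57", "high_o"])
    parser.add_argument("--simpleVar", type=int, default=defaults["simpleVar"])
    parser.add_argument("--simpleIneq", type=int, default=defaults["simpleIneq"])
    parser.add_argument("--simpleEq", type=int, default=defaults["simpleEq"])
    parser.add_argument("--simpleEx", type=int, default=defaults["simpleEx"])
    return parser

# Command-line flags override the dictionary defaults:
args = build_parser(DEFAULTS).parse_args(["--probType", "nonconvex",
                                          "--simpleVar", "200"])
print(args.probType, args.simpleVar, args.simpleEq)  # nonconvex 200 50
```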