This repository presents a comparative study of metaheuristic optimization algorithms applied to the tuning of a PI controller for a nonlinear tank system at laboratory scale.
The objective is to optimize the closed-loop response of the system under realistic operational constraints, minimizing control error while ensuring feasible actuator behavior.
The study evaluates multiple metaheuristic and evolutionary algorithms, comparing:
- Convergence behavior
- Fitness evolution
- Computational cost
- Closed-loop simulation performance
The work follows an industrial-style control engineering approach, prioritizing robustness and practicality over purely theoretical optimality.
- Exhaustive search: Exhaustive grid search implementation, used as the reference optimum. Main script: run_exhaustive.m
- PSO: Particle Swarm Optimization (PSO) implementation and execution scripts. Main script: run_pso.m
- GA: Genetic Algorithm (GA) implementation and execution scripts. Main script: run_ga.m
- firefly: Firefly Algorithm implementation and execution scripts. Main script: run_firefly.m
- ABC: Artificial Bee Colony (ABC) implementation and execution scripts. Main script: run_abc.m
- results: Result comparison scripts and stored fitness evolution data (.mat files). Main script: plot_comparison.m
Each folder contains a self-contained implementation of the corresponding optimization algorithm.
- Process: Single-tank level control (laboratory-scale plant)
- Dynamics: Nonlinear (flow-level relationship)
- Configuration: SISO (Single-Input, Single-Output)
- Control Variable: Tank liquid level
- Manipulated Variable: Pump voltage
- Operating Point: Linearized around nominal steady-state conditions
The nonlinear model is linearized around a chosen operating point to enable classical PI control design, while the optimization algorithms handle the non-idealities and constraints.
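To illustrate the linearization step, the sketch below assumes a standard single-tank model, A dh/dt = k_u·u − k_v·√h (the actual plant parameters and model structure in the repository may differ, and the repository's scripts are MATLAB; this is an illustrative Python sketch with assumed coefficients):

```python
import math

# Illustrative parameters (assumptions, not the lab plant's actual values)
A   = 0.015   # tank cross-section [m^2]
k_u = 1e-4    # pump gain [m^3/s per V]
k_v = 2e-4    # outlet coefficient; outflow = k_v * sqrt(h)

def linearize(h0):
    """Linearize A*dh/dt = k_u*u - k_v*sqrt(h) around level h0.
    Returns (K, tau) of the first-order model K / (tau*s + 1)."""
    a = k_v / (2.0 * math.sqrt(h0))   # slope of the outflow curve at h0
    tau = A / a                       # time constant [s]
    K = k_u / a                       # steady-state gain [m per V]
    return K, tau

K, tau = linearize(h0=0.25)
```

The square-root outflow is what makes the dynamics nonlinear: the local gain and time constant both depend on the chosen operating level h0, which is why the linearization is only valid near the nominal steady state.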
- Controller type: PI (Proportional-Integral)
- Structure: Unity feedback loop
- Design focus:
- Reference tracking
- Stable transient response
- Limited actuator effort
Derivative action is intentionally excluded due to process characteristics and actuator limitations.
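A discrete PI law with actuator saturation can be sketched as follows. This is an illustrative Python sketch, not the repository's MATLAB implementation; the saturation limits and the conditional-integration anti-windup scheme are assumptions:

```python
def pi_controller(Kp, Ti, dt, u_min=0.0, u_max=10.0):
    """Discrete PI controller u = Kp*(e + (1/Ti)*integral(e)) with
    output saturation and conditional-integration anti-windup."""
    integral = 0.0
    def step(error):
        nonlocal integral
        u = Kp * (error + integral / Ti)
        if u_min < u < u_max:          # integrate only while unsaturated
            integral += error * dt
        return min(max(u, u_min), u_max)
    return step

ctrl = pi_controller(Kp=0.60, Ti=15.0, dt=0.1)
```

Halting integration while the pump command is saturated keeps the integral term from winding up, which matters here because limited actuator effort is an explicit design goal.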
The following diagrams illustrate the closed-loop control structure and the physical tank system considered in this study.
Figure 1. Closed-loop PI control architecture.

Figure 2. Laboratory-scale single-tank system.
The PI controller tuning problem is formulated as a constrained optimization task using the Integral of Squared Error (ISE) as the performance index:

$$ J_{\text{ISE}} = \int_0^{T} e(t)^2 \, dt $$

where $e(t)$ is the error between the reference and the measured tank level over the simulation horizon $T$.
The ISE criterion is selected to penalize transient errors more strongly, making it suitable for evaluating closed-loop dynamic performance.
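On sampled simulation data, the ISE integral reduces to a discrete sum. A minimal sketch (illustrative Python; the repository evaluates this inside its MATLAB simulation scripts):

```python
def ise(errors, dt):
    """Integral of Squared Error approximated by the rectangle rule:
    ISE ~= sum(e_k^2) * dt over the sampled error sequence."""
    return sum(e * e for e in errors) * dt
```

Because the error is squared, large transient deviations dominate the index, which is exactly the penalization behavior described above.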
The optimization is subject to physical and operational constraints, including bounds on the controller parameters ($K_p$, $T_i$) and limits on the actuator (pump voltage) signal.
Candidate solutions violating these constraints are considered infeasible and assigned zero fitness during optimization.
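The zero-fitness handling of infeasible candidates can be sketched as below. This is an illustrative Python sketch: the bounds, the `simulate` callback, and the mapping fitness = 1/(1 + ISE) are assumptions (the repository's actual fitness mapping may differ), but the feasibility check mirrors the rule stated above:

```python
def fitness(params, bounds, simulate):
    """Evaluate a candidate (Kp, Ti): zero fitness if any parameter
    violates its bounds, otherwise a decreasing function of the ISE."""
    for p, (lo, hi) in zip(params, bounds):
        if not (lo <= p <= hi):
            return 0.0                 # infeasible -> zero fitness
    J = simulate(*params)              # closed-loop ISE for these gains
    return 1.0 / (1.0 + J)            # assumed mapping: higher is better
```

Mapping ISE to a bounded, maximized fitness lets all optimizers treat "higher is better" uniformly, with infeasible candidates automatically ranked last.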
The following techniques are implemented and compared:
- Exhaustive Search (baseline reference)
- Particle Swarm Optimization (PSO)
- Genetic Algorithm (GA)
- Firefly Algorithm
- Artificial Bee Colony (ABC)
- Bat Algorithm
Each algorithm is evaluated using the same objective function, constraints, and simulation conditions to ensure a fair comparison.
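As one representative of the compared methods, a minimal PSO loop is sketched below (illustrative Python, not the repository's MATLAB implementation; the inertia and acceleration coefficients are typical textbook values, assumed here):

```python
import random

def pso(obj, bounds, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm maximizing obj over box bounds.
    Returns (best position, best fitness, best-so-far history)."""
    dim = len(bounds)
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                 # personal bests
    Pf = [obj(x) for x in X]
    g = max(range(n), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                # global best
    history = []
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (G[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]),
                              bounds[d][1])
            f = obj(X[i])
            if f > Pf[i]:
                P[i], Pf[i] = X[i][:], f
            if f > Gf:
                G, Gf = X[i][:], f
        history.append(Gf)
    return G, Gf, history
```

The other metaheuristics differ only in how candidates are moved; the shared objective, bounds, and iteration budget are what make the comparison fair.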
The following diagrams summarize the internal logic and iteration flow of each metaheuristic optimization algorithm implemented in this study.
- Particle Swarm Optimization (PSO)
- Genetic Algorithm (GA)
- Firefly Algorithm
- Artificial Bee Colony (ABC)
These diagrams are provided for clarity and documentation purposes. They do not alter the numerical results but illustrate the algorithmic flow.
Each optimization algorithm is executed independently.
Each runner:
- executes the selected optimizer
- generates method-specific plots
- stores convergence history as global_fitness in /results/*.mat
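The stored global_fitness history is a best-so-far curve: each entry is the best fitness found up to that iteration. An illustrative Python sketch of this bookkeeping (the repository performs it in its MATLAB runners):

```python
def track_best(fitness_per_iter):
    """Convert raw per-iteration fitness values into the
    best-so-far curve stored as global_fitness."""
    best, curve = float('-inf'), []
    for f in fitness_per_iter:
        best = max(best, f)
        curve.append(best)
    return curve
```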
Important note (Optimality vs. Best Fitness):
In this study, the Exhaustive Search is used as the reference optimum within the evaluated parameter grid and bounds.
For the metaheuristic methods (PSO, GA, Firefly, ABC, Bat), the reported curves correspond to the best-so-far fitness found during the search, which does not necessarily imply global optimality. Convergence performance is therefore evaluated based on:
- Proximity to the reference optimum, and
- How efficiently each algorithm reaches that region of the search space.
Relative optimality gap:
The vertical axis represents the relative gap with respect to the exhaustive-search reference optimum, defined as
$$ \text{Gap} = \frac{f^* - f}{f^*} $$

where $f^*$ is the best fitness obtained by exhaustive search and $f$ is the best-so-far fitness achieved by the metaheuristic at a given iteration. Lower values indicate closer proximity to the reference optimum.

Interpretation:
A gap equal to zero indicates convergence to the reference optimum, while nonzero values reflect suboptimal or locally optimal solutions under the given iteration budget.
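Computing the gap curve from a stored best-so-far history is a one-liner (illustrative Python sketch; the repository does this in its MATLAB comparison scripts):

```python
def gap_curve(best_so_far, f_ref):
    """Relative optimality gap (f* - f) / f* per iteration,
    where f_ref is the exhaustive-search reference fitness f*."""
    return [(f_ref - f) / f_ref for f in best_so_far]
```

Note that if an algorithm finds a feasible point with fitness above the grid reference (as Firefly does here), the gap becomes negative, which signals a different optimum region rather than a better solution of the same problem.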
All metaheuristic algorithms were executed using a fixed iteration budget of 100 iterations under identical simulation conditions.
The exhaustive search result serves exclusively as a benchmark reference, not as a competitor in convergence speed.
To provide a quantitative assessment of convergence behavior, the stored global_fitness histories were analyzed using the following metrics:
- MaxBestFitness: Maximum best-so-far fitness reached during the run
- IterAtMax: First iteration at which the maximum best-so-far fitness is achieved
- IterToRef: First iteration at which the algorithm reaches the reference optimum region (within tolerance)
- TimeToRef: Estimated real execution time required to reach the reference optimum region, computed as (IterToRef / 100) × ExecTime under the fixed 100-iteration budget
- ExecTime (s): Total execution time for the run
- Score: Time-weighted performance index (higher is better), combining fitness quality and execution time penalty
Note: Since all metaheuristics were executed with a fixed iteration budget of 100 iterations, the reported values represent the best solution found within that budget, which may correspond to either a global or a local optimum.
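The metrics above can be extracted directly from a stored best-so-far curve. An illustrative Python sketch (the tolerance value and 1-based iteration indexing are assumptions; the repository computes these in MATLAB):

```python
def convergence_metrics(curve, exec_time, f_ref, tol=1e-3, budget=100):
    """Derive MaxBestFitness, IterAtMax, IterToRef and TimeToRef
    from a best-so-far fitness curve (1-based iterations)."""
    max_fit = max(curve)
    iter_at_max = curve.index(max_fit) + 1
    iter_to_ref = next((i + 1 for i, f in enumerate(curve)
                        if f >= f_ref - tol), None)
    time_to_ref = (iter_to_ref / budget) * exec_time if iter_to_ref else None
    return max_fit, iter_at_max, iter_to_ref, time_to_ref
```

Applying this to a GA-like curve that reaches the reference region at iteration 7 with a 33 s run reproduces the 2.31 s TimeToRef reported in the table.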
| Technique | MaxBestFitness | IterAtMax | IterToRef | TimeToRef (s) | ExecTime (s) | Score |
|---|---|---|---|---|---|---|
| GA | 0.22916 | 7 | 7 | 2.31 | 33 | 0.17230 |
| Firefly | 0.25134 | 79 | 7 | 7.56 | 108 | 0.12084 |
| PSO | 0.22921 | 17 | 10 | 10.70 | 107 | 0.11073 |
| ABC | 0.22921 | 82 | 82 | 297.66 | 363 | 0.04950 |
Including the time-to-reference metric allows distinguishing between algorithms that converge in few iterations but require high computational effort per iteration, and those that reach the reference region earlier in real execution time.
- GA reaches the reference optimum region fastest (IterToRef = 7) and exhibits the best time-weighted score due to its low execution time.
- PSO converges reliably toward the reference region (IterToRef = 10) with stable behavior, albeit at a higher computational cost than GA.
- ABC eventually reaches the reference region but requires significantly more iterations and computation time.
- Firefly achieves the highest maximum best-so-far fitness; however, this occurs late in the run (IterAtMax = 79), indicating convergence toward a different feasible region of the search space rather than superiority with respect to the reference optimum.
This highlights the exploratory nature of Firefly-type algorithms under the current objective function and constraints.
The table below provides a qualitative comparison of the evaluated optimization techniques, based on fitness level, computational cost, and overall convergence behavior.
This summary is intended as a high-level interpretation of the detailed convergence metrics reported in the previous section, including iteration count and time-to-reference performance.
| Algorithm | Fitness Level | Computation Time | Notes |
|---|---|---|---|
| Exhaustive Search | Reference optimum | Medium | Guarantees optimality within the evaluated grid |
| PSO | Near-reference | Medium | Stable convergence toward the reference solution |
| Genetic Algorithm (GA) | Reference-equivalent | Low | Fast convergence in few generations |
| Firefly Algorithm | Higher fitness value | Medium | Converges to a different (local) optimum region |
| ABC | Reference-equivalent | Very High | Achieves reference solution with high computational cost |
Fitness values are not directly comparable across different local optima when constraints and objective scaling differ; therefore, convergence behavior and computational cost are equally relevant evaluation criteria.
The table below reports the final controller parameters and execution times obtained for each method.
All metaheuristic algorithms were executed with 100 iterations, allowing a direct comparison of how efficiently each method approaches the reference optimum region under identical computational constraints.
All simulations were executed on the following hardware:
- Processor: AMD Ryzen 7 5800X (8-Core, 3.80 GHz)
- Installed RAM: 32.0 GB
| Technique | Fitness | Kp | Ti | Execution Time (s) |
|---|---|---|---|---|
| Exhaustive Search | 0.2291 | 0.60 | 15.00 | 82 |
| PSO | 0.2292 | 0.60 | 14.90 | 107 |
| Firefly | 0.2513 | 0.69 | 19.78 | 108 |
| Genetic Algorithm (GA) | 0.2291 | 0.60 | 15.00 | 33 |
| Artificial Bee Colony | 0.2291 | 0.60 | 15.20 | 363 |
Note:
This table reports the final controller parameters and total execution time for each method.
Convergence speed and time-to-reference metrics are analyzed separately in the Convergence Metrics section.
- MATLAB
- Control System Toolbox
- Custom simulation and optimization scripts
All simulations are executed under identical conditions to ensure reproducibility and consistency.
This project was originally developed as an academic control engineering study and has been refactored and documented to serve as:
- A reference example of metaheuristic optimization applied to control
- A practical comparison of optimization strategies for nonlinear processes
- A reusable framework for controller tuning under constraints
The methodology and results remain directly applicable to industrial control and process optimization problems.
This repository is intended for educational and research purposes only.
The control strategies, optimization algorithms, and parameter values presented here are not directly validated for industrial deployment. They must be independently verified, tested, and adapted before being applied to real industrial systems.
The author assumes no responsibility for misuse of the provided material.
Support my work on Patreon:
https://www.patreon.com/c/CrissCCL
MIT License





