Python implementation of swarm algorithms (Particle Swarm Optimization, Self-Propelled Particles and Artificial Algae Algorithm) used for solving non-convex optimization problems (the algorithms are tested on the Ackley and Griewank functions).
To check how the implemented algorithms work, just run main.py. By default it uses PSO to optimize the Ackley function. If you want to try a different function or a different optimization method, just uncomment the corresponding line. The default training loop shows the positions of the agents on a 2D visualisation every 30 steps. Furthermore, it shows a 3D visualisation after the optimization process ends.
```python
from tqdm import trange
import numpy as np
# PSO, SPP, AAA, Auckley and Gierwank come from the project's own modules.

NUMBER_OF_STEPS = 30
NUMBER_OF_AGENTS = 50

optimised_function = Auckley()
# optimised_function = Gierwank()

if __name__ == '__main__':
    swarm = PSO(optimised_function=optimised_function, number_of_agents=NUMBER_OF_AGENTS)
    # swarm = SPP(optimised_function=optimised_function, number_of_agents=NUMBER_OF_AGENTS)
    # swarm = AAA(optimised_function=optimised_function, number_of_agents=NUMBER_OF_AGENTS)

    particles_in_step = []
    for i in trange(NUMBER_OF_STEPS):
        swarm.step()
        if i % 30 == 0:
            print(swarm.optimized_function(swarm.best_solution), swarm.best_solution)
            optimised_function.plot_2d(points=np.array([particle.position for particle in swarm.particles]),
                                       dirs=np.array([particle.velocity for particle in swarm.particles]))
    optimised_function.plot_3d(points=np.array([particle.position for particle in swarm.particles]))
```
Three different optimisation algorithms have been implemented. Each algorithm is capable of finding the global optimum of both functions in a finite number of steps.
Each implemented algorithm implements the interface below:
```python
from abc import ABC, abstractmethod


class AbstractSwarmAlgorithm(ABC):
    def __init__(self, optimised_function, number_of_agents):
        super().__init__()
        self.optimized_function = optimised_function
        self.number_of_agents = number_of_agents

    @abstractmethod
    def get_best_global_solution(self):
        raise NotImplementedError()

    @abstractmethod
    def step(self):
        raise NotImplementedError()
```
PSO is a basic algorithm that uses so-called "collective intelligence" to solve optimization problems. The general idea is to place a number of particles (agents) in different parts of the input space and, in each step, change their velocity vectors according to three factors: the particle's velocity from the previous step (V_{t-1}), the best position the particle has ever reached (P_best), and the best position any particle from the swarm has ever reached (G_best).
For more details, check out the Wikipedia article on PSO.
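For illustration, here is a minimal, self-contained sketch of a single PSO update. The helper name pso_step, the weights w, c1, c2 and the vectorised best-tracking are illustrative assumptions, not the exact code of this repository:

```python
import numpy as np

def pso_step(positions, velocities, personal_best, global_best, f,
             w=0.7, c1=1.5, c2=1.5):
    """One illustrative PSO update; w, c1 and c2 are assumed example weights."""
    n_agents, dim = positions.shape
    r1 = np.random.rand(n_agents, dim)
    r2 = np.random.rand(n_agents, dim)
    # The new velocity mixes inertia, attraction towards each particle's own
    # best position (P_best) and attraction towards the swarm's best (G_best).
    velocities = (w * velocities
                  + c1 * r1 * (personal_best - positions)
                  + c2 * r2 * (global_best - positions))
    positions = positions + velocities
    # Update the personal bests and the global best from the new positions.
    new_values = np.apply_along_axis(f, 1, positions)
    best_values = np.apply_along_axis(f, 1, personal_best)
    improved = new_values < best_values
    personal_best = personal_best.copy()
    personal_best[improved] = positions[improved]
    global_best = personal_best[np.argmin(np.apply_along_axis(f, 1, personal_best))]
    return positions, velocities, personal_best, global_best
```

Here f stands for the optimised function evaluated on a single position vector.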
SPP, similarly to PSO, assumes a colony of particles moving in a certain direction through the space of solutions; however, it differs in its assumptions about how their motion parameters are updated. The basic assumption is that the speed of each particle remains constant, and only its direction of movement changes. In its basic form, the algorithm is very general and does not define the exact way in which the directions are updated.
In our implementation, we propose a solution where the velocity vector is changed in two "substeps":
- In the first substep, each particle points its velocity exactly towards the best global solution found in the previous step.
- In the second substep, the velocity vector of each particle is mixed with the average velocities of its k neighbors.
Therefore, contrary to PSO, in SPP the velocity vector changes only its direction and never its length.
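A minimal sketch of the two substeps under this constant-speed constraint; the neighborhood size k, the mixing weight, and the use of the k nearest particles as neighbors are assumptions made for illustration:

```python
import numpy as np

def spp_step(positions, velocities, global_best, k=5, mix=0.5):
    """Illustrative SPP update: the speed is preserved, only the direction changes."""
    speed = np.linalg.norm(velocities, axis=1, keepdims=True)          # preserved length
    # Substep 1: point each particle towards the best global solution found so far.
    directions = global_best - positions
    directions = directions / (np.linalg.norm(directions, axis=1, keepdims=True) + 1e-12)
    # Substep 2: mix each direction with the mean direction of k neighbors
    # (here taken as the k nearest particles, which is an assumption).
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=2)
    neighbours = np.argsort(dists, axis=1)[:, 1:k + 1]                 # skip the particle itself
    mixed = (1 - mix) * directions + mix * directions[neighbours].mean(axis=1)
    mixed = mixed / (np.linalg.norm(mixed, axis=1, keepdims=True) + 1e-12)
    new_velocities = speed * mixed                                     # new direction, same length
    return positions + new_velocities, new_velocities
```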
This method is inspired by "Artificial algae algorithm (AAA) for nonlinear global optimization" by Uymaz et al., which combines swarm and genetic optimization methods. Algae move in circles, and each move costs some of their energy. When an alga improves its position it is fed; when it worsens, it is not. In each step, the "e" weakest algae (those with the lowest energy level) are eliminated. In the proposed implementation, the amount of food delivered to an alga is proportional to its level of improvement.
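A simplified sketch of this energy bookkeeping; the move helper, the move cost, and the respawn-near-the-best elimination rule are placeholders, not the repository's exact AAA implementation:

```python
import numpy as np

def aaa_energy_step(positions, energies, f, move, e=2, move_cost=0.1):
    """Illustrative AAA energy bookkeeping: each move costs energy, improvement
    feeds the alga, and the e weakest algae are eliminated (here: respawned)."""
    old_values = np.apply_along_axis(f, 1, positions)
    positions = move(positions)                  # circular move (placeholder helper)
    new_values = np.apply_along_axis(f, 1, positions)
    energies = energies - move_cost              # every move costs some energy
    improvement = old_values - new_values
    energies += np.clip(improvement, 0.0, None)  # food proportional to the improvement
    # Eliminate the e weakest algae; here they are respawned near the current best.
    weakest = np.argsort(energies)[:e]
    best = positions[np.argmin(new_values)]
    positions[weakest] = best + np.random.normal(scale=0.5, size=(e, positions.shape[1]))
    energies[weakest] = energies.mean()
    return positions, energies
```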
Two non-convex functions - Ackley and Griewank - have been implemented. Both functions work for any number of dimensions (n).
Ackley is a classic non-convex function that is used for evaluating swarm optimization algorithms. In its standard form (with the commonly used parameters a = 20, b = 0.2 and c = 2π) it is given by the following equation:
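$$f(x) = -a \exp\left(-b \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n} \sum_{i=1}^{n} \cos(c\, x_i)\right) + a + e$$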
It has its global minimum at x = (0, ..., 0), where the function value is 0.
Griewank is another non-convex function that can be used for evaluating swarm optimization algorithms. In its standard form it is given by the following equation:
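$$f(x) = 1 + \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right)$$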
It also has its global minimum at x = (0, ..., 0), where the function value is 0.
To better understand how the algorithms work, we propose two methods of visualization. Both methods work with 2D realizations of the optimized functions. The first method scatters the function over a 2D plane and uses color to encode the function value at each point. Moreover, for PSO and SPP we show velocity vectors (AAA has no such notion). The second method uses the third dimension to show the actual function value. For both methods, we show the function values for every point in the domain as well as the positions of every agent.
Both visualisation methods can be found in abstract_testing_function.py
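For illustration, here is a rough matplotlib sketch of the first (2D, colour-coded) view. The name plot_2d_sketch, the plotting range and the grid resolution are assumptions, not the code from abstract_testing_function.py:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_2d_sketch(f, points, dirs=None, lo=-5.0, hi=5.0, resolution=200):
    """Colour-code f over a 2D grid and overlay agent positions (and velocities)."""
    xs = np.linspace(lo, hi, resolution)
    X, Y = np.meshgrid(xs, xs)
    Z = np.array([[f(np.array([x, y])) for x, y in zip(row_x, row_y)]
                  for row_x, row_y in zip(X, Y)])
    plt.contourf(X, Y, Z, levels=50)                        # colour = function value
    plt.colorbar(label='f(x)')
    plt.scatter(points[:, 0], points[:, 1], c='red', s=15)  # agent positions
    if dirs is not None:                                    # velocity vectors (PSO / SPP only)
        plt.quiver(points[:, 0], points[:, 1], dirs[:, 0], dirs[:, 1], color='white')
    plt.show()
```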
This project is licensed under the MIT License - see the LICENSE file for details