Good news
The fixed-topology network is done. I will play with some other environments and see how fixed-topology neuroevolution performs. I will probably hardcode the neural-agent part for each env (cartpole (done) + pendulum (in progress)); a sketch of such a per-env evaluation loop is included at the end of this section.
The NEAT implementation is almost done; I just need to do a bit of refactoring and move the testing code into a separate /test folder. The documentation is also done.
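As a rough illustration of what the hardcoded per-env agent part could look like, here is a minimal CartPole fitness-evaluation sketch. The `FeedForwardNetwork`-style `network.activate(obs)` call is an assumption (not the actual class/method in this repo), and the classic Gym reset/step API is assumed.

```python
import gym


def evaluate_cartpole(network, episodes=3, max_steps=500):
    """Average episode reward of a fixed-topology network on CartPole.

    `network.activate(obs)` is assumed to return one output in [0, 1];
    swap in the real forward-pass call of the implementation.
    """
    env = gym.make("CartPole-v1")
    total = 0.0
    for _ in range(episodes):
        obs = env.reset()  # classic Gym API: reset() returns only the observation
        for _ in range(max_steps):
            action = int(network.activate(obs)[0] > 0.5)
            obs, reward, done, _ = env.step(action)  # classic 4-tuple step API
            total += reward
            if done:
                break
    env.close()
    return total / episodes
```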
Not so good news
Although I have implemented 99% of the NEAT algorithm, I ran into the following issues:
The algorithm (i.e. my implementation) seems to force every gene to become disabled. I tried crossing over only when both genes are enabled, but that does not help; it just takes longer for every gene to end up disabled. This is a problem because the genome then never gets evaluated at all. The node- and connection-addition methods seem to be implemented correctly, since each has tests verifying its correctness. I have to check how the mutation is done and maybe play with the rates.
The program is very slow. There are a few bottlenecks:
Everything runs on a single thread. With N genomes in the population and M environment-evaluation iterations, computing the pairwise distances takes O(N^2), so evaluating the genomes completely costs O(M * N^2), which is very expensive (see the sketch after this list). Crossover is not that bad: the merge-sort-like approach takes O(K log K), where K is the length of a genome (number of connections). Since K does not exceed 100, it is not a bottleneck.
Computing distances takes a lot of time (see above).
Mutating the enabled/disabled flags may also be a bottleneck: for each genome we perform a probabilistic mutation on each of its connection genes, so with G genomes and C connection genes per genome this takes O(G * C) operations. This may be negligible, but it needs deeper investigation.
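For reference, the quadratic blow-up comes from the pairwise compatibility check used for speciation. A minimal sketch is below; the `distance(g1, g2)` compatibility function is assumed to exist already in the implementation and is passed in rather than named after anything in this repo.

```python
def pairwise_distances(population, distance):
    """O(N^2) compatibility distances over a population of N genomes.

    Even looping only over unordered pairs (i < j) still performs
    N * (N - 1) / 2 distance computations per generation.
    """
    dists = {}
    for i, g1 in enumerate(population):
        for g2 in population[i + 1:]:
            dists[(id(g1), id(g2))] = distance(g1, g2)
    return dists
```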
Future ideas
Make it multi-threaded. Spread the genomes across workers so that we can use the machine's full computational power (see the first sketch after this list).
Code optimization. Optimize the distance computation (use a data structure such as a map/cache to store distances for constant-time retrieval on average; see the caching sketch after this list) and the other bottlenecks.
Use dynamic analysis (profiling) tools to measure how long certain functions take, e.g. crossover, mutation, distances, etc. (see the profiling sketch after this list).
Add a unit-testing module and move the tests under that module.
Infrastructure. Containers for automatic installation of packages. Wait, where is the package manager?! 📦
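For the multi-threading idea, here is a minimal sketch using a process pool (CPython threads would be limited by the GIL for CPU-bound evaluation). `evaluate_genome` is a hypothetical per-genome fitness function, and the `genome.fitness` attribute is an assumption about the genome class.

```python
from multiprocessing import Pool


def evaluate_population(population, evaluate_genome, workers=4):
    """Evaluate genomes in parallel and attach the resulting fitness.

    `evaluate_genome` must be a module-level function and genomes must be
    picklable so multiprocessing can ship them to the worker processes.
    """
    with Pool(processes=workers) as pool:
        fitnesses = pool.map(evaluate_genome, population)
    for genome, fitness in zip(population, fitnesses):
        genome.fitness = fitness  # assumed attribute on the genome class
    return population
```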
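For the distance-caching idea, a sketch of a memoised lookup keyed by genome identity, assuming genome objects are reused across generations (hypothetical helper, not code from this repo):

```python
_distance_cache = {}


def cached_distance(g1, g2, distance):
    """Amortised constant-time retrieval of a previously computed distance.

    The key is order-independent, so (g1, g2) and (g2, g1) share one entry.
    Entries for genomes that have been discarded should be evicted
    periodically to keep the cache from growing without bound.
    """
    key = frozenset((id(g1), id(g2)))
    if key not in _distance_cache:
        _distance_cache[key] = distance(g1, g2)
    return _distance_cache[key]
```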
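For the timing idea, Python's built-in profiler already reports per-function cumulative times, which should make the crossover / mutation / distance hotspots visible. The `run_generation()` call below is a placeholder for whatever entry point the implementation actually exposes.

```python
import cProfile
import pstats

# Profile one generation and print the 15 most expensive call sites.
cProfile.run("run_generation()", "neat_profile.out")
pstats.Stats("neat_profile.out").sort_stats("cumulative").print_stats(15)
```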
Huh, the problem with every gene being forced to disabled was that I DID NOT SORT IN DECREASING ORDER! It was sorting in increasing order and therefore taking all the worst genes 🤦♂️
Deep-copying objects from the list (e.g. copy.deepcopy(gene[0]) instead of gene[0]) seems to have solved the crossover problem. Now the algorithm performs quite well and manages to converge in under 150 generations.
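For the record, here is a sketch of the two fixes described above (decreasing-order sort plus deep-copying genes taken from a parent); the attribute and helper names are assumptions for illustration, not the actual ones in this repo.

```python
import copy


def select_parents(population, num_parents):
    # Sort by fitness in DECREASING order so the best genomes come first
    # (the earlier bug sorted ascending and kept the worst ones).
    ranked = sorted(population, key=lambda g: g.fitness, reverse=True)
    return ranked[:num_parents]


def inherit_gene(matching_pair):
    # Deep-copy the chosen gene so the child never aliases a parent's gene
    # object; mutating a shared reference corrupted crossover before.
    return copy.deepcopy(matching_pair[0])
```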