Update Documentation based on refactored codebase
The documentation was updated to match the refactored codebase
from the previous commit. The following tasks were accomplished:
  - Updated docstrings for some code
  - Updated examples in .rst format
  - Updated examples in .ipynb format

Author: ljvmiranda921
ljvmiranda921 committed Aug 8, 2017
1 parent dd95b67 commit 9984065
Showing 18 changed files with 187 additions and 265 deletions.
README.rst: 2 changes (1 addition, 1 deletion)
@@ -90,7 +90,7 @@ built-in sphere function, :code:`pyswarms.utils.functions.sphere_func()`, and th
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9}
# Call instance of PSO
- optimizer = ps.single.GBestPSO(n_particles=10, dims=2, **options)
+ optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2, options=options)
# Perform optimization
stats = optimizer.optimize(fx.sphere_func, iters=100)
docs/.gitignore: 2 changes (1 addition, 1 deletion)
@@ -1,6 +1,6 @@
/pyswarms.rst
/pyswarms.*.rst
/modules.rst
- /examples/.ipynb_checkpoints
+ /examples/.ipynb_checkpoints/*.ipynb
.editorconfig
.vscode
docs/api/pyswarms.base.rst: 4 changes (2 additions, 2 deletions)
@@ -10,14 +10,14 @@ Submodules
pyswarms.base module
--------------------

- .. automodule:: pyswarms.base.bs
+ .. automodule:: pyswarms.base.base_single
:members:
:undoc-members:
:show-inheritance:
:private-members:
:special-members: __init__

- .. automodule:: pyswarms.base.dbs
+ .. automodule:: pyswarms.base.base_discrete
:members:
:undoc-members:
:show-inheritance:
docs/api/pyswarms.discrete.rst: 6 changes (3 additions, 3 deletions)
@@ -6,10 +6,10 @@ pyswarms.discrete package
Submodules
----------

- pyswarms.discrete.bn module
- ---------------------------
+ pyswarms.discrete.binary module
+ --------------------------------

- .. automodule:: pyswarms.discrete.bn
+ .. automodule:: pyswarms.discrete.binary
:members:
:undoc-members:
:show-inheritance:
docs/api/pyswarms.single.rst: 12 changes (6 additions, 6 deletions)
@@ -6,19 +6,19 @@ pyswarms.single package
Submodules
----------

- pyswarms.single.gb module
- --------------------------
+ pyswarms.single.global_best module
+ ----------------------------------

- .. automodule:: pyswarms.single.gb
+ .. automodule:: pyswarms.single.global_best
:members:
:undoc-members:
:show-inheritance:
:special-members: __init__

- pyswarms.single.lb module
- --------------------------
+ pyswarms.single.local_best module
+ ---------------------------------

- .. automodule:: pyswarms.single.lb
+ .. automodule:: pyswarms.single.local_best
:members:
:undoc-members:
:show-inheritance:
docs/contributing.optimizer.rst: 22 changes (11 additions, 11 deletions)
@@ -32,9 +32,9 @@ Inheriting from base classes
Most optimizers in this library inherit its attributes and methods from a set of built-in
base classes. You can check the existing classes in :mod:`pyswarms.base`.

- For example, if we take the :mod:`pyswarms.base.bs` class, a base-class for standard single-objective
- continuous optimization algorithms such as global-best PSO (:mod:`pyswarms.single.gb`) and
- local-best PSO (:mod:`pyswarms.single.lb`), we can see that it inherits a set of methods as
+ For example, if we take the :mod:`pyswarms.base.base_single` class, a base-class for standard single-objective
+ continuous optimization algorithms such as global-best PSO (:mod:`pyswarms.single.global_best`) and
+ local-best PSO (:mod:`pyswarms.single.local_best`), we can see that it inherits a set of methods as
seen below:

.. image:: inheritance.png
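
A minimal sketch of what such a subclass might look like follows; the base-class name :code:`SwarmBase` and its constructor signature are assumptions for illustration, not something confirmed by this commit:

.. code-block:: python

    # Hypothetical sketch: inheriting from the single-objective base class.
    # The name `SwarmBase` and the signature below are assumptions.
    from pyswarms.base.base_single import SwarmBase

    class MyOptimizer(SwarmBase):
        def __init__(self, n_particles, dimensions, options, bounds=None):
            # Delegate swarm set-up and option handling to the base class
            super(MyOptimizer, self).__init__(n_particles, dimensions,
                                              options=options, bounds=bounds)

        def optimize(self, objective_func, iters, print_step=1, verbose=1):
            # Implement the velocity/position update loop here and return
            # the best cost and best position found by the swarm
            raise NotImplementedError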
@@ -50,8 +50,8 @@ A short note on keyword arguments

The role of keyword arguments, or kwargs in short, is to act as a container for all other parameters
needed for the optimizer. You can define these things in your code, and create assertions to make all
- of them required. However, note that in some implementations, required :code:`kwargs` might include
- :code:`c1`, :code:`c2`, and :code:`w`. This is the case in :mod:`pyswarms.base.bs` for instance.
+ of them required. However, note that in some implementations, required :code:`options` might include
+ :code:`c1`, :code:`c2`, and :code:`w`. This is the case in :mod:`pyswarms.base.base_single` for instance.
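
A minimal sketch of that pattern, assuming an optimizer that requires the :code:`c1`, :code:`c2`, and :code:`w` keys:

.. code-block:: python

    # Illustrative only: check that the required keys are present
    # before handing the options dict to an optimizer
    options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
    assert all(key in options for key in ('c1', 'c2', 'w')), \
        'options must contain c1, c2, and w'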

A short note on :code:`assertions()`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -67,20 +67,20 @@ We make sure that everything can be imported when the whole :code:`pyswarms` lib
please make sure to also edit the accompanying :code:`__init__.py` file in the directory you are working
on.

- For example, if you write your optimizer class :code:`MyOptimizer` inside a file called :code:`mo.py`,
+ For example, if you write your optimizer class :code:`MyOptimizer` inside a file called :code:`my_optimizer.py`,
and you are working under the :code:`/single` directory, please update the :code:`__init__.py` like
the following:

.. code-block:: python
- from .gb import GBestPSO
- from .lb import LBestPSO
+ from .global_best import GlobalBestPSO
+ from .local_best import LocalBestPSO
# Add your module
- from .mo import MyOptimizer
+ from .my_optimizer import MyOptimizer
__all__ = [
"GBestPSO",
"LBestPSO",
"GlobalBestPSO",
"LocalBestPSO",
"MyOptimizer" # Add your class
]
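
Once :code:`__init__.py` is updated, the new class should be importable from the package namespace; a quick, hypothetical check:

.. code-block:: python

    # Hypothetical usage check after updating __init__.py
    import pyswarms as ps

    optimizer = ps.single.MyOptimizer(n_particles=10, dimensions=2,
                                      options={'c1': 0.5, 'c2': 0.3, 'w': 0.9})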
docs/examples/basic_optimization.rst: 78 changes (37 additions, 41 deletions)
@@ -8,11 +8,6 @@ single-objective functions using the global-best optimizer in
``pyswarms.single.LBestPSO``. This aims to demonstrate the basic
capabilities of the library when applied to benchmark problems.

- .. code:: ipython3
-     import sys
-     sys.path.append('../')
+ .. code-block:: python
# Import modules
@@ -55,28 +50,28 @@ several variables at once.
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9}
# Call instance of PSO
- gbest_pso = ps.single.GBestPSO(n_particles=10, dims=2, **options)
+ optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2, options=options)
# Perform optimization
- cost, pos = gbest_pso.optimize(fx.sphere_func, print_step=100, iters=1000, verbose=3)
+ cost, pos = optimizer.optimize(fx.sphere_func, print_step=100, iters=1000, verbose=3)
.. parsed-literal::
- Iteration 1/1000, cost: 0.0035824017918
- Iteration 101/1000, cost: 1.02538653288e-08
- Iteration 201/1000, cost: 9.95696087972e-13
- Iteration 301/1000, cost: 8.22034343822e-16
- Iteration 401/1000, cost: 3.7188438887e-19
- Iteration 501/1000, cost: 1.23935292549e-25
- Iteration 601/1000, cost: 6.03016193248e-28
- Iteration 701/1000, cost: 3.70755768681e-34
- Iteration 801/1000, cost: 2.64385328058e-37
- Iteration 901/1000, cost: 1.76488833461e-40
+ Iteration 1/1000, cost: 0.215476174296
+ Iteration 101/1000, cost: 5.26998280059e-07
+ Iteration 201/1000, cost: 1.31313801471e-11
+ Iteration 301/1000, cost: 1.63948780036e-15
+ Iteration 401/1000, cost: 2.72294062778e-19
+ Iteration 501/1000, cost: 3.69002488955e-22
+ Iteration 601/1000, cost: 3.13387805277e-27
+ Iteration 701/1000, cost: 1.65106278625e-30
+ Iteration 801/1000, cost: 6.95403958989e-35
+ Iteration 901/1000, cost: 1.33520105208e-41
================================
Optimization finished!
- Final cost: 0.000
- Best value: [-6.5732265560180066e-24, -7.4004230063696789e-22]
+ Final cost: 0.0000
+ Best value: [9.4634973546019334e-23, 1.7011045174312443e-22]
@@ -90,31 +85,31 @@ Now, let's try this one using local-best PSO:
.. code-block:: python
# Set-up hyperparameters
- options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 2, 'p': 2}
+ options = {'c1': 0.5, 'c2': 0.3, 'w':0.9, 'k': 2, 'p': 2}
# Call instance of PSO
- lbest_pso = ps.single.LBestPSO(n_particles=10, dims=2, **options)
+ optimizer = ps.single.LocalBestPSO(n_particles=10, dimensions=2, options=options)
# Perform optimization
- cost, pos = lbest_pso.optimize(fx.sphere_func, print_step=100, iters=1000, verbose=3)
+ cost, pos = optimizer.optimize(fx.sphere_func, print_step=100, iters=1000, verbose=3)
.. parsed-literal::
- Iteration 1/1000, cost: 0.190175474818
- Iteration 101/1000, cost: 1.14470953523e-06
- Iteration 201/1000, cost: 6.79485221069e-11
- Iteration 301/1000, cost: 1.00691597113e-14
- Iteration 401/1000, cost: 2.98301783945e-18
- Iteration 501/1000, cost: 2.13856158282e-20
- Iteration 601/1000, cost: 5.49351926815e-25
- Iteration 701/1000, cost: 1.7673389214e-29
- Iteration 801/1000, cost: 1.83082804473e-33
- Iteration 901/1000, cost: 1.75920918448e-36
+ Iteration 1/1000, cost: 0.0573032190292
+ Iteration 101/1000, cost: 8.92699853837e-07
+ Iteration 201/1000, cost: 4.56513550671e-10
+ Iteration 301/1000, cost: 2.35083665314e-16
+ Iteration 401/1000, cost: 8.09981989467e-20
+ Iteration 501/1000, cost: 2.58846774519e-22
+ Iteration 601/1000, cost: 3.33919326611e-26
+ Iteration 701/1000, cost: 2.15052800954e-30
+ Iteration 801/1000, cost: 1.09638832057e-33
+ Iteration 901/1000, cost: 3.92671836329e-38
================================
Optimization finished!
- Final cost: 3.000
- Best value: [-8.2344756213578705e-21, -2.6563827831876976e-20]
+ Final cost: 0.0000
+ Best value: [1.4149803165668767e-21, -9.9189063589743749e-24]
@@ -161,18 +156,18 @@ constant.
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9}
# Call instance of PSO with bounds argument
- optimizer = ps.single.GBestPSO(n_particles=10, dims=2, bounds=bounds, **options)
+ optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2, options=options, bounds=bounds)
# Perform optimization
cost, pos = optimizer.optimize(fx.rastrigin_func, print_step=100, iters=1000, verbose=3)
.. parsed-literal::
- Iteration 1/1000, cost: 10.3592595923
- Iteration 101/1000, cost: 0.00381030608321
- Iteration 201/1000, cost: 1.31982446305e-07
- Iteration 301/1000, cost: 1.16529008665e-11
+ Iteration 1/1000, cost: 6.93571097813
+ Iteration 101/1000, cost: 0.00614705911661
+ Iteration 201/1000, cost: 7.22876336567e-09
+ Iteration 301/1000, cost: 5.89750470681e-13
Iteration 401/1000, cost: 0.0
Iteration 501/1000, cost: 0.0
Iteration 601/1000, cost: 0.0
@@ -181,6 +176,7 @@
Iteration 901/1000, cost: 0.0
================================
Optimization finished!
- Final cost: 0.000
- Best value: [8.9869507154871327e-10, -2.7262405947023504e-09]
+ Final cost: 0.0000
+ Best value: [-6.763954278218746e-11, 2.4565912679296225e-09]
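
The :code:`bounds` argument used above is a tuple of :code:`(min_bound, max_bound)` arrays with one entry per dimension. A minimal sketch of its construction, assuming the usual +/-5.12 Rastrigin search domain:

.. code-block:: python

    # Sketch: build a (min, max) bounds tuple for a 2-dimensional search.
    # The +/-5.12 limits assume the standard Rastrigin domain.
    import numpy as np

    max_bound = 5.12 * np.ones(2)
    min_bound = -max_bound
    bounds = (min_bound, max_bound)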
docs/examples/feature_subset_selection.rst: 6 changes (3 additions, 3 deletions)
@@ -180,7 +180,7 @@ function
Inputs
------
- x: numpy.ndarray of shape (n_particles, dims)
+ x: numpy.ndarray of shape (n_particles, dimensions)
The swarm that will perform the search
Returns
@@ -209,9 +209,9 @@ each particle will see one another).
options = {'c1': 0.5, 'c2': 0.5, 'w':0.9, 'k': 30, 'p':2}
# Call instance of PSO
- dims = 15 # dimensions should be the number of features
+ dimensions = 15 # dimensions should be the number of features
optimizer.reset()
- optimizer = ps.discrete.BinaryPSO(n_particles=30, dims=dims, **options)
+ optimizer = ps.discrete.BinaryPSO(n_particles=30, dimensions=dimensions, options=options)
# Perform optimization
cost, pos = optimizer.optimize(f, print_step=100, iters=1000, verbose=2)
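The objective :code:`f` passed to :code:`optimize()` above evaluates the whole swarm at once, matching the docstring change earlier in this diff. A minimal sketch, assuming a per-particle cost function :code:`f_per_particle` defined elsewhere in the example:

.. code-block:: python

    # Sketch of a swarm-level objective: apply a per-particle cost to
    # each row of the swarm matrix of shape (n_particles, dimensions).
    # The helper `f_per_particle` is assumed, not defined in this diff.
    import numpy as np

    def f(x):
        n_particles = x.shape[0]
        return np.array([f_per_particle(x[i]) for i in range(n_particles)])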
docs/examples/train_neural_network.rst: 32 changes (16 additions, 16 deletions)
@@ -155,7 +155,7 @@ compute ``forward_prop()`` to the whole swarm:
Inputs
------
- x: numpy.ndarray of shape (n_particles, dims)
+ x: numpy.ndarray of shape (n_particles, dimensions)
The swarm that will perform the search
Returns
@@ -181,29 +181,29 @@ parameters arbitrarily.
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9}
# Call instance of PSO
- dims = (4 * 20) + (20 * 3) + 20 + 3
- optimizer = ps.single.GBestPSO(n_particles=100, dims=dims, **options)
+ dimensions = (4 * 20) + (20 * 3) + 20 + 3
+ optimizer = ps.single.GlobalBestPSO(n_particles=100, dimensions=dimensions, options=options)
# Perform optimization
cost, pos = optimizer.optimize(f, print_step=100, iters=1000, verbose=3)
.. parsed-literal::
- Iteration 1/1000, cost: 1.11338932053
- Iteration 101/1000, cost: 0.0541135752532
- Iteration 201/1000, cost: 0.0468046270747
- Iteration 301/1000, cost: 0.0434828849533
- Iteration 401/1000, cost: 0.0358833340106
- Iteration 501/1000, cost: 0.0312474981647
- Iteration 601/1000, cost: 0.0150869267541
- Iteration 701/1000, cost: 0.01267166403
- Iteration 801/1000, cost: 0.00632312205821
- Iteration 901/1000, cost: 0.00194080306565
+ Iteration 1/1000, cost: 1.09858937026
+ Iteration 101/1000, cost: 0.0516382653768
+ Iteration 201/1000, cost: 0.0416398234107
+ Iteration 301/1000, cost: 0.0399519086999
+ Iteration 401/1000, cost: 0.0396579575634
+ Iteration 501/1000, cost: 0.0394155032472
+ Iteration 601/1000, cost: 0.0388702854787
+ Iteration 701/1000, cost: 0.0386106261126
+ Iteration 801/1000, cost: 0.0384067695633
+ Iteration 901/1000, cost: 0.0370548470526
================================
Optimization finished!
- Final cost: 0.0015
- Best value: -0.356506 0.441392 -0.605476 0.620517 -0.156904 0.206396 ...
+ Final cost: 0.0362
+ Best value: 0.170569 -4.586860 -0.726267 -3.602894 0.085438 -3.167099 ...
@@ -265,6 +265,6 @@ get the mean.
.. parsed-literal::
- 1.0
+ 0.98666666666666669
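
A note on the :code:`dimensions` expression in this example: it appears to count the weights and biases of a 4-20-3 feed-forward network (4 inputs, 20 hidden units, 3 classes), an assumption based on the expression itself:

.. code-block:: python

    # Assumed layout: 4 inputs -> 20 hidden units -> 3 output classes
    n_inputs, n_hidden, n_classes = 4, 20, 3
    dimensions = (n_inputs * n_hidden)    # input-to-hidden weights: 80
    dimensions += (n_hidden * n_classes)  # hidden-to-output weights: 60
    dimensions += n_hidden                # hidden-layer biases: 20
    dimensions += n_classes               # output-layer biases: 3
    assert dimensions == 163              # matches (4*20)+(20*3)+20+3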
docs/features.rst: 6 changes (3 additions, 3 deletions)
@@ -14,9 +14,9 @@ Continuous
Single-objective optimization where the search-space is continuous. Perfect for optimizing various
functions.

- * :mod:`pyswarms.single.gb` - classic global-best Particle Swarm Optimization algorithm with a star-topology. Every particle compares itself with the best-performing particle in the swarm.
+ * :mod:`pyswarms.single.global_best` - classic global-best Particle Swarm Optimization algorithm with a star-topology. Every particle compares itself with the best-performing particle in the swarm.

- * :mod:`pyswarms.single.lb` - classic local-best Particle Swarm Optimization algorithm with a ring-topology. Every particle compares itself only with its nearest-neighbours as computed by a distance metric.
+ * :mod:`pyswarms.single.local_best` - classic local-best Particle Swarm Optimization algorithm with a ring-topology. Every particle compares itself only with its nearest-neighbours as computed by a distance metric.


Discrete
@@ -25,7 +25,7 @@ Discrete
Single-objective optimization where the search-space is discrete. Useful for job-scheduling, traveling
salesman, or any other sequence-based problems.

- * :mod:`pyswarms.discrete.bn` - classic binary Particle Swarm Optimization algorithm without mutation. Uses a ring topology to choose its neighbours (but can be set to global).
+ * :mod:`pyswarms.discrete.binary` - classic binary Particle Swarm Optimization algorithm without mutation. Uses a ring topology to choose its neighbours (but can be set to global).


Utilities
Expand Down
docs/usage.rst: 28 changes (0 additions, 28 deletions)

This file was deleted.

