
Binary PSO cost problem #33

Closed
ljvmiranda921 opened this issue Sep 25, 2017 · 2 comments
Assignees: ljvmiranda921
Labels: bug (Bugs, bug fixes, etc.)

Comments

ljvmiranda921 (Owner) commented Sep 25, 2017

  • PySwarms version: 0.1.6a

Description

"The cost doesn't seem to decrease monotonically from one iteration to the next. The best cost returned each iteration should be the historical best across all particles, yet on some iterations the cost jumps up, and the final cost when the algorithm completes isn't the minimum cost it encountered and reported in earlier iterations."

"When I reduce the inertia, the problem reduces (but it's still there). That behavior suggests to me that the global best position is computed from only particles' current positions and not their past positions. That way, as particles become more likely to explore, they are more likely to move into and out of good solutions. Could that be what's going on? That the global / social best positions are computed only on the current iteration and not on the history of found solutions?"
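The expected bookkeeping can be sketched independently of PySwarms: the reported best cost should be the running minimum over all iterations, so it can never increase. A minimal sketch (the function name is illustrative, not part of the PySwarms API):

```python
import numpy as np

def track_global_best(cost_history):
    """Return the running minimum of per-iteration best costs.

    cost_history: sequence where entry t is the best cost found
    among all particles at iteration t.
    """
    cost_history = np.asarray(cost_history, dtype=float)
    # np.minimum.accumulate yields a monotonically non-increasing series:
    # each entry is the best cost seen up to and including that iteration.
    return np.minimum.accumulate(cost_history)

# Per-iteration bests that "jump up" (the reported symptom)...
per_iter = [5.0, 3.0, 4.0, 2.0, 6.0]
# ...become non-increasing once history is taken into account.
print(track_global_best(per_iter))  # [5. 3. 3. 2. 2.]
```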

What I Did

import numpy as np
import pyswarms as ps
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=100, n_features=300, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)

# Define the cost function: RMSE of a linear regression fit on the
# feature subset selected by a particle's binary position vector.
line_model = LinearRegression()

def rmse_particle(pos):
    # pos is a binary vector; pos == 1 marks the selected features.
    line_model.fit(X_train[:, pos == 1], y_train)
    pred = line_model.predict(X_test[:, pos == 1])
    return np.sqrt(mean_squared_error(y_test, pred))

def rmse_func(positions):
    # Evaluate every particle; positions has shape (n_particles, dimensions).
    n_particles = positions.shape[0]
    j = [rmse_particle(positions[i]) for i in range(n_particles)]
    return np.array(j)

# Perform optimization
options = {'c1': 0.5, 'c2': 0.5, 'w': 0, 'k': 30, 'p': 2}
optimizer = ps.discrete.BinaryPSO(n_particles=30, dimensions=300, options=options)
optimizer.reset()
cost, pos = optimizer.optimize(rmse_func, print_step=1, iters=300, verbose=2)
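One caveat with the cost function above, separate from the reported bug: if a particle's binary position is all zeros, `X_train[:, pos == 1]` selects no columns and the fit fails. A hedged guard is sketched below, returning a large penalty cost for empty feature subsets; the penalty value is an arbitrary choice, and a plain NumPy least-squares fit stands in for `LinearRegression` to keep the sketch dependency-free:

```python
import numpy as np

# Small synthetic regression problem (stand-in for the sklearn data above).
rng = np.random.default_rng(42)
X_train = rng.normal(size=(80, 20))
true_w = rng.normal(size=20)
y_train = X_train @ true_w
X_test = rng.normal(size=(20, 20))
y_test = X_test @ true_w

def rmse_particle_guarded(pos, penalty=1e6):
    """RMSE of a least-squares fit on the selected feature subset;
    returns `penalty` when the binary position selects no features."""
    mask = np.asarray(pos) == 1
    if not mask.any():
        # An empty column selection would make the fit fail, so penalize it
        # instead of crashing the whole optimization run.
        return penalty
    w, *_ = np.linalg.lstsq(X_train[:, mask], y_train, rcond=None)
    pred = X_test[:, mask] @ w
    return float(np.sqrt(np.mean((pred - y_test) ** 2)))

print(rmse_particle_guarded(np.zeros(20, dtype=int)))  # 1000000.0
```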
ljvmiranda921 added the bug label and self-assigned this on Sep 25, 2017
ljvmiranda921 (Owner, Author) commented Sep 25, 2017

A possible culprit: the personal best is self-assigned without keeping a history of positions and costs.
See the relevant code here.
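The fix this comment points toward is the standard PSO bookkeeping: a particle's personal best is replaced only when its new cost improves on its historical best, and the global best is the minimum over personal bests rather than over current positions. A minimal sketch of that update rule, with illustrative names (this is not the actual PySwarms code):

```python
import numpy as np

def update_bests(pos, cost, pbest_pos, pbest_cost):
    """Standard PSO bookkeeping with history.

    pos, cost: current positions (n, d) and costs (n,)
    pbest_pos, pbest_cost: per-particle historical best positions and costs
    Returns updated (pbest_pos, pbest_cost, gbest_pos, gbest_cost).
    """
    improved = cost < pbest_cost                   # which particles beat their history
    pbest_pos = np.where(improved[:, None], pos, pbest_pos)
    pbest_cost = np.where(improved, cost, pbest_cost)
    g = np.argmin(pbest_cost)                      # global best over history, not the current swarm
    return pbest_pos, pbest_cost, pbest_pos[g], pbest_cost[g]
```

With this rule the reported global best cannot jump up: even if every particle moves out of a good region on a later iteration, the remembered personal bests keep the earlier minimum.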

CPapadim (Contributor) commented
I submitted a pull request for review to fix this issue here: #34

ljvmiranda921 pushed a commit that referenced this issue on Sep 25, 2017:

This commit fixes the best_cost problem referenced in Issue #33. It turns out we were reporting the best cost of the current iteration only, not the best seen throughout the run's history. This is fixed by taking the best over the swarm's history.

Author: ljvmiranda921
Email: ljvmiranda@gmail.com