
Implement assertions() in search methods #27

Closed
ljvmiranda921 opened this issue Sep 15, 2017 · 32 comments
Labels
first-timers-only For first time contributors only! help wanted Help is much appreciated

Comments

@ljvmiranda921
Owner

ljvmiranda921 commented Sep 15, 2017

Thanks to @SioKCronin, we're now able to implement hyperparameter search tools such as GridSearch and RandomSearch. These are awesome implementations, but we can maximize their robustness by adding an assertions() method to our classes. Would you like to try?

What you'll do

Simply put, you will just add an assertions() method to the SearchBase class. This should contain the various checks performed when instantiating any class that inherits from it. You can check the docstrings for each class and write an appropriate check for each attribute.

If you want to check previous implementations, see the SwarmBase.assertions method, and check how it is being implemented in GlobalBestPSO (it is not called explicitly because it just inherits from the base) or in LocalBestPSO (additional lines of code were added after calling the super() because of extra attributes).
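A minimal sketch of that base/subclass pattern, using simplified stand-ins for SearchBase and RandomSearch (the attribute names and error messages here are illustrative, not the project's exact code):

```python
# Sketch of the assertions() inheritance pattern described above.
# SearchBase runs the shared checks; RandomSearch re-runs them via
# super() and adds a check for its extra attribute.

class SearchBase:
    def __init__(self, optimizer):
        self.optimizer = optimizer
        self.assertions()

    def assertions(self):
        # The optimizer should at least expose an optimize() method.
        if not hasattr(self.optimizer, "optimize"):
            raise TypeError("optimizer must expose an optimize() method")


class RandomSearch(SearchBase):
    def __init__(self, optimizer, n_selection_iters):
        # Set the extra attribute first: the base __init__ calls
        # assertions(), and the override below reads this attribute.
        self.n_selection_iters = n_selection_iters
        super().__init__(optimizer)

    def assertions(self):
        super().assertions()
        if not isinstance(self.n_selection_iters, int):
            raise TypeError("n_selection_iters must be an int")
```

Like GlobalBestPSO in the example above, a subclass with no extra attributes (GridSearch) would simply inherit assertions() without overriding it.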

Go the extra mile?

If you want to go the extra mile, you can add tests to check for edge cases and see if your assertions work when passed invalid input (maybe an invalid type, out-of-bounds values, etc.). For this, you just need to add an Instantiation class in the test.utils.search.test_gridsearch and test.utils.search.test_randomsearch modules.

Again, if you want a template, you can check the Instantiation classes in test.optimizers.test_global_best and others. Notice how we try to feed the class with invalid arguments/edge cases and have the unit test capture them. It's a very fun activity!

If you wish to take this on, then feel free to drop a comment here! I'd be glad to help you!

Update: 9/17/2017

Follow the instructions in this quick-start guide to get you started!

Commit Guidelines

I'd appreciate it if we keep the number of commits per issue to 1-2. Of course you can commit often, but before merging, I would ask you to rebase them depending on the changes you've done. The commit format we follow is shown below:

Short and sweet imperative title  (#27)

Description of the commit. The commit title should be short
and concise, and must be in imperative form. It could be as
simple as `Add tests for optimizer` or `Implement foo`. It describes
the "what" of the commit. You must also reference the issue, if any,
in the title. Here, in the description, we explain the
"why" of the commit. This is more or less free-for-all. So you can
describe as detailed as possible, in whatever tense/form you like.

Author: <your-github-username>
Email: <your-email>
@ljvmiranda921 ljvmiranda921 added first-timers-only For first time contributors only! help wanted Help is much appreciated labels Sep 15, 2017
@ljvmiranda921 ljvmiranda921 changed the title Add assertions() in search methods Implement assertions() in search methods Sep 15, 2017
@jazcap53
Contributor

Hi!
I'd like to take this on as my (very) first OSS contribution.
Best,
Andy

@ljvmiranda921
Owner Author

Hi @jazcap53, awesome! Just leave a message here if you're stuck or need some help! 👍

@jazcap53
Contributor

jazcap53 commented Sep 16, 2017 via email

@jazcap53
Contributor

@ljvmiranda921 Making good progress on #27. Just had to overcome a little nervousness :-)

@ljvmiranda921
Owner Author

ljvmiranda921 commented Sep 17, 2017 via email

@jazcap53
Contributor

jazcap53 commented Sep 17, 2017

Hi @ljvmiranda921
Just checking that:

  • SearchBase should check that param optimizer is of type LocalBestPSO or GlobalBestPSO
  • GridSearch does not need an assertions method since all its params are checked in SearchBase
  • RandomSearch will need an assertions method only if you want me to check that the value of param n_selection_iters falls within a certain range

@ljvmiranda921
Owner Author

ljvmiranda921 commented Sep 17, 2017

Hi @jazcap53!

SearchBase should check that param optimizer is of type LocalBestPSO or GlobalBestPSO

Yes, you are correct! It should check whether optimizer is of type LocalBestPSO or GlobalBestPSO. SearchBase also accepts other values during initialization, but I think they don't need to be asserted because they can be caught by the PSO implementations. What do you think about that?

GridSearch does not need an assertions method since all its params are checked in SearchBase

Yes, you are correct. Since it inherits everything from SearchBase, there's no need to declare an assertions method.

RandomSearch will need an assertions method only if you want me to check that the value of param n_selection_iters falls within a certain range

Hmmm... I'm not sure; asserting a certain range might limit the implementation. What we discussed previously was to implement a logger.warn in the future if the user passes a very large value (because running it would take a very long time). Perhaps just check whether the type is an integer?
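One possible shape for that check, combining the hard type assertion with the soft warning idea (the check_n_selection_iters name and the 10000 cutoff are illustrative assumptions, not project code):

```python
import logging

logger = logging.getLogger(__name__)


def check_n_selection_iters(n, warn_threshold=10000):
    """Validate n_selection_iters: hard type check, soft size warning.

    warn_threshold is an illustrative cutoff, not a project constant.
    """
    if not isinstance(n, int):
        raise TypeError("n_selection_iters must be an int")
    if n > warn_threshold:
        # Large values are legal; they just make the search slow,
        # so warn instead of raising.
        logger.warning(
            "n_selection_iters=%d is large; the search may take a long time", n
        )
    return n
```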

Thanks a lot and I hope you're having fun!

@jazcap53
Contributor

SearchBase also accepts other values when initialization, but I think they don't need to be asserted because they can be caught by the PSO implementations

Oops! I missed that. I put in some unnecessary checks which I'll remove.

I'll do the type check on n_selection_iters.

Yes, this is fun! I'm learning...

@jazcap53
Contributor

@ljvmiranda921 Hi! I have two related questions:

  1. I can't figure out where to add my tests.
    I've put in two assertions, one in SearchBase and one in RandomSearch. Both raise TypeError on bad input.
    File tests/utils/search/test_randomsearch.py has classes to test return types, and return values.
    File tests/utils/search/test_gridsearch.py is similar.
    Do I add a new class in each file that will test for TypeError on bad input? Or do my tests belong somewhere else?
  2. Running python setup.py test from my vm produces this result:
    AttributeError: 'module' object has no attribute 'test_randomsearch'
    I'm guessing it's because I haven't written the new tests. If that's not true, and I have some other problem, where should I post the full error message?

@ljvmiranda921
Owner Author

ljvmiranda921 commented Sep 18, 2017

Hi @jazcap53! regarding your questions,

For Question 1

Just add another class, Instantiation, which inherits from Base, in tests/utils/search/test_randomsearch.py and tests/utils/search/test_gridsearch.py. All the tests inside that Instantiation class are then concerned only with cases that arise when instantiating the RandomSearch and GridSearch classes. 👍 Also, make sure that each test method tests only one particular aspect of instantiation (one method for a given input, another for a wrong type, another for out-of-bounds values, etc.)

# Inside tests/utils/search/test_randomsearch.py

class Base(unittest.TestCase):
    def setUp(self):
         """Set-up test fixtures"""
         # If you want to add test fixtures here, feel
         # free to add new lines of code. Don't include
         # the pass
         pass

class Instantiation(Base):
    def test_something_about_instantiation(self):
         """Tests something about instantiation"""
         # Don't include the pass
         pass
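As a concrete illustration of what one such test could look like, here is a hedged sketch (FakeOptimizer and RandomSearchStub are stand-ins for an invalid argument and for RandomSearch's real assertion, not pyswarms code):

```python
import unittest


class FakeOptimizer:
    """Deliberately lacks the interface a real PSO optimizer provides."""


class RandomSearchStub:
    # Stand-in for the real RandomSearch; only the assertion is modeled.
    def __init__(self, optimizer):
        if not hasattr(optimizer, "optimize"):
            raise TypeError("optimizer must expose an optimize() method")


class Instantiation(unittest.TestCase):
    def test_invalid_optimizer_raises(self):
        """Passing a non-optimizer object should raise TypeError"""
        with self.assertRaises(TypeError):
            RandomSearchStub(FakeOptimizer())
```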

For Question 2

Hmmmm, normally when I run tests, I go to the top-most level of the package where I can see both the pyswarms/ and tests/ directory, then I type this in the command-line:

$ python -m unittest discover

This is the same command that runs during the Travis CI checks. The discover argument runs all the tests found in the package, and the -m flag takes care of Python imports and paths.

If you want to run a specific test, say, test_gridsearch, just type:

$ python -m unittest tests.utils.search.test_gridsearch

Hope it helps and good luck!! 😄

@jazcap53
Contributor

@ljvmiranda921 Thanks very much!

@jazcap53
Contributor

Hi @ljvmiranda921,
I've pushed a commit to my fork of pyswarms that I think fulfills the requirements of issue #27. I'd like to issue a pull request so that you can review.

The new code adds no new Errors or Warnings to the output of flake8, passes python -m unittest discover, and passes tox for py27, py34, py35, and py36. On my system, the python command runs py27.

I haven't yet done the "Go the Extra Mile?" part; will work on that tomorrow. Also, I'm new to Travis CI, so until I read up on that, Part 3 of the Pull Request Guidelines remains unfulfilled.

I appreciate your patience in helping me, and of course am open to any suggestions you may have.

@ljvmiranda921
Owner Author

Awesome @jazcap53!

A lot has happened recently and I got quite busy, but I will check your fork this weekend! 👍

One minor comment: it would be nice if you could rebase and reword your commit title into something shorter. Maybe Implement assertions() in search would be nice! You can be as explicit as you want in the commit description 😄

Thank you so much for your effort and time as well!

ljvmiranda921 pushed a commit that referenced this issue Sep 21, 2017
Check that optimizer member in SearchBase has an optimize attribute.
Check type of n_selection_iters member in RandomSearch.
Add self.assertions() line to __init__() in each of SearchBase, GridSearch, and RandomSearch.
Add call to super().assertions() in RandomSearch.

Author: jazcap53
Email: andrew.jarcho@gmail.com
@ljvmiranda921
Owner Author

Hi @jazcap53, how's it going? I'm wondering how your progress on this is coming along. If you have any problems or get stuck, feel free to comment :)

@jazcap53
Contributor

jazcap53 commented Sep 28, 2017 via email

@ljvmiranda921
Owner Author

Sure, sure! No pressure, I was just checking whether you're still interested in working on it. Thanks a lot, Andrew @jazcap53! Hope you're fine and well!

@jazcap53
Contributor

jazcap53 commented Sep 28, 2017 via email

@jazcap53
Contributor

Hi @ljvmiranda921!

I've got a question and a comment.

The question:
I don't want to test anything in tests/utils/search/test_randomsearch.py that is already being tested somewhere else.

For example if I put a test_keyword_check_fail() into test_randomsearch.py, will that duplicate the similar test in tests/optimizers/test_global_best.py?

I think the answer to that question is "no". If I'm wrong, please let me know.

The comment:
The tests that I said were "not useful or correct" actually seem OK. I'll include them in the pull
request when it's ready.

I think I'm getting a handle on this and will proceed. 😄 My apologies for the delay.

@jazcap53
Contributor

Hmm! I'm looking at the use cases in your doc to try to answer my own question.

@jazcap53
Contributor

Nope. I've still got questions.

  1. It looks like the Instantiation subclass in test_randomsearch.py only needs to test the types of optimizer and n_selection_iters. The Instantiation subclass in test_gridsearch.py only needs to test the type of optimizer. Is this correct?
  2. To check the type assertions I made in class RandomSearch, I need to create instances of RandomSearch. I chose to create the instances within the tests themselves, instead of in the setUp() function, so that the test runner will not pay the cost of the instantiations for every test in the file. Is that a correct approach (it looks ugly)?

@ljvmiranda921
Owner Author

ljvmiranda921 commented Oct 1, 2017 via email

@jazcap53
Contributor

jazcap53 commented Oct 1, 2017 via email

@ljvmiranda921
Owner Author

Hi Andrew @jazcap53,

I am back already, hopefully I can reply to your messages quickly. Just ping me if you have any questions/concerns! 👍

@jazcap53
Contributor

jazcap53 commented Oct 3, 2017

Hi Lester @ljvmiranda921,

I've been thinking about testing the instantiations for RandomSearch.

...normally what we do is put the class instantiations in the
setUp(). So we have a self.my_object, but the arguments can be specified
individually in the test methods...

It looks to me like that may not work here. In file tests/optimizers/test_global_best.py, the class Instantiation tests all work by checking for exceptions raised on object creation. So the GlobalBestPSO objects are created in the tests themselves.

My instantiation tests also work by checking for exceptions raised on object creation. So my RandomSearch objects are also created in the tests.

Since we're expecting the object creations to fail, we can't put them in the setUp() methods
or no tests would be run.

Does this seem right to you?

Having a lot of fun here! 👍

@ljvmiranda921
Owner Author

Hi Andrew @jazcap53 !

I see; I think what you said is correct. Thank you for catching my mistake. Your design is all good, then: it is better to have the objects created in the tests themselves. I tried out the search utilities earlier and figured that your design is the way to go.

Awesome! Thank you so much! 💯

@jazcap53
Contributor

jazcap53 commented Oct 3, 2017

Hi Lester @ljvmiranda921 ,

Great! I'll finish up and check my work tomorrow morning!

@ljvmiranda921
Owner Author

Great! I'm looking forward to it!

Take care and have a good rest! 👍

@jazcap53
Contributor

jazcap53 commented Oct 3, 2017

Hi @ljvmiranda921 !
Travis doesn't like my pull request. It's erroring out for Py 3.4 (only) with the message:
The command "pip install -U tox-travis" failed and exited with 2 during .
I don't think that's under my control. Am I correct, or should I make some change?

@ljvmiranda921
Owner Author

Hi @jazcap53 ,

Let me check that one first. Sometimes travis makes a build error randomly and it's weird. I'll get back to you! 😄

@ljvmiranda921
Owner Author

It's working now, I just clicked the Restart Build button 👍

ljvmiranda921 pushed a commit that referenced this issue Oct 3, 2017
Tests that a bad type for the optimizer parameter to GridSearch() raises a TypeError.
Tests that a bad type for the optimizer parameter or the n_selection_iters parameter
 to RandomSearch() raises a TypeError.

Author: jazcap53
Email: andrew.jarcho@gmail.com
@ljvmiranda921
Owner Author

Hey @jazcap53, I have merged your PR. Thank you so much for your contribution! It was really nice working with you. Just drop by anytime you wish to contribute.

Take care and have a nice day! 💯 👍

@jazcap53
Contributor

jazcap53 commented Oct 3, 2017 via email
