Simple Progress Bar Inclusion #2334
Conversation
Signed-off-by: GiulioZizzo <giulio.zizzo@yahoo.co.uk>
Codecov Report
Attention: ❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files:

@@ Coverage Diff @@
## dev_1.17.0 #2334 +/- ##
==============================================
+ Coverage 76.18% 85.47% +9.29%
==============================================
Files 327 327
Lines 30850 30206 -644
Branches 5716 5591 -125
==============================================
+ Hits 23503 25819 +2316
+ Misses 5914 2948 -2966
- Partials 1433 1439 +6
Hi @GiulioZizzo, thank you very much for your pull request! I think it would be great to extend the PR to also cover Keras and TensorFlow for the new progress bar.
@@ -389,12 +390,14 @@ def fit(  # pylint: disable=W0221
         the batch size. If ``False`` and the size of dataset is not divisible by the batch size, then
         the last batch will be smaller. (default: ``False``)
     :param scheduler: Learning rate scheduler to run at the start of every epoch.
-    :param kwargs: Dictionary of framework-specific arguments. This parameter is not currently supported for PyTorch
-        and providing it takes no effect.
+    :param kwargs: Dictionary of framework-specific arguments. Currently supports "display_progress_bar" to
Let's change this new argument `display_progress_bar` to `verbose` to follow the ART pattern, and add it as a full argument outside of kwargs.
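The suggested normalization could look like the sketch below. The helper name `process_verbose` is taken from the diff later in this PR, but the body here is an illustrative assumption, not the merged implementation: it accepts either a bool or a Keras-style integer level and returns a flag suitable for `tqdm`'s `disable` argument.

```python
def process_verbose(verbose) -> bool:
    """Return True when a progress bar should be displayed.

    Accepts a bool (True/False) or a Keras-style int level (0 = silent,
    >0 = show progress). Anything else is rejected explicitly.
    """
    if isinstance(verbose, bool):
        return verbose
    if isinstance(verbose, int):
        return verbose > 0
    raise ValueError("verbose must be a bool or an int")
```

An estimator's `fit` could then call `tqdm(..., disable=not self.process_verbose(verbose))` so that `verbose=False` or `verbose=0` hides the bar entirely.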
if framework in ["kerastf", "keras"]:
    kwargs = {"callbacks": [LearningRateScheduler(get_lr)]}
else:
    kwargs = {"callbacks": [LearningRateScheduler(get_lr)], "display_progress_bar": True}
Please add progress bars also for Keras and TensorFlow v1.
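For the Keras estimators, one option is to forward the flag to Keras's own built-in progress reporting rather than wrapping the loop in `tqdm`, since `model.fit` already accepts `verbose` (0 = silent, 1 = progress bar). The wrapper below is a hypothetical stand-in to show the forwarding pattern, not the PR's actual code:

```python
def keras_style_fit(model, x, y, nb_epochs: int = 10, verbose: bool = False):
    """Forward ART's bool `verbose` to Keras, which expects an int level."""
    # Keras interprets verbose=0 as silent and verbose=1 as a progress bar,
    # so a plain int() cast of the bool is enough here.
    return model.fit(x, y, epochs=nb_epochs, verbose=int(verbose))
```

This keeps the user-facing argument consistent across frameworks while letting each backend use its native progress mechanism.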
…andomized tools Signed-off-by: GiulioZizzo <giulio.zizzo@yahoo.co.uk>
…for kwargs checks Signed-off-by: GiulioZizzo <giulio.zizzo@yahoo.co.uk>
display_pb = self.process_verbose(verbose)
display_pb = self.process_verbose(verbose)
for epoch in tqdm(range(nb_epochs), disable=not display_pb, desc="Epochs"):
    for i_batch, o_batch in tqdm(generator.iterator, disable=not display_pb, desc="Batches"):
Suggested change:
- for epoch in tqdm(range(nb_epochs), disable=not display_pb, desc="Epochs"):
-     for i_batch, o_batch in tqdm(generator.iterator, disable=not display_pb, desc="Batches"):
+ for epoch in tqdm(range(nb_epochs), disable=not verbose, desc="Epochs"):
+     for i_batch, o_batch in generator.iterator:
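A minimal sketch of the suggested pattern, assuming `tqdm` is available: wrap only the epoch loop in a progress bar and leave the generator's batch iteration untouched. The plain list stands in for a real data generator, and `train` is an illustrative name, not ART's API:

```python
from tqdm import tqdm


def train(generator_batches, nb_epochs: int, verbose: bool = False) -> int:
    """Count training steps; shows an epoch-level bar only when verbose."""
    steps = 0
    # disable=not verbose hides the bar entirely for verbose=False
    for _ in tqdm(range(nb_epochs), disable=not verbose, desc="Epochs"):
        for _batch in generator_batches:  # no inner per-batch bar, per the review
            steps += 1
    return steps
```

Dropping the inner `tqdm` avoids nested bars, which tend to clutter the terminal when the batch count is large.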
kwargs = {"epochs": 1}
with pytest.raises(TypeError) as exception:
    classifier.fit(x_train_mnist, y_train_mnist, batch_size=default_batch_size, nb_epochs=1, **kwargs)
kwargs = {"callbacks": [LearningRateScheduler(get_lr)], "verbose": True}
Suggested change:
- kwargs = {"callbacks": [LearningRateScheduler(get_lr)], "verbose": True}
+ kwargs = {"callbacks": [LearningRateScheduler(get_lr)]}
with pytest.raises(TypeError) as exception:
    classifier.fit(x_train_mnist, y_train_mnist, batch_size=default_batch_size, nb_epochs=1, **kwargs)
kwargs = {"callbacks": [LearningRateScheduler(get_lr)], "verbose": True}
classifier.fit(x_train_mnist, y_train_mnist, batch_size=default_batch_size, nb_epochs=1, **kwargs)
Suggested change:
- classifier.fit(x_train_mnist, y_train_mnist, batch_size=default_batch_size, nb_epochs=1, **kwargs)
+ classifier.fit(x_train_mnist, y_train_mnist, batch_size=default_batch_size, nb_epochs=1, verbose=True, **kwargs)
# Test failure for invalid parameters: does not apply to many frameworks which allow arbitrary kwargs
if framework not in [
    "tensorflow1",
    "tensorflow2",
    "tensorflow2v1",
    "huggingface",
    "pytorch",
]:
    kwargs = {"epochs": 1}
    with pytest.raises(TypeError) as exception:
        classifier.fit(x_train_mnist, y_train_mnist, batch_size=default_batch_size, nb_epochs=1, **kwargs)

assert "multiple values for keyword argument" in str(exception)
Suggested change:
# Test failure for invalid parameters: does not apply to many frameworks which allow arbitrary kwargs
if framework not in [
    "tensorflow1",
    "tensorflow2",
    "tensorflow2v1",
    "huggingface",
    "pytorch",
]:
    kwargs = {"epochs": 1}
    with pytest.raises(TypeError) as exception:
        classifier.fit(x_train_mnist, y_train_mnist, batch_size=default_batch_size, nb_epochs=1, **kwargs)
    assert "multiple values for keyword argument" in str(exception)
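The failure mode this test targets is a standard Python one: passing the same keyword both explicitly and again through `**kwargs` raises a `TypeError`. A self-contained sketch of the pattern, with a stub `fit` standing in for a real estimator:

```python
import pytest


def fit(x, y, batch_size=128, nb_epochs=10, **kwargs):
    """Stub estimator fit, used only to demonstrate the kwargs collision."""
    return nb_epochs


def test_duplicate_keyword():
    kwargs = {"nb_epochs": 1}
    with pytest.raises(TypeError) as exception:
        # nb_epochs is given explicitly AND inside **kwargs -> TypeError
        fit(None, None, nb_epochs=1, **kwargs)
    assert "multiple values for keyword argument" in str(exception.value)
```

Note the assertion uses `exception.value`; checking `str(exception)` on the `ExceptionInfo` object also works in practice but is less precise.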
Description
Currently, the core training routines for many ART estimators do not have progress bars to inform the user of how long models require for training. Although this was not a problem when ART was first developed, as models were small and trained quickly, with ever larger models being evaluated for robustness it is very useful to have an indication of how long training will take.
In this PR we introduce a simple `tqdm` progress bar in the `pytorch`/`tensorflow`/`huggingface` estimators for the `fit` and `fit_generator` functions.
This feature branches off the current open PR #2300
Fixes (partially) #2288
Type of change
Please check all relevant options.
Testing
Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.
Updated the current CI test `test_fit_kwargs` to include a check for using the progress bar kwarg.
Test Configuration:
Checklist