Adding label check to trades adversarial trainer #2231
Conversation
Signed-off-by: GiulioZizzo <giulio.zizzo@yahoo.co.uk>
Codecov Report
@@ Coverage Diff @@
## dev_1.15.1 #2231 +/- ##
==============================================
+ Coverage 84.60% 85.61% +1.00%
==============================================
Files 308 308
Lines 27464 27470 +6
Branches 5045 5046 +1
==============================================
+ Hits 23237 23519 +282
+ Misses 2961 2670 -291
- Partials 1266 1281 +15
Hi @GiulioZizzo Thank you very much for your pull request! I have changed the target branch to dev_1.15.1. @Zaid-Hameed, because you have implemented the TRADES trainer, could you please review the proposed changes?
Hi Giulio, can you please check whether adding from_logits=True changes any test outcomes, i.e., changing line 37 to classifier, _ = image_dl_estimator(from_logits=True)? This is just to make sure that softmax is not being applied twice in the test files.
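As background, a minimal sketch (not from the PR) of the double-softmax concern: applying softmax to outputs that are already probabilities flattens the distribution, which can silently change losses and test outcomes.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
probs_once = softmax(logits)       # ~[0.659, 0.242, 0.099]
probs_twice = softmax(probs_once)  # ~[0.448, 0.296, 0.256] -- much flatter
```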
Signed-off-by: GiulioZizzo <giulio.zizzo@yahoo.co.uk>
Thanks a lot, Giulio, for making the changes.
Hi @GiulioZizzo, I think your proposal for the functionality is good. I'm proposing to follow the existing ART patterns in solving the issue; what do you think?
```diff
@@ -240,7 +245,7 @@ def _batch_process(self, x_batch: np.ndarray, y_batch: np.ndarray) -> Tuple[floa
         )

         # Check label shape
-        if self._classifier._reduce_labels:  # pylint: disable=W0212
+        if self._classifier._reduce_labels and y_preprocessed.ndim > 1:  # pylint: disable=W0212
```
This change is not needed because ART internally uses the one-hot-encoded format for classification labels.
```python
output = np.argmax(self.predict(x_test), axis=1)
nb_correct_pred = np.sum(output == np.argmax(y_test, axis=1))
if y_test.ndim > 1:
```
The idea of this change is good. Let's follow the ART pattern by applying y = check_and_transform_label_format(y, nb_classes=self._classifier.nb_classes) to both y and validation_data[1] at the beginning of this method, with the import from art.utils import check_and_transform_label_format, to ensure they are both one-hot-encoded.
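A minimal sketch of where those calls would sit; the fit signature and variable names here are assumed for illustration, not taken from the PR:

```python
from art.utils import check_and_transform_label_format

def fit(self, x, y, validation_data=None, **kwargs):
    # Convert labels to one-hot encoding, the format ART uses internally.
    y = check_and_transform_label_format(y, nb_classes=self._classifier.nb_classes)
    if validation_data is not None:
        x_test, y_test = validation_data
        y_test = check_and_transform_label_format(
            y_test, nb_classes=self._classifier.nb_classes
        )
        validation_data = (x_test, y_test)
    # ... rest of the training loop unchanged ...
```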
Signed-off-by: GiulioZizzo <giulio.zizzo@yahoo.co.uk>
Description
We add a dimensionality check to the labels in TRADES adversarial training before applying argmax operations, as sketched below.
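For illustration, a minimal sketch of the check (the variable names are illustrative, not the PR's exact code):

```python
import numpy as np

y_batch = np.array([[0, 1, 0], [1, 0, 0]])  # one-hot labels, ndim == 2
if y_batch.ndim > 1:
    # only reduce one-hot labels to class indices; 1-D index labels pass through
    y_batch = np.argmax(y_batch, axis=1)
```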
Fixes #2230
Type of change
Please check all relevant options.
Testing
Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.
Test Configuration:
Checklist