Introducing quantized_linear #118
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
This is a very impactful and important contribution to QKeras.
Since this deprecates quantized_bits, …
def test_GetScale_PerChannelScale():
  # Rank-1 tensors
  x_r1 = tf.ones([4])
Since quantized_linear is trying to incorporate the binary quantizer as well, I suggest adding more scale tests here for per-channel scales (a rough sketch follows below):
- instead of having all inputs equal to 1, consider using a wider range of input values so we can better test how the scale is calculated;
- add a test on the quantizer scale during training and eval, to make sure the scales don't change between the two stages.
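For the first point, a rough sketch of such a test. The module alias and _get_least_squares_scale come from this diff; the least-squares formula sum(x * q) / sum(q * q) and the exact output shape are assumptions, and the training/eval check would depend on the final quantizer API:

import numpy as np
import tensorflow as tf
from qkeras import quantizers


def test_get_scale_per_channel_wide_range():
  # Use a wider range of input values than tf.ones so the computed scale
  # actually depends on the data.
  x = tf.constant(
      np.linspace(-4.0, 4.0, 32, dtype=np.float32).reshape(4, 8))
  q = tf.sign(x)  # binary-style quantized values (+/-1)
  scale = quantizers._get_least_squares_scale("auto", x, q)
  # One scale per channel on the last axis.
  assert scale.shape == (1, 8)
  # Assumed least-squares fit: sum(x * q) / sum(q * q) over the batch axis.
  expected = (tf.reduce_sum(x * q, axis=0, keepdims=True) /
              tf.reduce_sum(q * q, axis=0, keepdims=True))
  np.testing.assert_allclose(scale.numpy(), expected.numpy(), rtol=1e-6)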
This is not a test for quantized_linear; I only made a change here because I changed the name of the _get_scale function.
However, these are still good points, and I will add tests with specific scale values to the rest of the quantized_linear tests in the qactivations_test.py file.
Added both (1) and (2)!
# Least-squares scales computed with elements_per_scale=2 along scale_axis=3.
scale_r4_eps_2_ua_3 = quantizers._get_least_squares_scale(
    "auto", x_r4, q_r4, elements_per_scale=2, scale_axis=3)

# Shape check: one scale per channel on the last axis when no grouping is given.
assert_equal(tf.shape(scale_r4_eps_none_ua_none).numpy(), [1, 1, 1, 16])
Consider adding assertions on some specific scale values as well. The same goes for the other related tests.
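For instance, a small hand-checkable case (a sketch, assuming the least-squares formula sum(x * q) / sum(q * q) over the reduced axes):

import numpy as np
import tensorflow as tf
from qkeras import quantizers

x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
q = tf.ones_like(x)
scale = quantizers._get_least_squares_scale("auto", x, q)
# Expected per-channel scales: [(1 + 3) / 2, (2 + 4) / 2] = [2, 3].
np.testing.assert_allclose(scale.numpy(), [[2.0, 3.0]], rtol=1e-6)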
See discussion above
New quantizer to replace quantized_bits. Summary of improvements over the old version:
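For context, a minimal usage sketch of the new quantizer, assuming its constructor mirrors quantized_bits (bits, integer, ..., alpha); see the diff for the actual signature:

from qkeras import QDense
from qkeras.quantizers import quantized_bits, quantized_linear

# Old-style quantizer:
old_layer = QDense(16, kernel_quantizer=quantized_bits(8, 0, alpha="auto"))
# Drop-in replacement with the new quantizer:
new_layer = QDense(16, kernel_quantizer=quantized_linear(8, 0, alpha="auto"))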