Added TensorFlow support to nncf.Tensor #3106
base: develop
Conversation
Please update your branch from develop.
Thanks for contributing. Please address my comments.
PS: Sorry for the delay in reviewing.
nncf/tensor/functions/tf_numeric.py (outdated)
    keepdims: bool = False,
) -> tf.Tensor:
    numpy_a = np.array(a)
    numpy_median = np.median(numpy_a, axis=axis, keepdims=keepdims)
I would suggest implementing median using tf.math.top_k. I hope better performance can be achieved on GPU with tf.math.top_k.
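A minimal sketch of the idea, assuming a 1-D floating-point input (axis/keepdims handling omitted; this is not the PR's actual implementation):

import tensorflow as tf

def median_1d(x: tf.Tensor) -> tf.Tensor:
    # For n values, the median is the smallest of the largest (n // 2 + 1)
    # values when n is odd, or the mean of the two smallest of those when even.
    x = tf.reshape(x, [-1])
    n = tf.shape(x)[0]
    k = n // 2 + 1
    top = tf.math.top_k(x, k=k).values  # largest k values, in descending order
    return tf.cond(
        tf.equal(n % 2, 1),
        lambda: tf.gather(top, k - 1),
        lambda: (tf.gather(top, k - 1) + tf.gather(top, k - 2)) / 2.0,
    )

Whether this actually beats converting to NumPy and calling np.median would need benchmarking on the target GPU, which ties into the question below.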
Do you have any evidence that this implementation is more performant than calling np.median?
It looks like the PR is stuck. I see that @olegkkruglov answered all the comments from @alexsu52.
Please update your branch from develop.
        axis = (0, 1)

    with tf.device(a.device):
        if ord == "nuc" and isinstance(axis, tuple) and len(axis) != 1:
Please check that keepdims is supported in all cases. I found that this case does not support keepdims=True.
Please also add a test for this case.
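A hedged sketch of how keepdims could be honored for the nuclear-norm branch (illustrative helper, not the PR's exact code); a test could compare it against np.linalg.norm(a, ord="nuc", keepdims=True):

import tensorflow as tf

def nuclear_norm(a: tf.Tensor, keepdims: bool = False) -> tf.Tensor:
    # Nuclear norm = sum of singular values of a 2-D tensor.
    s = tf.linalg.svd(a, compute_uv=False)
    result = tf.reduce_sum(s)
    if keepdims:
        # Re-expand both reduced axes to size 1, matching NumPy's keepdims.
        result = tf.reshape(result, [1] * len(a.shape))
    return result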
from nncf.tensor.functions import linalg


@linalg.norm.register(tf.Tensor)
Please also add an implementation for ord=0.
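For reference, a minimal sketch of what ord=0 could look like (the number of non-zero entries, as np.linalg.norm defines it for vectors); the helper name and dtype handling are assumptions here:

import tensorflow as tf

def l0_norm(a: tf.Tensor, axis=None, keepdims: bool = False) -> tf.Tensor:
    # ord=0 counts non-zero entries along the given axis; cast back to the
    # input dtype to stay consistent with the other norm orders.
    return tf.cast(tf.math.count_nonzero(a, axis=axis, keepdims=keepdims), a.dtype)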
Looks like the pull request is in the home stretch. Please address my comments and also add support for the as_numpy_tensor function from develop.
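For as_numpy_tensor, a hedged sketch of what the TensorFlow registration could look like, following the same register pattern used elsewhere in this diff (the exact dispatcher name and signature from develop are assumptions):

import numpy as np
import tensorflow as tf

from nncf.tensor.functions import numeric

@numeric.as_numpy_tensor.register(tf.Tensor)
def _(a: tf.Tensor) -> np.ndarray:
    # Eager tensors expose .numpy(); this copies the data to host memory.
    return a.numpy()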
    with tf.device(a.device):
        if driver is not None:
            warnings.warn("Driver specifying is not supported in TensorFlow lstsq method")
        if tf.rank(b) == 1:
            b = tf.expand_dims(b, axis=0)
            perm = list(range(tf.rank(b)))
            perm[-1], perm[-2] = perm[-2], perm[-1]
            b = tf.transpose(b, perm=perm)

        return tf.linalg.lstsq(a, b)
nncf/tensor/functions/tf_numeric.py (outdated)
    with tf.device(a.device):
        if axis is None:
            return tf.reduce_max(a)
        return tf.reduce_max(a, axis=axis, keepdims=keepdim)
I would ask you to fix torch in this PR as well, as it is a bug. Thanks.
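For context, the TF branch above drops keepdim when axis is None; a hedged sketch of a version that honors it (tf.reduce_max accepts keepdims together with axis=None, so no special-casing is needed):

import tensorflow as tf

def amax(a: tf.Tensor, axis=None, keepdim: bool = False) -> tf.Tensor:
    # Reduces over all axes when axis is None while still honoring keepdims.
    with tf.device(a.device):
        return tf.reduce_max(a, axis=axis, keepdims=keepdim)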
@numeric.unstack.register(tf.Tensor)
def _(x: tf.Tensor, axis: int = 0) -> List[tf.Tensor]:
    with tf.device(x.device):
        if not list(x.shape):
@AlexanderDokuchaev, could you provide a code snippet for which this if will be true?
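One case where the condition holds (illustrative snippet, not from the PR): a 0-d tensor, whose shape is (), so list(x.shape) is empty and plain tf.unstack would otherwise fail.

import tensorflow as tf

x = tf.constant(3.0)      # 0-d (scalar) tensor
print(list(x.shape))      # -> [], so `not list(x.shape)` is True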
@numeric.round.register(tf.Tensor)
def _(a: tf.Tensor, decimals=0) -> tf.Tensor:
Suggested change:
-def _(a: tf.Tensor, decimals=0) -> tf.Tensor:
+def _(a: tf.Tensor, decimals: int = 0) -> tf.Tensor:
Changes
- Added tf_numeric.py and tf_linalg.py files with implementations of the methods needed for nncf.Tensor support.
- Added the __ifloordiv__ operator for nncf.Tensor.

Reason for changes
Currently, TensorFlow tensors are not supported by nncf.Tensor, which prevents #3041 from being done.

Related tickets
#3041
Tests
TestTFNNCFTensorOperators and TestGPUTFNNCFTensorOperators classes were added to tests/tensorflow/test_tensor.py. Some changes were necessary for tests/cross_fw/test_templates/template_test_nncf_tensor.py, mostly related to the different device management in TensorFlow.
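Illustrative only (not part of the PR): TensorFlow controls placement with a device scope rather than by moving a tensor as torch does with tensor.to(device), which is the kind of difference the shared test template had to account for.

import tensorflow as tf

# Pick a GPU if one is visible, otherwise fall back to CPU.
device = "GPU:0" if tf.config.list_physical_devices("GPU") else "CPU:0"
with tf.device(device):
    a = tf.constant([1.0, 2.0, 3.0])
print(a.device)  # full device string, e.g. ".../device:CPU:0"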