TransposeConv wrong shape? #262
Comments
@BmanClark Could you please tell me which version of TensorFlow (Lite) you are testing against?
It is indeed a conversion error. Could you please upload a TorchScript model and the shapes of the inputs so that we can reproduce this issue?
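For reference, producing such a TorchScript artifact is usually a couple of lines; a minimal sketch, where the model and input shape are placeholders standing in for the real network:

```python
import torch
import torch.nn as nn

# Placeholder model and input shape, standing in for the real network.
model = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3, padding=1)).eval()
example_input = torch.rand(1, 3, 224, 224)

# Trace and save the TorchScript module; share this file plus the input shape.
scripted = torch.jit.trace(model, example_input)
scripted.save('model_traced.pt')
```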
TFLite is 2.8.0. The line number is off by a few from the latest, but the code looks pretty much the same in transpose_conv.cc. The error will need fixing, but as a thought: since the bias is all zeros and the bias input is optional, wouldn't it be more efficient not to include it in the TFLite model?
Hmm, I've created the TorchScript version, but it's (just!) too big to attach, so I'm still working out how I can share it with you.
As for a workaround, yeah, it should be easy. But I'm more interested in how it happened.
Google Drive or OneDrive, either will do if size is an issue. Or you may try tracing the model with our code tracer (usage can be found in examples/tracer) so that you don't need to upload the weights.
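Roughly following examples/tracer, usage looks something like the sketch below; the module, input shape, and output paths are placeholders, and exact arguments may differ between TinyNN versions:

```python
import torch
import torch.nn as nn
from tinynn.graph.tracer import model_tracer, trace

# Placeholder module standing in for the real network.
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

with model_tracer():
    model = MyModel()
    dummy_input = torch.rand(1, 3, 224, 224)
    graph = trace(model, dummy_input)
    # Writes a standalone model definition (my_model.py) plus a weights file;
    # the .py file can be shared without uploading the trained weights.
    graph.generate_code('my_model.py', 'my_model.pth', 'MyModel')
```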
I've put it on my Google Drive and shared it with you.
Thanks. I will take a look tomorrow.
Tinkering further, I can get around the problem by specifying …
@BmanClark Hi, I've put up a fix to eliminate the zero bias tensors for the DeConv ops: #263. But I'm not sure whether group deconv is supported in TFLite; if it's unsupported, this fix may be useless.
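Conceptually, the fix amounts to something like the following hedged sketch (an illustration of the idea, not the actual #263 diff); `bias` stands for a hypothetical numpy array attached to a deconv node in the converted graph:

```python
from typing import Optional

import numpy as np

# Sketch only: TRANSPOSE_CONV's bias input is optional in TFLite, so an
# all-zero bias tensor can simply be omitted when emitting the op.
def keep_bias(bias: Optional[np.ndarray]) -> bool:
    """Return True only when the bias actually contributes something."""
    return bias is not None and bool(np.any(bias != 0))
```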
Update: it looks like group deconvolution is not supported, at least in the XNNPACK delegate: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/delegates/xnnpack/xnnpack_delegate.cc#L6213
So this may be the only way to use grouped deconvolutions in TFLite; you may want to create a new issue in the TFLite repo.
I'm not actually looking to target XNNPACK ultimately (although I would like it as a reference), but I've created an issue: tensorflow/tensorflow#62181
Actually, I didn't find the changes to enable group deconvolution for the general optimized kernel either, so it's possible that it isn't supported by the TFLite interpreter at all. As for ArmNN, the story may be different: they could support that case, since for them TFLite is only a format for model representation.
ArmNN doesn't currently support group deconvolution either. They might look at adding it for me, but I might not need them to, as the network's creator has kindly changed the group deconvolution to an Upsample for me! I love the OSS community!
Update: the network happily converts now that its group deconvolutions (aka transpose convolutions) are replaced with Upsamples. Thank you. I've had a request for more information on the TensorFlow issue I raised (specifically on how I used TinyNN to convert), so I'm providing that.
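For anyone following along, a minimal sketch of the two steps described above, assuming the grouped deconvolution was only performing 2x upsampling and using the converter entry point from TinyNN's README; the model, channel counts, scale factor, and paths are all placeholders:

```python
import torch
import torch.nn as nn
from tinynn.converter import TFLiteConverter

# Step 1 (done by the network's author): swap each grouped ConvTranspose2d,
# e.g. nn.ConvTranspose2d(64, 64, kernel_size=2, stride=2, groups=64, bias=False),
# for an explicit upsample layer, which maps to a plain resize op in TFLite.
upsample = nn.Upsample(scale_factor=2, mode='nearest')

# Step 2: convert the patched model with TinyNN (placeholder model and paths).
model = nn.Sequential(nn.Conv2d(3, 64, kernel_size=3, padding=1), upsample)
model.eval()
dummy_input = torch.rand(1, 3, 224, 224)
converter = TFLiteConverter(model, dummy_input, tflite_path='model_upsample.tflite')
converter.convert()
```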
@BmanClark I have commented on that issue. Glad you were able to solve it another way.
Original issue description:
The TFLite code doesn't like the shape of the TransposeConv node in my TinyNN-converted network. Consistently across the TransposeConvs, TFLite expects the bias to match the size of the weights' first dimension, but in my converted network it matches the size of the last dimension instead. (The error is on the first TransposeConv, but there are a number of similarly formed ones in the network which will presumably trigger the same error.)
The network runs fine before conversion, and the TFLite model looks very sensible in Netron, but the TFLite runtime doesn't like it...
It gives an error of:
[error text not captured]
The node in question has (according to Netron):
[tensor shapes not captured]
which evidently doesn't match TFLite's rules. Is this a conversion error, and if not, how do I cope with it?
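To make the mismatch concrete, here is a small numpy illustration of the rule as described in this report; all shapes are hypothetical, since the actual error text and tensor sizes aren't captured above:

```python
import numpy as np

# TFLite stores TRANSPOSE_CONV weights in OHWI order: [out_ch, H, W, in_ch],
# and the check in transpose_conv.cc requires len(bias) == weights.shape[0].
weights = np.zeros((16, 2, 2, 64), dtype=np.float32)  # hypothetical shapes

bias_ok = np.zeros(weights.shape[0], dtype=np.float32)    # 16 elements: accepted
bias_bad = np.zeros(weights.shape[-1], dtype=np.float32)  # 64 elements: rejected,
                                                          # matching this report
```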
Incidentally, the biases in all the TransposeConvs are entirely zeros, and the weights are a suspiciously simple arrangement of 1s and 0s. Since the bias is optional for this TFLite operator, is there a way not to include it?