
jwfromm (Contributor) commented May 11, 2019

In some cases it may be necessary to infer the value of a tensor in order to determine an output shape. For example, Fill, Reshape, and Resize take shape arguments whose values must be known at conversion time. Reshape already handled this, but Fill and Resize did not. I've added a function that infers the value of an input and sets the appropriate output shape for these ops.
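The idea can be sketched with plain numpy (the mini-evaluator and its tuple-based expression encoding below are hypothetical illustrations, not the actual Relay API): when an op like Fill takes its shape from a tensor, the frontend evaluates that tensor's subgraph against the known parameters to recover a concrete shape.

```python
import numpy as np

# Hypothetical mini-evaluator sketching what value inference does: evaluate
# a small expression tree against known parameter values so a shape argument
# becomes a concrete array. The ("const"/"placeholder"/"concat") encoding
# is invented for illustration only.
def infer_value(node, params):
    kind = node[0]
    if kind == "const":
        return np.asarray(node[1])
    if kind == "placeholder":
        # every placeholder the shape depends on must be bound in params
        return np.asarray(params[node[1]])
    if kind == "concat":
        return np.concatenate([infer_value(c, params) for c in node[1:]])
    raise ValueError("cannot infer value of %r" % kind)

# Fill's output shape comes from evaluating its shape input:
shape_expr = ("concat", ("const", [2]), ("placeholder", "n"))
dims = infer_value(shape_expr, {"n": [3, 4]})
out = np.full(tuple(int(d) for d in dims), 1.0)   # shape (2, 3, 4)
```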

Also added a DepthToSpace operation, a mapping from ResizeBicubic to _resize_bilinear (better than nothing for now), and made BiasAdd flexible enough to handle non-NHWC layouts.
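For reference, DepthToSpace in NHWC layout can be expressed as a reshape plus transpose. A numpy sketch of the op's semantics (an illustration, not the frontend code itself):

```python
import numpy as np

# Sketch of DepthToSpace semantics (NHWC, depth-column-row ordering):
# move block*block channel groups into spatial positions.
def depth_to_space(x, block):
    n, h, w, c = x.shape
    c_out = c // (block * block)
    x = x.reshape(n, h, w, block, block, c_out)
    x = x.transpose(0, 1, 3, 2, 4, 5)        # interleave blocks with H and W
    return x.reshape(n, h * block, w * block, c_out)

x = np.arange(16, dtype="float32").reshape(1, 2, 2, 4)
y = depth_to_space(x, 2)                     # shape (1, 4, 4, 1)
```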

jwfromm changed the title from "Better shape inference in Relay TensorFlow Frontend." to "[Relay] Better shape inference in TensorFlow Frontend." on May 11, 2019
tqchen (Member) commented May 11, 2019

@srkreddy1238 @Huyuwei @kazum please help review this PR

srkreddy1238 (Contributor) commented

@jwfromm Can you add some test cases for the new operators, and also to cover the infer_value additions?

Thanks

tqchen added the status: need test case label on May 13, 2019
jwfromm (Contributor, Author) commented May 13, 2019

Test cases are nearly finished; however, I think I've found an unrelated Relay bug on the master branch. It looks like Relay fails to build a graph with a simple fill operation when using opt_level >= 2. Here's a snippet that reproduces the error:

```python
x = relay.op.full(relay.expr.const(1.), shape=[32], dtype='float32')
xfunc = relay.expr.Function([], x)
with relay.build_config(opt_level=2):
    graph, lib, params = relay.build(xfunc, "llvm")
```

This triggers the error:

```
TVMError: Check failed: funcs.size() != 0U (0 vs. 0)
```

The above code works as expected with opt_level 0 or 1. Any thoughts on why this is happening? It makes it difficult to test one of the cases for infer_value.

zhiics (Member) commented May 13, 2019

@jwfromm I think this is probably because constant folding optimized the full op away.
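A toy illustration of this failure mode (the tuple IR here is invented for illustration, not Relay's): when every input to an op is constant, constant folding evaluates it at compile time and replaces it with a literal, so a graph that is nothing but a fill can fold down to a single constant with no functions left to compile.

```python
import numpy as np

# Toy constant folder over an invented tuple IR: an op whose inputs are all
# compile-time constants is evaluated eagerly and replaced with a literal.
def fold(expr):
    op, *args = expr
    if op == "full":
        fill_value, shape = args
        return ("const", np.full(shape, fill_value))
    return expr

folded = fold(("full", 1.0, [32]))
# The whole graph became one constant; nothing remains for codegen,
# which is consistent with funcs.size() == 0 in the reported error.
```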

jwfromm force-pushed the tensorflow_parser branch from 2403f6a to f116baa on May 13, 2019 at 22:39
jwfromm (Contributor, Author) commented May 13, 2019

Just pushed with added tests for depth_to_space, resize_bilinear, and fill. In the latter two tests, the output shape is based on the value of placeholders, which requires infer_value to produce a proper output. At high opt_level, Relay was trying to optimize the fill operation away, so I added an argument that changes the opt_level used during testing. Currently fill is the only test that needs this, but it's a useful option to have for future tests.

tqchen (Member) commented May 16, 2019

zhiics (Member) left a comment

It mostly looks good to me. Just left some minor comments. It would be good if @srkreddy1238 or @yongwww can take a look as well.

jwfromm force-pushed the tensorflow_parser branch from f116baa to 95f6830 on May 16, 2019 at 02:20
jwfromm (Contributor, Author) commented May 16, 2019

I know it's not great to add ops this late in the review, but I've also added the BroadcastTo op to the TensorFlow parser, along with corresponding tests. This op also requires infer_value, so it makes sense to include it in this PR. It's a very common operation in TensorFlow, so we'd definitely want to support it eventually regardless.
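BroadcastTo's semantics match numpy-style broadcasting. A quick sketch (a numpy stand-in, not the frontend implementation) of why its shape argument also needs infer_value: the target shape arrives as a tensor value.

```python
import numpy as np

# BroadcastTo expands size-1 axes to a target shape. In the TensorFlow
# graph the target shape is itself a tensor, so the frontend must infer
# its concrete value before it can set the output shape.
x = np.array([[1.0], [2.0]])            # shape (2, 1)
target = np.array([2, 3])               # shape tensor whose value must be known
y = np.broadcast_to(x, tuple(target))   # shape (2, 3)
```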


```python
__all__ = ['from_tensorflow']

def _infer_value(input_val, params):
```
Contributor:

We could add an additional check to make sure all inputs to infer are available in params.
A utility similar to list_input_names() in nnvm would work.

Not sure if we have a similar API in Relay.
cc @zhiics @jroesch @tqchen

Member:

Do you think something like this function could help?
https://github.com/dmlc/tvm/blob/master/python/tvm/relay/backend/interpreter.py#L136

Contributor:

I doubt it fits here.
What we need is the list of Var nodes the current Expr depends on, recursively.

Member:

Can you just take free_vars if it is an expr? Var has name_hint.

Contributor:

free_vars should work.

@jwfromm can you cross-check free_vars with params and add an assert?

Contributor (Author):

Added in the latest push.
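The requested check can be sketched as follows (over a hypothetical mini-IR, not Relay's actual free_vars API): collect the variables an expression depends on recursively, and assert that each one is bound in params before attempting value inference.

```python
# Hypothetical mini-IR for illustration: ("var", name) or ("op", name, *args).
def free_vars(expr):
    if expr[0] == "var":
        return {expr[1]}
    return set().union(set(), *(free_vars(a) for a in expr[2:]))

def check_params(expr, params):
    missing = free_vars(expr) - set(params)
    # mirrors the suggested assertion in _infer_value
    assert not missing, \
        "All inputs to infer must be available in params: %s" % sorted(missing)

expr = ("op", "concat", ("var", "a"), ("op", "neg", ("var", "b")))
check_params(expr, {"a": [1], "b": [2]})   # passes: both vars are bound
```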

srkreddy1238 (Contributor) left a comment

Thanks @jwfromm. Good to go now.

srkreddy1238 merged commit 246b410 into apache:master on May 17, 2019
srkreddy1238 (Contributor) commented

Thanks @zhiics and @jwfromm. This is now merged.

jwfromm deleted the tensorflow_parser branch on May 20, 2019 at 17:11
wweic pushed a commit to wweic/tvm that referenced this pull request Jun 26, 2019
* Some bug fixes in tensorflow graph converter and added DepthToSpace operator.

* Made DepthToSpace better comply with other function syntax.

* Added better shape inference for unusual situations.

* Lint fixes.

* Added depthtospace test.

* Added test cases for value inference and depthtospace.

* Added fill testing.

* Made comment changes and added BroadcastTo op and tests.

* Fixed underlining and unneeded opt_level forcing.

* Added _infer_value assertion that all values to infer are available in passed parameters.
wweic pushed a commit to neo-ai/tvm that referenced this pull request Jun 27, 2019