Hello author, I've found that the graph tracing algorithm may miss some operators during pruning when the network contains multiple parallel computation branches. Running my code gives the error below.
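For reference, here is a minimal sketch of the kind of parallel-branch structure I mean (this is not my actual model; the module names head/attn1/attn2/tail are borrowed from the log below, everything else is assumed):

```python
import torch
import torch.nn as nn


class ParallelBranchNet(nn.Module):
    """Hypothetical minimal example: two parallel conv branches whose
    outputs are combined element-wise, the pattern that appears to
    confuse the pruning graph tracer."""

    def __init__(self):
        super().__init__()
        self.head = nn.Conv2d(3, 32, 3, padding=1)
        self.attn1 = nn.Conv2d(32, 32, 3, padding=1)  # branch 1
        self.attn2 = nn.Conv2d(32, 32, 3, padding=1)  # branch 2, parallel to branch 1
        self.tail = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, x):
        feat = self.head(x)
        a = self.attn1(feat)
        b = self.attn2(feat)
        # Element-wise product across the two parallel branches, analogous to
        # torch.sum(emb_neighbor * embedding_ref, 1) in the traceback below.
        corr = torch.sum(a * b, 1, keepdim=True)
        return self.tail(feat * corr)
```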
INFO (tinynn.graph.modifier) Start tracking tensor dimension changes...
INFO (tinynn.graph.modifier) Start dividing subgraphs according to tensor dependencies...
INFO (tinynn.graph.modifier) Start to eliminate dimension change conflicts...
INFO (tinynn.graph.modifier) Start generating new subgraphs without conflicts...
INFO (tinynn.prune.oneshot_pruner) Register a mask for each operator
INFO (tinynn.prune.oneshot_pruner) subgraph [head] compute over
INFO (tinynn.prune.oneshot_pruner) subgraph [attn2] compute over
INFO (tinynn.prune.oneshot_pruner) subgraph [propagate_fusion_7] compute over
INFO (tinynn.prune.oneshot_pruner) subgraph [conv1] compute over
INFO (tinynn.prune.oneshot_pruner) subgraph [conv3] compute over
INFO (tinynn.prune.oneshot_pruner) subgraph [tail] compute over
INFO (tinynn.prune.oneshot_pruner) Apply the mask of each operator
INFO (tinynn.graph.modifier) [CONV] head: output 32 -> 28
INFO (tinynn.graph.modifier) [CONV] head: bias 32 -> 28
INFO (tinynn.graph.modifier) [CONV] attn1: input 32 -> 28
INFO (tinynn.graph.modifier) [CONV] attn2: input 32 -> 28
INFO (tinynn.graph.modifier) [CONV] attn2: output 32 -> 28
INFO (tinynn.graph.modifier) [CONV] attn2: bias 32 -> 28
INFO (tinynn.graph.modifier) [CONV] propagate_fusion: input 96 -> 84
INFO (tinynn.graph.modifier) [CONV] propagate_fusion: output 32 -> 24
INFO (tinynn.graph.modifier) [CONV] propagate_fusion: bias 32 -> 24
INFO (tinynn.graph.modifier) [CONV] conv3: output 64 -> 56
INFO (tinynn.graph.modifier) [CONV] conv3: bias 64 -> 56
INFO (tinynn.graph.modifier) [CONV] tail: input 64 -> 56
INFO (tinynn.graph.modifier) [CONV] tail: output 32 -> 28
INFO (tinynn.graph.modifier) [CONV] tail: bias 32 -> 28
Traceback (most recent call last):
  File "d:\doc\code\tmp\tinynn\graph\tracer.py", line 3335, in trace
    new_graph.init()
  File "d:\doc\code\tmp\tinynn\graph\tracer.py", line 2033, in init
    self.module(*actual_input)
  File "D:\app\anaconda3\envs\alanosu\lib\site-packages\torch\nn\modules\module.py", line 1148, in _call_impl
    result = forward_call(*input, **kwargs)
  File "d:\doc\code\tmp\tmp.py", line 52, in forward
    corr = torch.sum(emb_neighbor * embedding_ref, 1)
  File "d:\doc\code\tmp\tinynn\graph\tracer.py", line 1045, in new_func
    result = orig_func(*args, **kwargs)
RuntimeError: The size of tensor a (28) must match the size of tensor b (32) at non-singleton dimension 1
ERROR (tinynn.graph.tracer) inputs: ['input_0_f']
ERROR (tinynn.graph.tracer) forwards: ['head', 'shape_0_f', 'getitem_0_f', 'unsqueeze_0_f', 'attn1', 'attn2', 'getitem_1_f', 'unsqueeze_1_f']
ERROR (tinynn.graph.tracer) outputs: []
ERROR (tinynn.graph.tracer) constants: []
You can see that attn1, conv1, and conv2 are all missed: in the mask-application stage only attn1's input is resized (its output and bias stay at 32), and no mask is ever applied to conv1 or conv2, so one parallel branch still produces 32 channels while the other has already been pruned to 28, which is exactly the size mismatch reported above.
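To reproduce, a model like the sketch above can be pushed through the one-shot pruner roughly as follows (a sketch assuming the OneShotChannelPruner entry point documented in the README; the sparsity value and dummy-input shape are only illustrative):

```python
import torch

from tinynn.graph.tracer import model_tracer
from tinynn.prune.oneshot_pruner import OneShotChannelPruner

with model_tracer():
    model = ParallelBranchNet()  # hypothetical parallel-branch model from the sketch above
    dummy_input = torch.rand((1, 3, 64, 64))

# Assumed invocation following the README; 0.125 sparsity matches the 32 -> 28 resizing in the log.
pruner = OneShotChannelPruner(model, dummy_input, {"sparsity": 0.125, "metrics": "l2_norm"})
pruner.prune()  # fails with the RuntimeError above because the parallel branch is not pruned consistently
```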