This repository has been archived by the owner on Dec 11, 2023. It is now read-only.

shape issue in the dynamic_decode with GreedyEmbeddingHelper #117

Open
Ushiao opened this issue Sep 19, 2017 · 11 comments

@Ushiao

Ushiao commented Sep 19, 2017

Hello,

I'm trying dynamic_decode with GreedyEmbeddingHelper, but I get a shape error:

ValueError: The shape for decoder_1/decoder/while/Merge_7:0 is not an invariant for the loop. It enters the loop with shape (1, 128), but has shape (?, 128) after one iteration. Provide shape invariants using either the `shape_invariants` argument of tf.while_loop or set_shape() on the loop variables.
P.S. 128 is the embedding size.

I tried batch_size 2 and it works fine, but it fails with size 1.
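
A minimal sketch of the kind of graph that triggers this (TF 1.x contrib seq2seq; the vocab size, token ids, and GRU cell below are placeholders, not my exact model):

import tensorflow as tf
from tensorflow.contrib import seq2seq

vocab_size, sos_id, eos_id = 1000, 1, 2   # placeholder values
batch_size = 1                            # 2 works, 1 fails

embedding = tf.get_variable("embedding", [vocab_size, 128])
cell = tf.nn.rnn_cell.GRUCell(128)
helper = seq2seq.GreedyEmbeddingHelper(
    embedding, start_tokens=tf.fill([batch_size], sos_id), end_token=eos_id)
decoder = seq2seq.BasicDecoder(
    cell, helper,
    initial_state=cell.zero_state(batch_size, tf.float32),
    output_layer=tf.layers.Dense(vocab_size))
# dynamic_decode raises the shape-invariant ValueError when batch_size == 1
outputs, state, lengths = seq2seq.dynamic_decode(decoder, maximum_iterations=20)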

@bob831009

Hi, I have the same problem. Did you find a solution?

@tohnperfect

I got this exact same problem.

@oahziur
Contributor

oahziur commented Nov 16, 2017

Can you provide a command that reproduces the issue with this codebase? The Python and TensorFlow versions would also help.

@puzzledTao

I use Python 3 + TensorFlow 1.4, and I got the same problem. Did you find a solution?

@ziruizhuang

Python 3.6 + TF 1.5, same here: batch_size = 2 works fine and batch_size = 1 raises the error.

@him593

him593 commented May 14, 2018

Same issue. Also, how does one perform inference for a single sequence if the greedy embedding helper does not accept batch size 1?

@tavianator

This workaround fixes batch size 1 (batch size 0 is still broken):

from tensorflow.contrib import seq2seq

class FixedHelper(seq2seq.GreedyEmbeddingHelper):
    def sample(self, *args, **kwargs):
        result = super().sample(*args, **kwargs)
        # Re-attach the static batch dimension that the helper's sample op loses;
        # batch_size is the Python-int batch size from the enclosing scope.
        result.set_shape([batch_size])
        return result
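
For context, I plug it in just like the normal helper (a sketch; embedding, cell, start_id, end_id, and the Python-int batch_size come from the surrounding graph code):

import tensorflow as tf
from tensorflow.contrib import seq2seq

start_tokens = tf.fill([batch_size], start_id)
helper = FixedHelper(embedding, start_tokens, end_id)
decoder = seq2seq.BasicDecoder(
    cell, helper, initial_state=cell.zero_state(batch_size, tf.float32))
# The set_shape call in FixedHelper keeps the loop variables at a static
# batch dimension, so dynamic_decode no longer trips the shape invariant.
outputs, _, _ = seq2seq.dynamic_decode(decoder, maximum_iterations=50)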

tavianator pushed a commit to Maluuba/qgen-workshop that referenced this issue May 25, 2018
@trungd

trungd commented Jun 27, 2018

@tavianator Is batch_size = 1 in this case? I tried that value, but it was already 1 and I got the same error.

result = Tensor("decoder/decoder/while/BasicDecoderStep/ArgMax:0", shape=(1,), dtype=int32) before set_shape

@roholazandie

Same issue. Also, how does one perform inference for a single sequence if the greedy embedding helper does not accept batch size 1?

I think the best way to do that is to create a dummy sequence (e.g. an empty one) and pass the single sentence together with the dummy one to the greedy embedding helper, so the batch size is 2.
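
Something along these lines (illustrative only; sentence_ids and pad_id are made-up names, not part of any existing API):

import numpy as np

# Stack the single example with a dummy row so the decoder always sees a
# batch of 2, then keep only the first row of the decoded output.
def pad_to_batch_of_two(sentence_ids, pad_id=0):
    dummy = np.full_like(sentence_ids, pad_id)
    return np.stack([sentence_ids, dummy], axis=0)  # shape (2, time)

# ...feed the stacked batch through the decoder and use outputs[0] only.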

@MarcGroef

I'm also having exactly this problem, with Python 2.7.12 and TF 1.13.1.
I can't get the FixedHelper workaround mentioned by @tavianator to work.
When I replace result = super().sample(*args, **kwargs) with result = super(tf.contrib.seq2seq.GreedyEmbeddingHelper, self).sample(*args, **kwargs),
I get "AttributeError: 'NoneType' object has no attribute 'set_shape'".
I used a batch size of 1.

@roholazandie, how would I best go about adding this empty sequence? Should it be added to the initial_state of the BasicDecoder that is then passed to dynamic_decode?
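
(A sketch of a Python 2-compatible form of that workaround: the two-argument super() call has to name FixedHelper itself rather than GreedyEmbeddingHelper, otherwise the lookup starts above GreedyEmbeddingHelper and skips its sample implementation, which would explain the None result.)

from tensorflow.contrib import seq2seq

class FixedHelper(seq2seq.GreedyEmbeddingHelper):
    def sample(self, *args, **kwargs):
        # Python 2: pass the subclass (FixedHelper) to super() so the MRO
        # lookup reaches GreedyEmbeddingHelper.sample as intended.
        result = super(FixedHelper, self).sample(*args, **kwargs)
        result.set_shape([batch_size])  # batch_size from the enclosing scope
        return result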

@MarcGroef

MarcGroef commented Aug 12, 2019

I found a solution to this problem at this link.

The way to go is to set the start tokens in the following manner:

batch_size = tf.shape(self.inputs)[0:1]
start_tokens = tf.ones(batch_size, dtype=tf.int32) * <your start token idx>

Here self.inputs refers to the input tensor.
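
Put together, the inference graph then looks roughly like this (a sketch; self.inputs, embedding, cell, and the token ids stand in for your own variables):

import tensorflow as tf
from tensorflow.contrib import seq2seq

# Taking the batch size from the input tensor keeps the helper's shapes
# dynamic (?, ...), so the while_loop invariants hold for any batch size,
# including 1.
batch_size = tf.shape(self.inputs)[0:1]    # 1-D int32 tensor
start_tokens = tf.ones(batch_size, dtype=tf.int32) * start_token_idx
helper = seq2seq.GreedyEmbeddingHelper(embedding, start_tokens, end_token_idx)
decoder = seq2seq.BasicDecoder(
    cell, helper,
    initial_state=cell.zero_state(tf.shape(self.inputs)[0], tf.float32))
outputs, _, _ = seq2seq.dynamic_decode(decoder, maximum_iterations=50)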
