
Feature/lstm #2


Merged
merged 4 commits into master on Dec 5, 2017

Conversation

dzhwinter (Owner) commented Dec 1, 2017

Add static RNN benchmark scripts (book chapter 6).
Fix PaddlePaddle/Paddle#6156.

    '--use_cprof', action='store_true', help='If set, use cProfile.')
parser.add_argument(
    '--use_nvprof',
    action='store_false',
Collaborator

store_false -> store_true. Disable use_nvprof by default.
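For reference, a minimal stand-alone sketch of the suggested fix (flag names follow the diff above; the profiling logic itself is omitted). With `action='store_true'` the flag defaults to `False`, so `use_nvprof` stays disabled unless `--use_nvprof` is passed; the reviewed `store_false` would have made it default to `True`:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    '--use_cprof', action='store_true', help='If set, use cProfile.')
parser.add_argument(
    '--use_nvprof', action='store_true',
    help='If set, use nvprof. Disabled by default.')

# Without the flag on the command line, use_nvprof is False.
args = parser.parse_args([])
print(args.use_nvprof)  # False

# Passing the flag turns it on.
args = parser.parse_args(['--use_nvprof'])
print(args.use_nvprof)  # True
```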

Owner Author

done.


c_pre_init = fluid.layers.fill_constant(
    dtype=emb.dtype, shape=[batch_size, emb_dim], value=0.0)
layer_1_out = fluid.layers.lstm(
Collaborator

Maybe we can add an argument that allows stacking multiple LSTM layers? Something like:

lstm_var = emb
for i in range(arg.lstm_num):
    lstm_var = fluid.layers.lstm(lstm_var)
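A runnable sketch of this suggestion, using a hypothetical `--lstm_num` flag and a toy `apply_lstm` placeholder in place of `fluid.layers.lstm` (which needs a Paddle program context to run):

```python
import argparse

def apply_lstm(var):
    # Toy stand-in for fluid.layers.lstm: wraps the input so the
    # stacking depth is visible in the result.
    return ('lstm', var)

parser = argparse.ArgumentParser()
parser.add_argument(
    '--lstm_num', type=int, default=1,
    help='Number of stacked LSTM layers.')
args = parser.parse_args(['--lstm_num', '3'])

lstm_var = 'emb'  # stand-in for the embedding output
for _ in range(args.lstm_num):
    lstm_var = apply_lstm(lstm_var)

print(lstm_var)  # ('lstm', ('lstm', ('lstm', 'emb')))
```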

Owner Author

fixed.


for i in range(stacked_num):
    layer_1_out = fluid.layers.lstm(
        layer_1_out, c_pre_init=c_pre_init, hidden_dim=emb_dim)
    layer_1_out = fluid.layers.transpose(x=layer_1_out, axis=[1, 0, 2])
Collaborator

The fluid.layers.transpose call should be outside the for loop.
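A sketch of the corrected loop (fluid 0.x API as in the diff above, assuming `layer_1_out` starts from the embedding output; not runnable outside a Paddle program). The transpose is hoisted so it runs once after the final LSTM layer rather than on every iteration:

```python
layer_1_out = emb
for i in range(stacked_num):
    layer_1_out = fluid.layers.lstm(
        layer_1_out, c_pre_init=c_pre_init, hidden_dim=emb_dim)
# Applied once, after the stack: swap the batch and sequence axes.
layer_1_out = fluid.layers.transpose(x=layer_1_out, axis=[1, 0, 2])
```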

Owner Author

fixed.

@dzhwinter dzhwinter merged commit 0f93e59 into master Dec 5, 2017
Development

Successfully merging this pull request may close these issues:

static rnn benchmark and finetune
2 participants