trainer.test() #10453
Conversation
Two problems I have encountered:
python/paddle/fluid/trainer.py
        accumulated_loss += loss[0]
        count += 1

    return accumulated_loss / count
Should we return Accuracy as well?
Based on our discussion, we define the elements of the returned tuple, in order, as (loss, accuracy, f1_score, ...).
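To make that convention concrete, here is a minimal sketch (not the PR's actual code) of a test loop that averages every fetched metric over the reader and returns them as one tuple. The helper name run_test and the surrounding setup (exe, test_program, feeder, reader, fetch_list) are assumed for illustration.

```python
# Minimal sketch, assuming exe is a fluid Executor, feeder a DataFeeder,
# and fetch_list the [loss, accuracy, ...] variables of the test program.
def run_test(exe, test_program, feeder, reader, fetch_list):
    accumulated = [0.0] * len(fetch_list)
    count = 0
    for data in reader():
        # exe.run returns one numpy array per fetched variable.
        outs = exe.run(test_program,
                       feed=feeder.feed(data),
                       fetch_list=fetch_list)
        for i, out in enumerate(outs):
            accumulated[i] += out[0]
        count += 1
    # Returned in the agreed order: (loss, accuracy, f1_score, ...).
    return tuple(metric / count for metric in accumulated)
```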
After a discussion with @wangkuiyi and @daming-lu, we have reached two conclusions:
        order in program
        """

        return self._test_by_executor(reader, feed_order, self.test_outputs)
Why put all the code into a separate function?
Maybe in the future we will need to support _test_by_parallel_executor. Here is a reserved interface for switching between multiple executors.
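A hedged sketch of that "reserved interface" idea: test() stays a thin dispatcher, so a parallel-executor path could be slotted in later without changing the public signature. The use_parallel_executor flag and the NotImplementedError branch are hypothetical and not part of this PR; only _test_by_executor and self.test_outputs come from the diff above.

```python
class Trainer(object):
    # ... other methods omitted ...

    def test(self, reader, feed_order):
        # Hypothetical flag; a future _test_by_parallel_executor could hook in here.
        if getattr(self, 'use_parallel_executor', False):
            raise NotImplementedError(
                'testing with a parallel executor is not supported yet')
        return self._test_by_executor(reader, feed_order, self.test_outputs)
```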
next_word = fluid.layers.data(name='nextw', shape=[1], dtype='int64')
# The declaration of 'next_word' must come after the call to inference_program,
# or the data input order of the train program would be [next_word, firstw,
# secondw, thirdw, forthw], which is not correct.
predict_word = inference_program(is_sparse)
I think we can also specify the feed order in the call itself:
trainer.train(....., feed_order=['firstw', 'secondw', 'thirdw', 'forthw', 'next_word'])
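A small sketch of that suggestion, assuming an explicit feed_order is accepted: once the order is passed to the trainer, the declaration order of the data layers no longer matters. The layer names come from the word2vec snippet above (the last one is 'nextw'); the train call is only indicated in a comment because its other arguments are elided in the review.

```python
import paddle.fluid as fluid

def declare_word_inputs():
    # Declaration order becomes irrelevant when feed_order is given explicitly.
    names = ['firstw', 'secondw', 'thirdw', 'forthw', 'nextw']
    return [fluid.layers.data(name=n, shape=[1], dtype='int64') for n in names]

# The explicit order would then be supplied at call time, e.g.
# trainer.train(....., feed_order=['firstw', 'secondw', 'thirdw', 'forthw', 'nextw'])
```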
Otherwise, LGTM
    def _test_by_executor(self, reader, feed_order, fetch_list):
        with executor.scope_guard(self.scope):
            feed_var_list = build_feed_var_list(self.test_program, feed_order)
Trainer.train takes in feed_order. Will it be possible to re-use that feed_order so we don't need to calculate it again here?
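One hedged way to picture the reuse: if train() remembered the feed_order it was called with, test() could fall back to it instead of requiring the caller to pass it again. This is only a sketch of the idea, not the PR's behavior; self._last_feed_order is a hypothetical attribute, while _test_by_executor and self.test_outputs come from the diff above.

```python
class Trainer(object):
    # ... other methods omitted ...

    def test(self, reader, feed_order=None):
        # Fall back to the order remembered from a previous train() call;
        # _last_feed_order is a hypothetical cache attribute for illustration.
        if feed_order is None:
            feed_order = getattr(self, '_last_feed_order', None)
        if feed_order is None:
            raise ValueError(
                'feed_order must be passed to test() or to a prior train() call')
        return self._test_by_executor(reader, feed_order, self.test_outputs)
```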
Fixes #10363