Conversation

xinghai-sun (Contributor)

Resolve #278

@xinghai-sun xinghai-sun requested review from kuke and pkuyym September 18, 2017 13:55
#### Multiple GPU Efficiency
#### Acceleration with Multi-GPUs

We compare the training time with 1, 2, 4, 8, and 16 K40 GPUs (on a subset of LibriSpeech samples whose audio durations are between 6.0 and 7.0). The results show a **near-linear** acceleration with multiple GPUs. In the following figure, the training time (in seconds) is plotted as the blue bars.
Contributor

K40m?

Contributor Author

--> Tesla K40m

@pkuyym (Contributor) left a comment

LGTM

#### Multiple GPU Efficiency
#### Acceleration with Multi-GPUs

We compare the training time with 1, 2, 4, 8, and 16 Tesla K40m GPUs (on a subset of LibriSpeech samples whose audio durations are between 6.0 and 7.0). The results show a **near-linear** acceleration with multiple GPUs. In the following figure, the training time (in seconds) is plotted as the blue bars.
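The "near-linear" claim can be checked by computing speedup and parallel efficiency from the measured training times. A minimal sketch below; the timing values are placeholders for illustration, not the actual figures from the PR's benchmark plot:

```python
# Hypothetical per-GPU-count training times in seconds (placeholder data,
# NOT the measurements from the PR). Near-linear scaling means efficiency
# stays close to 1.0 as the GPU count grows.
train_time = {1: 1600.0, 2: 820.0, 4: 430.0, 8: 230.0, 16: 130.0}

for n in sorted(train_time):
    speedup = train_time[1] / train_time[n]   # relative to single-GPU run
    efficiency = speedup / n                  # 1.0 == perfectly linear
    print(f"{n:2d} GPUs: speedup {speedup:5.2f}x, efficiency {efficiency:.0%}")
```

With real measurements substituted in, an efficiency that stays high (e.g. above ~80%) across 2 to 16 GPUs is what "near-linear" acceleration means here.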
Collaborator


6.0 and 7.0, what is the unit?

Contributor Author


Done.

@xinghai-sun xinghai-sun merged commit 89dd9ae into PaddlePaddle:develop Sep 18, 2017
@xinghai-sun xinghai-sun deleted the doc_efficiency branch September 18, 2017 15:59