
use OpenMP to parallelize learning #1

Open
redpony opened this issue Mar 13, 2013 · 0 comments
redpony commented Mar 13, 2013

During learning, computing the loss and its gradient with respect to the parameters (especially with large numbers of training instances or features) can be quite expensive. OpenMP (http://openmp.org/wp/), which g++ supports out of the box, could easily be used to parallelize this computation. Basically, all loops of the form
for (unsigned i = 0; i < training.size(); ++i)
are good candidates for parallelization. Judging from the OpenMP documentation, such "reductions" over the gradient will have to be implemented by giving each thread its own gradient buffer and summing the buffers at the end (although the summing itself could also be parallelized).
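
A minimal sketch of the per-thread buffer idea, assuming a hypothetical `Instance` type and a placeholder `PointwiseLossAndGradient` function (the real types and per-instance loss live elsewhere in this codebase):

```c++
#include <omp.h>

#include <cstddef>
#include <vector>

// Placeholder training instance; the real type lives elsewhere.
struct Instance {};

// Hypothetical per-instance loss; adds this instance's gradient into *grad.
double PointwiseLossAndGradient(const Instance& x,
                                const std::vector<double>& params,
                                std::vector<double>* grad) {
  (void)x; (void)params; (void)grad;
  return 0.0;  // stub so the sketch compiles
}

double LossAndGradient(const std::vector<Instance>& training,
                       const std::vector<double>& params,
                       std::vector<double>* grad) {
  const int num_threads = omp_get_max_threads();
  // One gradient buffer per thread so threads never write to shared state.
  std::vector<std::vector<double> > local_grads(
      num_threads, std::vector<double>(params.size(), 0.0));
  double loss = 0.0;
  // The scalar loss can use OpenMP's built-in reduction clause directly.
  #pragma omp parallel for reduction(+ : loss)
  for (unsigned i = 0; i < training.size(); ++i) {
    std::vector<double>& g = local_grads[omp_get_thread_num()];
    loss += PointwiseLossAndGradient(training[i], params, &g);
  }
  // Sum the per-thread buffers into the output gradient; this final
  // pass could itself be parallelized over parameter indices.
  grad->assign(params.size(), 0.0);
  for (size_t t = 0; t < local_grads.size(); ++t)
    for (size_t j = 0; j < params.size(); ++j)
      (*grad)[j] += local_grads[t][j];
  return loss;
}
```

Compile with `g++ -fopenmp`; the thread count can then be controlled at run time via `OMP_NUM_THREADS`.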
