
GPU Training #2

Closed
alexrisman opened this issue Jan 7, 2016 · 3 comments

Comments

@alexrisman

Hi, does this library support training on GPU?

@zhongkaifu
Owner

Currently, RNNSharp doesn't support training on GPU, but I do have a plan for that. :)

zhongkaifu added a commit that referenced this issue Jan 9, 2016
@BackT0TheFuture

Great to hear that. There's a project named "CUDAFY.NET" that allows easy development of high-performance CUDA applications in C#: [CUDAFY.NET](http://cudafy.codeplex.com/)
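For context, a minimal CUDAfy.NET vector-add sketch, adapted from the patterns shown in the library's own samples (the project is no longer maintained, so treat the API names here as assumptions rather than a verified reference):

```csharp
using Cudafy;
using Cudafy.Host;
using Cudafy.Translator;

public class VectorAdd
{
    public const int N = 1024;

    // Methods marked [Cudafy] are translated to CUDA C at runtime.
    [Cudafy]
    public static void Add(GThread thread, int[] a, int[] b, int[] c)
    {
        int tid = thread.blockIdx.x;
        if (tid < N)
            c[tid] = a[tid] + b[tid];
    }

    public static void Main()
    {
        // Translate the [Cudafy] methods and load them onto the GPU.
        CudafyModule km = CudafyTranslator.Cudafy();
        GPGPU gpu = CudafyHost.GetDevice(eGPUType.Cuda);
        gpu.LoadModule(km);

        int[] a = new int[N], b = new int[N], c = new int[N];
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

        // Allocate device buffers and copy the inputs over.
        int[] devA = gpu.CopyToDevice(a);
        int[] devB = gpu.CopyToDevice(b);
        int[] devC = gpu.Allocate<int>(c);

        // Launch N blocks of 1 thread each, then copy the result back.
        gpu.Launch(N, 1).Add(devA, devB, devC);
        gpu.CopyFromDevice(devC, c);

        gpu.FreeAll();
    }
}
```

The appeal for a library like RNNSharp is that the GPU kernels stay in C#, so the same matrix/vector routines could in principle be shared between CPU and GPU code paths.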

@zhongkaifu
Owner

Thanks for pointing it out. I will look at CUDAFY.NET to see whether we can use it.

zhongkaifu added a commit that referenced this issue Jan 27, 2016
…running validation

#2. Support model vector quantization to reduce model size to 1/4 of the original
#3. Refactoring code and speed up training
#4. Fixing feature extracting bug
zhongkaifu added a commit that referenced this issue Feb 15, 2016
#2. Improve BiRNN learning process
#3. Support training a model without a validation corpus
zhongkaifu added a commit that referenced this issue Feb 24, 2016
#2. Optimize LSTM encoding to improve performance significantly
#3. Apply dynamic learning rate
zhongkaifu added a commit that referenced this issue Feb 25, 2016
#2. Improve encoding performance by SIMD instructions
zhongkaifu added a commit that referenced this issue Feb 25, 2016
zhongkaifu added a commit that referenced this issue Mar 9, 2016
#2. Execute CRF forward-backward in parallel
zhongkaifu added a commit that referenced this issue Mar 9, 2016
#2. Update readme file
zhongkaifu added a commit that referenced this issue Mar 9, 2016
#2. Normalize LSTM cell value in weights updating
zhongkaifu added a commit that referenced this issue Jul 8, 2016
…m input layer.

#2. Refactoring dropout layer and output layer
#3. Refactoring layer initialization
zhongkaifu added a commit that referenced this issue Nov 30, 2016
#2. Fix bug in softmax output layer when computing hidden layer value
#3. Refactoring code
zhongkaifu added a commit that referenced this issue Dec 22, 2016
#2. Refactor configuration file and command-line parameters
#3. Use SIMD for backward pass in output layer
zhongkaifu added a commit that referenced this issue Jan 6, 2017
zhongkaifu added a commit that referenced this issue Jan 24, 2017
…o encoder is used.

#2. For seq2seq autoencoder, concatenate first top hidden layer and last top hidden layer as final encoder output for decoder.
zhongkaifu added a commit that referenced this issue Feb 5, 2017
… is worse than LSTM

#2. Fix backward bug in Dropout layer
#3. Refactoring code
zhongkaifu added a commit that referenced this issue Feb 5, 2017
…hidden layer is more than 1

#2. Improve training part of bi-directional RNN: we no longer re-run the forward pass before updating weights
#3. Fix bugs in Dropout layer
#4. Change hidden layer settings in configuration file.
#5. Refactoring code
ericxsun pushed a commit to ericxsun/RNNSharp that referenced this issue Feb 9, 2017
ericxsun pushed a commit to ericxsun/RNNSharp that referenced this issue Feb 9, 2017
ericxsun pushed a commit to ericxsun/RNNSharp that referenced this issue Feb 9, 2017
…in test model

zhongkaifu#2. Update: training can end early if the current PPL is larger than the previous one

Signed-off-by: Zhongkai Fu <fuzhongkai@gmail.com>
ericxsun pushed a commit to ericxsun/RNNSharp that referenced this issue Feb 9, 2017
zhongkaifu#2. Use error-token ratio to verify validation-set performance
zhongkaifu added a commit that referenced this issue Feb 19, 2017
zhongkaifu added a commit that referenced this issue Mar 8, 2017
#2. Refactoring code
#3. Make RNNDecoder thread-safe
zhongkaifu added a commit that referenced this issue Mar 21, 2017
zhongkaifu added a commit that referenced this issue May 3, 2017
#2. Improve training performance by ~300%
#3. Fix learning rate update bug
#4. Apply SIMD instruction to update error in layers
#5. Code refactoring
Labels
None yet
Projects
None yet
Development

No branches or pull requests

3 participants