Implementing basic RNN #162


Draft · wants to merge 31 commits into base: main

Changes from 1 commit of 31:
64e1b69
Prototyping RNN layer based on Dense
castelao Oct 26, 2023
8c11911
Extending uses
castelao Oct 26, 2023
adef7d7
Reading coefficients from h5f model
castelao Oct 26, 2023
b51d66f
feat: get_params()
castelao Oct 26, 2023
a797502
feat: set_params()
castelao Oct 26, 2023
ff1c392
feat: get_num_params()
castelao Oct 26, 2023
f686950
Initializing recurrent kernel and states
castelao Oct 26, 2023
acf1afd
feat: forward()
castelao Oct 26, 2023
69fed32
More informative error messages
castelao Oct 27, 2023
fd24e16
Minor adjustments on rnn_layer
castelao Oct 27, 2023
7415081
Constructor for RNN
castelao Oct 28, 2023
6f56863
Loading rnn constructor in the root
castelao Oct 28, 2023
ad598a8
Back to 1D concept
castelao Oct 31, 2023
0ae7af1
fix: Recurrent is actually a square matrix
castelao Oct 31, 2023
c164924
Apply loss function if RNN is the output layer
castelao Nov 1, 2023
55ad96d
fix: Getting biases
castelao Nov 1, 2023
b345865
Allowing backward 1D from dense to RNN
castelao Nov 1, 2023
91b85e0
Allowing backward 1D from RNN
castelao Nov 1, 2023
5e197f0
Allowing forward from dense to RNN
castelao Nov 1, 2023
7f671c8
Allowing forward from RNN
castelao Nov 1, 2023
c27f59c
Getting output from RNN
castelao Nov 1, 2023
524d2c4
feat: Implementing reset state for RNN
castelao Nov 1, 2023
598f9e7
refactor: set_state() on layer level
castelao Nov 6, 2023
b7bead6
wip: A simple RNN example
castelao Nov 14, 2023
088e4f3
feat: layer getting gradient from RNN
castelao Nov 14, 2023
4d0a4fd
feat: layer setting params for RNN
castelao Nov 14, 2023
ee516a8
Might not use set_state at rnn_layer level
castelao Nov 14, 2023
07f7587
fix: New access point to 'loss % derivative'
castelao Jun 25, 2024
9b22826
Define set_state as pure
castelao Jun 30, 2024
5bc9bc5
fix: pure interface for set_state
castelao Jun 30, 2024
4c7c0b9
fix: Reconciling with latest main state
castelao Oct 21, 2024
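
Taken together, the commits above suggest a layer type roughly like the following. This is a reconstruction for orientation only: the component names match the diff below, the procedure names match the commit messages, and the array shapes are assumptions based on the Dense layer this prototype started from.

! Hypothetical layer type inferred from the commit log;
! not the literal code in this PR.
type :: rnn_layer
  integer :: input_size
  integer :: output_size
  real, allocatable :: weights(:,:)    ! input kernel; shape assumed (input_size, output_size)
  real, allocatable :: recurrent(:,:)  ! recurrent kernel; square, see the diff below
  real, allocatable :: biases(:)       ! one bias per neuron
  real, allocatable :: state(:)        ! hidden state carried across time steps
contains
  procedure :: init
  procedure :: forward
  procedure :: set_state      ! defined pure per commits 9b22826 and 5bc9bc5
  procedure :: get_params
  procedure :: set_params
  procedure :: get_num_params
end type rnn_layer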
Initializing recurrent kernel and states
castelao committed Sep 15, 2024
commit f686950fec42ef5fc96bdc88818e007f55e8a2a0
9 changes: 9 additions & 0 deletions src/nf/nf_rnn_layer_submodule.f90
@@ -131,9 +131,18 @@ module subroutine init(self, input_shape)
      call random_normal(self % weights)
      self % weights = self % weights / self % input_size

+     ! The recurrent weights are a 2-d square array of this layer's output size.
+     ! Each neuron is adjusted by each state element times a recurrent weight.
+     allocate(self % recurrent(self % output_size, self % output_size))
+     call random_normal(self % recurrent)
+     self % recurrent = self % recurrent / self % output_size
+
      ! Broadcast weights to all other images, if any.
      call co_broadcast(self % weights, 1)

+     allocate(self % state(self % output_size))
+     self % state = 0
+
      allocate(self % biases(self % output_size))
      self % biases = 0

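
For reference, components like those initialized above typically implement the standard Elman-style update h(t) = tanh(x(t) * W + R * h(t-1) + b). Below is a minimal sketch of such a forward step, assuming a tanh activation and the shapes from the hypothetical type sketched earlier; the actual forward() added in commit acf1afd may differ.

! Sketch of an Elman RNN step; assumes weights(input_size, output_size)
! and the square recurrent kernel allocated in this commit.
subroutine forward(self, input)
  class(rnn_layer), intent(in out) :: self
  real, intent(in) :: input(:)
  ! Combine the current input, the previous state, and the biases,
  ! then apply the activation to produce the new state.
  self % state = tanh( &
    matmul(input, self % weights) &
    + matmul(self % recurrent, self % state) &
    + self % biases &
  )
end subroutine forward

Because state persists between calls, driving the layer over a sequence is just repeated calls to forward(), one per time step, with set_state() (or the state reset added in commit 524d2c4) used to clear the state between sequences.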