High-dimensional semilinear PDE solver using deep BSDE algorithm #9
Conversation
added test cases
- fixed the mistake with the neural network
- fixed the mistake with the dimension of the Brownian motion process
x_sde(x_dim, t, dwa) = [x_dim[i] + μ(t, x_dim[i])*dt + σ(t, x_dim[i])*dwa[i] for i = 1:d]
get_x_sde(x_cur, l, dwA) = [x_sde(x_cur[i], ts[l], dwA[i]) for i = 1:m]
reduceN(x_dim, l, dwA) = sum([gradU*dwA[i] for (i, gradU) in enumerate(chains[l](x_dim))])
gradU is a function?
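To illustrate the reviewer's point: inside the comprehension, the iteration variable rebinds gradU to each scalar output of the network, shadowing any outer function of the same name. A minimal, hypothetical reproduction (the network here is a stand-in, not the PR's actual chain):

```julia
# Hypothetical reproduction of the naming clash: the comprehension variable
# `gradU` shadows the outer function `gradU` and is just a Float64 inside.
gradU(n) = x -> fill(0.5, n)      # outer `gradU`: builds a fake "network"
net = gradU(3)                    # maps any input to 3 constant outputs
dwA = [0.1, 0.2, 0.3]
# here `gradU` is a scalar element of net(0.0), not the function above
reduceN = sum([gradU * dwA[i] for (i, gradU) in enumerate(net(0.0))])
```

Renaming either the outer constructor or the loop variable would remove the ambiguity the reviewer is asking about.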
opt = neuralNetworkParams[2]
U0(hide_layer_size, d) = neuralNetworkParams[3](hide_layer_size, d)
gradU(hide_layer_size, d) = neuralNetworkParams[4](hide_layer_size, d)
I think this is a good spot for unicode on the grad.
function sol()
    x_cur = x_0
    U = [chainU(x)[1] for x in x_0]
    global x_prev
Why global this? That shouldn't be needed; local x_prev should be fine.
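A minimal sketch of the suggested change, assuming x_prev only needs to survive across the time loop inside sol (the loop body here is illustrative, not the PR's actual update):

```julia
# Sketch: declare x_prev local to sol() instead of mutating a global.
function sol(x_0)
    local x_prev            # scoped to this function; no `global` needed
    x_cur = x_0
    for l in 1:3            # stand-in for the time-stepping loop
        x_prev = x_cur      # assignment stays inside sol's scope
        x_cur = x_cur .+ 1
    end
    return x_cur, x_prev
end
```

The `local` declaration ensures the assignments inside the for loop bind the function-level variable rather than creating a loop-local one.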
U0(hide_layer_size, d) = neuralNetworkParams[3](hide_layer_size, d)
gradU(hide_layer_size, d) = neuralNetworkParams[4](hide_layer_size, d)
chains = [gradU(hide_layer_size, d) for i = 1:length(ts)]
I think it would be much easier to read if it was just
u0 = neuralNetworkParams[3](hide_layer_size, d)
∇u = [neuralNetworkParams[4](hide_layer_size, d) for i=1:length(ts)]
since then it would be notationally a lot closer.
ps = Flux.params(chainU, chains...)
# brownian motion
dw(dt) = sqrt(dt) * randn()
Delete this so you can just use dW for the Brownian motions. It took a bit to see that dwa was just a renaming of the dw array since dw was taken.
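One way to follow the suggestion is to sample the whole d-dimensional increment in a single call, so only one name is needed (a sketch; the signature is an assumption, not the PR's code):

```julia
# Sketch: sample the full d-dimensional Brownian increment at once,
# so a single name `dW` suffices and no scalar `dw` has to be renamed around.
dW(dt, d) = sqrt(dt) .* randn(d)
increment = dW(0.01, 4)    # one increment vector for one time step
```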
# the Euler-Maruyama scheme
x_sde(x_dim, t, dwa) = [x_dim[i] + μ(t, x_dim[i])*dt + σ(t, x_dim[i])*dwa[i] for i = 1:d]
get_x_sde(x_cur, l, dwA) = [x_sde(x_cur[i], ts[l], dwA[i]) for i = 1:m]
This is going to need to iterate through time. I am not sure how x_cur[i] is supposed to act as the previous value here?
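The iteration the reviewer has in mind might look like the following sketch, where each Euler-Maruyama step consumes the previous step's output rather than indexing into x_cur (drift, diffusion, and the loop bounds are placeholders, not the PR's definitions):

```julia
# Sketch of iterating the Euler-Maruyama step through time: each step's
# output is the next step's input.
μ(t, x) = 0.0                   # placeholder drift
σ(t, x) = 1.0                   # placeholder scalar diffusion
d, dt = 2, 0.01
x_step(x, t, dw) = [x[i] + μ(t, x[i])*dt + σ(t, x[i])*dw[i] for i in 1:d]

function simulate(ts)
    x = zeros(d)                     # X at the initial time
    for t in ts
        dw = sqrt(dt) .* randn(d)    # fresh Brownian increment per step
        x = x_step(x, t, dw)         # previous state feeds the next step
    end
    return x
end

x_T = simulate(0.0:dt:0.1)
```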
get_x_sde(x_cur, l, dwA) = [x_sde(x_cur[i], ts[l], dwA[i]) for i = 1:m]
reduceN(x_dim, l, dwA) = sum([gradU*dwA[i] for (i, gradU) in enumerate(chains[l](x_dim))])
getN(x_cur, l, dwA) = [reduceN(x_cur[i], l, dwA[i]) for i = 1:m]
I'm not sure what this is doing. What equation does this correspond to?
# brownian motion
dw(dt) = sqrt(dt) * randn()
# the Euler-Maruyama scheme
x_sde(x_dim, t, dwa) = [x_dim[i] + μ(t, x_dim[i])*dt + σ(t, x_dim[i])*dwa[i] for i = 1:d]
σ is not necessarily diagonal, so you cannot just broadcast along the dimensions here.
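In the general case σ(t, x) is a d×d matrix, so the step needs a matrix-vector product rather than a per-dimension broadcast. A minimal sketch under that assumption (μmat and σmat are hypothetical names, not the PR's functions):

```julia
# Sketch of the non-diagonal case: σ(t, x) returns a d×d matrix, so the
# Euler-Maruyama step uses a matrix-vector product with the increment dW.
μmat(t, x) = zeros(length(x))        # placeholder drift
σmat(t, x) = [1.0 0.5; 0.0 1.0]      # non-diagonal diffusion, d = 2
em_step(x, t, dt, dW) = x .+ μmat(t, x) .* dt .+ σmat(t, x) * dW

xnext = em_step([0.0, 0.0], 0.0, 0.01, [0.1, 0.2])
```

The diagonal version in the diff is the special case where σmat(t, x) * dW reduces to elementwise products.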
Prototype of one-dimensional PDE solver using deep BSDE algorithm