
high-dimensional semilinear pde solver using deep bsde algorithm #9

Closed
wants to merge 4 commits

Conversation

KirillZubov (Member)

prototype of one-dimensional pde solver using deep bsde algorithm

@KirillZubov KirillZubov marked this pull request as ready for review July 15, 2019 12:34
@KirillZubov KirillZubov changed the title from "prototype of one-dimensional pde solver using deep bsde algorithm" to "high-dimensional semilinear pde solver using deep bsde algorithm" Jul 15, 2019
@KirillZubov KirillZubov requested review from ChrisRackauckas and akaysh and removed request for ChrisRackauckas and akaysh July 15, 2019 12:57
x_sde(x_dim, t, dwa) = [x_dim[i] + μ(t, x_dim[i])*dt + σ(t, x_dim[i])*dwa[i] for i = 1:d]

get_x_sde(x_cur, l, dwA) = [x_sde(x_cur[i], ts[l], dwA[i]) for i = 1:m]
reduceN(x_dim, l, dwA) = sum([gradU*dwA[i] for (i, gradU) in enumerate(chains[l](x_dim))])
Member
gradU is a function?
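
Reading the quoted hunk, gradU inside the comprehension appears to be a scalar component of the network output chains[l](x_dim), shadowing the gradU constructor defined below; as a sketch, a less ambiguous spelling (the ∂u name is hypothetical, not from this PR):

# Hypothetical rename: ∂u is one scalar component of chains[l](x_dim),
# so it no longer shadows the gradU(hide_layer_size, d) constructor.
reduceN(x_dim, l, dwA) = sum(∂u * dwA[i] for (i, ∂u) in enumerate(chains[l](x_dim)))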

opt = neuralNetworkParams[2]

U0(hide_layer_size, d) = neuralNetworkParams[3](hide_layer_size, d)
gradU(hide_layer_size, d) = neuralNetworkParams[4](hide_layer_size, d)
Member
I think this is a good spot for unicode on the grad.

function sol()
x_cur = x_0
U = [chainU(x)[1] for x in x_0]
global x_prev
Member
why global this? that shouldn't be needed. local x_prev should be fine.
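
A minimal sketch of that change, using only the names from the quoted hunk (the elided body of sol is an assumption):

function sol()
    x_cur = x_0
    U = [chainU(x)[1] for x in x_0]
    local x_prev    # scoped to sol(); no global state needed
    # ... time-stepping updates x_prev and x_cur ...
end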

U0(hide_layer_size, d) = neuralNetworkParams[3](hide_layer_size, d)
gradU(hide_layer_size, d) = neuralNetworkParams[4](hide_layer_size, d)

chains = [gradU(hide_layer_size, d) for i=1:length(ts)]
Member
I think it would be much easier to read if it was just

u0 = neuralNetworkParams[3](hide_layer_size, d)
∇u = [neuralNetworkParams[4](hide_layer_size, d) for i=1:length(ts)]

since then it would be notationally a lot closer.

ps = Flux.params(chainU, chains...)

# brownian motion
dw(dt) = sqrt(dt) * randn()
ChrisRackauckas (Member) Jul 16, 2019
delete this so you can just use dW for the Brownian motions. It took a bit to see that dwa was just a renaming of dw array since dw was taken.
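
A sketch of what that could look like, assuming d-dimensional noise (the dW binding here is a fresh local name, not code from this PR):

# Sample a d-dimensional Brownian increment directly where it is used,
# so no dw(dt) helper (and no dwa/dwA renaming) is needed.
dW = sqrt(dt) .* randn(d)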

# the Euler-Maruyama scheme
x_sde(x_dim, t, dwa) = [x_dim[i] + μ(t, x_dim[i])*dt + σ(t, x_dim[i])*dwa[i] for i = 1:d]

get_x_sde(x_cur, l, dwA) = [x_sde(x_cur[i], ts[l], dwA[i]) for i = 1:m]
Member
this is going to need to be iterating through time. I am not sure how x_cur[i] is supposed to act as the previous value here?
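
For comparison, a minimal time-marching Euler-Maruyama loop under the PR's definitions of μ, σ, ts, dt, and m trajectories (the loop structure is a sketch, not code from this PR):

# March each of the m trajectories through the time grid, so the state
# at step l+1 is built from the state at step l (the previous value).
x = copy(x_0)                       # m trajectories, each a d-vector
for l in 1:length(ts)-1
    dW = [sqrt(dt) .* randn(d) for _ in 1:m]
    x = [x[j] .+ μ.(ts[l], x[j]) .* dt .+ σ.(ts[l], x[j]) .* dW[j] for j in 1:m]
end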


get_x_sde(x_cur, l, dwA) = [x_sde(x_cur[i], ts[l], dwA[i]) for i = 1:m]
reduceN(x_dim, l, dwA) = sum([gradU*dwA[i] for (i, gradU) in enumerate(chains[l](x_dim))])
getN(x_cur, l, dwA) = [reduceN(x_cur[i], l, dwA[i]) for i = 1:m]
Member
I'm not sure what this is doing. What equation does this correspond to?
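
For reference, this looks like the martingale term in the deep BSDE rollout of E, Han, and Jentzen; as a sketch, the discretized update there is

    u_{n+1} = u_n - f(t_n, X_n, u_n, σᵀ∇u_n) Δt + (∇u_n)ᵀ σ(t_n, X_n) ΔW_n

so reduceN seems to be computing (∇u_n)ᵀ ΔW_n, i.e. the σ = I case; whether σ is meant to be folded in elsewhere is not clear from the hunk.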

# brownian motion
dw(dt) = sqrt(dt) * randn()
# the Euler-Maruyama scheme
x_sde(x_dim, t, dwa) = [x_dim[i] + μ(t, x_dim[i])*dt + σ(t, x_dim[i])*dwa[i] for i = 1:d]
Member
sigma is not necessarily diagonal, so you cannot essentially broadcast along the dimensions here.
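
A sketch of the general step, assuming σ(t, x) returns a d×d matrix, μ(t, x) returns the d-vector drift, and dW is a d-vector (all three signatures are assumptions; the PR treats everything component-wise):

# General Euler-Maruyama step: with a full d×d diffusion matrix the
# noise term is a matrix-vector product, not an elementwise broadcast.
x_sde(x, t, dW) = x .+ μ(t, x) .* dt .+ σ(t, x) * dW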
