Initialising the Optimal Stopping Time Problem #84

Merged: 15 commits, May 26, 2020

Conversation

ashutosh-b-b (Contributor)

No description provided.

@kanav99 (Contributor) commented May 9, 2020

You are using so many globals; try to pass them through arguments instead, since globals hit performance hard.

@ashutosh-b-b (Contributor, Author)

It's an error on my part. All the variables are declared locally. I am changing it in the next commit.
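
As an aside, a minimal sketch of the difference kanav99 is pointing at, using hypothetical names rather than code from this PR:

```julia
# Non-const globals in Julia are type-unstable: every function that reads them
# has to treat the value as untyped, which hurts performance.
mu_global = 0.05

drift_global(u) = mu_global .* u   # reads the untyped global on every call

# Passing the value through as an argument lets the compiler specialize.
drift(u, mu) = mu .* u

u = rand(10)
drift(u, 0.05)   # `mu` is now a typed local argument
```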

ashutosh-b-b changed the title from "[WIP] Initialising the Optimal Stopping Time Problem" to "Initialising the Optimal Stopping Time Problem" on May 13, 2020
@ashutosh-b-b (Contributor, Author)

(screenshot: Screen Shot 2020-05-12 at 7 34 35 PM)

@ashutosh-b-b (Contributor, Author)

@ChrisRackauckas

```
@@ -56,13 +56,35 @@ function Base.show(io::IO, A::KolmogorovPDEProblem)
println(io,"Sigma")
show(io , A.sigma)
end
struct OptimalStoppingProblem{ Mu, Sigma, G , U0 , T ,P} <: DiffEqBase.DEProblem
```
Member:
More generally, this is just an SDE problem with a value function g, so I would just make this SDEProblem and g.
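
A rough sketch of what that suggestion could look like; the drift, diffusion, and payoff below are placeholder values for illustration, not the PR's actual API:

```julia
using StochasticDiffEq

# Dynamics of the underlying process as a plain SDEProblem (geometric Brownian motion here).
mu(u, p, t) = 0.05 .* u     # drift
sigma(u, p, t) = 0.2 .* u   # diffusion
u0 = [100.0]
tspan = (0.0, 1.0)
sde_prob = SDEProblem(mu, sigma, u0, tspan)

# The value/payoff function for the stopping problem, carried separately
# instead of being bundled into a dedicated OptimalStoppingProblem struct.
g(t, x) = max(110.0 - x[1], 0.0)

# A solver for the stopping problem would then take `sde_prob` and `g` together.
```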

```julia
X = u.u
a = []
function Un(n , X )
if size(a)[1] >= n
```
Member:
where is n defined?

```julia
sump = 0
for u in sim.u
X = u.u
a = []
```
Member:
a is a very non-descriptive name

Contributor Author:

Changed this to un.

```julia
payoff = []
times = []
iter = 0
sump = 0
```
Member:
sumprice? totalprice?

@ashutosh-b-b (Contributor, Author)

(screenshot: Screen Shot 2020-05-24 at 9 12 55 AM)

```julia
reward = reward + sum(Un(i , X )*g(ts[i] , X[i]) for i in 1 : size(ts)[1])
un = []
end
return 10000 - reward
```
Member:
why is the 10000 added to it?
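
For reference, a minimal sketch of the Monte Carlo payoff estimate the loop above appears to be computing, written without the constant offset; `paths`, `ts`, `g`, and `stop_prob` are hypothetical names, not the PR's code:

```julia
# Monte Carlo estimate of the expected payoff E[g(τ, X_τ)]:
# `stop_prob(i, X)` plays the role of Un(i, X), the (approximate) probability of
# stopping at time ts[i] along path X. `paths`, `ts`, and `g` are assumed inputs.
function expected_payoff(paths, ts, g, stop_prob)
    total = 0.0
    for X in paths
        total += sum(stop_prob(i, X) * g(ts[i], X[i]) for i in 1:length(ts))
    end
    return total / length(paths)   # plain average, no constant offset needed
end
```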

@ChrisRackauckas (Member)

Awesome, that's looking great! Try a really big train now: see if you can get it almost exact, just to check the convergence.

ChrisRackauckas (Member) left a comment:
Looks good, but the 10000 scaling seems unnecessary.

ChrisRackauckas (Member) left a comment:
I added a test group. If this passes it's good to merge.

ChrisRackauckas merged commit 55c1fbc into SciML:master on May 26, 2020
ChrisRackauckas added a commit that referenced this pull request Jun 30, 2022
I'm not sure how to easily isolate this one. It seems deep in the broadcast.

```julia
using Flux, OptimizationFlux
using DiffEqFlux
using Test, NeuralPDE
using Optimization
using CUDA, QuasiMonteCarlo
import ModelingToolkit: Interval, infimum, supremum

using Random
Random.seed!(100)

@parameters t x
@variables u(..)
Dt = Differential(t)
Dxx = Differential(x)^2

eq = Dt(u(t, x)) ~ Dxx(u(t, x))
bcs = [u(0, x) ~ cos(x),
    u(t, 0) ~ exp(-t),
    u(t, 1) ~ exp(-t) * cos(1)]

domains = [t ∈ Interval(0.0, 1.0),
    x ∈ Interval(0.0, 1.0)]

@named pdesys = PDESystem(eq, bcs, domains, [t, x], [u(t, x)])

inner = 30
chain = FastChain(FastDense(2, inner, Flux.σ),
                  FastDense(inner, inner, Flux.σ),
                  FastDense(inner, inner, Flux.σ),
                  FastDense(inner, inner, Flux.σ),
                  FastDense(inner, inner, Flux.σ),
                  FastDense(inner, inner, Flux.σ),
                  FastDense(inner, 1))#,(u,p)->gpuones .* u)

strategy = NeuralPDE.StochasticTraining(500)
initθ = CuArray(Float64.(DiffEqFlux.initial_params(chain)))
discretization = NeuralPDE.PhysicsInformedNN(chain,
                                             strategy;
                                             init_params = initθ)
prob = NeuralPDE.discretize(pdesys, discretization)
symprob = NeuralPDE.symbolic_discretize(pdesys, discretization)


res = Optimization.solve(prob, ADAM(0.01); maxiters = 1000)
```

```
julia> res = Optimization.solve(prob, ADAM(0.01); maxiters = 1000)
ERROR: type Nothing has no field buffer
Stacktrace:
  [1] getproperty(x::Nothing, f::Symbol)
    @ Base .\Base.jl:38
  [2] unsafe_convert
    @ C:\Users\accou\.julia\packages\CUDA\tTK8Y\src\array.jl:321 [inlined]
  [3] unsafe_convert
    @ C:\Users\accou\.julia\packages\CUDA\tTK8Y\src\pointer.jl:62 [inlined]
  [4] macro expansion
    @ C:\Users\accou\.julia\packages\CUDA\tTK8Y\lib\cublas\libcublas.jl:1406 [inlined]
  [5] macro expansion
    @ C:\Users\accou\.julia\packages\CUDA\tTK8Y\src\pool.jl:232 [inlined]
  [6] macro expansion
    @ C:\Users\accou\.julia\packages\CUDA\tTK8Y\lib\cublas\error.jl:61 [inlined]
  [7] cublasGemmEx(handle::Ptr{Nothing}, transa::Char, transb::Char, m::Int64, n::Int64, k::Int64, alpha::Base.RefValue{Float64}, A::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}, Atype::Type, lda::Int64, B::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}, Btype::Type, ldb::Int64, beta::Base.RefValue{Float64}, C::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}, Ctype::Type, ldc::Int64, computeType::CUDA.CUBLAS.cublasComputeType_t, algo::CUDA.CUBLAS.cublasGemmAlgo_t)     
    @ CUDA.CUBLAS C:\Users\accou\.julia\packages\CUDA\tTK8Y\lib\utils\call.jl:26
  [8] gemmEx!(transA::Char, transB::Char, alpha::Number, A::StridedCuVecOrMat, B::StridedCuVecOrMat, beta::Number, C::StridedCuVecOrMat; algo::CUDA.CUBLAS.cublasGemmAlgo_t)
    @ CUDA.CUBLAS C:\Users\accou\.julia\packages\CUDA\tTK8Y\lib\cublas\wrappers.jl:920
  [9] gemmEx!
    @ C:\Users\accou\.julia\packages\CUDA\tTK8Y\lib\cublas\wrappers.jl:895 [inlined]
 [10] gemm_dispatch!(C::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}, A::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}, B::LinearAlgebra.Adjoint{Float64, CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}}, alpha::Bool, beta::Bool)
    @ CUDA.CUBLAS C:\Users\accou\.julia\packages\CUDA\tTK8Y\lib\cublas\linalg.jl:298
 [11] mul!
    @ C:\Users\accou\.julia\packages\CUDA\tTK8Y\lib\cublas\linalg.jl:321 [inlined]
 [12] mul!
    @ C:\Users\accou\.julia\juliaup\julia-1.8.0-rc1+0~x64\share\julia\stdlib\v1.8\LinearAlgebra\src\matmul.jl:276 [inlined]
 [13] *
    @ C:\Users\accou\.julia\juliaup\julia-1.8.0-rc1+0~x64\share\julia\stdlib\v1.8\LinearAlgebra\src\matmul.jl:148 [inlined]
 [14] (::DiffEqFlux.var"#FastDense_adjoint#114"{FastDense{typeof(identity), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}})(ȳ::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer})
    @ DiffEqFlux C:\Users\accou\.julia\packages\DiffEqFlux\5e9D2\src\fast_layers.jl:278
 [15] #252#back
    @ C:\Users\accou\.julia\packages\ZygoteRules\AIbCs\src\adjoint.jl:67 [inlined]
 [16] Pullback (repeats 2 times)
    @ C:\Users\accou\.julia\packages\DiffEqFlux\5e9D2\src\fast_layers.jl:20 [inlined]
 [17] (::typeof(∂(applychain)))(Δ::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer})
    @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0
--- the last 2 lines are repeated 5 more times ---
 [28] Pullback
    @ C:\Users\accou\.julia\packages\DiffEqFlux\5e9D2\src\fast_layers.jl:21 [inlined]
 [29] Pullback
    @ C:\Users\accou\.julia\dev\NeuralPDE\src\pinn_types.jl:228 [inlined]
 [30] (::typeof(∂(λ)))(Δ::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer})
    @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0
 [31] Pullback
    @ C:\Users\accou\.julia\dev\NeuralPDE\src\pinn_types.jl:232 [inlined]
 [32] (::typeof(∂(#12)))(Δ::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer})
    @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0
 [33] Pullback
    @ C:\Users\accou\.julia\dev\NeuralPDE\src\pinn_types.jl:261 [inlined]
 [34] (::typeof(∂(numeric_derivative)))(Δ::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer})
    @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0
 [35] macro expansion
    @ C:\Users\accou\.julia\dev\NeuralPDE\src\discretize.jl:142 [inlined]
 [36] macro expansion
    @ C:\Users\accou\.julia\packages\RuntimeGeneratedFunctions\KrkGo\src\RuntimeGeneratedFunctions.jl:129 [inlined]
 [37] macro expansion
    @ .\none:0 [inlined]
 [38] Pullback
    @ .\none:0 [inlined]
 [39] (::Zygote.var"#208#209"{Tuple{Tuple{Nothing}, NTuple{7, Nothing}}, typeof(∂(generated_callfunc))})(Δ::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer})   
    @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\lib\lib.jl:207
 [40] #1750#back
    @ C:\Users\accou\.julia\packages\ZygoteRules\AIbCs\src\adjoint.jl:67 [inlined]
 [41] Pullback
    @ C:\Users\accou\.julia\packages\RuntimeGeneratedFunctions\KrkGo\src\RuntimeGeneratedFunctions.jl:117 [inlined]
 [42] Pullback
    @ C:\Users\accou\.julia\dev\NeuralPDE\src\discretize.jl:157 [inlined]
 [43] (::typeof(∂(λ)))(Δ::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer})
    @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0
 [44] Pullback
    @ C:\Users\accou\.julia\dev\NeuralPDE\src\training_strategies.jl:89 [inlined]
 [45] (::typeof(∂(λ)))(Δ::Float64)
    @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0
 [46] Pullback
    @ .\none:0 [inlined]
 [47] (::typeof(∂(λ)))(Δ::Float64)
    @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0
 [48] #546
    @ C:\Users\accou\.julia\packages\Zygote\DkIUK\src\lib\array.jl:198 [inlined]
 [49] #4
    @ .\generator.jl:36 [inlined]
 [50] iterate
    @ .\generator.jl:47 [inlined]
 [51] collect(itr::Base.Generator{Base.Iterators.Zip{Tuple{Vector{Tuple{Float64, typeof(∂(λ))}}, Vector{Float64}}}, Base.var"#4#5"{Zygote.var"#546#551"}})
    @ Base .\array.jl:787
 [52] map
    @ .\abstractarray.jl:3053 [inlined]
 [53] (::Zygote.var"#map_back#548"{NeuralPDE.var"#318#329"{CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}, 1, Tuple{Vector{NeuralPDE.var"#87#88"{NeuralPDE.var"#231#232"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#313"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x26bb95dc, 0xdf5e9632, 0x817f3943, 0xb81f03ec, 0x97e64e44)}, NeuralPDE.var"#12#13", NeuralPDE.var"#297#304"{NeuralPDE.var"#297#298#305"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, StochasticTraining}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{FastChain{Tuple{FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}}}}, Nothing}, Vector{Any}, DataType, Int64}}}, Tuple{Tuple{Base.OneTo{Int64}}}, Vector{Tuple{Float64, typeof(∂(λ))}}})(Δ::Vector{Float64})
    @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\lib\array.jl:198
 [54] (::Zygote.var"#back#578"{Zygote.var"#map_back#548"{NeuralPDE.var"#318#329"{CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}, 1, Tuple{Vector{NeuralPDE.var"#87#88"{NeuralPDE.var"#231#232"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#313"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x26bb95dc, 0xdf5e9632, 0x817f3943, 0xb81f03ec, 0x97e64e44)}, NeuralPDE.var"#12#13", NeuralPDE.var"#297#304"{NeuralPDE.var"#297#298#305"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, StochasticTraining}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{FastChain{Tuple{FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}}}}, Nothing}, Vector{Any}, DataType, Int64}}}, Tuple{Tuple{Base.OneTo{Int64}}}, Vector{Tuple{Float64, typeof(∂(λ))}}}})(ȳ::Vector{Float64})
    @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\lib\array.jl:234
 [55] Pullback
    @ C:\Users\accou\.julia\dev\NeuralPDE\src\discretize.jl:445 [inlined]
 [56] (::typeof(∂(λ)))(Δ::Float64)
    @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0
 [57] #208
    @ C:\Users\accou\.julia\packages\Zygote\DkIUK\src\lib\lib.jl:207 [inlined]
 [58] #1750#back
    @ C:\Users\accou\.julia\packages\ZygoteRules\AIbCs\src\adjoint.jl:67 [inlined]
 [59] Pullback
    @ C:\Users\accou\.julia\dev\SciMLBase\src\scimlfunctions.jl:2887 [inlined]
 [60] (::typeof(∂(λ)))(Δ::Float64)
    @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0
 [61] #208
    @ C:\Users\accou\.julia\packages\Zygote\DkIUK\src\lib\lib.jl:207 [inlined]
 [62] #1750#back
    @ C:\Users\accou\.julia\packages\ZygoteRules\AIbCs\src\adjoint.jl:67 [inlined]
 [63] Pullback
    @ C:\Users\accou\.julia\dev\Optimization\src\function\zygote.jl:30 [inlined]
 [64] (::typeof(∂(λ)))(Δ::Float64)
    @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0
 [65] #208
    @ C:\Users\accou\.julia\packages\Zygote\DkIUK\src\lib\lib.jl:207 [inlined]
 [66] #1750#back
    @ C:\Users\accou\.julia\packages\ZygoteRules\AIbCs\src\adjoint.jl:67 [inlined]
 [67] Pullback
    @ C:\Users\accou\.julia\dev\Optimization\src\function\zygote.jl:32 [inlined]
 [68] (::typeof(∂(λ)))(Δ::Float64)
    @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0
 [69] (::Zygote.var"#52#53"{typeof(∂(λ))})(Δ::Float64)
    @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface.jl:41
 [70] gradient(f::Function, args::CuArray{Float64, 1, CUDA.Mem.DeviceBuffer})
    @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface.jl:76
 [71] (::Optimization.var"#85#95"{Optimization.var"#84#94"{OptimizationFunction{true, Optimization.AutoZygote, NeuralPDE.var"#full_loss_function#328"{NeuralPDE.var"#null_nonadaptive_loss#127", Vector{NeuralPDE.var"#87#88"{loss_function, Vector{Any}, DataType, Int64} where loss_function}, Vector{NeuralPDE.var"#87#88"{NeuralPDE.var"#231#232"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#313"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x26bb95dc, 0xdf5e9632, 0x817f3943, 0xb81f03ec, 0x97e64e44)}, NeuralPDE.var"#12#13", NeuralPDE.var"#297#304"{NeuralPDE.var"#297#298#305"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, StochasticTraining}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{FastChain{Tuple{FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}}}}, Nothing}, Vector{Any}, DataType, Int64}}, NeuralPDE.PINNRepresentation, Bool, Vector{Int64}, Int64, NeuralPDE.Phi{FastChain{Tuple{FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}}}}, NonAdaptiveLoss{Float64}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, SciMLBase.NullParameters}})(::CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}, ::CuArray{Float64, 1, CUDA.Mem.DeviceBuffer})
    @ Optimization C:\Users\accou\.julia\dev\Optimization\src\function\zygote.jl:32
 [72] macro expansion
    @ C:\Users\accou\.julia\packages\OptimizationFlux\cpWyO\src\OptimizationFlux.jl:32 [inlined]
 [73] macro expansion
    @ C:\Users\accou\.julia\dev\Optimization\src\utils.jl:35 [inlined]
 [74] __solve(prob::OptimizationProblem{true, OptimizationFunction{true, Optimization.AutoZygote, NeuralPDE.var"#full_loss_function#328"{NeuralPDE.var"#null_nonadaptive_loss#127", Vector{NeuralPDE.var"#87#88"{loss_function, Vector{Any}, DataType, Int64} where loss_function}, Vector{NeuralPDE.var"#87#88"{NeuralPDE.var"#231#232"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#313"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x26bb95dc, 0xdf5e9632, 0x817f3943, 0xb81f03ec, 0x97e64e44)}, NeuralPDE.var"#12#13", NeuralPDE.var"#297#304"{NeuralPDE.var"#297#298#305"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, StochasticTraining}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{FastChain{Tuple{FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}}}}, Nothing}, Vector{Any}, DataType, Int64}}, NeuralPDE.PINNRepresentation, Bool, Vector{Int64}, Int64, NeuralPDE.Phi{FastChain{Tuple{FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}}}}, NonAdaptiveLoss{Float64}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, opt::ADAM, data::Base.Iterators.Cycle{Tuple{Optimization.NullData}}; maxiters::Int64, callback::Function, progress::Bool, save_best::Bool, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ OptimizationFlux C:\Users\accou\.julia\packages\OptimizationFlux\cpWyO\src\OptimizationFlux.jl:30
 [75] #solve#494
    @ C:\Users\accou\.julia\dev\SciMLBase\src\solve.jl:71 [inlined]
```