described relationship between SDEs and PDE #87
Conversation
Awesome stuff! Probably change the file name so it doesn't contain spaces; tooling tends to handle such names badly and it can lead to unexpected errors. A note on the curse of dimensionality would be nice too :)

done.
There are many definitions of **Brownian Motion**. One of the easiest to picture is that it's the random walk where at every infinitesimal ``dt`` you move by ``N(0,dt)`` (this is formally true in non-standard analysis; see **Radically Elementary Probability Theory** for details). Another way to think of it is as a limit of standard random walks. Say you have a random walk where at every time step of ``h`` you move ``\Delta x`` to the right or left. If you let ``(\Delta x)^2/h = 1`` and send ``h \rightarrow 0``, the limiting process is a **Brownian Motion**. All of the facts about normal distributions then just come from the central limit theorem and the fact that you're doing infinitely many movements per second!
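This scaling limit is easy to check numerically. The sketch below (not part of the original discussion; it assumes NumPy and uses illustrative names) simulates the rescaled walk with step size ``\Delta x = \sqrt{h}``, so that ``(\Delta x)^2/h = 1``, and checks that the position at time ``t`` has mean ``0`` and variance ``t``, as a Brownian motion would:

```python
import numpy as np

rng = np.random.default_rng(0)

h = 1e-3          # time step; smaller h means a finer walk
t = 1.0           # final time
n_steps = int(t / h)
n_paths = 5000    # number of independent walks to sample

# Each step is +/- sqrt(h) with equal probability, so Var[step] = h
# and Var[W_t] = n_steps * h = t, matching Brownian motion.
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps)) * np.sqrt(h)
W_t = steps.sum(axis=1)  # walk positions at time t

print(np.mean(W_t))  # close to 0
print(np.var(W_t))   # close to t = 1
```

With the wrong scaling ``\Delta x = h`` the variance would be ``n_steps * h^2 = t h \to 0``, i.e. the limit would be the constant path at zero, which is why the quadratic scaling is the one that matters.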
## Wiener Process Calculus Summarized
not sure this is necessary at all? It's somewhat orthogonal to the understanding of what the method is solving and how to use the method.
@ChrisRackauckas It's just an explanation noting that there are many definitions of Brownian Motion, and I tried to give a view of Radically Elementary Probability Theory. If you think it's not necessary, I will remove it. Waiting for your instruction.
…-test Neural Network ODE Solver using Test
I'm not sure how to easily isolate this one. It seems deep in the broadcast.

```julia
using Flux, OptimizationFlux
using DiffEqFlux
using Test, NeuralPDE
using Optimization
using CUDA, QuasiMonteCarlo
import ModelingToolkit: Interval, infimum, supremum
using Random
Random.seed!(100)

@parameters t x
@variables u(..)
Dt = Differential(t)
Dxx = Differential(x)^2
eq = Dt(u(t, x)) ~ Dxx(u(t, x))
bcs = [u(0, x) ~ cos(x), u(t, 0) ~ exp(-t), u(t, 1) ~ exp(-t) * cos(1)]
domains = [t ∈ Interval(0.0, 1.0), x ∈ Interval(0.0, 1.0)]
@named pdesys = PDESystem(eq, bcs, domains, [t, x], [u(t, x)])

inner = 30
chain = FastChain(FastDense(2, inner, Flux.σ),
                  FastDense(inner, inner, Flux.σ),
                  FastDense(inner, inner, Flux.σ),
                  FastDense(inner, inner, Flux.σ),
                  FastDense(inner, inner, Flux.σ),
                  FastDense(inner, inner, Flux.σ),
                  FastDense(inner, 1)) #,(u,p)->gpuones .* u)

strategy = NeuralPDE.StochasticTraining(500)
initθ = CuArray(Float64.(DiffEqFlux.initial_params(chain)))
discretization = NeuralPDE.PhysicsInformedNN(chain, strategy; init_params = initθ)
prob = NeuralPDE.discretize(pdesys, discretization)
symprob = NeuralPDE.symbolic_discretize(pdesys, discretization)
res = Optimization.solve(prob, ADAM(0.01); maxiters = 1000)
```

```
julia> res = Optimization.solve(prob, ADAM(0.01); maxiters = 1000)
ERROR: type Nothing has no field buffer
Stacktrace:
 [1] getproperty(x::Nothing, f::Symbol)
   @ Base .\Base.jl:38
 [2] unsafe_convert
   @ C:\Users\accou\.julia\packages\CUDA\tTK8Y\src\array.jl:321 [inlined]
 [3] unsafe_convert
   @ C:\Users\accou\.julia\packages\CUDA\tTK8Y\src\pointer.jl:62 [inlined]
 [4] macro expansion
   @ C:\Users\accou\.julia\packages\CUDA\tTK8Y\lib\cublas\libcublas.jl:1406 [inlined]
 [5] macro expansion
   @ C:\Users\accou\.julia\packages\CUDA\tTK8Y\src\pool.jl:232 [inlined]
 [6] macro expansion
   @ C:\Users\accou\.julia\packages\CUDA\tTK8Y\lib\cublas\error.jl:61 [inlined]
 [7] cublasGemmEx(handle::Ptr{Nothing}, transa::Char, transb::Char, m::Int64, n::Int64, k::Int64, alpha::Base.RefValue{Float64},
A::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}, Atype::Type, lda::Int64, B::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}, Btype::Type, ldb::Int64, beta::Base.RefValue{Float64}, C::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}, Ctype::Type, ldc::Int64, computeType::CUDA.CUBLAS.cublasComputeType_t, algo::CUDA.CUBLAS.cublasGemmAlgo_t) @ CUDA.CUBLAS C:\Users\accou\.julia\packages\CUDA\tTK8Y\lib\utils\call.jl:26 [8] gemmEx!(transA::Char, transB::Char, alpha::Number, A::StridedCuVecOrMat, B::StridedCuVecOrMat, beta::Number, C::StridedCuVecOrMat; algo::CUDA.CUBLAS.cublasGemmAlgo_t) @ CUDA.CUBLAS C:\Users\accou\.julia\packages\CUDA\tTK8Y\lib\cublas\wrappers.jl:920 [9] gemmEx! @ C:\Users\accou\.julia\packages\CUDA\tTK8Y\lib\cublas\wrappers.jl:895 [inlined] [10] gemm_dispatch!(C::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}, A::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}, B::LinearAlgebra.Adjoint{Float64, CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}}, alpha::Bool, beta::Bool) @ CUDA.CUBLAS C:\Users\accou\.julia\packages\CUDA\tTK8Y\lib\cublas\linalg.jl:298 [11] mul! @ C:\Users\accou\.julia\packages\CUDA\tTK8Y\lib\cublas\linalg.jl:321 [inlined] [12] mul! 
@ C:\Users\accou\.julia\juliaup\julia-1.8.0-rc1+0~x64\share\julia\stdlib\v1.8\LinearAlgebra\src\matmul.jl:276 [inlined] [13] * @ C:\Users\accou\.julia\juliaup\julia-1.8.0-rc1+0~x64\share\julia\stdlib\v1.8\LinearAlgebra\src\matmul.jl:148 [inlined] [14] (::DiffEqFlux.var"#FastDense_adjoint#114"{FastDense{typeof(identity), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}})(ȳ::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}) @ DiffEqFlux C:\Users\accou\.julia\packages\DiffEqFlux\5e9D2\src\fast_layers.jl:278 [15] #252#back @ C:\Users\accou\.julia\packages\ZygoteRules\AIbCs\src\adjoint.jl:67 [inlined] [16] Pullback (repeats 2 times) @ C:\Users\accou\.julia\packages\DiffEqFlux\5e9D2\src\fast_layers.jl:20 [inlined] [17] (::typeof(∂(applychain)))(Δ::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}) @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0 --- the last 2 lines are repeated 5 more times --- [28] Pullback @ C:\Users\accou\.julia\packages\DiffEqFlux\5e9D2\src\fast_layers.jl:21 [inlined] [29] Pullback @ C:\Users\accou\.julia\dev\NeuralPDE\src\pinn_types.jl:228 [inlined] [30] (::typeof(∂(λ)))(Δ::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}) @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0 [31] Pullback @ C:\Users\accou\.julia\dev\NeuralPDE\src\pinn_types.jl:232 [inlined] [32] (::typeof(∂(#12)))(Δ::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}) @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0 [33] Pullback @ C:\Users\accou\.julia\dev\NeuralPDE\src\pinn_types.jl:261 [inlined] [34] (::typeof(∂(numeric_derivative)))(Δ::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}) @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0 [35] macro expansion @ C:\Users\accou\.julia\dev\NeuralPDE\src\discretize.jl:142 [inlined] [36] macro expansion @ 
C:\Users\accou\.julia\packages\RuntimeGeneratedFunctions\KrkGo\src\RuntimeGeneratedFunctions.jl:129 [inlined] [37] macro expansion @ .\none:0 [inlined] [38] Pullback @ .\none:0 [inlined] [39] (::Zygote.var"#208#209"{Tuple{Tuple{Nothing}, NTuple{7, Nothing}}, typeof(∂(generated_callfunc))})(Δ::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}) @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\lib\lib.jl:207 [40] #1750#back @ C:\Users\accou\.julia\packages\ZygoteRules\AIbCs\src\adjoint.jl:67 [inlined] [41] Pullback @ C:\Users\accou\.julia\packages\RuntimeGeneratedFunctions\KrkGo\src\RuntimeGeneratedFunctions.jl:117 [inlined] [42] Pullback @ C:\Users\accou\.julia\dev\NeuralPDE\src\discretize.jl:157 [inlined] [43] (::typeof(∂(λ)))(Δ::CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}) @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0 [44] Pullback @ C:\Users\accou\.julia\dev\NeuralPDE\src\training_strategies.jl:89 [inlined] [45] (::typeof(∂(λ)))(Δ::Float64) @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0 [46] Pullback @ .\none:0 [inlined] [47] (::typeof(∂(λ)))(Δ::Float64) @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0 [48] #546 @ C:\Users\accou\.julia\packages\Zygote\DkIUK\src\lib\array.jl:198 [inlined] [49] #4 @ .\generator.jl:36 [inlined] [50] iterate @ .\generator.jl:47 [inlined] [51] collect(itr::Base.Generator{Base.Iterators.Zip{Tuple{Vector{Tuple{Float64, typeof(∂(λ))}}, Vector{Float64}}}, Base.var"#4#5"{Zygote.var"#546#551"}}) @ Base .\array.jl:787 [52] map @ .\abstractarray.jl:3053 [inlined] [53] (::Zygote.var"#map_back#548"{NeuralPDE.var"#318#329"{CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}, 1, Tuple{Vector{NeuralPDE.var"#87#88"{NeuralPDE.var"#231#232"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#313"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x26bb95dc, 0xdf5e9632, 0x817f3943, 
0xb81f03ec, 0x97e64e44)}, NeuralPDE.var"#12#13", NeuralPDE.var"#297#304"{NeuralPDE.var"#297#298#305"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, StochasticTraining}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{FastChain{Tuple{FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}}}}, Nothing}, Vector{Any}, DataType, Int64}}}, Tuple{Tuple{Base.OneTo{Int64}}}, Vector{Tuple{Float64, typeof(∂(λ))}}})(Δ::Vector{Float64}) @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\lib\array.jl:198 [54] (::Zygote.var"#back#578"{Zygote.var"#map_back#548"{NeuralPDE.var"#318#329"{CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}, 1, Tuple{Vector{NeuralPDE.var"#87#88"{NeuralPDE.var"#231#232"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#313"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x26bb95dc, 0xdf5e9632, 0x817f3943, 0xb81f03ec, 0x97e64e44)}, NeuralPDE.var"#12#13", NeuralPDE.var"#297#304"{NeuralPDE.var"#297#298#305"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, StochasticTraining}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{FastChain{Tuple{FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), 
DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}}}}, Nothing}, Vector{Any}, DataType, Int64}}}, Tuple{Tuple{Base.OneTo{Int64}}}, Vector{Tuple{Float64, typeof(∂(λ))}}}})(ȳ::Vector{Float64}) @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\lib\array.jl:234 [55] Pullback @ C:\Users\accou\.julia\dev\NeuralPDE\src\discretize.jl:445 [inlined] [56] (::typeof(∂(λ)))(Δ::Float64) @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0 [57] #208 @ C:\Users\accou\.julia\packages\Zygote\DkIUK\src\lib\lib.jl:207 [inlined] [58] #1750#back @ C:\Users\accou\.julia\packages\ZygoteRules\AIbCs\src\adjoint.jl:67 [inlined] [59] Pullback @ C:\Users\accou\.julia\dev\SciMLBase\src\scimlfunctions.jl:2887 [inlined] [60] (::typeof(∂(λ)))(Δ::Float64) @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0 [61] #208 @ C:\Users\accou\.julia\packages\Zygote\DkIUK\src\lib\lib.jl:207 [inlined] [62] #1750#back @ C:\Users\accou\.julia\packages\ZygoteRules\AIbCs\src\adjoint.jl:67 [inlined] [63] Pullback @ C:\Users\accou\.julia\dev\Optimization\src\function\zygote.jl:30 [inlined] [64] (::typeof(∂(λ)))(Δ::Float64) @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0 [65] #208 @ C:\Users\accou\.julia\packages\Zygote\DkIUK\src\lib\lib.jl:207 [inlined] [66] #1750#back @ C:\Users\accou\.julia\packages\ZygoteRules\AIbCs\src\adjoint.jl:67 [inlined] [67] Pullback @ C:\Users\accou\.julia\dev\Optimization\src\function\zygote.jl:32 [inlined] 
[68] (::typeof(∂(λ)))(Δ::Float64) @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface2.jl:0 [69] (::Zygote.var"#52#53"{typeof(∂(λ))})(Δ::Float64) @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface.jl:41 [70] gradient(f::Function, args::CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}) @ Zygote C:\Users\accou\.julia\packages\Zygote\DkIUK\src\compiler\interface.jl:76 [71] (::Optimization.var"#85#95"{Optimization.var"#84#94"{OptimizationFunction{true, Optimization.AutoZygote, NeuralPDE.var"#full_loss_function#328"{NeuralPDE.var"#null_nonadaptive_loss#127", Vector{NeuralPDE.var"#87#88"{loss_function, Vector{Any}, DataType, Int64} where loss_function}, Vector{NeuralPDE.var"#87#88"{NeuralPDE.var"#231#232"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#313"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x26bb95dc, 0xdf5e9632, 0x817f3943, 0xb81f03ec, 0x97e64e44)}, NeuralPDE.var"#12#13", NeuralPDE.var"#297#304"{NeuralPDE.var"#297#298#305"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, StochasticTraining}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{FastChain{Tuple{FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}}}}, Nothing}, Vector{Any}, DataType, Int64}}, NeuralPDE.PINNRepresentation, Bool, Vector{Int64}, 
Int64, NeuralPDE.Phi{FastChain{Tuple{FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}}}}, NonAdaptiveLoss{Float64}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, SciMLBase.NullParameters}})(::CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}, ::CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}) @ Optimization C:\Users\accou\.julia\dev\Optimization\src\function\zygote.jl:32 [72] macro expansion @ C:\Users\accou\.julia\packages\OptimizationFlux\cpWyO\src\OptimizationFlux.jl:32 [inlined] [73] macro expansion @ C:\Users\accou\.julia\dev\Optimization\src\utils.jl:35 [inlined] [74] __solve(prob::OptimizationProblem{true, OptimizationFunction{true, Optimization.AutoZygote, NeuralPDE.var"#full_loss_function#328"{NeuralPDE.var"#null_nonadaptive_loss#127", Vector{NeuralPDE.var"#87#88"{loss_function, Vector{Any}, DataType, Int64} where loss_function}, Vector{NeuralPDE.var"#87#88"{NeuralPDE.var"#231#232"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#313"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x26bb95dc, 0xdf5e9632, 0x817f3943, 0xb81f03ec, 0x97e64e44)}, NeuralPDE.var"#12#13", NeuralPDE.var"#297#304"{NeuralPDE.var"#297#298#305"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, 
Dict{Symbol, Int64}, StochasticTraining}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{FastChain{Tuple{FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}}}}, Nothing}, Vector{Any}, DataType, Int64}}, NeuralPDE.PINNRepresentation, Bool, Vector{Int64}, Int64, NeuralPDE.Phi{FastChain{Tuple{FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(sigmoid_fast), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#107"{Vector{Float32}}, Nothing}}}}, NonAdaptiveLoss{Float64}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, opt::ADAM, 
data::Base.Iterators.Cycle{Tuple{Optimization.NullData}}; maxiters::Int64, callback::Function, progress::Bool, save_best::Bool, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}) @ OptimizationFlux C:\Users\accou\.julia\packages\OptimizationFlux\cpWyO\src\OptimizationFlux.jl:30 [75] #solve#494 @ C:\Users\accou\.julia\dev\SciMLBase\src\solve.jl:71 [inlined] ```
data::Base.Iterators.Cycle{Tuple{Optimization.NullData}}; maxiters::Int64, callback::Function, progress::Bool, save_best::Bool, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}) @ OptimizationFlux C:\Users\accou\.julia\packages\OptimizationFlux\cpWyO\src\OptimizationFlux.jl:30 [75] #solve#494 @ C:\Users\accou\.julia\dev\SciMLBase\src\solve.jl:71 [inlined] ```
I'm currently splitting the tutorials out into separate tabs in the sidebar and adding proper explanations to each tutorial.