
Neural adapter test is broken #412

Closed
killah-t-cell opened this issue Oct 11, 2021 · 31 comments
Comments

@killah-t-cell (Contributor) commented Oct 11, 2021

It seems the 2D Poisson equation with neural adapter test is broken. I tested it on master and it failed; the error seems to be related to ChainRulesCore.

ERROR: LoadError: MethodError: no method matching *(::Tuple{Int64, Int64})
Closest candidates are:
  *(::Any, ::ChainRulesCore.Tangent) at /Users/gabrielbirnbaum/.julia/packages/ChainRulesCore/1L9My/src/tangent_arithmetic.jl:151
  *(::Any, ::ChainRulesCore.AbstractThunk) at /Users/gabrielbirnbaum/.julia/packages/ChainRulesCore/1L9My/src/tangent_arithmetic.jl:125
  *(::Any, ::ChainRulesCore.ZeroTangent) at /Users/gabrielbirnbaum/.julia/packages/ChainRulesCore/1L9My/src/tangent_arithmetic.jl:104
using Flux
using DiffEqFlux
using ModelingToolkit
using Test, NeuralPDE
using GalacticOptim
using SciMLBase
import ModelingToolkit: Interval

## Example, 2D Poisson equation with Neural adapter
println("Example, 2D Poisson equation with Neural adapter")
@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2

# 2D PDE
eq  = Dxx(u(x,y)) + Dyy(u(x,y)) ~ -sin(pi*x)*sin(pi*y)

# Initial and boundary conditions
bcs = [u(0,y) ~ 0.0, u(1,y) ~ -sin(pi*1)*sin(pi*y),
       u(x,0) ~ 0.0, u(x,1) ~ -sin(pi*x)*sin(pi*1)]
# Space and time domains
domains = [x ∈ Interval(0.0,1.0),
           y ∈ Interval(0.0,1.0)]
quadrature_strategy = NeuralPDE.QuadratureTraining(reltol=1e-2,abstol=1e-2,
                                                   maxiters =50, batch=100)
inner = 8
af = Flux.tanh
chain1 = Chain(Dense(2,inner,af),
               Dense(inner,inner,af),
               Dense(inner,1))
initθ = Float64.(DiffEqFlux.initial_params(chain1))
discretization = NeuralPDE.PhysicsInformedNN(chain1,
                                             quadrature_strategy;
                                             init_params = initθ)

@named pde_system = PDESystem(eq,bcs,domains,[x,y],[u(x,y)])
prob = NeuralPDE.discretize(pde_system,discretization)
sym_prob = NeuralPDE.symbolic_discretize(pde_system,discretization)
res = GalacticOptim.solve(prob, BFGS();  maxiters=2000) #  LoadError: MethodError: no method matching *(::Tuple{Int64, Int64})
@killah-t-cell (Contributor Author)

@ChrisRackauckas I think this issue might be affecting more models actually (just wrote an unrelated model that errored with the exact same error). Might be worth figuring out what's causing it.

@ChrisRackauckas (Member)

Yeah, I've been pinging @DhairyaLGandhi that there are a few Zygote issues that seem to have popped up recently. This one doesn't even have DiffEq involved.

@DhairyaLGandhi (Member)

It's best to isolate the issue here. There's plenty of code unrelated to Zygote which we can hopefully remove. Could you give me the full stacktrace?

@killah-t-cell (Contributor Author)

Here is the full stacktrace

ERROR: LoadError: MethodError: no method matching *(::Tuple{Int64, Int64})
Closest candidates are:
  *(::Any, ::ChainRulesCore.Tangent) at /Users/gabrielbirnbaum/.julia/packages/ChainRulesCore/1L9My/src/tangent_arithmetic.jl:151
  *(::Any, ::ChainRulesCore.AbstractThunk) at /Users/gabrielbirnbaum/.julia/packages/ChainRulesCore/1L9My/src/tangent_arithmetic.jl:125
  *(::Any, ::ChainRulesCore.ZeroTangent) at /Users/gabrielbirnbaum/.julia/packages/ChainRulesCore/1L9My/src/tangent_arithmetic.jl:104
  ...
Stacktrace:
  [1] (::Cubature.var"#17#18"{Bool, Bool, Int64, Float64, Float64, Int64, Int32, Ptr{Nothing}, Cubature.IntegrandData{Quadrature.var"#88#100"{QuadratureProblem{false, Vector{Float64}, Quadrature.var"#48#59"{QuadratureProblem{false, Vector{Float64}, NeuralPDE.var"#_loss#331"{NeuralPDE.var"#175#176"{Nothing, Vector{NeuralPDE.var"#270#272"{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}, UnionAll}}, NeuralPDE.var"#276#277", NeuralPDE.var"#278#287"{QuadratureTraining, NeuralPDE.var"#278#279#288"{Vector{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}}, NeuralPDE.var"#276#277"}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, Vector{Symbol}, Vector{Symbol}}, RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#257"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x69e1735c, 0x34179182, 0x0039054c, 0xbe1ecdb6, 0x2d24e69f)}, NeuralPDE.var"#274#275"}, UnionAll}, Vector{Float64}, Vector{Float64}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}}, Vector{Float64}, Vector{Float64}, Base.Iterators.Pairs{Symbol, Real, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:reltol, :abstol, :maxiters), Tuple{Float64, Float64, Int64}}}}, Vector{Float64}}}, Vector{Float64}, Vector{Float64}, Vector{Float64}, Vector{Float64}, Int64})()
    @ Cubature ~/.julia/packages/Cubature/5zwuu/src/Cubature.jl:215
  [2] disable_sigint(f::Cubature.var"#17#18"{Bool, Bool, Int64, Float64, Float64, Int64, Int32, Ptr{Nothing}, Cubature.IntegrandData{Quadrature.var"#88#100"{QuadratureProblem{false, Vector{Float64}, Quadrature.var"#48#59"{QuadratureProblem{false, Vector{Float64}, NeuralPDE.var"#_loss#331"{NeuralPDE.var"#175#176"{Nothing, Vector{NeuralPDE.var"#270#272"{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}, UnionAll}}, NeuralPDE.var"#276#277", NeuralPDE.var"#278#287"{QuadratureTraining, NeuralPDE.var"#278#279#288"{Vector{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}}, NeuralPDE.var"#276#277"}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, Vector{Symbol}, Vector{Symbol}}, RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#257"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x69e1735c, 0x34179182, 0x0039054c, 0xbe1ecdb6, 0x2d24e69f)}, NeuralPDE.var"#274#275"}, UnionAll}, Vector{Float64}, Vector{Float64}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}}, Vector{Float64}, Vector{Float64}, Base.Iterators.Pairs{Symbol, Real, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:reltol, :abstol, :maxiters), Tuple{Float64, Float64, Int64}}}}, Vector{Float64}}}, Vector{Float64}, Vector{Float64}, Vector{Float64}, Vector{Float64}, Int64})
    @ Base ./c.jl:458
  [3] cubature(xscalar::Bool, fscalar::Bool, vectorized::Bool, padaptive::Bool, fdim::Int64, f::Quadrature.var"#88#100"{QuadratureProblem{false, Vector{Float64}, Quadrature.var"#48#59"{QuadratureProblem{false, Vector{Float64}, NeuralPDE.var"#_loss#331"{NeuralPDE.var"#175#176"{Nothing, Vector{NeuralPDE.var"#270#272"{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}, UnionAll}}, NeuralPDE.var"#276#277", NeuralPDE.var"#278#287"{QuadratureTraining, NeuralPDE.var"#278#279#288"{Vector{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}}, NeuralPDE.var"#276#277"}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, Vector{Symbol}, Vector{Symbol}}, RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#257"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x69e1735c, 0x34179182, 0x0039054c, 0xbe1ecdb6, 0x2d24e69f)}, NeuralPDE.var"#274#275"}, UnionAll}, Vector{Float64}, Vector{Float64}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}}, Vector{Float64}, Vector{Float64}, Base.Iterators.Pairs{Symbol, Real, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:reltol, :abstol, :maxiters), Tuple{Float64, Float64, Int64}}}}, Vector{Float64}}, xmin_::Vector{Float64}, xmax_::Vector{Float64}, reqRelError::Float64, reqAbsError::Float64, maxEval::Int64, error_norm::Int32)
    @ Cubature ~/.julia/packages/Cubature/5zwuu/src/Cubature.jl:169
  [4] hcubature_v(fdim::Int64, f::Function, xmin::Vector{Float64}, xmax::Vector{Float64}; reltol::Float64, abstol::Float64, maxevals::Int64, error_norm::Int32)
    @ Cubature ~/.julia/packages/Cubature/5zwuu/src/Cubature.jl:227
  [5] __solvebp_call(::QuadratureProblem{false, Vector{Float64}, Quadrature.var"#48#59"{QuadratureProblem{false, Vector{Float64}, NeuralPDE.var"#_loss#331"{NeuralPDE.var"#175#176"{Nothing, Vector{NeuralPDE.var"#270#272"{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}, UnionAll}}, NeuralPDE.var"#276#277", NeuralPDE.var"#278#287"{QuadratureTraining, NeuralPDE.var"#278#279#288"{Vector{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}}, NeuralPDE.var"#276#277"}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, Vector{Symbol}, Vector{Symbol}}, RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#257"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x69e1735c, 0x34179182, 0x0039054c, 0xbe1ecdb6, 0x2d24e69f)}, NeuralPDE.var"#274#275"}, UnionAll}, Vector{Float64}, Vector{Float64}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}}, Vector{Float64}, Vector{Float64}, Base.Iterators.Pairs{Symbol, Real, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:reltol, :abstol, :maxiters), Tuple{Float64, Float64, Int64}}}}, ::Quadrature.CubatureJLh, ::Quadrature.ReCallVJP{Quadrature.ZygoteVJP}, ::Vector{Float64}, ::Vector{Float64}, ::Vector{Float64}; reltol::Float64, abstol::Float64, maxiters::Int64, kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ Quadrature ~/.julia/packages/Quadrature/UCzIP/src/Quadrature.jl:370
  [6] (::Quadrature.var"#quadrature_adjoint#54"{Base.Iterators.Pairs{Symbol, Real, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:reltol, :abstol, :maxiters), Tuple{Float64, Float64, Int64}}}, QuadratureProblem{false, Vector{Float64}, NeuralPDE.var"#_loss#331"{NeuralPDE.var"#175#176"{Nothing, Vector{NeuralPDE.var"#270#272"{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}, UnionAll}}, NeuralPDE.var"#276#277", NeuralPDE.var"#278#287"{QuadratureTraining, NeuralPDE.var"#278#279#288"{Vector{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}}, NeuralPDE.var"#276#277"}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, Vector{Symbol}, Vector{Symbol}}, RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#257"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x69e1735c, 0x34179182, 0x0039054c, 0xbe1ecdb6, 0x2d24e69f)}, NeuralPDE.var"#274#275"}, UnionAll}, Vector{Float64}, Vector{Float64}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, Quadrature.CubatureJLh, Quadrature.ReCallVJP{Quadrature.ZygoteVJP}, Vector{Float64}, Vector{Float64}, Vector{Float64}, Tuple{}})(Δ::Zygote.OneElement{Float64, 1, Tuple{Int64}, Tuple{Base.OneTo{Int64}}})
    @ Quadrature ~/.julia/packages/Quadrature/UCzIP/src/Quadrature.jl:551
  [7] ZBack
    @ ~/.julia/packages/Zygote/Lw5Kf/src/compiler/chainrules.jl:168 [inlined]
  [8] (::Zygote.var"#kw_zpullback#40"{Quadrature.var"#quadrature_adjoint#54"{Base.Iterators.Pairs{Symbol, Real, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:reltol, :abstol, :maxiters), Tuple{Float64, Float64, Int64}}}, QuadratureProblem{false, Vector{Float64}, NeuralPDE.var"#_loss#331"{NeuralPDE.var"#175#176"{Nothing, Vector{NeuralPDE.var"#270#272"{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}, UnionAll}}, NeuralPDE.var"#276#277", NeuralPDE.var"#278#287"{QuadratureTraining, NeuralPDE.var"#278#279#288"{Vector{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}}, NeuralPDE.var"#276#277"}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, Vector{Symbol}, Vector{Symbol}}, RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#257"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x69e1735c, 0x34179182, 0x0039054c, 0xbe1ecdb6, 0x2d24e69f)}, NeuralPDE.var"#274#275"}, UnionAll}, Vector{Float64}, Vector{Float64}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, Quadrature.CubatureJLh, Quadrature.ReCallVJP{Quadrature.ZygoteVJP}, Vector{Float64}, Vector{Float64}, Vector{Float64}, Tuple{}}})(dy::Zygote.OneElement{Float64, 1, Tuple{Int64}, Tuple{Base.OneTo{Int64}}})
    @ Zygote ~/.julia/packages/Zygote/Lw5Kf/src/compiler/chainrules.jl:194
  [9] #203
    @ ~/.julia/packages/Zygote/Lw5Kf/src/lib/lib.jl:203 [inlined]
 [10] (::Zygote.var"#1733#back#205"{Zygote.var"#203#204"{Tuple{NTuple{8, Nothing}, Tuple{}}, Zygote.var"#kw_zpullback#40"{Quadrature.var"#quadrature_adjoint#54"{Base.Iterators.Pairs{Symbol, Real, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:reltol, :abstol, :maxiters), Tuple{Float64, Float64, Int64}}}, QuadratureProblem{false, Vector{Float64}, NeuralPDE.var"#_loss#331"{NeuralPDE.var"#175#176"{Nothing, Vector{NeuralPDE.var"#270#272"{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}, UnionAll}}, NeuralPDE.var"#276#277", NeuralPDE.var"#278#287"{QuadratureTraining, NeuralPDE.var"#278#279#288"{Vector{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}}, NeuralPDE.var"#276#277"}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, Vector{Symbol}, Vector{Symbol}}, RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#257"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x69e1735c, 0x34179182, 0x0039054c, 0xbe1ecdb6, 0x2d24e69f)}, NeuralPDE.var"#274#275"}, UnionAll}, Vector{Float64}, Vector{Float64}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, Quadrature.CubatureJLh, Quadrature.ReCallVJP{Quadrature.ZygoteVJP}, Vector{Float64}, Vector{Float64}, Vector{Float64}, Tuple{}}}}})(Δ::Zygote.OneElement{Float64, 1, Tuple{Int64}, Tuple{Base.OneTo{Int64}}})
    @ Zygote ~/.julia/packages/ZygoteRules/AIbCs/src/adjoint.jl:67
 [11] Pullback
    @ ~/.julia/packages/Quadrature/UCzIP/src/Quadrature.jl:154 [inlined]
 [12] (::typeof((#solve#12)))(Δ::Zygote.OneElement{Float64, 1, Tuple{Int64}, Tuple{Base.OneTo{Int64}}})
    @ Zygote ~/.julia/packages/Zygote/Lw5Kf/src/compiler/interface2.jl:0
 [13] #203
    @ ~/.julia/packages/Zygote/Lw5Kf/src/lib/lib.jl:203 [inlined]
 [14] (::Zygote.var"#1733#back#205"{Zygote.var"#203#204"{Tuple{NTuple{5, Nothing}, Tuple{}}, typeof((#solve#12))}})(Δ::Zygote.OneElement{Float64, 1, Tuple{Int64}, Tuple{Base.OneTo{Int64}}})
    @ Zygote ~/.julia/packages/ZygoteRules/AIbCs/src/adjoint.jl:67
 [15] Pullback
    @ ~/.julia/packages/Quadrature/UCzIP/src/Quadrature.jl:153 [inlined]
 [16] (::typeof((solve##kw)))(Δ::Zygote.OneElement{Float64, 1, Tuple{Int64}, Tuple{Base.OneTo{Int64}}})
    @ Zygote ~/.julia/packages/Zygote/Lw5Kf/src/compiler/interface2.jl:0
 [17] Pullback
    @ ~/.julia/packages/NeuralPDE/HVA0c/src/pinns_pde_solve.jl:978 [inlined]
 [18] (::typeof((λ)))(Δ::Float64)
    @ Zygote ~/.julia/packages/Zygote/Lw5Kf/src/compiler/interface2.jl:0
 [19] Pullback
    @ ~/.julia/packages/NeuralPDE/HVA0c/src/pinns_pde_solve.jl:984 [inlined]
 [20] (::typeof((λ)))(Δ::Float64)
    @ Zygote ~/.julia/packages/Zygote/Lw5Kf/src/compiler/interface2.jl:0
 [21] Pullback
    @ ~/.julia/packages/NeuralPDE/HVA0c/src/pinns_pde_solve.jl:1164 [inlined]
 [22] (::typeof((λ)))(Δ::Float64)
    @ Zygote ~/.julia/packages/Zygote/Lw5Kf/src/compiler/interface2.jl:0
 [23] #547
    @ ~/.julia/packages/Zygote/Lw5Kf/src/lib/array.jl:202 [inlined]
 [24] (::Base.var"#4#5"{Zygote.var"#547#552"})(a::Tuple{Tuple{Float64, typeof(∂(λ))}, Float64})
    @ Base ./generator.jl:36
 [25] iterate
    @ ./generator.jl:47 [inlined]
 [26] collect(itr::Base.Generator{Base.Iterators.Zip{Tuple{Vector{Tuple{Float64, Zygote.Pullback}}, Vector{Float64}}}, Base.var"#4#5"{Zygote.var"#547#552"}})
    @ Base ./array.jl:681
 [27] map
    @ ./abstractarray.jl:2383 [inlined]
 [28] (::Zygote.var"#544#549"{NeuralPDE.var"#353#369"{Vector{Float64}}, 1, Tuple{Vector{NeuralPDE.var"#328#332"{loss_function, Vector{Float64}, Vector{Float64}, NeuralPDE.var"#327#330"{UnionAll, QuadratureTraining}, Float64} where loss_function}}, Vector{Tuple{Float64, Zygote.Pullback}}})(Δ::FillArrays.Fill{Float64, 1, Tuple{Base.OneTo{Int64}}})
    @ Zygote ~/.julia/packages/Zygote/Lw5Kf/src/lib/array.jl:202
 [29] (::Zygote.var"#2575#back#553"{Zygote.var"#544#549"{NeuralPDE.var"#353#369"{Vector{Float64}}, 1, Tuple{Vector{NeuralPDE.var"#328#332"{loss_function, Vector{Float64}, Vector{Float64}, NeuralPDE.var"#327#330"{UnionAll, QuadratureTraining}, Float64} where loss_function}}, Vector{Tuple{Float64, Zygote.Pullback}}}})(Δ::FillArrays.Fill{Float64, 1, Tuple{Base.OneTo{Int64}}})
    @ Zygote ~/.julia/packages/ZygoteRules/AIbCs/src/adjoint.jl:67
 [30] Pullback
    @ ~/.julia/packages/NeuralPDE/HVA0c/src/pinns_pde_solve.jl:1164 [inlined]
 [31] (::typeof((λ)))(Δ::Float64)
    @ Zygote ~/.julia/packages/Zygote/Lw5Kf/src/compiler/interface2.jl:0
 [32] Pullback
    @ ~/.julia/packages/NeuralPDE/HVA0c/src/pinns_pde_solve.jl:1165 [inlined]
 [33] (::typeof((λ)))(Δ::Float64)
    @ Zygote ~/.julia/packages/Zygote/Lw5Kf/src/compiler/interface2.jl:0
 [34] Pullback
    @ ~/.julia/packages/NeuralPDE/HVA0c/src/pinns_pde_solve.jl:1169 [inlined]
 [35] (::typeof((λ)))(Δ::Float64)
    @ Zygote ~/.julia/packages/Zygote/Lw5Kf/src/compiler/interface2.jl:0
 [36] #203
    @ ~/.julia/packages/Zygote/Lw5Kf/src/lib/lib.jl:203 [inlined]
 [37] #1733#back
    @ ~/.julia/packages/ZygoteRules/AIbCs/src/adjoint.jl:67 [inlined]
 [38] Pullback
    @ ~/.julia/packages/SciMLBase/NwvCY/src/problems/basic_problems.jl:107 [inlined]
 [39] (::typeof((λ)))(Δ::Float64)
    @ Zygote ~/.julia/packages/Zygote/Lw5Kf/src/compiler/interface2.jl:0
 [40] #203
    @ ~/.julia/packages/Zygote/Lw5Kf/src/lib/lib.jl:203 [inlined]
 [41] #1733#back
    @ ~/.julia/packages/ZygoteRules/AIbCs/src/adjoint.jl:67 [inlined]
 [42] Pullback
    @ ~/.julia/packages/GalacticOptim/bEh06/src/function/zygote.jl:6 [inlined]
 [43] (::typeof((λ)))(Δ::Float64)
    @ Zygote ~/.julia/packages/Zygote/Lw5Kf/src/compiler/interface2.jl:0
 [44] (::Zygote.var"#203#204"{Tuple{Tuple{Nothing}, Tuple{}}, typeof((λ))})(Δ::Float64)
    @ Zygote ~/.julia/packages/Zygote/Lw5Kf/src/lib/lib.jl:203
 [45] #1733#back
    @ ~/.julia/packages/ZygoteRules/AIbCs/src/adjoint.jl:67 [inlined]
 [46] Pullback
    @ ~/.julia/packages/GalacticOptim/bEh06/src/function/zygote.jl:8 [inlined]
 [47] (::typeof((λ)))(Δ::Float64)
    @ Zygote ~/.julia/packages/Zygote/Lw5Kf/src/compiler/interface2.jl:0
 [48] (::Zygote.var"#50#51"{typeof((λ))})(Δ::Float64)
    @ Zygote ~/.julia/packages/Zygote/Lw5Kf/src/compiler/interface.jl:41
 [49] gradient(f::Function, args::Vector{Float64})
    @ Zygote ~/.julia/packages/Zygote/Lw5Kf/src/compiler/interface.jl:76
 [50] (::GalacticOptim.var"#228#238"{GalacticOptim.var"#227#237"{OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#371"{NeuralPDE.var"#354#370"{NeuralPDE.var"#352#368", NeuralPDE.var"#350#366"}, Vector{NeuralPDE.var"#270#272"{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}, UnionAll}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, SciMLBase.NullParameters}})(::Vector{Float64}, ::Vector{Float64})
    @ GalacticOptim ~/.julia/packages/GalacticOptim/bEh06/src/function/zygote.jl:8
 [51] (::GalacticOptim.var"#104#112"{GalacticOptim.var"#103#111"{OptimizationProblem{true, OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#371"{NeuralPDE.var"#354#370"{NeuralPDE.var"#352#368", NeuralPDE.var"#350#366"}, Vector{NeuralPDE.var"#270#272"{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}, UnionAll}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Vector{Float64}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, OptimizationFunction{false, GalacticOptim.AutoZygote, OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#371"{NeuralPDE.var"#354#370"{NeuralPDE.var"#352#368", NeuralPDE.var"#350#366"}, Vector{NeuralPDE.var"#270#272"{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}, UnionAll}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, GalacticOptim.var"#228#238"{GalacticOptim.var"#227#237"{OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#371"{NeuralPDE.var"#354#370"{NeuralPDE.var"#352#368", NeuralPDE.var"#350#366"}, Vector{NeuralPDE.var"#270#272"{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}, UnionAll}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, SciMLBase.NullParameters}}, GalacticOptim.var"#231#241"{GalacticOptim.var"#227#237"{OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#371"{NeuralPDE.var"#354#370"{NeuralPDE.var"#352#368", NeuralPDE.var"#350#366"}, Vector{NeuralPDE.var"#270#272"{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}, UnionAll}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, SciMLBase.NullParameters}}, GalacticOptim.var"#236#246", Nothing, Nothing, Nothing}}, OptimizationFunction{false, GalacticOptim.AutoZygote, OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#371"{NeuralPDE.var"#354#370"{NeuralPDE.var"#352#368", NeuralPDE.var"#350#366"}, Vector{NeuralPDE.var"#270#272"{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}, UnionAll}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, GalacticOptim.var"#228#238"{GalacticOptim.var"#227#237"{OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#371"{NeuralPDE.var"#354#370"{NeuralPDE.var"#352#368", NeuralPDE.var"#350#366"}, Vector{NeuralPDE.var"#270#272"{FastChain{Tuple{FastDense{typeof(σ), 
DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}, UnionAll}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, SciMLBase.NullParameters}}, GalacticOptim.var"#231#241"{GalacticOptim.var"#227#237"{OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#371"{NeuralPDE.var"#354#370"{NeuralPDE.var"#352#368", NeuralPDE.var"#350#366"}, Vector{NeuralPDE.var"#270#272"{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}, UnionAll}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, SciMLBase.NullParameters}}, GalacticOptim.var"#236#246", Nothing, Nothing, Nothing}})(G::Vector{Float64}, θ::Vector{Float64})
    @ GalacticOptim ~/.julia/packages/GalacticOptim/bEh06/src/solve/optim.jl:44
 [52] value_gradient!!(obj::TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}, x::Vector{Float64})
    @ NLSolversBase ~/.julia/packages/NLSolversBase/GRQ1x/src/interface.jl:82
 [53] initial_state(method::BFGS{LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Nothing, Nothing, Flat}, options::Optim.Options{Float64, GalacticOptim.var"#_cb#110"{var"#61#62", BFGS{LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Nothing, Nothing, Flat}, Base.Iterators.Cycle{Tuple{GalacticOptim.NullData}}}}, d::TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}, initial_x::Vector{Float64})
    @ Optim ~/.julia/packages/Optim/3K7JI/src/multivariate/solvers/first_order/bfgs.jl:94
 [54] optimize(d::TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}, initial_x::Vector{Float64}, method::BFGS{LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Nothing, Nothing, Flat}, options::Optim.Options{Float64, GalacticOptim.var"#_cb#110"{var"#61#62", BFGS{LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Nothing, Nothing, Flat}, Base.Iterators.Cycle{Tuple{GalacticOptim.NullData}}}})
    @ Optim ~/.julia/packages/Optim/3K7JI/src/multivariate/optimize/optimize.jl:35
 [55] __solve(prob::OptimizationProblem{true, OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#371"{NeuralPDE.var"#354#370"{NeuralPDE.var"#352#368", NeuralPDE.var"#350#366"}, Vector{NeuralPDE.var"#270#272"{FastChain{Tuple{FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(σ), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}, FastDense{typeof(identity), DiffEqFlux.var"#initial_params#82"{Vector{Float32}}}}}, UnionAll}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Vector{Float64}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, opt::BFGS{LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Nothing, Nothing, Flat}, data::Base.Iterators.Cycle{Tuple{GalacticOptim.NullData}}; maxiters::Int64, cb::Function, progress::Bool, kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ GalacticOptim ~/.julia/packages/GalacticOptim/bEh06/src/solve/optim.jl:55
 [56] #solve#476
    @ ~/.julia/packages/SciMLBase/NwvCY/src/solve.jl:3 [inlined]
in expression starting at /Users/gabrielbirnbaum/.julia/dev/Plasma/train/1D_electrostatic.jl:71

@ChrisRackauckas (Member)

Oh, maybe the change to ChainRules rrules is the problem here. They have a one-argument difference, so you might need to add a NoTangent at the front or something.

https://github.com/SciML/Quadrature.jl/blob/master/src/Quadrature.jl#L557-L559

I would double check that the sizes here are correct.
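
As a point of reference, a ChainRules-style rrule's pullback returns one tangent per argument plus a leading NoTangent() for the function object itself; that extra leading slot is the one-argument difference relative to old Zygote-style adjoints. A minimal sketch with a made-up toy function (not Quadrature's actual rrule):

using ChainRulesCore

mynorm(x, y) = sqrt(x^2 + y^2)   # hypothetical toy function, for illustration only

function ChainRulesCore.rrule(::typeof(mynorm), x, y)
    n = mynorm(x, y)
    # one tangent per input, plus the leading NoTangent() for the function itself
    mynorm_pullback(Δ) = (NoTangent(), Δ * x / n, Δ * y / n)
    return n, mynorm_pullback
end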

@killah-t-cell (Contributor Author)

What change are you referring to? Our latest Quadrature PR didn't touch the rrule. Also, they seem to have the same number of arguments (7, plus the ntuple of the length of args...)?

FWIW the error seems to go through the rrule in Quadrature but on that line.

[Screenshot 2021-10-12 at 17:10:02]

@ChrisRackauckas (Member)

What is that tuple of ints then?

@killah-t-cell (Contributor Author)

I'm unsure as well. Will try to investigate.

@hjli528 commented Oct 12, 2021

I also saw similar errors after updating my packages, though the error is a bit different from the one in this ticket. Mine is:

ERROR: MethodError: no method matching *(::Tuple{Int64, Int64})
Closest candidates are:
  *(::Any, ::VectorizationBase.CartesianVIndex) at C:\Users\hli163\.julia\packages\VectorizationBase\wxfc7\src\cartesianvindex.jl:59
  *(::Any, ::ChainRulesCore.Tangent) at C:\Users\hli163\.julia\packages\ChainRulesCore\1L9My\src\tangent_arithmetic.jl:151
  *(::Any, ::ChainRulesCore.AbstractThunk) at C:\Users\hli163\.julia\packages\ChainRulesCore\1L9My\src\tangent_arithmetic.jl:125

These show up after the following line of code

res = GalacticOptim.solve(prob, ADAM(0.1); cb = cb, maxiters=250, progress = true)

The code worked before I updated my project packages.

@killah-t-cell (Contributor Author) commented Oct 13, 2021

This may not narrow it down too much, but the issue doesn't occur if I revert to Zygote#v0.6.17.

In fact, this starts to error in v0.6.19: FluxML/Zygote.jl@v0.6.18...v0.6.19
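
In the meantime, a possible workaround is to pin Zygote to a known-good release (a sketch, assuming a standard Pkg environment):

import Pkg
Pkg.add(name = "Zygote", version = "0.6.17")  # a release from before the change noted above
Pkg.pin("Zygote")                             # keep the resolver from upgrading it again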

@ChrisRackauckas (Member)

That means it could be a ChainRules 1.0 issue

@killah-t-cell (Contributor Author) commented Oct 13, 2021

Yeah, that makes sense. The error points to the tangent_arithmetic.jl file in ChainRulesCore (where the multiplication rules for tangent types are defined). Maybe something is arriving there as a Tuple{Int64, Int64} when it shouldn't be, or maybe a dispatch is missing.

A general question: how does one go about debugging issues like this? Zygote and ChainRules code flows are slightly hard to follow. Are there any best practices?
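
One pattern that often helps is to grab the pullback of a suspect sub-expression directly and seed it by hand, so the stacktrace only covers that piece. A sketch, reusing prob and initθ from the reproduction above:

using Zygote

# Take the pullback of just the discretized loss and seed it manually; if this
# call reproduces the MethodError, the problem is inside this pullback.
loss, back = Zygote.pullback(prob.f.f.loss_function, initθ)
back(1.0)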

@KirillZubov (Member)

The problem seems to be with the boundary conditions (expressions like u(x,0)), because the following code works:

@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2
eq  = Dxx(u(x,y)) + Dyy(u(x,y)) ~ 0.

# Initial and boundary conditions
bcs = [u(x,y) ~ u(x,y)]
# Space and time domains
domains = [x ∈ Interval(0.0,1.0),
           y ∈ Interval(0.0,1.0)]
quadrature_strategy = NeuralPDE.QuadratureTraining(reltol=1e-2,abstol=1e-2,
                                                   maxiters =50, batch=100)
inner = 8
af = Flux.tanh
chain1 = FastChain(FastDense(2,inner,af),
                   FastDense(inner,inner,af),
                   FastDense(inner,1))
initθ = Float64.(DiffEqFlux.initial_params(chain1))
discretization = NeuralPDE.PhysicsInformedNN(chain1,
                                             quadrature_strategy;
                                             init_params = initθ)
@named pde_system = PDESystem(eq,bcs,domains,[x,y],[u(x,y)])
prob = NeuralPDE.discretize(pde_system,discretization)
Zygote.gradient(prob.f.f.loss_function,initθ)
sym_prob = NeuralPDE.symbolic_discretize(pde_system,discretization)
res = GalacticOptim.solve(prob, ADAM(0.1); cb=cb, maxiters=20)

I think Zygote breaks down when processing this line, which is generated when u(x,0) is used:

(x, y) = (cord[[1], :], fill(0, size(cord[[1], :])))

specifically this part:

fill(0, size(cord[[1], :]))
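
A stripped-down sketch of that pattern outside NeuralPDE (only Zygote needed; depending on the Zygote version it may or may not reproduce the MethodError):

using Zygote

cord = rand(2, 5)
# mimic the generated boundary-condition code: slice out the x row and build a
# matching matrix of zeros with fill
Zygote.gradient(cord) do c
    x = c[[1], :]
    y = fill(0, size(x))
    sum(x .+ y)
end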

@killah-t-cell (Contributor Author) commented Oct 15, 2021

Any clue why Zygote isn't handling this anymore? I see Zygote v0.6.19 (the version where this started to fail) deleted an adjoint for fill (see line 109 here: FluxML/Zygote.jl@v0.6.18...v0.6.19)

So maybe a fix would be to add it back?
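
For reference, the removed rule was roughly of this shape (a sketch from memory of the pre-0.6.19 definition, not a tested patch):

using Zygote

# roughly the old adjoint for fill on real scalars: the cotangent of the fill
# value is the sum of the incoming cotangent, and the dims get no gradient
Zygote.@adjoint function fill(x::Real, dims...)
    fill(x, dims...), Δ -> (sum(Δ), map(_ -> nothing, dims)...)
end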

@DhairyaLGandhi (Member)

Xref JuliaDiff/ChainRules.jl#537 ?

@mcabbott

The linked changes in Zygote v0.6.19 widen the signature from fill(::Real, ...) to fill(::Any, ...), and sum([(1,2), (3,4)]) is an error. It looks like imperfect translation of Zygote to ChainRules types may have caused some problems for a while. This example doesn't have exactly the same error, and was broken on some earlier versions too... and is now fixed:

julia> using Zygote

julia> gradient(x -> fill(x, 3)[1][1], (1,2))
# Zygote v0.6.0
# Zygote v0.6.19
ERROR: MethodError: no method matching +(::Tuple{Int64, Nothing}, ::Nothing)
# Zygote v0.6.28
((1.0, nothing),)

Can you edit the example above so that it loads every package required?

@killah-t-cell (Contributor Author)

@mcabbott I edited the example above :)

@mcabbott commented Oct 16, 2021

Thanks. Can reproduce, but don't get it as a LoadError, maybe that's irrelevant... Julia 1.7:

WARNING: both Flux and Iterators export "flatten"; uses of it in module DiffEqFlux must be qualified
WARNING: both Flux and Distributions export "params"; uses of it in module DiffEqFlux must be qualified

julia> res = GalacticOptim.solve(prob, BFGS();  maxiters=2000) #
ERROR: MethodError: no method matching *(::Tuple{Int64, Int64})

This does not call the rrule for fill at all.

It's also not fixed by FluxML/Zygote.jl#1103 (which fixes more CRC type leaks) right now.

The obvious place to get Tuple{Int64, Int64} is the size of a matrix, and perhaps something was meant to splat that? If I supply a method, I get a different error from reshape, which might be a clue? Do you know what is of size (1, 15) in this problem?

julia> Base.:*(s::Tuple{Int,Int}) = (@show s; prod(s));

julia> res = GalacticOptim.solve(prob, BFGS();  maxiters=2000) #  LoadError: MethodError: no method matching *(::Tuple{Int64, Int64})
s = (1, 15)
ERROR: MethodError: no method matching reshape(::Vector{Int64}, ::Tuple{Tuple{Int64, Int64}})
Closest candidates are:
  reshape(::ChainRulesCore.AbstractZero, ::Any...) at /Users/me/.julia/dev/ChainRulesCore/src/tangent_types/abstract_zero.jl:33
  reshape(::ChainRulesCore.AbstractThunk, ::Any...) at /Users/me/.julia/dev/ChainRulesCore/src/tangent_types/thunks.jl:49
  reshape(::AbstractArray{T, N}, ::Val{N}) where {T, N} at reshapedarray.jl:139
  ...
Stacktrace:
  [1] (::Cubature.var"#17#18"{Bool, Bool, Int64, Float64, Float64, Int64, Int32, Ptr{Nothing}, Cubature.IntegrandData{Quadrature.var"#88#100"{QuadratureProblem{false, Vector{Float64}, Quadrature.var"#48#59"{QuadratureProblem{false, Vector{Float64}, NeuralPDE.var"#_loss#333"{NeuralPDE.var"#177#178"{Nothing, NeuralPDE.var"#273#275"{UnionAll, Flux.var"#64#66"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, NeuralPDE.var"#278#279", NeuralPDE.var"#280#289"{QuadratureTraining, NeuralPDE.var"#280#281#290"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, NeuralPDE.var"#278#279"}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, Vector{Symbol}, Vector{Symbol}}, RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#272"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x007a646a, 0xbcdf534e, 0xe5dba098, 0xb50727ee, 0x4ee0a07e)}, NeuralPDE.var"#276#277"}, UnionAll}, Vector{Float64}, Vector{Float64}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}}, Vector{Float64}, Vector{Float64}, Base.Pairs{Symbol, Real, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:reltol, :abstol, :maxiters), Tuple{Float64, Float64, Int64}}}}, Vector{Float64}}}, Vector{Float64}, Vector{Float64}, Vector{Float64}, Vector{Float64}, Int64})()
    @ Cubature ~/.julia/packages/Cubature/5zwuu/src/Cubature.jl:215
  [2] disable_sigint(f::Cubature.var"#17#18"{Bool, Bool, Int64, Float64, Float64, Int64, Int32, Ptr{Nothing}, Cubature.IntegrandData{Quadrature.var"#88#100"{QuadratureProblem{false, Vector{Float64}, Quadrature.var"#48#59"{QuadratureProblem{false, Vector{Float64}, NeuralPDE.var"#_loss#333"{NeuralPDE.var"#177#178"{Nothing, NeuralPDE.var"#273#275"{UnionAll, Flux.var"#64#66"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, NeuralPDE.var"#278#279", NeuralPDE.var"#280#289"{QuadratureTraining, NeuralPDE.var"#280#281#290"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, NeuralPDE.var"#278#279"}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, Vector{Symbol}, Vector{Symbol}}, RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#272"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x007a646a, 0xbcdf534e, 0xe5dba098, 0xb50727ee, 0x4ee0a07e)}, NeuralPDE.var"#276#277"}, UnionAll}, Vector{Float64}, Vector{Float64}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}}, Vector{Float64}, Vector{Float64}, Base.Pairs{Symbol, Real, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:reltol, :abstol, :maxiters), Tuple{Float64, Float64, Int64}}}}, Vector{Float64}}}, Vector{Float64}, Vector{Float64}, Vector{Float64}, Vector{Float64}, Int64})
    @ Base ./c.jl:458
  [3] cubature(xscalar::Bool, fscalar::Bool, vectorized::Bool, padaptive::Bool, fdim::Int64, f::Quadrature.var"#88#100"{QuadratureProblem{false, Vector{Float64}, Quadrature.var"#48#59"{QuadratureProblem{false, Vector{Float64}, NeuralPDE.var"#_loss#333"{NeuralPDE.var"#177#178"{Nothing, NeuralPDE.var"#273#275"{UnionAll, Flux.var"#64#66"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, NeuralPDE.var"#278#279", NeuralPDE.var"#280#289"{QuadratureTraining, NeuralPDE.var"#280#281#290"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, NeuralPDE.var"#278#279"}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, Vector{Symbol}, Vector{Symbol}}, RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#272"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x007a646a, 0xbcdf534e, 0xe5dba098, 0xb50727ee, 0x4ee0a07e)}, NeuralPDE.var"#276#277"}, UnionAll}, Vector{Float64}, Vector{Float64}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}}, Vector{Float64}, Vector{Float64}, Base.Pairs{Symbol, Real, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:reltol, :abstol, :maxiters), Tuple{Float64, Float64, Int64}}}}, Vector{Float64}}, xmin_::Vector{Float64}, xmax_::Vector{Float64}, reqRelError::Float64, reqAbsError::Float64, maxEval::Int64, error_norm::Int32)
    @ Cubature ~/.julia/packages/Cubature/5zwuu/src/Cubature.jl:169
  [4] hcubature_v(fdim::Int64, f::Function, xmin::Vector{Float64}, xmax::Vector{Float64}; reltol::Float64, abstol::Float64, maxevals::Int64, error_norm::Int32)
    @ Cubature ~/.julia/packages/Cubature/5zwuu/src/Cubature.jl:227
  [5] __solvebp_call(::QuadratureProblem{false, Vector{Float64}, Quadrature.var"#48#59"{QuadratureProblem{false, Vector{Float64}, NeuralPDE.var"#_loss#333"{NeuralPDE.var"#177#178"{Nothing, NeuralPDE.var"#273#275"{UnionAll, Flux.var"#64#66"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, NeuralPDE.var"#278#279", NeuralPDE.var"#280#289"{QuadratureTraining, NeuralPDE.var"#280#281#290"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, NeuralPDE.var"#278#279"}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, Vector{Symbol}, Vector{Symbol}}, RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#272"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x007a646a, 0xbcdf534e, 0xe5dba098, 0xb50727ee, 0x4ee0a07e)}, NeuralPDE.var"#276#277"}, UnionAll}, Vector{Float64}, Vector{Float64}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}}, Vector{Float64}, Vector{Float64}, Base.Pairs{Symbol, Real, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:reltol, :abstol, :maxiters), Tuple{Float64, Float64, Int64}}}}, ::Quadrature.CubatureJLh, ::Quadrature.ReCallVJP{Quadrature.ZygoteVJP}, ::Vector{Float64}, ::Vector{Float64}, ::Vector{Float64}; reltol::Float64, abstol::Float64, maxiters::Int64, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ Quadrature ~/.julia/packages/Quadrature/UCzIP/src/Quadrature.jl:370
  [6] (::Quadrature.var"#quadrature_adjoint#54"{Base.Pairs{Symbol, Real, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:reltol, :abstol, :maxiters), Tuple{Float64, Float64, Int64}}}, QuadratureProblem{false, Vector{Float64}, NeuralPDE.var"#_loss#333"{NeuralPDE.var"#177#178"{Nothing, NeuralPDE.var"#273#275"{UnionAll, Flux.var"#64#66"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, NeuralPDE.var"#278#279", NeuralPDE.var"#280#289"{QuadratureTraining, NeuralPDE.var"#280#281#290"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, NeuralPDE.var"#278#279"}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, Vector{Symbol}, Vector{Symbol}}, RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#272"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x007a646a, 0xbcdf534e, 0xe5dba098, 0xb50727ee, 0x4ee0a07e)}, NeuralPDE.var"#276#277"}, UnionAll}, Vector{Float64}, Vector{Float64}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, Quadrature.CubatureJLh, Quadrature.ReCallVJP{Quadrature.ZygoteVJP}, Vector{Float64}, Vector{Float64}, Vector{Float64}, Tuple{}})(Δ::Vector{Float64})
    @ Quadrature ~/.julia/packages/Quadrature/UCzIP/src/Quadrature.jl:551
  [7] ZBack
    @ ~/.julia/dev/Zygote/src/compiler/chainrules.jl:173 [inlined]
  [8] (::Zygote.var"#kw_zpullback#42"{Quadrature.var"#quadrature_adjoint#54"{Base.Pairs{Symbol, Real, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:reltol, :abstol, :maxiters), Tuple{Float64, Float64, Int64}}}, QuadratureProblem{false, Vector{Float64}, NeuralPDE.var"#_loss#333"{NeuralPDE.var"#177#178"{Nothing, NeuralPDE.var"#273#275"{UnionAll, Flux.var"#64#66"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, NeuralPDE.var"#278#279", NeuralPDE.var"#280#289"{QuadratureTraining, NeuralPDE.var"#280#281#290"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, NeuralPDE.var"#278#279"}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, Vector{Symbol}, Vector{Symbol}}, RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#272"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x007a646a, 0xbcdf534e, 0xe5dba098, 0xb50727ee, 0x4ee0a07e)}, NeuralPDE.var"#276#277"}, UnionAll}, Vector{Float64}, Vector{Float64}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, Quadrature.CubatureJLh, Quadrature.ReCallVJP{Quadrature.ZygoteVJP}, Vector{Float64}, Vector{Float64}, Vector{Float64}, Tuple{}}})(dy::Vector{Float64})
    @ Zygote ~/.julia/dev/Zygote/src/compiler/chainrules.jl:199
  [9] #209
    @ ~/.julia/dev/Zygote/src/lib/lib.jl:203 [inlined]
 [10] (::Zygote.var"#1740#back#211"{Zygote.var"#209#210"{Tuple{NTuple{8, Nothing}, Tuple{}}, Zygote.var"#kw_zpullback#42"{Quadrature.var"#quadrature_adjoint#54"{Base.Pairs{Symbol, Real, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:reltol, :abstol, :maxiters), Tuple{Float64, Float64, Int64}}}, QuadratureProblem{false, Vector{Float64}, NeuralPDE.var"#_loss#333"{NeuralPDE.var"#177#178"{Nothing, NeuralPDE.var"#273#275"{UnionAll, Flux.var"#64#66"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, NeuralPDE.var"#278#279", NeuralPDE.var"#280#289"{QuadratureTraining, NeuralPDE.var"#280#281#290"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, NeuralPDE.var"#278#279"}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, Vector{Symbol}, Vector{Symbol}}, RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#272"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x007a646a, 0xbcdf534e, 0xe5dba098, 0xb50727ee, 0x4ee0a07e)}, NeuralPDE.var"#276#277"}, UnionAll}, Vector{Float64}, Vector{Float64}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, Quadrature.CubatureJLh, Quadrature.ReCallVJP{Quadrature.ZygoteVJP}, Vector{Float64}, Vector{Float64}, Vector{Float64}, Tuple{}}}}})(Δ::Vector{Float64})
    @ Zygote ~/.julia/packages/ZygoteRules/AIbCs/src/adjoint.jl:67
 [11] Pullback
    @ ~/.julia/packages/Quadrature/UCzIP/src/Quadrature.jl:154 [inlined]
 [12] (::typeof(∂(#solve#12)))(Δ::Vector{Float64})
    @ Zygote ~/.julia/dev/Zygote/src/compiler/interface2.jl:0
 [13] #209
    @ ~/.julia/dev/Zygote/src/lib/lib.jl:203 [inlined]
 [14] (::Zygote.var"#1740#back#211"{Zygote.var"#209#210"{Tuple{NTuple{5, Nothing}, Tuple{}}, typeof(∂(#solve#12))}})(Δ::Vector{Float64})
    @ Zygote ~/.julia/packages/ZygoteRules/AIbCs/src/adjoint.jl:67
 [15] Pullback
    @ ~/.julia/packages/Quadrature/UCzIP/src/Quadrature.jl:153 [inlined]
 [16] (::typeof(∂(solve##kw)))(Δ::Vector{Float64})
    @ Zygote ~/.julia/dev/Zygote/src/compiler/interface2.jl:0
 [17] Pullback
    @ ~/.julia/packages/NeuralPDE/HVA0c/src/pinns_pde_solve.jl:978 [inlined]
 [18] (::typeof(∂(λ)))(Δ::Float64)
    @ Zygote ~/.julia/dev/Zygote/src/compiler/interface2.jl:0
 [19] Pullback
    @ ~/.julia/packages/NeuralPDE/HVA0c/src/pinns_pde_solve.jl:984 [inlined]
 [20] (::typeof(∂(λ)))(Δ::Float64)
    @ Zygote ~/.julia/dev/Zygote/src/compiler/interface2.jl:0
 [21] Pullback
    @ ~/.julia/packages/NeuralPDE/HVA0c/src/pinns_pde_solve.jl:1164 [inlined]
 [22] (::typeof(∂(λ)))(Δ::Float64)
    @ Zygote ~/.julia/dev/Zygote/src/compiler/interface2.jl:0
 [23] #559
    @ ~/.julia/dev/Zygote/src/lib/array.jl:203 [inlined]
 [24] (::Base.var"#4#5"{Zygote.var"#559#564"})(a::Tuple{Tuple{Float64, typeof(∂(λ))}, Float64})
    @ Base ./generator.jl:36
 [25] iterate
    @ ./generator.jl:47 [inlined]
 [26] collect(itr::Base.Generator{Base.Iterators.Zip{Tuple{Vector{Tuple{Float64, Zygote.Pullback}}, Vector{Float64}}}, Base.var"#4#5"{Zygote.var"#559#564"}})
    @ Base ./array.jl:710
 [27] map
    @ ./abstractarray.jl:2860 [inlined]
 [28] (::Zygote.var"#556#561"{NeuralPDE.var"#355#371"{Vector{Float64}}, 1, Tuple{Vector{NeuralPDE.var"#330#334"{loss_function, Vector{Float64}, Vector{Float64}, NeuralPDE.var"#329#332"{UnionAll, QuadratureTraining}, Float64} where loss_function}}, Vector{Tuple{Float64, Zygote.Pullback}}})(Δ::FillArrays.Fill{Float64, 1, Tuple{Base.OneTo{Int64}}})
    @ Zygote ~/.julia/dev/Zygote/src/lib/array.jl:203
 [29] (::Zygote.var"#2583#back#565"{Zygote.var"#556#561"{NeuralPDE.var"#355#371"{Vector{Float64}}, 1, Tuple{Vector{NeuralPDE.var"#330#334"{loss_function, Vector{Float64}, Vector{Float64}, NeuralPDE.var"#329#332"{UnionAll, QuadratureTraining}, Float64} where loss_function}}, Vector{Tuple{Float64, Zygote.Pullback}}}})(Δ::FillArrays.Fill{Float64, 1, Tuple{Base.OneTo{Int64}}})
    @ Zygote ~/.julia/packages/ZygoteRules/AIbCs/src/adjoint.jl:67
 [30] Pullback
    @ ~/.julia/packages/NeuralPDE/HVA0c/src/pinns_pde_solve.jl:1164 [inlined]
 [31] (::typeof(∂(λ)))(Δ::Float64)
    @ Zygote ~/.julia/dev/Zygote/src/compiler/interface2.jl:0
 [32] Pullback
    @ ~/.julia/packages/NeuralPDE/HVA0c/src/pinns_pde_solve.jl:1165 [inlined]
 [33] (::typeof(∂(λ)))(Δ::Float64)
    @ Zygote ~/.julia/dev/Zygote/src/compiler/interface2.jl:0
 [34] Pullback
    @ ~/.julia/packages/NeuralPDE/HVA0c/src/pinns_pde_solve.jl:1169 [inlined]
 [35] (::typeof(∂(λ)))(Δ::Float64)
    @ Zygote ~/.julia/dev/Zygote/src/compiler/interface2.jl:0
 [36] #209
    @ ~/.julia/dev/Zygote/src/lib/lib.jl:203 [inlined]
 [37] #1740#back
    @ ~/.julia/packages/ZygoteRules/AIbCs/src/adjoint.jl:67 [inlined]
 [38] Pullback
    @ ~/.julia/packages/SciMLBase/h4Gxc/src/problems/basic_problems.jl:107 [inlined]
 [39] (::typeof(∂(λ)))(Δ::Float64)
    @ Zygote ~/.julia/dev/Zygote/src/compiler/interface2.jl:0
 [40] (::Zygote.var"#209#210"{Tuple{Tuple{Nothing, Nothing}, Tuple{}}, typeof(∂(λ))})(Δ::Float64)
    @ Zygote ~/.julia/dev/Zygote/src/lib/lib.jl:203
 [41] #1740#back
    @ ~/.julia/packages/ZygoteRules/AIbCs/src/adjoint.jl:67 [inlined]
 [42] Pullback
    @ ~/.julia/packages/GalacticOptim/Pdieo/src/function/zygote.jl:6 [inlined]
 [43] (::typeof(∂(λ)))(Δ::Float64)
    @ Zygote ~/.julia/dev/Zygote/src/compiler/interface2.jl:0
 [44] (::Zygote.var"#209#210"{Tuple{Tuple{Nothing}, Tuple{}}, typeof(∂(λ))})(Δ::Float64)
    @ Zygote ~/.julia/dev/Zygote/src/lib/lib.jl:203
 [45] #1740#back
    @ ~/.julia/packages/ZygoteRules/AIbCs/src/adjoint.jl:67 [inlined]
 [46] Pullback
    @ ~/.julia/packages/GalacticOptim/Pdieo/src/function/zygote.jl:8 [inlined]
 [47] (::typeof(∂(λ)))(Δ::Float64)
    @ Zygote ~/.julia/dev/Zygote/src/compiler/interface2.jl:0
 [48] (::Zygote.var"#52#53"{typeof(∂(λ))})(Δ::Float64)
    @ Zygote ~/.julia/dev/Zygote/src/compiler/interface.jl:41
 [49] gradient(f::Function, args::Vector{Float64})
    @ Zygote ~/.julia/dev/Zygote/src/compiler/interface.jl:76
 [50] (::GalacticOptim.var"#260#270"{GalacticOptim.var"#259#269"{OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#373"{NeuralPDE.var"#356#372"{NeuralPDE.var"#354#370", NeuralPDE.var"#352#368"}, NeuralPDE.var"#273#275"{UnionAll, Flux.var"#64#66"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, SciMLBase.NullParameters}})(::Vector{Float64}, ::Vector{Float64})
    @ GalacticOptim ~/.julia/packages/GalacticOptim/Pdieo/src/function/zygote.jl:8
 [51] (::GalacticOptim.var"#136#144"{OptimizationProblem{true, OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#373"{NeuralPDE.var"#356#372"{NeuralPDE.var"#354#370", NeuralPDE.var"#352#368"}, NeuralPDE.var"#273#275"{UnionAll, Flux.var"#64#66"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Vector{Float64}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, GalacticOptim.var"#135#143"{OptimizationProblem{true, OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#373"{NeuralPDE.var"#356#372"{NeuralPDE.var"#354#370", NeuralPDE.var"#352#368"}, NeuralPDE.var"#273#275"{UnionAll, Flux.var"#64#66"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Vector{Float64}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, OptimizationFunction{false, GalacticOptim.AutoZygote, OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#373"{NeuralPDE.var"#356#372"{NeuralPDE.var"#354#370", NeuralPDE.var"#352#368"}, NeuralPDE.var"#273#275"{UnionAll, Flux.var"#64#66"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, GalacticOptim.var"#260#270"{GalacticOptim.var"#259#269"{OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#373"{NeuralPDE.var"#356#372"{NeuralPDE.var"#354#370", NeuralPDE.var"#352#368"}, NeuralPDE.var"#273#275"{UnionAll, Flux.var"#64#66"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, SciMLBase.NullParameters}}, GalacticOptim.var"#263#273"{GalacticOptim.var"#259#269"{OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#373"{NeuralPDE.var"#356#372"{NeuralPDE.var"#354#370", NeuralPDE.var"#352#368"}, NeuralPDE.var"#273#275"{UnionAll, Flux.var"#64#66"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, SciMLBase.NullParameters}}, GalacticOptim.var"#268#278", Nothing, Nothing, Nothing}}, OptimizationFunction{false, GalacticOptim.AutoZygote, OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#373"{NeuralPDE.var"#356#372"{NeuralPDE.var"#354#370", NeuralPDE.var"#352#368"}, NeuralPDE.var"#273#275"{UnionAll, Flux.var"#64#66"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, Nothing, Bool, 
Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, GalacticOptim.var"#260#270"{GalacticOptim.var"#259#269"{OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#373"{NeuralPDE.var"#356#372"{NeuralPDE.var"#354#370", NeuralPDE.var"#352#368"}, NeuralPDE.var"#273#275"{UnionAll, Flux.var"#64#66"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, SciMLBase.NullParameters}}, GalacticOptim.var"#263#273"{GalacticOptim.var"#259#269"{OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#373"{NeuralPDE.var"#356#372"{NeuralPDE.var"#354#370", NeuralPDE.var"#352#368"}, NeuralPDE.var"#273#275"{UnionAll, Flux.var"#64#66"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, SciMLBase.NullParameters}}, GalacticOptim.var"#268#278", Nothing, Nothing, Nothing}})(G::Vector{Float64}, θ::Vector{Float64})
    @ GalacticOptim ~/.julia/packages/GalacticOptim/Pdieo/src/solve/optim.jl:76
 [52] value_gradient!!(obj::TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}, x::Vector{Float64})
    @ NLSolversBase ~/.julia/packages/NLSolversBase/GRQ1x/src/interface.jl:82
 [53] initial_state(method::BFGS{LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Nothing, Nothing, Flat}, options::Optim.Options{Float64, GalacticOptim.var"#_cb#142"{GalacticOptim.var"#141#149", BFGS{LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Nothing, Nothing, Flat}, Base.Iterators.Cycle{Tuple{GalacticOptim.NullData}}}}, d::TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}, initial_x::Vector{Float64})
    @ Optim ~/.julia/packages/Optim/3K7JI/src/multivariate/solvers/first_order/bfgs.jl:94
 [54] optimize(d::TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}, initial_x::Vector{Float64}, method::BFGS{LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Nothing, Nothing, Flat}, options::Optim.Options{Float64, GalacticOptim.var"#_cb#142"{GalacticOptim.var"#141#149", BFGS{LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Nothing, Nothing, Flat}, Base.Iterators.Cycle{Tuple{GalacticOptim.NullData}}}})
    @ Optim ~/.julia/packages/Optim/3K7JI/src/multivariate/optimize/optimize.jl:35
 [55] __solve(prob::OptimizationProblem{true, OptimizationFunction{true, GalacticOptim.AutoZygote, NeuralPDE.var"#loss_function_#373"{NeuralPDE.var"#356#372"{NeuralPDE.var"#354#370", NeuralPDE.var"#352#368"}, NeuralPDE.var"#273#275"{UnionAll, Flux.var"#64#66"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Vector{Float64}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, opt::BFGS{LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Nothing, Nothing, Flat}, data::Base.Iterators.Cycle{Tuple{GalacticOptim.NullData}}; cb::Function, maxiters::Int64, maxtime::Nothing, abstol::Nothing, reltol::Nothing, progress::Bool, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ GalacticOptim ~/.julia/packages/GalacticOptim/Pdieo/src/solve/optim.jl:112
 [56] #solve#480
    @ ~/.julia/packages/SciMLBase/h4Gxc/src/solve.jl:3 [inlined]
 [57] top-level scope
    @ REPL[34]:1

Adding another method:

julia> Base.reshape(x::Vector{Int}, t::Tuple{Tuple{Int64, Int64}}) = (@show x t; reshape(x, t[1]))

julia> res = GalacticOptim.solve(prob, BFGS();  maxiters=2000) #  LoadError: MethodError: no method matching *(::Tuple{Int64, Int64})
s = (1, 15)
x = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
t = ((1, 15),)
s = (1, 15)
x = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
t = ((1, 15),)
...
s = (1, 15)
x = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
t = ((1, 15),)
u: 105-element Vector{Float64}:
 -1.0556664752390683
 -0.14393367915024535
  0.859596626398331
  0.6634825878154411
...

@killah-t-cell
Copy link
Contributor Author

killah-t-cell commented Oct 19, 2021

@mcabbott this (1, 15) is the size of x, which comes from here:

x = adapt(parameterless_type_θ,x)

x = [0.5 0.025446043828620757 0.9745539561713792 0.12923440720030277 0.8707655927996972 0.2970774243113014 0.7029225756886985 0.004272314439593694 0.9957276855604063 0.06756778832011545 0.9324322116798845 0.20695638226615443 0.7930436177338456 0.39610752249605075 0.6038924775039493]
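
For reference, a minimal sketch of what that adapt call does (the Array container type below is an assumption for the CPU case):

using Adapt

# parameterless_type_θ is the container type extracted from the parameter vector θ,
# e.g. Array on the CPU or CuArray on the GPU (assumed here for illustration)
parameterless_type_θ = Array
x = rand(1, 15)                      # sample points, one column per point
x = adapt(parameterless_type_θ, x)   # no-op on the CPU; converts/moves x for a GPU array type
size(x)                              # (1, 15), the tuple that later shows up inside reshape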

This is how the loss function is built.

function get_loss_function(loss_function, lb, ub, eltypeθ, parameterless_type_θ, strategy::QuadratureTraining=nothing)

    if length(lb) == 0
        # degenerate case: no integration bounds, just average the squared residual at random samples
        loss = (θ) -> mean(abs2, loss_function(rand(eltypeθ, 1, 10), θ))
        return loss
    end
    # measure of the integration domain, used to normalize the integral below
    area = eltypeθ(prod(abs.(ub .- lb)))
    f_ = (lb, ub, loss_, θ) -> begin
        # last_x = 1
        # batched integrand: sum of squared residuals over the sample points x
        function _loss(x, θ)
            # last_x = x
            # mean(abs2,loss_(x,θ), dims=2)
            # size_x = fill(size(x)[2],(1,1))
            x = adapt(parameterless_type_θ, x)
            @show x
            sum(abs2, loss_(x, θ), dims=2) #./ size_x
        end
        prob = QuadratureProblem(_loss, lb, ub, θ, batch=strategy.batch, nout=1)
        solve(prob,
              strategy.quadrature_alg,
              reltol = strategy.reltol,
              abstol = strategy.abstol,
              maxiters = strategy.maxiters)[1]
    end
    loss = (θ) -> 1/area * f_(lb, ub, loss_function, θ) # errors here
    return loss
end
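
For what it's worth, the failure can also be reproduced without going through the optimizer, by differentiating the assembled loss directly (a sketch; prob.f.f reaches into internal fields of the OptimizationProblem and the (θ, p) call signature is an assumption):

using Zygote

# prob and initθ are the objects from the MWE above
loss_θ = θ -> prob.f.f(θ, nothing)   # assumed: prob.f is the OptimizationFunction, prob.f.f the NeuralPDE loss
loss_θ(initθ)                        # forward evaluation of the loss
Zygote.gradient(loss_θ, initθ)       # should hit the same MethodError in the pullback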

@killah-t-cell
Copy link
Contributor Author

killah-t-cell commented Oct 20, 2021

@ChrisRackauckas @mcabbott ~~it seems like this error was fixed with the latest Zygote release~~

EDIT: it wasn't; it worked in my local environment for some unrelated reason.

However, a new error is now showing up in NNPDEHan_tests. This example errors with a LoadError: Gradient Tangent{Tuple{Vector{Float64}, Float64}}([738.7218673748109], -30.240511201823313) should be a tuple:

using Flux, Zygote, LinearAlgebra, Statistics
using Test, StochasticDiffEq
using NeuralPDE

using Random
Random.seed!(100)

# one-dimensional heat equation
x0 = [11.0f0]  # initial points
tspan = (0.0f0,5.0f0)
dt = 0.5   # time step
time_steps = div(tspan[2]-tspan[1],dt)
d = 1      # number of dimensions
m = 10     # number of trajectories (batch size)

g(X) = sum(X.^2)   # terminal condition
f(X,u,σᵀ∇u,p,t) = 0.0  # function from solved equation
μ_f(X,p,t) = 0.0
σ_f(X,p,t) = 1.0
prob = TerminalPDEProblem(g, f, μ_f, σ_f, x0, tspan)

hls = 10 + d #hidden layer size
opt = Flux.ADAM(0.005)  #optimizer
#sub-neural network approximating solutions at the desired point
u0 = Flux.Chain(Dense(d,hls,relu),
                Dense(hls,hls,relu),
                Dense(hls,1))
# sub-neural network approximating the spatial gradients at time point
σᵀ∇u = [Flux.Chain(Dense(d,hls,relu),
                  Dense(hls,hls,relu),
                  Dense(hls,d)) for i in 1:time_steps]

alg = NNPDEHan(u0, σᵀ∇u, opt = opt)

ans = solve(prob, alg, verbose = true, abstol=1e-8, maxiters = 200, dt=dt, trajectories=m)

u_analytical(x,t) = sum(x.^2) .+ d*t
analytical_ans = u_analytical(x0, tspan[end])

error_l2 = sqrt((ans-analytical_ans)^2/ans^2)

println("one-dimensional heat equation")
# println("numerical = ", ans)
# println("analytical = " ,analytical_ans)
println("error_l2 = ", error_l2, "\n")
@test error_l2 < 0.1
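
For context on the comparison at the end (a sketch of the analytical solution; the sign conventions of TerminalPDEProblem are assumed): with μ = 0, σ = 1 and f = 0 the terminal-value problem is ∂u/∂t + (1/2)Δu = 0 with u(x, T) = ‖x‖², whose solution is u(x, t) = ‖x‖² + d(T - t). At t = 0 with T = 5, d = 1 and x0 = [11] this gives 121 + 5 = 126, which is exactly what u_analytical(x0, tspan[end]) computes.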

Here is the stacktrace:

ERROR: LoadError: Gradient Tangent{Tuple{Vector{Float64}, Float64}}([738.7218673748109], -30.240511201823313) should be a tuple
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:33
  [2] gradtuple1(x::ChainRulesCore.Tangent{Tuple{Vector{Float64}, Float64}, Tuple{Vector{Float64}, Float64}})
    @ ZygoteRules ~/.julia/packages/ZygoteRules/AIbCs/src/adjoint.jl:24
  [3] (::Zygote.var"#1613#back#150"{typeof(identity)})(Δ::ChainRulesCore.Tangent{Tuple{Vector{Float64}, Float64}, Tuple{Vector{Float64}, Float64}})
    @ Zygote ~/.julia/packages/ZygoteRules/AIbCs/src/adjoint.jl:67
  [4] Pullback
    @ ~/.julia/dev/NeuralPDE/src/pde_solve.jl:50 [inlined]
  [5] (::typeof(∂(λ)))(Δ::ChainRulesCore.Tangent{Tuple{Vector{Float64}, Float64}, Tuple{Vector{Float64}, Float64}})
    @ Zygote ~/.julia/packages/Zygote/rv6db/src/compiler/interface2.jl:0
  [6] #553
    @ ~/.julia/packages/Zygote/rv6db/src/lib/array.jl:202 [inlined]
  [7] #4
    @ ./generator.jl:36 [inlined]
  [8] iterate
    @ ./generator.jl:47 [inlined]
  [9] collect(itr::Base.Generator{Base.Iterators.Zip{Tuple{Vector{Tuple{Tuple{Vector{Float64}, Float64}, typeof(∂(λ))}}, Vector{ChainRulesCore.Tangent{Tuple{Vector{Float64}, Float64}, Tuple{Vector{Float64}, Float64}}}}}, Base.var"#4#5"{Zygote.var"#553#558"}})
    @ Base ./array.jl:681
 [10] map
    @ ./abstractarray.jl:2383 [inlined]
 [11] (::Zygote.var"#550#555"{NeuralPDE.var"#33#43"{Float64, Vector{Chain{Tuple{Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}, Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}, Chain{Tuple{Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}, Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, Nothing, typeof(σ_f), typeof(μ_f), typeof(f), Int64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}}, Vector{Float32}}, 1, Tuple{UnitRange{Int64}}, Vector{Tuple{Tuple{Vector{Float64}, Float64}, typeof(∂(λ))}}})(Δ::Vector{ChainRulesCore.Tangent{Tuple{Vector{Float64}, Float64}, Tuple{Vector{Float64}, Float64}}})
    @ Zygote ~/.julia/packages/Zygote/rv6db/src/lib/array.jl:202
 [12] #2576#back
    @ ~/.julia/packages/ZygoteRules/AIbCs/src/adjoint.jl:67 [inlined]
 [13] Pullback
    @ ~/.julia/dev/NeuralPDE/src/pde_solve.jl:40 [inlined]
 [14] (::typeof(∂(λ)))(Δ::Vector{ChainRulesCore.Tangent{Tuple{Vector{Float64}, Float64}, Tuple{Vector{Float64}, Float64}}})
    @ Zygote ~/.julia/packages/Zygote/rv6db/src/compiler/interface2.jl:0
 [15] Pullback
    @ ~/.julia/dev/NeuralPDE/src/pde_solve.jl:55 [inlined]
 [16] (::typeof(∂(λ)))(Δ::Float64)
    @ Zygote ~/.julia/packages/Zygote/rv6db/src/compiler/interface2.jl:0
 [17] #203
    @ ~/.julia/packages/Zygote/rv6db/src/lib/lib.jl:203 [inlined]
 [18] #1733#back
    @ ~/.julia/packages/ZygoteRules/AIbCs/src/adjoint.jl:67 [inlined]
 [19] Pullback
    @ ~/.julia/packages/Flux/ZnXxS/src/optimise/train.jl:105 [inlined]
 [20] (::typeof(∂(λ)))(Δ::Float64)
    @ Zygote ~/.julia/packages/Zygote/rv6db/src/compiler/interface2.jl:0
 [21] (::Zygote.var"#84#85"{Params, typeof(∂(λ)), Zygote.Context})(Δ::Float64)
    @ Zygote ~/.julia/packages/Zygote/rv6db/src/compiler/interface.jl:343
 [22] gradient(f::Function, args::Params)
    @ Zygote ~/.julia/packages/Zygote/rv6db/src/compiler/interface.jl:76
 [23] macro expansion
    @ ~/.julia/packages/Flux/ZnXxS/src/optimise/train.jl:104 [inlined]
 [24] macro expansion
    @ ~/.julia/packages/Juno/n6wyj/src/progress.jl:134 [inlined]
 [25] train!(loss::Function, ps::Params, data::Base.Iterators.Take{Base.Iterators.Repeated{Tuple{}}}, opt::ADAM; cb::NeuralPDE.var"#35#46"{Float64, Bool, Bool, Vector{Float32}, NeuralPDE.var"#loss#44"{NeuralPDE.var"#sol#42"{Float64, Int64, Vector{Chain{Tuple{Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}, Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}, Chain{Tuple{Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}, Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, Nothing, typeof(σ_f), typeof(μ_f), typeof(f), Int64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}}, Vector{Float32}}, typeof(g)}, Chain{Tuple{Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}, Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, Vector{Float32}})
    @ Flux.Optimise ~/.julia/packages/Flux/ZnXxS/src/optimise/train.jl:102
 [26] solve(prob::TerminalPDEProblem{typeof(g), typeof(f), typeof(μ_f), typeof(σ_f), Vector{Float32}, Float32, Nothing, Nothing, Nothing, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, alg::NNPDEHan{Chain{Tuple{Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}, Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, Vector{Chain{Tuple{Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}, Dense{typeof(relu), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}, ADAM}; abstol::Float64, verbose::Bool, maxiters::Int64, save_everystep::Bool, dt::Float64, give_limit::Bool, trajectories::Int64, sdealg::EM{true}, ensemblealg::EnsembleThreads, trajectories_upper::Int64, trajectories_lower::Int64, maxiters_upper::Int64)
    @ NeuralPDE ~/.julia/dev/NeuralPDE/src/pde_solve.jl:67
 [27] top-level scope
    @ ~/.julia/dev/NeuralPDE/test/NNPDEHan_tests.jl:37
in expression starting at /Users/gabrielbirnbaum/.julia/dev/NeuralPDE/test/NNPDEHan_tests.jl:37

@mcabbott
Copy link

mcabbott commented Oct 20, 2021

That looks like some ChainRules types are still leaking. Might be worth trying FluxML/Zygote.jl#1104, which fixes quite a few such bugs. [Just tried, and I can't seem to install these packages locally.]

CI here https://github.com/FluxML/Zygote.jl/pull/1104/checks?check_run_id=3947057934 still gets the MethodError: no method matching *(::Tuple{Int64, Int64})

@killah-t-cell
Copy link
Contributor Author

killah-t-cell commented Oct 20, 2021

@mcabbott pardon me, forget what I said above: the MethodError: no method matching *(::Tuple{Int64, Int64}) still persists (my local environment and my enthusiasm fooled me). But the other error is also occurring, which is weird.

But it is curious that it still fails on your draft PR as well! I wonder where this bug comes from.

@DhairyaLGandhi
Copy link
Member

DhairyaLGandhi commented Oct 20, 2021

@killah-t-cell could you try with TuringLang/DistributionsAD.jl#202?

This issue should be fixed by the linked PR, although I suspect I'll have to write them an adjoint to merge it.
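
In case it helps, one way to try the PR branch locally (a sketch; the clone path is arbitrary and fetching the PR head is just one option):

using Pkg

# clone the repo, fetch the PR head into a local branch, and develop it
run(`git clone https://github.com/TuringLang/DistributionsAD.jl /tmp/DistributionsAD.jl`)
cd("/tmp/DistributionsAD.jl") do
    run(`git fetch origin pull/202/head:pr-202`)
    run(`git checkout pr-202`)
end
Pkg.develop(path="/tmp/DistributionsAD.jl")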

@DhairyaLGandhi
Copy link
Member

This is what I see with the MWE in the OP; can you confirm the results?

u: 105-element Vector{Float64}:
  0.42959385822287316
  0.8153385859558028
  0.2956733117400115
 -0.9269163896868351
 -0.41610725052663394
  0.788529007842906
 -0.4672638062633594
  0.42176117266919466
 -0.5566936525899321
 -0.939474974892133
  0.2560851698580619
 -0.8910921964785219
 -0.46272037710100034
 -0.8336165217105665
 -0.49261447676319536
  ⋮
  0.09034646876988914
 -0.15117509289064127
 -0.13851539974648314
  0.23539589937145752
  0.1614260435164039
  0.6194577316570241
 -0.2756485616495758
 -0.595470033222552
  0.31399084643840547
  0.7212511996796783
 -0.7670958923702983
  0.7249731108112188
  0.4991804995937005
  0.07514818316204416

@killah-t-cell
Copy link
Contributor Author

@DhairyaLGandhi yes, the MWE seems to work for me as well.

@KirillZubov
Copy link
Member

I also checked; TuringLang/DistributionsAD.jl#202 works for the MWE.

@killah-t-cell
Copy link
Contributor Author

The fix PR is being continued here, I think: TuringLang/DistributionsAD.jl#203

@killah-t-cell
Copy link
Contributor Author

Any updates on FluxML/Zygote.jl#1104 @mcabbott @DhairyaLGandhi? Merging it would unblock TuringLang/DistributionsAD.jl#203 and solve this issue.

@DhairyaLGandhi
Copy link
Member

Yeah, let's merge this so we can unbreak the ecosystem first and fix Zygote after.

@devmotion
Copy link
Member

A new release of DistributionsAD with the fix is available.
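
For anyone landing here later, picking up the fix should just be a matter of updating (a sketch):

using Pkg

Pkg.update("DistributionsAD")   # pull in the release that carries the fix
Pkg.status("DistributionsAD")   # confirm the resolved version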

@killah-t-cell
Copy link
Contributor Author

Works great! Thanks!
