I have installed NeuralPDE.jl successfully, but when I tried to run an example, I got an "Optimization algorithm not found" error. I am confused about why it happens.
using NeuralPDE, Lux, ModelingToolkit, Optimization
import ModelingToolkit: Interval, infimum, supremum
callback = function (p, l)
println("Current loss is: $l")
return false
end
res = Optimization.solve(prob, ADAM(0.1); callback = callback, maxiters=4000)
ERROR: Optimization algorithm not found. Either the chosen algorithm is not a valid solver
choice for the OptimizationProblem, or the Optimization solver library is not loaded.
Make sure that you have loaded an appropriate Optimization.jl solver library, for example, solve(prob,Optim.BFGS()) requires using OptimizationOptimJL and solve(prob,Adam()) requires using OptimizationOptimisers.
You need to import the solvers that you're using. Note that you didn't paste what the example code actually says. At the top it has:

using NeuralPDE, Lux, Optimization, OptimizationOptimJL

which imports the OptimizationOptimJL solvers. If you want to use ADAM, then you need OptimizationFlux instead. Please make sure you're viewing the up-to-date tutorials.
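As a minimal sketch of the fix, assuming a recent Optimization.jl where `Adam` is provided by OptimizationOptimisers (on older versions, Flux's `ADAM` required OptimizationFlux instead):

```julia
# Load the solver package that actually provides the Adam optimizer,
# alongside the packages from the original example.
using NeuralPDE, Lux, ModelingToolkit, Optimization, OptimizationOptimisers
import ModelingToolkit: Interval, infimum, supremum

# ... define the PDE system, `chain`, `discretization`, and `prob` as before ...

callback = function (p, l)
    println("Current loss is: $l")
    return false
end

# Adam is now found because OptimizationOptimisers is loaded.
res = Optimization.solve(prob, OptimizationOptimisers.Adam(0.1);
                         callback = callback, maxiters = 4000)
```

The key point is that `Optimization.solve` only dispatches to optimizers whose wrapper package has been loaded; the optimizer object alone is not enough.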
using NeuralPDE, Lux, ModelingToolkit, Optimization
import ModelingToolkit: Interval, infimum, supremum
@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2
# 2D PDE
eq = Dxx(u(x,y)) + Dyy(u(x,y)) ~ -sin(pi*x)*sin(pi*y)
# Boundary conditions
bcs = [u(0,y) ~ 0.0, u(1,y) ~ 0.0,
       u(x,0) ~ 0.0, u(x,1) ~ 0.0]
# Space and time domains
domains = [x ∈ Interval(0.0,1.0),
           y ∈ Interval(0.0,1.0)]
I have added below the code I followed, along with the error it produced.
# Discretization
dx = 0.1
# Neural network
dim = 2 # number of dimensions
chain = Lux.Chain(Dense(dim,16,Lux.σ), Dense(16,16,Lux.σ), Dense(16,1))
discretization = PhysicsInformedNN(chain, QuadratureTraining())
@named pde_system = PDESystem(eq, bcs, domains, [x,y], [u(x, y)])
prob = discretize(pde_system,discretization)
callback = function (p, l)
println("Current loss is: $l")
return false
end
res = Optimization.solve(prob, ADAM(0.1); callback = callback, maxiters=4000)
ERROR: Optimization algorithm not found. Either the chosen algorithm is not a valid solver
choice for the OptimizationProblem, or the Optimization solver library is not loaded.
Make sure that you have loaded an appropriate Optimization.jl solver library, for example,
solve(prob,Optim.BFGS()) requires using OptimizationOptimJL and solve(prob,Adam()) requires
using OptimizationOptimisers.
For more information, see the Optimization.jl documentation: https://docs.sciml.ai/Optimization/stable/.
Chosen Optimizer: Adam(0.1, (0.9, 0.999), 1.0e-8, IdDict{Any, Any}())
Stacktrace:
[1] __solve(::OptimizationProblem{true, OptimizationFunction{true, Optimization.AutoZygote, NeuralPDE.var"#full_loss_function#326"{NeuralPDE.var"#null_nonadaptive_loss#127", Vector{NeuralPDE.var"#118#122"{loss_function, Vector{Float64}, Vector{Float64}, NeuralPDE.var"#117#120"{QuadratureTraining{IntegralsCubature.CubatureJLh, Float64}}, Float64} where loss_function}, Vector{NeuralPDE.var"#118#122"{NeuralPDE.var"#221#222"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#276"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x5816688b, 0x65829150, 0xb76265f3, 0xb4ef929e, 0x825eb372)}, NeuralPDE.var"#12#13", NeuralPDE.var"#287#294"{NeuralPDE.var"#287#288#295"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, QuadratureTraining{IntegralsCubature.CubatureJLh, Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Lux.Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}}}, Nothing}, Vector{Float64}, Vector{Float64}, NeuralPDE.var"#117#120"{QuadratureTraining{IntegralsCubature.CubatureJLh, Float64}}, Float64}}, NeuralPDE.PINNRepresentation, Bool, Vector{Int64}, Int64, NeuralPDE.Phi{Lux.Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}, 
NamedTuple{(), Tuple{}}}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:48, Axis(weight = ViewAxis(1:32, ShapedAxis((16, 2), NamedTuple())), bias = ViewAxis(33:48, ShapedAxis((16, 1), NamedTuple())))), layer_2 = ViewAxis(49:320, Axis(weight = ViewAxis(1:256, ShapedAxis((16, 16), NamedTuple())), bias = ViewAxis(257:272, ShapedAxis((16, 1), NamedTuple())))), layer_3 = ViewAxis(321:337, Axis(weight = ViewAxis(1:16, ShapedAxis((1, 16), NamedTuple())), bias = ViewAxis(17:17, ShapedAxis((1, 1), NamedTuple())))))}}}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, ::Adam; kwargs::Base.Iterators.Pairs{Symbol, Any, Tuple{Symbol, Symbol}, NamedTuple{(:callback, :maxiters), Tuple{var"#1#2", Int64}}})
@ SciMLBase ~/.julia/packages/SciMLBase/wEAy7/src/solve.jl:173
[2] solve(::OptimizationProblem{true, OptimizationFunction{true, Optimization.AutoZygote, NeuralPDE.var"#full_loss_function#326"{NeuralPDE.var"#null_nonadaptive_loss#127", Vector{NeuralPDE.var"#118#122"{loss_function, Vector{Float64}, Vector{Float64}, NeuralPDE.var"#117#120"{QuadratureTraining{IntegralsCubature.CubatureJLh, Float64}}, Float64} where loss_function}, Vector{NeuralPDE.var"#118#122"{NeuralPDE.var"#221#222"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#276"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x5816688b, 0x65829150, 0xb76265f3, 0xb4ef929e, 0x825eb372)}, NeuralPDE.var"#12#13", NeuralPDE.var"#287#294"{NeuralPDE.var"#287#288#295"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, QuadratureTraining{IntegralsCubature.CubatureJLh, Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Lux.Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}}}, Nothing}, Vector{Float64}, Vector{Float64}, NeuralPDE.var"#117#120"{QuadratureTraining{IntegralsCubature.CubatureJLh, Float64}}, Float64}}, NeuralPDE.PINNRepresentation, Bool, Vector{Int64}, Int64, NeuralPDE.Phi{Lux.Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}, 
NamedTuple{(), Tuple{}}}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:48, Axis(weight = ViewAxis(1:32, ShapedAxis((16, 2), NamedTuple())), bias = ViewAxis(33:48, ShapedAxis((16, 1), NamedTuple())))), layer_2 = ViewAxis(49:320, Axis(weight = ViewAxis(1:256, ShapedAxis((16, 16), NamedTuple())), bias = ViewAxis(257:272, ShapedAxis((16, 1), NamedTuple())))), layer_3 = ViewAxis(321:337, Axis(weight = ViewAxis(1:16, ShapedAxis((1, 16), NamedTuple())), bias = ViewAxis(17:17, ShapedAxis((1, 1), NamedTuple())))))}}}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, ::Adam; kwargs::Base.Iterators.Pairs{Symbol, Any, Tuple{Symbol, Symbol}, NamedTuple{(:callback, :maxiters), Tuple{var"#1#2", Int64}}})
@ SciMLBase ~/.julia/packages/SciMLBase/wEAy7/src/solve.jl:84
[3] top-level scope
@ REPL[37]:1
[4] top-level scope
@ ~/.julia/packages/CUDA/DfvRa/src/initialization.jl:52