ERROR: Optimization algorithm not found. #618

Closed
murushyam opened this issue Nov 21, 2022 · 2 comments

Comments


murushyam commented Nov 21, 2022

I installed NeuralPDE.jl successfully and tried to run an example, but I got an "Optimization algorithm not found" error. I am confused about why I got it.

using NeuralPDE, Lux, ModelingToolkit, Optimization
import ModelingToolkit: Interval, infimum, supremum

@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2

# 2D PDE
eq = Dxx(u(x,y)) + Dyy(u(x,y)) ~ -sin(pi*x)*sin(pi*y)

# Boundary conditions
bcs = [u(0,y) ~ 0.0, u(1,y) ~ 0.0,
       u(x,0) ~ 0.0, u(x,1) ~ 0.0]

# Space and time domains
domains = [x ∈ Interval(0.0,1.0),
           y ∈ Interval(0.0,1.0)]

Below is the rest of the code I ran, followed by the error it produced.

# Discretization
dx = 0.1

# Neural network
dim = 2 # number of dimensions
chain = Lux.Chain(Dense(dim,16,Lux.σ),Dense(16,16,Lux.σ),Dense(16,1))

discretization = PhysicsInformedNN(chain, QuadratureTraining())

@named pde_system = PDESystem(eq,bcs,domains,[x,y],[u(x, y)])
prob = discretize(pde_system,discretization)
prob = discretize(pde_system,discretization)

callback = function (p, l)
    println("Current loss is: $l")
    return false
end

res = Optimization.solve(prob, ADAM(0.1); callback = callback, maxiters=4000)

ERROR: Optimization algorithm not found. Either the chosen algorithm is not a valid solver
choice for the OptimizationProblem, or the Optimization solver library is not loaded.
Make sure that you have loaded an appropriate Optimization.jl solver library, for example,
solve(prob,Optim.BFGS()) requires using OptimizationOptimJL and
solve(prob,Adam()) requires using OptimizationOptimisers.

For more information, see the Optimization.jl documentation: https://docs.sciml.ai/Optimization/stable/.

Chosen Optimizer: Adam(0.1, (0.9, 0.999), 1.0e-8, IdDict{Any, Any}())
Stacktrace:
[1] __solve(::OptimizationProblem{true, OptimizationFunction{true, Optimization.AutoZygote, NeuralPDE.var"#full_loss_function#326"{NeuralPDE.var"#null_nonadaptive_loss#127", Vector{NeuralPDE.var"#118#122"{loss_function, Vector{Float64}, Vector{Float64}, NeuralPDE.var"#117#120"{QuadratureTraining{IntegralsCubature.CubatureJLh, Float64}}, Float64} where loss_function}, Vector{NeuralPDE.var"#118#122"{NeuralPDE.var"#221#222"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#276"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x5816688b, 0x65829150, 0xb76265f3, 0xb4ef929e, 0x825eb372)}, NeuralPDE.var"#12#13", NeuralPDE.var"#287#294"{NeuralPDE.var"#287#288#295"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, QuadratureTraining{IntegralsCubature.CubatureJLh, Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Lux.Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}}}, Nothing}, Vector{Float64}, Vector{Float64}, NeuralPDE.var"#117#120"{QuadratureTraining{IntegralsCubature.CubatureJLh, Float64}}, Float64}}, NeuralPDE.PINNRepresentation, Bool, Vector{Int64}, Int64, NeuralPDE.Phi{Lux.Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:48, Axis(weight = ViewAxis(1:32, ShapedAxis((16, 2), NamedTuple())), bias = ViewAxis(33:48, ShapedAxis((16, 1), NamedTuple())))), layer_2 = ViewAxis(49:320, Axis(weight = ViewAxis(1:256, ShapedAxis((16, 16), NamedTuple())), bias = ViewAxis(257:272, ShapedAxis((16, 1), NamedTuple())))), layer_3 = ViewAxis(321:337, Axis(weight = ViewAxis(1:16, ShapedAxis((1, 16), NamedTuple())), bias = ViewAxis(17:17, ShapedAxis((1, 1), NamedTuple())))))}}}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, ::Adam; kwargs::Base.Iterators.Pairs{Symbol, Any, Tuple{Symbol, Symbol}, NamedTuple{(:callback, :maxiters), Tuple{var"#1#2", Int64}}})
@ SciMLBase ~/.julia/packages/SciMLBase/wEAy7/src/solve.jl:173
[2] solve(::OptimizationProblem{true, OptimizationFunction{true, Optimization.AutoZygote, NeuralPDE.var"#full_loss_function#326"{NeuralPDE.var"#null_nonadaptive_loss#127", Vector{NeuralPDE.var"#118#122"{loss_function, Vector{Float64}, Vector{Float64}, NeuralPDE.var"#117#120"{QuadratureTraining{IntegralsCubature.CubatureJLh, Float64}}, Float64} where loss_function}, Vector{NeuralPDE.var"#118#122"{NeuralPDE.var"#221#222"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#276"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x5816688b, 0x65829150, 0xb76265f3, 0xb4ef929e, 0x825eb372)}, NeuralPDE.var"#12#13", NeuralPDE.var"#287#294"{NeuralPDE.var"#287#288#295"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, QuadratureTraining{IntegralsCubature.CubatureJLh, Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Lux.Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}}}, Nothing}, Vector{Float64}, Vector{Float64}, NeuralPDE.var"#117#120"{QuadratureTraining{IntegralsCubature.CubatureJLh, Float64}}, Float64}}, NeuralPDE.PINNRepresentation, Bool, Vector{Int64}, Int64, NeuralPDE.Phi{Lux.Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:48, Axis(weight = ViewAxis(1:32, ShapedAxis((16, 2), NamedTuple())), bias = ViewAxis(33:48, ShapedAxis((16, 1), NamedTuple())))), layer_2 = ViewAxis(49:320, Axis(weight = ViewAxis(1:256, ShapedAxis((16, 16), NamedTuple())), bias = ViewAxis(257:272, ShapedAxis((16, 1), NamedTuple())))), layer_3 = ViewAxis(321:337, Axis(weight = ViewAxis(1:16, ShapedAxis((1, 16), NamedTuple())), bias = ViewAxis(17:17, ShapedAxis((1, 1), NamedTuple())))))}}}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, ::Adam; kwargs::Base.Iterators.Pairs{Symbol, Any, Tuple{Symbol, Symbol}, NamedTuple{(:callback, :maxiters), Tuple{var"#1#2", Int64}}})
@ SciMLBase ~/.julia/packages/SciMLBase/wEAy7/src/solve.jl:84
[3] top-level scope
@ REPL[37]:1
[4] top-level scope
@ ~/.julia/packages/CUDA/DfvRa/src/initialization.jl:52

ChrisRackauckas (Member) commented

You need to import the solvers that you're using. Note that you didn't paste what the example code actually says. It says at the top:

using NeuralPDE, Lux, Optimization, OptimizationOptimJL

which imports the OptimizationOptimJL solvers. If you want to use ADAM, then you need OptimizationFlux. Please make sure you're viewing the up-to-date tutorials:

https://docs.sciml.ai/NeuralPDE/stable/tutorials/pdesystem/
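For example, the corrected imports and solve call would look like this (a minimal sketch, following the error message's own hint; it assumes OptimizationOptimisers has been added to the environment, and OptimizationFlux with ADAM works the same way):

using Optimization, OptimizationOptimisers  # re-exports Optimisers.jl rules such as Adam

# Adam(0.1) is now a valid solver choice for Optimization.solve
res = Optimization.solve(prob, Adam(0.1); callback = callback, maxiters = 4000)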

Closing as there's nothing to fix here, but please feel free to ask more questions if you have more.

murushyam (Author) commented

Thank you so much, it worked.
