-
I am working on a parameter estimation problem. The parameters are used inside a function that takes the dynamical variables as input. I tried to model this function as a parameter function because I would like to include its output in the constraints and the objective of the problem as well. Is this possible? I am currently getting this error:
In `@parameter_function(model, c == custom_function(x, p))`: Cannot have mixed parameter types in a tuple element and can only specify infinite parameters.
Replies: 3 comments
-
To be more specific, here is my concrete example:
using InfiniteOpt, Ipopt
u0 = [0.0f0, 1.0f0]
tspan = (0,1)
nparams = 2
nstates = 2
nodes_per_element = 2
time_supports = 10
param_supports = 2
function custom_function(x,p)
reshape(p, 1, nstates) * x
end
optimizer = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)
model = InfiniteModel(optimizer)
method = OrthogonalCollocation(nodes_per_element)
@infinite_parameter(
model, t in [tspan[1], tspan[2]], num_supports = time_supports, derivative_method = method
)
@infinite_parameter(model, p[1:nparams] in [-1, 1], independent = true, num_supports = param_supports)
@variables(
model,
begin
x[1:2], Infinite(t) # dynamic variables
end
)
@parameter_function(model, c == custom_function(x, p))
# custom_function(x, p) # trace
# @register(model, custom_function(x, p))
# initial conditions
@constraint(model, [i = 1:2], x[i](0) == u0[i])
# variable constraints
@constraint(model, -0.3 <= c(x, p)[1] <= 1.0)
@constraint(model, -0.4 <= x[1])
# dynamic equations
@constraints(
model,
begin
∂(x[1], t) == (1 - x[2]^2) * x[1] - x[2] + c(x,p)[1]
∂(x[2], t) == x[1]
end
)
@objective(model, Min, integral(x[1]^2 + x[2]^2, t)) # + c(x, p)[1]^2
optimize!(model)
jump_model = optimizer_model(model)
solution_summary(jump_model; verbose=false)
-
The parameter function syntax just provides a way to implement arbitrary functions that depend only on infinite parameters (no variables allowed). Functions of variables should be expressed algebraically if at all possible. In the above case it looks like you can just use an inner product instead: `p' * x`. We can also register functions following our nonlinear syntax, but this should only be done as a last resort (since JuMP does not provide 2nd-order derivatives for multivariate registered functions). Note that, as in JuMP, we do not currently support vector-valued inputs for registered functions, though we plan to support this in the future.
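For what it's worth, the equivalence can be checked in plain Julia (outside InfiniteOpt): the 1×n row-vector product used in `custom_function` reduces to the scalar inner product, so the parameter function can be replaced by the algebraic expression `p' * x` directly in the constraints and objective. A minimal sketch, with made-up numeric values for illustration:

```julia
# Plain-Julia check (no InfiniteOpt): reshape(p, 1, n) * x is just the
# inner product of p and x, so an algebraic expression suffices.
using LinearAlgebra

x = [0.5, -0.25]   # made-up state values
p = [0.1, 0.9]     # made-up parameter values

row_times_col = (reshape(p, 1, length(p)) * x)[1]  # original formulation
inner = dot(p, x)                                  # algebraic replacement, i.e. p' * x

@assert isapprox(row_times_col, inner)
```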
-
Thank you for the clarification. I was misusing the infinite parameters, though; those should have been just classical decision variables without infinite supports for a traditional parameter estimation problem. My real function is nonlinear and unfortunately vectorized. I tried to follow the JuMP recommendation and posed it as a multi-argument function with splatting instead, and now the MWE seems to be working!
using InfiniteOpt, Ipopt
u0 = [0.0f0, 1.0f0]
tspan = (0,1)
nstates = 2
nparams = 2
nodes_per_element = 2
time_supports = 10
function custom_function(x,p)
tanh.(reshape(p, 1, nstates) * x)
end
optimizer = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)
model = InfiniteModel(optimizer)
method = OrthogonalCollocation(nodes_per_element)
@infinite_parameter(
model, t in [tspan[1], tspan[2]], num_supports = time_supports, derivative_method = method
)
# classic fixed-time decision variables
@variable(model, -1 <= p[i = 1:nparams] <= 1)
@variables(
model,
begin
x[1:2], Infinite(t) # dynamic variables
end
)
function scalar_fun(z...)
# @show z typeof(z) z[1:nstates] z[nstates+1:end]
x = collect(z[1:nstates])
p = collect(z[nstates+1:end])
custom_function(x, p)
end
# initial conditions
@constraint(model, [i = 1:2], x[i](0) == u0[i])
# variable constraints
@constraint(model, -0.4 <= x[1])
# dynamic equations
@constraints(
model,
begin
∂(x[1], t) == (1 - x[2]^2) * x[1] - x[2] + scalar_fun(vcat(x, p)...)[1]
∂(x[2], t) == x[1]
end
)
@objective(model, Min, integral(x[1]^2 + x[2]^2, t)) # + scalar_fun(vcat(x, p)...)[1]^2
optimize!(model)
jump_model = optimizer_model(model)
solution_summary(jump_model; verbose=false)
Thanks!
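The splatting pattern used in the working MWE can be sketched in plain Julia (no InfiniteOpt; the numeric values are made up for illustration): the wrapper reconstructs the two vector arguments from the flat scalar tuple, so splatting `vcat(x, p)` reproduces the original vectorized call.

```julia
# Plain-Julia sketch of the scalar-argument splatting wrapper.
nstates = 2

# vectorized nonlinear function of the states x and parameters p
custom_function(x, p) = tanh.(reshape(p, 1, nstates) * x)

# JuMP-style registration requires scalar inputs, so the flat argument
# tuple is split back into the x and p vectors
function scalar_fun(z...)
    x = collect(z[1:nstates])
    p = collect(z[nstates+1:end])
    custom_function(x, p)
end

x = [0.2, -0.1]   # made-up state values
p = [0.5, 0.5]    # made-up parameter values

# splatting the concatenated vector reproduces the vectorized call
@assert scalar_fun(vcat(x, p)...) == custom_function(x, p)
```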