Commit 49eac38

add tutos

1 parent 11abfa9 commit 49eac38

File tree

11 files changed: +1383 −13 lines

docs/Project.toml

Lines changed: 24 additions & 1 deletion
```diff
@@ -1,5 +1,28 @@
 [deps]
+DifferentiationInterface = "a0c0ee7d-e4b9-4e03-894e-1c5f64a51d63"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
+ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
+LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
+MINPACK = "4854310b-de5a-5eb6-a2a5-c1dee2bd17f9"
+MadNLP = "2621e9c9-9eb4-46b1-8089-e8c72242dfb6"
+NLPModelsIpopt = "f4238b75-b362-5c4c-b852-0801c9a21d71"
+NonlinearSolve = "8913a72c-1f9b-4ce2-8d82-65094dcecaec"
+OptimalControl = "5f98b655-cc9a-415a-b60e-744165666948"
+OrdinaryDiffEq = "1dea7af3-3e70-54e6-95c3-0bf5283fa5ed"
+Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
+Suppressor = "fd094767-a336-5f1f-9728-57cf17d0bbfb"

 [compat]
-Documenter = "1"
+DifferentiationInterface = "0.6"
+Documenter = "1.8"
+ForwardDiff = "0.10"
+LinearAlgebra = "1"
+MINPACK = "1.3"
+MadNLP = "0.8"
+NLPModelsIpopt = "0.10"
+NonlinearSolve = "4.6"
+OptimalControl = "1.0"
+OrdinaryDiffEq = "6.93"
+Plots = "1.40"
+Suppressor = "0.2"
+julia = "1"
```
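As a quick sanity check of the `[compat]` format above (a standalone sketch, not part of the commit), the `TOML` standard library can read such bounds from an inline string, so this runs without the repository checked out:

```julia
using TOML

# Parse a fragment mirroring the [compat] section above.
project = TOML.parse("""
[compat]
Documenter = "1.8"
julia = "1"
""")

println(project["compat"]["Documenter"])   # prints 1.8
```

Pkg uses these entries as semver-style upper/lower bounds when resolving the docs environment.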

docs/make.jl

Lines changed: 20 additions & 7 deletions
```diff
@@ -1,21 +1,34 @@
 using Documenter
+using OptimalControl

 repo_url = "github.com/control-toolbox/Tutorials.jl"

 makedocs(;
-    remotes=nothing,
-    warnonly=:cross_references,
-    sitename="Tutorials",
+    warnonly=[:cross_references, :autodocs_block],
+    sitename="Tutorials.jl",
     format=Documenter.HTML(;
         repolink="https://" * repo_url,
-        prettyurls=false,
-        size_threshold_ignore=["index.md"],
+        prettyurls=true,
+        size_threshold_ignore=[
+            ""
+        ],
         assets=[
             asset("https://control-toolbox.org/assets/css/documentation.css"),
             asset("https://control-toolbox.org/assets/js/documentation.js"),
         ],
     ),
-    pages=["Introduction" => "index.md"],
+    pages=[
+        "Getting Started" => "index.md",
+        "Tutorials and Advanced Features" => [
+            "Discrete continuation" => "tutorial-continuation.md",
+            "Discretisation methods" => "tutorial-discretisation.md",
+            "NLP manipulations" => "tutorial-nlp.md",
+            "Indirect simple shooting" => "tutorial-iss.md",
+            "Goddard: direct, indirect" => "tutorial-goddard.md",
+            "Linear–quadratic regulator" => "tutorial-lqr-basic.md",
+            "Minimal action" => "tutorial-mam.md",
+        ],
+    ],
 )

-deploydocs(; repo=repo_url * ".git", devbranch="main")
+deploydocs(; repo=repo_url * ".git", devbranch="main")
```
(binary file changed, 491 KB — diff not shown)

docs/src/index.md

Lines changed: 62 additions & 5 deletions
````diff
@@ -1,12 +1,69 @@
 # Tutorials

-Documentation for [Tutorials](https://github.com/control-toolbox/Tutorials.jl).
+This collection of tutorials is part of the [control-toolbox ecosystem](https://github.com/control-toolbox), which gathers Julia packages for mathematical control and its applications. It aims to provide tools to model and solve optimal control problems with ordinary differential equations by direct and indirect methods. To define and solve an optimal control problem, please check the [documentation](https://control-toolbox.org/OptimalControl.jl).

-## Dependencies
+From this page, you can find a list of tutorials for solving optimal control problems with OptimalControl.

-All the numerical simulations to generate this documentation are performed with the following packages.
+## Reproducibility
+
+```@raw html
+<details><summary>The documentation of this package was built using these direct dependencies,</summary>
+```
+
+```@example
+using Pkg # hide
+Pkg.status() # hide
+```
+
+```@raw html
+</details>
+```
+
+```@raw html
+<details><summary>and using this machine and Julia version.</summary>
+```

 ```@example
-using Pkg
-Pkg.status()
+using InteractiveUtils # hide
+versioninfo() # hide
 ```
+
+```@raw html
+</details>
+```
+
+```@raw html
+<details><summary>A more complete overview of all dependencies and their versions is also provided.</summary>
+```
+
+```@example
+using Pkg # hide
+Pkg.status(; mode = PKGMODE_MANIFEST) # hide
+```
+
+```@raw html
+</details>
+```
+
+```@eval
+using TOML
+using Markdown
+project = TOML.parse(read("../../Project.toml", String))
+version = project["version"]
+name = project["name"]
+link_manifest = "https://github.com/control-toolbox/" *
+                name *
+                ".jl/tree/gh-pages/v" *
+                version *
+                "/assets/Manifest.toml"
+link_project = "https://github.com/control-toolbox/" *
+               name *
+               ".jl/tree/gh-pages/v" *
+               version *
+               "/assets/Project.toml"
+Markdown.parse("""You can also download the
+[manifest]($link_manifest)
+file and the
+[project]($link_project)
+file.
+""")
+```
````
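The `@eval` block above builds the download links by plain string concatenation. As a standalone sketch of the mechanism (inline TOML instead of reading `Project.toml` from disk, a made-up version number, and the repository's own control-toolbox organisation as the base URL):

```julia
using TOML

# Miniature of the @eval block: read name and version once, then
# assemble the gh-pages asset URL (the version here is hypothetical).
project = TOML.parse("""
name = "Tutorials"
version = "0.1.0"
""")

link_manifest = "https://github.com/control-toolbox/" *
                project["name"] *
                ".jl/tree/gh-pages/v" *
                project["version"] *
                "/assets/Manifest.toml"

println(link_manifest)
```

At documentation build time, the same pattern points readers at the exact `Manifest.toml` the page was built with.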

docs/src/tutorial-continuation.md

Lines changed: 141 additions & 0 deletions
# Discrete continuation

```@meta
CurrentModule = OptimalControl
```

Using the warm-start option, it is easy to implement a basic discrete continuation method, in which a sequence of problems is solved using each solution as the initial guess for the next problem.
This usually gives better and faster convergence than solving each problem with the same initial guess, and is a way to handle problems that require a good initial guess.

## Continuation on a parametric OCP

The most compact syntax for a discrete continuation is a function that returns the OCP for a given value of the continuation parameter, then solving a sequence of these problems. We illustrate this on a very basic double integrator with an increasing fixed final time.

First we load the required packages

```@example main
using OptimalControl
using NLPModelsIpopt
using Printf
using Plots
```

and write a function that returns the OCP for a given final time

```@example main
function ocp_T(T)
    ocp = @def begin
        t ∈ [0, T], time
        x ∈ R², state
        u ∈ R, control
        q = x₁
        v = x₂
        q(0) == 0
        v(0) == 0
        q(T) == 1
        v(T) == 0
        ẋ(t) == [ v(t), u(t) ]
        ∫(u(t)^2) → min
    end
    return ocp
end
nothing # hide
```

Then we perform the continuation with a simple *for* loop, using each solution to initialise the next problem.

```@example main
init1 = ()
for T = 1:5
    ocp1 = ocp_T(T)
    sol1 = solve(ocp1; display=false, init=init1)
    global init1 = sol1
    @printf("T %.2f objective %9.6f iterations %d\n", T, objective(sol1), iterations(sol1))
end
```

## Continuation on a global variable

As a second example, we show how to avoid redefining a new OCP each time by modifying the original one instead.
More precisely, we now solve a Goddard problem for a decreasing maximal thrust. If we store the value of *Tmax* in a global variable, we can simply modify this variable and keep the same OCP during the continuation.

Let us first define the Goddard problem (note that the formulation below illustrates all the possible constraint types; the problem could be defined in a more compact way).

```@example main
Cd = 310
Tmax = 3.5
β = 500
b = 2

function F0(x)
    r, v, m = x
    D = Cd * v^2 * exp(-β*(r - 1)) # drag force
    return [ v, -D/m - 1/r^2, 0 ]
end

function F1(x)
    r, v, m = x
    return [ 0, Tmax/m, -b*Tmax ]
end

r0 = 1
v0 = 0
m0 = 1
mf = 0.6
x0 = [r0, v0, m0]
vmax = 0.1

@def ocp begin
    tf ∈ R, variable
    t ∈ [0, tf], time
    x ∈ R^3, state
    u ∈ R, control
    0.01 ≤ tf ≤ Inf
    r = x[1]
    v = x[2]
    m = x[3]
    x(0) == x0
    m(tf) == mf
    r0 ≤ r(t) ≤ r0 + 0.1
    v0 ≤ v(t) ≤ vmax
    mf ≤ m(t) ≤ m0
    0 ≤ u(t) ≤ 1
    ẋ(t) == F0(x(t)) + u(t) * F1(x(t))
    r(tf) → max
end

sol0 = solve(ocp; display=false)
@printf("Objective for reference solution %.6f\n", objective(sol0))
```

Then we perform the continuation on the maximal thrust.

```@example main
sol = sol0
Tmax_list = []
obj_list = []
for Tmax_local = 3.5:-0.5:1
    global Tmax = Tmax_local
    global sol = solve(ocp; display=false, init=sol)
    @printf("Tmax %.2f objective %.6f iterations %d\n", Tmax, objective(sol), iterations(sol))
    push!(Tmax_list, Tmax)
    push!(obj_list, objective(sol))
end
```

We now plot the objective w.r.t. the maximal thrust, as well as the solutions for *Tmax* = 3.5 and *Tmax* = 1.

```@example main
using Plots.PlotMeasures # for leftmargin

plt_obj = plot(Tmax_list, obj_list;
    seriestype=:scatter,
    title="Goddard problem",
    label="r(tf)",
    xlabel="Maximal thrust (Tmax)",
    ylabel="Maximal altitude r(tf)")

plt_sol = plot(sol0; solution_label="(Tmax = "*string(Tmax_list[1])*")")
plot!(plt_sol, sol; solution_label="(Tmax = "*string(Tmax_list[end])*")")

layout = grid(2, 1, heights=[0.2, 0.8])
plot(plt_obj, plt_sol; layout=layout, size=(800, 1000), leftmargin=5mm)
```
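The warm-start pattern used in this tutorial is not specific to optimal control. As a minimal self-contained sketch (plain Newton iterations on x³ = c instead of an OCP; all names here are illustrative, not OptimalControl API), each solution seeds the next solve:

```julia
# Solve x^3 - c = 0 by Newton's method, warm-started at x.
function newton_root(c, x; tol=1e-12)
    iters = 0
    while abs(x^3 - c) > tol
        x -= (x^3 - c) / (3x^2)   # Newton step
        iters += 1
    end
    return x, iters
end

# Discrete continuation on c: reuse the previous root as the initial guess.
function continuation(cs)
    x = 1.0   # initial guess for the first problem only
    for c in cs
        x, iters = newton_root(c, x)
        println("c = $c  root = $x  iterations = $iters")
    end
    return x
end

xfinal = continuation(1:5)
```

Because consecutive problems are close, each solve starts near its solution, which mirrors why the loop above converges in few Ipopt iterations per step.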
docs/src/tutorial-discretisation.md

Lines changed: 98 additions & 0 deletions

# [Discretisation methods](@id tutorial-discretisation-methods)

## Discretisation formulas

When calling `solve`, the option `disc_method=...` sets the discretisation scheme.
In addition to the default implicit `:trapeze` method (aka Crank–Nicolson), other choices are available, namely the implicit `:midpoint` method and the Gauss–Legendre collocations with 2 and 3 stages, `:gauss_legendre_2` and `:gauss_legendre_3`, of order 4 and 6 respectively.
Note that higher-order methods typically lead to larger NLP problems for the same number of time steps, and that accuracy also depends on the smoothness of the problem.

As an example we will use the [Goddard problem](@ref tutorial-goddard)

```@example main
using OptimalControl  # to define the optimal control problem and more
using NLPModelsIpopt  # to solve the problem via a direct method
using Plots           # to plot the solution

t0 = 0      # initial time
r0 = 1      # initial altitude
v0 = 0      # initial speed
m0 = 1      # initial mass
vmax = 0.1  # maximal authorized speed
mf = 0.6    # final mass to target

ocp = @def begin # definition of the optimal control problem

    tf ∈ R, variable
    t ∈ [t0, tf], time
    x = (r, v, m) ∈ R³, state
    u ∈ R, control

    x(t0) == [ r0, v0, m0 ]
    m(tf) == mf, (1)
    0 ≤ u(t) ≤ 1
    r(t) ≥ r0
    0 ≤ v(t) ≤ vmax

    ẋ(t) == F0(x(t)) + u(t) * F1(x(t))

    r(tf) → max

end;

# Dynamics
const Cd = 310
const Tmax = 3.5
const β = 500
const b = 2

F0(x) = begin
    r, v, m = x
    D = Cd * v^2 * exp(-β*(r - 1)) # drag force
    return [ v, -D/m - 1/r^2, 0 ]
end

F1(x) = begin
    r, v, m = x
    return [ 0, Tmax/m, -b*Tmax ]
end
nothing # hide
```

Now let us compare several discretisations

```@example main
sol_trapeze = solve(ocp; tol=1e-8)
plot(sol_trapeze)

sol_midpoint = solve(ocp, disc_method=:midpoint; tol=1e-8)
plot!(sol_midpoint)

sol_euler = solve(ocp, disc_method=:euler; tol=1e-8)
plot!(sol_euler)

sol_euler_imp = solve(ocp, disc_method=:euler_implicit; tol=1e-8)
plot!(sol_euler_imp)

sol_gl2 = solve(ocp, disc_method=:gauss_legendre_2; tol=1e-8)
plot!(sol_gl2)

sol_gl3 = solve(ocp, disc_method=:gauss_legendre_3; tol=1e-8)
plot!(sol_gl3)
```

## Large problems and AD backend

For some large problems, you may notice that solving spends a long time before the iterations actually begin.
This is due to the computation of the sparse derivatives, namely the Jacobian of the constraints and the Hessian of the Lagrangian, which can become quite costly.
A possible alternative is to set the option `adnlp_backend=:manual`, which uses more basic sparsity patterns.
The resulting matrices are faster to compute but also less sparse, so this is a trade-off between the AD preparation and the optimisation itself.

```@example main
solve(ocp, disc_method=:gauss_legendre_3, grid_size=1000, adnlp_backend=:manual)
nothing # hide
```

## Explicit time grid

The option `time_grid=...` allows passing the complete time grid vector `t0, t1, ..., tf`, which is typically useful if one wants a non-uniform grid.
In the case of a free initial and/or final time, provide a normalised grid between 0 and 1.
Note that `time_grid` overrides `grid_size` if both are present.

```@example main
sol = solve(ocp, time_grid=[0, 0.1, 0.5, 0.9, 1], display=false)
println(time_grid(sol))
```
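The order claims above (trapeze of order 2 versus Euler of order 1) can be checked without OptimalControl on a toy ODE. A standalone sketch on x′ = −x, x(0) = 1, over [0, 1], where both schemes have closed-form updates:

```julia
# For x' = -x, one step of size h = 1/N reads:
#   explicit Euler:           x ← x * (1 - h)
#   trapeze (Crank–Nicolson): x ← x * (1 - h/2) / (1 + h/2)
euler_err(N)   = abs(foldl((x, _) -> x * (1 - 1/N), 1:N; init=1.0) - exp(-1))
trapeze_err(N) = abs(foldl((x, _) -> x * (1 - 0.5/N) / (1 + 0.5/N), 1:N; init=1.0) - exp(-1))

# Halving the step should divide the error by ~2 (order 1) and ~4 (order 2).
println(euler_err(100) / euler_err(200))     # ≈ 2
println(trapeze_err(100) / trapeze_err(200)) # ≈ 4
```

The same step-halving experiment applied to the discretised OCP (via `grid_size`) gives a rough empirical check of a scheme's order on a given problem.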

0 commit comments