Error with StaticArrays #126

Closed
@bjack205

Description

I'm getting an error when running the following code:

using FiniteDiff, StaticArrays
n = 5
x = @SVector randn(n)
f(x) = x'x + sin(x[1])
grad = zeros(n)
cache = FiniteDiff.GradientCache(grad, zeros(n), Val(:forward))
FiniteDiff.finite_difference_gradient!(grad, f, x, cache)

which produces this error:

ERROR: LoadError: MethodError: no method matching ndims(::Type{Nothing})
Closest candidates are:
  ndims(::Number) at number.jl:83
  ndims(::LinearAlgebra.UniformScaling) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/uniformscaling.jl:87
  ndims(::Base.Iterators.ProductIterator) at iterators.jl:967
  ...
Stacktrace:
 [1] Base.Broadcast.BroadcastStyle(#unused#::Type{Nothing})
   @ Base.Broadcast ./broadcast.jl:103
 [2] combine_styles(c::Nothing)
   @ Base.Broadcast ./broadcast.jl:420
 [3] combine_styles(c1::Nothing, c2::Base.Broadcast.Broadcasted{StaticArrays.StaticArrayStyle{1}, Nothing, typeof(FiniteDiff.compute_epsilon), Tuple{Base.RefValue{Val{:forward}}, SVector{5, Float64}, Float64, Float64, Bool}})
   @ Base.Broadcast ./broadcast.jl:421
 [4] materialize!
   @ ./broadcast.jl:891 [inlined]
 [5] finite_difference_gradient!(df::Vector{Float64}, f::typeof(f), x::SVector{5, Float64}, cache::FiniteDiff.GradientCache{Nothing, Nothing, Nothing, Vector{Float64}, Val{:forward}(), Float64, Val{true}()}; relstep::Float64, absstep::Float64, dir::Bool)
   @ FiniteDiff ~/.julia/packages/FiniteDiff/msXcU/src/gradients.jl:140
 [6] finite_difference_gradient!(df::Vector{Float64}, f::Function, x::SVector{5, Float64}, cache::FiniteDiff.GradientCache{Nothing, Nothing, Nothing, Vector{Float64}, Val{:forward}(), Float64, Val{true}()})
   @ FiniteDiff ~/.julia/packages/FiniteDiff/msXcU/src/gradients.jl:138
 [7] top-level scope
   @ ~/.julia/dev/RobotDynamics/test/scalar_function_test.jl:165
in expression starting at /home/brian/.julia/dev/RobotDynamics/test/scalar_function_test.jl:165

I'm explicitly not passing x to the GradientCache because I want to be able to support normal Vector inputs with the same cache. There definitely seem to be some errors in the non-StridedVector gradient function, since there are calls to compute_epsilon that pass the whole vector x instead of an element of x.
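To illustrate, here are the two call patterns as I read them from the stack trace (a sketch, not the actual gradients.jl source; the relstep/absstep values are my guess at the forward-difference defaults):

using FiniteDiff, StaticArrays

x = @SVector randn(5)
relstep = sqrt(eps(Float64))  # forward-difference default, I believe
absstep = relstep
dir = true

# Per-element call, as on the StridedVector fast path:
eps1 = FiniteDiff.compute_epsilon(Val(:forward), x[1], relstep, absstep, dir)

# Whole-vector broadcast, matching the failing materialize! frame; it
# runs fine on its own, but the fallback path appears to write the
# result into cache.c2, which is `nothing` for a cache built without x,
# hence the ndims(::Type{Nothing}) MethodError:
eps_all = FiniteDiff.compute_epsilon.(Ref(Val(:forward)), x, relstep, absstep, dir)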

If I copy the StridedVector code and allow a StaticVector, the gradient computes just fine and without any allocations. Since you already depend on StaticArrays, maybe the StridedVector version could take a Union{StridedVector{<:Number}, StaticVector{<:Any,<:Number}} for the input vector x? A sketch of what I mean is below.
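A sketch only, with a made-up alias name; the method body would stay identical to the existing StridedVector method:

using StaticArrays

# Made-up alias for the widened input type:
const FDVector = Union{StridedVector{<:Number}, StaticVector{<:Any,<:Number}}

# The existing fast-path signature
#     finite_difference_gradient!(df, f, x::StridedVector{<:Number}, cache; kwargs...)
# would then become
#     finite_difference_gradient!(df, f, x::FDVector, cache; kwargs...)
# with the body unchanged.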

I'm not quite following all the logic in GradientCache, so maybe my simple suggestion is missing something, but it would be nice to support this kind of use case.
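For anyone else hitting this in the meantime, converting at the call site should sidestep the error; it allocates, so it defeats the point of StaticArrays, but it uses the same x-free cache:

FiniteDiff.finite_difference_gradient!(grad, f, Vector(x), cache)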
