
[API Proposal]: NativeTensor<T> #114089

Open
@michaelgsharp

Description


Background and motivation

Currently we have Tensor<T>, which is backed by managed memory. This gives us all the goodness of managed memory and will satisfy most of our needs. There will be times, however, when we need a tensor type that is backed by native memory. This can occur if we need to store more data than fits in a managed array, if we need to point to native memory that already exists, or if we are wrapping tensors from other frameworks.

API Proposal

public static partial class NativeTensor
{
        // Allocate native memory based on the specified lengths and fill it with a Gaussian normal distribution.
        // The tensor will own the underlying memory, so it will be responsible for freeing it. 
        public static System.Numerics.Tensors.NativeTensor<T> AllocateAndFillGaussianNormalDistribution<T>(System.Random random, params scoped System.ReadOnlySpan<nint> lengths) where T : System.Numerics.IFloatingPoint<T>;
        public static System.Numerics.Tensors.NativeTensor<T> AllocateAndFillGaussianNormalDistribution<T>(params scoped System.ReadOnlySpan<nint> lengths) where T : System.Numerics.IFloatingPoint<T>;

        // Allocate native memory based on the specified lengths and fill it with a uniform distribution.
        // The tensor will own the underlying memory, so it will be responsible for freeing it. 
        public static System.Numerics.Tensors.NativeTensor<T> AllocateAndFillUniformDistribution<T>(System.Random random, params scoped System.ReadOnlySpan<nint> lengths) where T : System.Numerics.IFloatingPoint<T>;
        public static System.Numerics.Tensors.NativeTensor<T> AllocateAndFillUniformDistribution<T>(params scoped System.ReadOnlySpan<nint> lengths) where T : System.Numerics.IFloatingPoint<T>;

        // Allocate native memory based on the specified lengths. Does nothing to fill the memory.
        // The tensor will own the underlying memory, so it will be responsible for freeing it. 
        public static System.Numerics.Tensors.NativeTensor<T> AllocateUninitialized<T>(scoped System.ReadOnlySpan<nint> lengths);
        public static System.Numerics.Tensors.NativeTensor<T> AllocateUninitialized<T>(scoped System.ReadOnlySpan<nint> lengths, scoped System.ReadOnlySpan<nint> strides);

        // Do we want to keep the IEnumerable<T> here? It would copy the data over to the native tensor and essentially allow you to initialize a native tensor with an array of values.
        // The tensor will own the underlying memory, so it will be responsible for freeing it. 
        public static System.Numerics.Tensors.NativeTensor<T> Allocate<T>(System.Collections.Generic.IEnumerable<T> values, scoped System.ReadOnlySpan<nint> lengths);
        public static System.Numerics.Tensors.NativeTensor<T> Allocate<T>(System.Collections.Generic.IEnumerable<T> values, scoped System.ReadOnlySpan<nint> lengths, scoped System.ReadOnlySpan<nint> strides);

        // Allocate native memory based on the specified lengths. Zero-filled.
        // The tensor will own the underlying memory, so it will be responsible for freeing it. 
        public static System.Numerics.Tensors.NativeTensor<T> Allocate<T>(scoped System.ReadOnlySpan<nint> lengths);
        public static System.Numerics.Tensors.NativeTensor<T> Allocate<T>(scoped System.ReadOnlySpan<nint> lengths, scoped System.ReadOnlySpan<nint> strides);

        // Do we want to keep the T[] here? It would copy the data over to the native tensor and essentially allow you to initialize a native tensor with an array of values.
        // The tensor will own the underlying memory, so it will be responsible for freeing it. 
        public static System.Numerics.Tensors.NativeTensor<T> Allocate<T>(T[] values, scoped System.ReadOnlySpan<nint> lengths);
        public static System.Numerics.Tensors.NativeTensor<T> Allocate<T>(T[] values, scoped System.ReadOnlySpan<nint> lengths, scoped System.ReadOnlySpan<nint> strides);


        // Create a native tensor using the provided T* location and fill it with a Gaussian normal distribution.
        // The tensor won't own the underlying memory, so it won't be responsible for freeing it.
        public static unsafe System.Numerics.Tensors.NativeTensor<T> CreateAndFillGaussianNormalDistribution<T>(System.Random random, T* data, nint dataLength, params scoped System.ReadOnlySpan<nint> lengths) where T : System.Numerics.IFloatingPoint<T>;
        public static unsafe System.Numerics.Tensors.NativeTensor<T> CreateAndFillGaussianNormalDistribution<T>(T* data, nint dataLength, params scoped System.ReadOnlySpan<nint> lengths) where T : System.Numerics.IFloatingPoint<T>;

        // Create a native tensor using the provided T* location and fill it with a uniform distribution.
        // The tensor won't own the underlying memory, so it won't be responsible for freeing it.
        public static unsafe System.Numerics.Tensors.NativeTensor<T> CreateAndFillUniformDistribution<T>(System.Random random, T* data, nint dataLength, params scoped System.ReadOnlySpan<nint> lengths) where T : System.Numerics.IFloatingPoint<T>;
        public static unsafe System.Numerics.Tensors.NativeTensor<T> CreateAndFillUniformDistribution<T>(T* data, nint dataLength, params scoped System.ReadOnlySpan<nint> lengths) where T : System.Numerics.IFloatingPoint<T>;

        // Create a native tensor using the provided T* location. Does nothing to fill the memory.
        // The tensor won't own the underlying memory, so it won't be responsible for freeing it.
        public static unsafe System.Numerics.Tensors.NativeTensor<T> CreateUninitialized<T>(T* data, nint dataLength, scoped System.ReadOnlySpan<nint> lengths);
        public static unsafe System.Numerics.Tensors.NativeTensor<T> CreateUninitialized<T>(T* data, nint dataLength, scoped System.ReadOnlySpan<nint> lengths, scoped System.ReadOnlySpan<nint> strides);

        // Create a native tensor using the provided T* location. Do we want to keep the IEnumerable<T> here? It would copy the data over to the native tensor and essentially allow you to initialize a native tensor with an array of values.
        // The tensor won't own the underlying memory, so it won't be responsible for freeing it.
        public static unsafe System.Numerics.Tensors.NativeTensor<T> Create<T>(T* data, nint dataLength, System.Collections.Generic.IEnumerable<T> values, scoped System.ReadOnlySpan<nint> lengths);
        public static unsafe System.Numerics.Tensors.NativeTensor<T> Create<T>(T* data, nint dataLength, System.Collections.Generic.IEnumerable<T> values, scoped System.ReadOnlySpan<nint> lengths, scoped System.ReadOnlySpan<nint> strides);

        // Create a native tensor using the provided T* location. Zero-filled.
        // The tensor won't own the underlying memory, so it won't be responsible for freeing it.
        public static unsafe System.Numerics.Tensors.NativeTensor<T> Create<T>(T* data, nint dataLength, scoped System.ReadOnlySpan<nint> lengths);
        public static unsafe System.Numerics.Tensors.NativeTensor<T> Create<T>(T* data, nint dataLength, scoped System.ReadOnlySpan<nint> lengths, scoped System.ReadOnlySpan<nint> strides);

        // Create a native tensor using the provided T* location. Do we want to keep the T[] here? It would copy the data over to the native tensor and essentially allow you to initialize a native tensor with an array of values.
        // The tensor won't own the underlying memory, so it won't be responsible for freeing it.
        public static unsafe System.Numerics.Tensors.NativeTensor<T> Create<T>(T* data, nint dataLength, T[] values, scoped System.ReadOnlySpan<nint> lengths);
        public static unsafe System.Numerics.Tensors.NativeTensor<T> Create<T>(T* data, nint dataLength, T[] values, scoped System.ReadOnlySpan<nint> lengths, scoped System.ReadOnlySpan<nint> strides);
}


public sealed class NativeTensor<T> : System.Collections.Generic.IEnumerable<T>, System.Numerics.Tensors.IReadOnlyTensor<System.Numerics.Tensors.NativeTensor<T>, T>, System.Numerics.Tensors.ITensor<System.Numerics.Tensors.NativeTensor<T>, T>, IDisposable
{
        internal NativeTensor() { }
        // This would let us create a NativeTensor from a parent NativeTensor so that we can share the underlying memory but still have lifetimes tracked correctly.
        internal NativeTensor(NativeTensor<T> parent, nint start, scoped System.ReadOnlySpan<nint> lengths, scoped System.ReadOnlySpan<nint> strides) { }

        // Everything below is exactly the same as the normal Tensor<T>.
        public static System.Numerics.Tensors.NativeTensor<T> Empty { get; }
        public nint FlattenedLength { get; }
        public bool IsEmpty { get; }
        public bool IsPinned { get; }
        public System.Numerics.Tensors.NativeTensor<T> this[System.Numerics.Tensors.Tensor<bool> filter] { get; }
        public ref T this[params scoped System.ReadOnlySpan<System.Buffers.NIndex> indexes] { get; }
        public System.Numerics.Tensors.NativeTensor<T> this[params scoped System.ReadOnlySpan<System.Buffers.NRange> ranges] { get; set { } }
        public ref T this[params scoped System.ReadOnlySpan<nint> indexes] { get; }
        public System.ReadOnlySpan<nint> Lengths { get; }
        public int Rank { get; }
        public System.ReadOnlySpan<nint> Strides { get; }
        object System.Numerics.Tensors.IReadOnlyTensor.this[params scoped System.ReadOnlySpan<System.Buffers.NIndex> indexes] { get; }
        object System.Numerics.Tensors.IReadOnlyTensor.this[params scoped System.ReadOnlySpan<nint> indexes] { get; }
        System.ReadOnlySpan<nint> System.Numerics.Tensors.IReadOnlyTensor.Lengths { get; }
        System.ReadOnlySpan<nint> System.Numerics.Tensors.IReadOnlyTensor.Strides { get; }
        T System.Numerics.Tensors.IReadOnlyTensor<System.Numerics.Tensors.NativeTensor<T>, T>.this[params scoped System.ReadOnlySpan<System.Buffers.NIndex> indexes] { get; }
        System.Numerics.Tensors.NativeTensor<T> System.Numerics.Tensors.IReadOnlyTensor<System.Numerics.Tensors.NativeTensor<T>, T>.this[params scoped System.ReadOnlySpan<System.Buffers.NRange> ranges] { get; }
        T System.Numerics.Tensors.IReadOnlyTensor<System.Numerics.Tensors.NativeTensor<T>, T>.this[params scoped System.ReadOnlySpan<nint> indexes] { get; }
        bool System.Numerics.Tensors.ITensor.IsReadOnly { get; }
        object System.Numerics.Tensors.ITensor.this[params scoped System.ReadOnlySpan<System.Buffers.NIndex> indexes] { get; set { } }
        object System.Numerics.Tensors.ITensor.this[params scoped System.ReadOnlySpan<nint> indexes] { get; set { } }
        T System.Numerics.Tensors.ITensor<System.Numerics.Tensors.NativeTensor<T>, T>.this[params scoped System.ReadOnlySpan<System.Buffers.NIndex> indexes] { get; set { } }
        T System.Numerics.Tensors.ITensor<System.Numerics.Tensors.NativeTensor<T>, T>.this[params scoped System.ReadOnlySpan<nint> indexes] { get; set { } }
        public System.Numerics.Tensors.ReadOnlyTensorSpan<T> AsReadOnlyTensorSpan();
        public System.Numerics.Tensors.ReadOnlyTensorSpan<T> AsReadOnlyTensorSpan(params scoped System.ReadOnlySpan<System.Buffers.NIndex> startIndex);
        public System.Numerics.Tensors.ReadOnlyTensorSpan<T> AsReadOnlyTensorSpan(params scoped System.ReadOnlySpan<System.Buffers.NRange> start);
        public System.Numerics.Tensors.ReadOnlyTensorSpan<T> AsReadOnlyTensorSpan(params scoped System.ReadOnlySpan<nint> start);
        public System.Numerics.Tensors.TensorSpan<T> AsTensorSpan();
        public System.Numerics.Tensors.TensorSpan<T> AsTensorSpan(params scoped System.ReadOnlySpan<System.Buffers.NIndex> startIndex);
        public System.Numerics.Tensors.TensorSpan<T> AsTensorSpan(params scoped System.ReadOnlySpan<System.Buffers.NRange> start);
        public System.Numerics.Tensors.TensorSpan<T> AsTensorSpan(params scoped System.ReadOnlySpan<nint> start);
        public void Clear() { }
        public void CopyTo(scoped System.Numerics.Tensors.TensorSpan<T> destination) { }
        public void Fill(object value) { }
        public void Fill(T value) { }
        public void FlattenTo(scoped System.Span<T> destination) { }
        public System.Collections.Generic.IEnumerator<T> GetEnumerator();
        public override int GetHashCode();
        public ref T GetPinnableReference();
        public System.Buffers.MemoryHandle GetPinnedHandle();
        public static implicit operator System.Numerics.Tensors.ReadOnlyTensorSpan<T> (System.Numerics.Tensors.NativeTensor<T> value);
        public static implicit operator System.Numerics.Tensors.TensorSpan<T> (System.Numerics.Tensors.NativeTensor<T> value);
        public static implicit operator System.Numerics.Tensors.NativeTensor<T> (T[] array);
        public System.Numerics.Tensors.NativeTensor<T> Slice(params scoped System.ReadOnlySpan<System.Buffers.NIndex> startIndex);
        public System.Numerics.Tensors.NativeTensor<T> Slice(params scoped System.ReadOnlySpan<System.Buffers.NRange> start);
        public System.Numerics.Tensors.NativeTensor<T> Slice(params scoped System.ReadOnlySpan<nint> start);
        System.Collections.Generic.IEnumerator<T> System.Collections.Generic.IEnumerable<T>.GetEnumerator();
        System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator();
        ref readonly T System.Numerics.Tensors.IReadOnlyTensor<System.Numerics.Tensors.NativeTensor<T>, T>.GetPinnableReference();
        static System.Numerics.Tensors.NativeTensor<T> System.Numerics.Tensors.ITensor<System.Numerics.Tensors.NativeTensor<T>, T>.Create(scoped System.ReadOnlySpan<nint> lengths, bool pinned);
        static System.Numerics.Tensors.NativeTensor<T> System.Numerics.Tensors.ITensor<System.Numerics.Tensors.NativeTensor<T>, T>.Create(scoped System.ReadOnlySpan<nint> lengths, scoped System.ReadOnlySpan<nint> strides, bool pinned);
        static System.Numerics.Tensors.NativeTensor<T> System.Numerics.Tensors.ITensor<System.Numerics.Tensors.NativeTensor<T>, T>.CreateUninitialized(scoped System.ReadOnlySpan<nint> lengths, bool pinned);
        static System.Numerics.Tensors.NativeTensor<T> System.Numerics.Tensors.ITensor<System.Numerics.Tensors.NativeTensor<T>, T>.CreateUninitialized(scoped System.ReadOnlySpan<nint> lengths, scoped System.ReadOnlySpan<nint> strides, bool pinned);
        public string ToString(params scoped System.ReadOnlySpan<nint> maximumLengths);
        public bool TryCopyTo(scoped System.Numerics.Tensors.TensorSpan<T> destination);
        public bool TryFlattenTo(scoped System.Span<T> destination);
        public void Dispose();
}

API Usage

Same as the normal Tensor<T>; a sketch follows below.
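A minimal sketch of how the proposed factories might be used, assuming the API shapes listed above. The collection-expression and params-span syntax for lengths, and the use of System.Runtime.InteropServices.NativeMemory to stand in for externally owned native memory, are illustrative assumptions rather than part of this proposal.

using System;
using System.Numerics.Tensors;
using System.Runtime.InteropServices;

// Owning allocations: the factory allocates native memory and the tensor frees it on Dispose.
using (NativeTensor<float> owned = NativeTensor.Allocate<float>([2, 3]))
{
    owned.Fill(1f);
    owned[0, 1] = 42f;
    TensorSpan<float> span = owned.AsTensorSpan(); // interoperates with the existing span-based APIs
}

// Owning allocation with a random fill, using the params lengths overload.
using NativeTensor<float> gaussian = NativeTensor.AllocateAndFillGaussianNormalDistribution<float>(Random.Shared, 2, 3);

// Non-owning wrap: the tensor points at memory it did not allocate and never frees it.
// (Requires <AllowUnsafeBlocks>true</AllowUnsafeBlocks>.)
unsafe
{
    float* data = (float*)NativeMemory.Alloc((nuint)(6 * sizeof(float))); // e.g. a buffer handed to us by another framework
    try
    {
        using NativeTensor<float> wrapped = NativeTensor.Create<float>(data, 6, [2, 3]);
        wrapped.Clear();
    }
    finally
    {
        NativeMemory.Free(data); // the caller remains responsible for freeing the buffer
    }
}

The Allocate*/Create* split mirrors the ownership distinction called out in the comments above: Allocate factories produce tensors that free their memory on Dispose, while Create factories wrap caller-owned memory and never free it.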

Alternative Designs

No response

Risks

No response
