
Epic: IDistributedCache updates in .NET 9 #53255

Open

Description

Update:

HybridCache has relocated to dotnet/extensions:dev; it does not ship in .NET 9 RC1, as a few missing but necessary features are still in development. However, we expect to ship either alongside or very shortly after .NET 9! ("extensions" has a different release train that allows additional changes beyond the cut-off usually applied to in-box packages; HybridCache has always been described as out-of-box - i.e. a NuGet package - so there is no reason to limit ourselves to the runtime's restrictions.)


Status: feedback eagerly sought

TL;DR

  • add a new HybridCache API (and supporting pieces) to support more convenient and efficient distributed cache usage
  • support read-through caching with lambda callbacks
  • support flexible serialization
  • support stampede protection
  • support L1/L2 cache scenarios
  • build on top of IDistributedCache so that all existing cache backends work without change (although they could optionally add support for new features)
  • support comparable expiration concepts to IDistributedCache

Problem statement

The distributed cache in ASP.NET (i.e. IDistributedCache) is not particularly well developed; it is inconvenient to use, lacks many desirable features, and is inefficient. We would like this API to be a "no-brainer", easy-to-get-right feature, making it desirable to use - giving better performance and a better experience with the framework.

Typical usage illustrates the problems; being explicit about them:

Inconvenient usage

The usage right now is extremely manual; you need to:

  • attempt to read a stored value (as byte[])
  • check that value for null ("no value")
    • if null (a cache miss):
      • fetch the value from the underlying backend
      • serialize it
      • store the serialized value in the cache
    • otherwise: deserialize the stored byte[]
  • return the value

This is a lot of verbose boilerplate, and while it can be abstracted inside projects using utility methods (often extension methods), the vanilla experience is very poor.
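The steps above can be sketched as follows; this is an illustrative example of today's pattern, where `Customer` and `GetCustomerAsync` are hypothetical stand-ins for the caller's model and backend read:

```csharp
using System.Text.Json;
using Microsoft.Extensions.Caching.Distributed;

public class CustomerCache(IDistributedCache cache)
{
    public async Task<Customer> GetAsync(string id, CancellationToken ct = default)
    {
        var key = $"customer:{id}";
        byte[]? bytes = await cache.GetAsync(key, ct);          // attempt to read
        if (bytes is null)                                      // "no value" - cache miss
        {
            Customer value = await GetCustomerAsync(id, ct);    // fetch from backend
            bytes = JsonSerializer.SerializeToUtf8Bytes(value); // serialize
            await cache.SetAsync(key, bytes, new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
            }, ct);                                             // store
            return value;
        }
        return JsonSerializer.Deserialize<Customer>(bytes)!;    // cache hit: deserialize
    }

    // hypothetical backend read
    private Task<Customer> GetCustomerAsync(string id, CancellationToken ct)
        => throw new NotImplementedException();
}

public record Customer(string Id, string Name);
```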

Inefficiencies

The existing API is solely based on byte[]; the demand for right-sized arrays means no pooled buffers can be used. This broadly works for in-process memory-based caches, since the same byte[] can be returned repeatedly (although this implicitly assumes the code doesn't mutate the data in the byte[]), but for out-of-process caches this is extremely inefficient, requiring constant allocation.

Missing features

The existing API is extremely limited; the concrete and implementation-specific IDistributedCache implementation is handed directly to callers, which means there is no shared code reuse to help provide these features in a central way. In particular, there is no mechanism for helping with "stampede" scenarios - i.e. multiple concurrent requests for the same non-cached value, causing concurrent backend load for the same data, whether due to a cold-start empty cache, or key invalidation. There are multiple best-practice approaches that can mitigate this scenario, which we do not currently employ.

Likewise, we currently assume an in-process or out-of-process cache implementation, but caching almost always benefits from multi-tier storage, with a limited in-process (L1) cache supplemented by a separate (usually larger) out-of-process (L2) cache; this gives the "best of both" world, where the majority of fetches are served efficiently from L1, but cold-start and less-frequently-accessed data still doesn't hammer the underlying backend, thanks to L2. Multi-tier caching can sometimes additionally exploit cache-invalidation support from the L2 implementation, to provide prompt L1 invalidation as required.

This epic proposes changes to fill these gaps.

Current code layout

At the moment the code is split over multiple components, in the main runtime, asp.net, and external packages (only key APIs shown):

This list is not exhaustive - other 3rd-party and private implementations of IDistributedCache exist, and we should avoid breaking the world.

Proposal

The key proposal here is to add a new caching abstraction that is more focused, HybridCache, in Microsoft.Extensions.Caching.Abstractions; this API is designed to act more as a read-through cache, building on top of the existing IDistributedCache implementation, providing all the implementation details required for a rich experience. Additionally, while simple defaults are provided for the serializer, it is an explicit aim to make such concerns fully configurable, allowing for JSON, protobuf, XML, etc. serialization as appropriate to the consumer.

namespace Microsoft.Extensions.Caching.Distributed;

public abstract class HybridCache // default concrete impl provided by service registration
{
    protected HybridCache() { }

    // read-thru usage
    public abstract ValueTask<T> GetOrCreateAsync<TState, T>(string key, TState state, Func<TState, CancellationToken, ValueTask<T>> callback, HybridCacheEntryOptions? options = null, ReadOnlyMemory<string> tags = default, CancellationToken cancellationToken = default);
    public virtual ValueTask<T> GetOrCreateAsync<T>(string key, Func<CancellationToken, ValueTask<T>> callback,
        HybridCacheEntryOptions? options = null, ReadOnlyMemory<string> tags = default, CancellationToken cancellationToken = default)
    { /* shared default implementation uses TState/T impl */ }

    // manual usage
    public abstract ValueTask<(bool Exists, T Value)> GetAsync<T>(string key, HybridCacheEntryOptions? options = null, CancellationToken cancellationToken = default);
    public abstract ValueTask SetAsync<T>(string key, T value, HybridCacheEntryOptions? options = null, ReadOnlyMemory<string> tags = default, CancellationToken cancellationToken = default);

    // key invalidation
    public abstract ValueTask RemoveKeyAsync(string key, CancellationToken cancellationToken = default);
    public virtual ValueTask RemoveKeysAsync(ReadOnlyMemory<string> keys, CancellationToken cancellationToken = default)
    { /* shared default implementation uses RemoveKeyAsync */ }

    // tag invalidation
    public virtual ValueTask RemoveTagAsync(string tag, CancellationToken cancellationToken = default)
    { /* shared default implementation uses RemoveTagsAsync */ }
    public virtual ValueTask RemoveTagsAsync(ReadOnlyMemory<string> tags, CancellationToken cancellationToken = default) => default;
}

Notes:

  • the intent is that instead of requesting IDistributedCache, consumers might use HybridCache; to enable this, the consumer must additionally perform a services.AddHybridCache(...); step during registration
  • the naming of GetOrCreateAsync<T> is for parity with MemoryCache.GetOrCreateAsync<T>
  • RemoveAsync and RefreshAsync mirror the similar IDistributedCache methods
  • it is expected that the callback (when invoked) will return a non-null value; consistent with MemoryCache et al., null is not a supported value, and an appropriate runtime error will be raised

Usage of this API is then via a read-through approach using a lambda; the simplest (but slightly less efficient) approach would be simply:

// HybridCache injected via DI
var data = await cache.GetOrCreateAsync(key, _ => /* some backend read */, [expiration etc], [cancellation]);

In this simple usage, it is anticipated that "captured variables" etc. are used to convey the additional state required, as is common for lambda scenarios. A second "stateful" API is provided for more advanced scenarios where the caller wishes to trade convenience for efficiency; this usage is slightly more verbose but will be immediately familiar to users who would want this feature:

// HybridCache injected via DI
var data = await cache.GetOrCreateAsync(key, (some state here), static (state, _) => /* some backend read */, [expiration etc], [cancellation]);

This has been prototyped and works successfully with type inference etc.

The implementation (see later) deals with all the backend fetch, testing, serialization etc aspects internally.

(in both examples, the discard (_) receives the CancellationToken for the backend read; to use it, give the lambda a named parameter instead of discarding it)
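Both usage patterns can be sketched against the proposed API like so; `Customer` and `GetCustomerAsync` are hypothetical stand-ins for the caller's model and backend read:

```csharp
// Sketch against the *proposed* HybridCache API - subject to change.
public class SomeService(HybridCache cache)
{
    // simple overload: captured variables (id, this) carry the state
    public async ValueTask<Customer> GetViaCaptureAsync(string id, CancellationToken ct = default) =>
        await cache.GetOrCreateAsync(
            $"customer:{id}",
            async cancel => await GetCustomerAsync(id, cancel), // named token instead of "_"
            cancellationToken: ct);

    // stateful overload: no capture, so the lambda can be 'static'
    public async ValueTask<Customer> GetViaStateAsync(string id, CancellationToken ct = default) =>
        await cache.GetOrCreateAsync(
            $"customer:{id}",
            (Id: id, Service: this), // tuple conveys the state explicitly
            static (state, cancel) => state.Service.GetCustomerAsync(state.Id, cancel),
            cancellationToken: ct);

    // hypothetical backend read
    private ValueTask<Customer> GetCustomerAsync(string id, CancellationToken ct)
        => throw new NotImplementedException();
}

public record Customer(string Id, string Name);
```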

An internal implementation of this API would be registered and injected via a new AddHybridCache API (Microsoft.Extensions.Caching.Abstractions):

namespace Microsoft.Extensions.Caching.Distributed;

public static class HybridCacheServiceExtensions
{
    public static IServiceCollection AddHybridCache(this IServiceCollection services, Action<HybridCacheOptions> setupAction)
    {...}

    public static IServiceCollection AddHybridCache(this IServiceCollection services)
    {...}
}
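Registration might then look like the following sketch, pairing the proposed AddHybridCache with an existing IDistributedCache backend (Redis here, via the existing AddStackExchangeRedisCache API):

```csharp
var builder = WebApplication.CreateBuilder(args);

// existing L2 backend: any IDistributedCache registration works unchanged
builder.Services.AddStackExchangeRedisCache(options =>
    options.Configuration = "localhost:6379");

// proposed: layers HybridCache over whatever IDistributedCache is registered
builder.Services.AddHybridCache();

var app = builder.Build();
```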

The internal implementation behind this would receive IDistributedCache for the backend, as it exists currently; this means that the new implementation can use all existing distributed cache backends. By default, AddDistributedMemoryCache is also assumed and applied automatically, but it is intended that this API be effective with arbitrary IDistributedCache backends such as Redis, SQL Server, etc. However, to address the issue of byte[] inefficiency, an entirely optional new API is provided and tested for; if the new backend is detected, lower-allocation usage is possible. This follows the pattern used for output-cache in .NET 8:

namespace Microsoft.Extensions.Caching.Distributed;

public interface IBufferDistributedCache : IDistributedCache
{
    ValueTask<CacheGetResult> GetAsync(string key, IBufferWriter<byte> destination, CancellationToken cancellationToken);
    ValueTask SetAsync(string key, ReadOnlySequence<byte> value, DistributedCacheEntryOptions options, CancellationToken cancellationToken);
}

public readonly struct CacheGetResult
{
    public CacheGetResult(bool exists);
    public CacheGetResult(DateTime expiry);
    public CacheGetResult(TimeSpan expiry);

    public bool Exists { get; }
    public TimeSpan? ExpiryRelative { get; }
    public DateTime? ExpiryAbsolute { get; }
}

(the intent of the dual expiry members here is to convey expiration in the most appropriate way for the backend, relative vs absolute, although only one can be specified; the internals are an implementation detail, likely to use an overlapped 8 bytes for the DateTime/TimeSpan, with a discriminator)

In the event that the backend cache implementation does not yet implement this API, the byte[] API is used instead, which is exactly the status quo, so: no harm. The purpose of CacheGetResult is to allow the backend to convey expiration information, relevant for L1+L2 scenarios (design note: async precludes an out TimeSpan? parameter; a tuple-type result would be simpler, but is hard to extend later). The expiry is entirely optional - some backends may not be able to convey it, and it is never available when IBufferDistributedCache is not supported; in either event, the inbound expiration relative to now will be assumed for L1 - not ideal, but the best we have.
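The detection-and-fallback described above might be sketched as follows; this is illustrative only, and a real implementation would consume the written buffer directly rather than copying it to an array as done here for brevity:

```csharp
using System.Buffers;
using Microsoft.Extensions.Caching.Distributed;

internal static class BackendReader // hypothetical helper, sketch only
{
    public static async ValueTask<byte[]?> ReadAsync(
        IDistributedCache cache, string key, CancellationToken ct)
    {
        if (cache is IBufferDistributedCache buffered)
        {
            // preferred path: backend writes into a pooled/reusable buffer
            var writer = new ArrayBufferWriter<byte>();
            CacheGetResult result = await buffered.GetAsync(key, writer, ct);
            return result.Exists ? writer.WrittenSpan.ToArray() : null;
        }

        // status-quo byte[] path: works with every existing backend
        return await cache.GetAsync(key, ct);
    }
}
```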

Serialization

For serialization, a new API is proposed, designed to be trivially implemented by most serializers - again, preferring modern buffer APIs:

namespace Microsoft.Extensions.Caching.Distributed;

public interface IHybridCacheSerializer<T>
{
    T Deserialize(ReadOnlySequence<byte> source);
    void Serialize(T value, IBufferWriter<byte> target);
}

Inbuilt handlers would be provided for string and byte[] (and possibly BinaryData, if references allow); an extensible serialization configuration API supports other types. By default, an inbuilt object serializer using System.Text.Json would be assumed, but it is intended that alternative serializers can be provided globally or per-type. This is likely to be for more bandwidth-efficient scenarios, such as protobuf (Google.Protobuf or protobuf-net), but could also help match pre-existing serialization choices. While manually registering a specific IHybridCacheSerializer<Foo> should work, it is also intended to generalize the problem of serializer selection via an ordered set of serializer factories, specifically by registering some number of:

namespace Microsoft.Extensions.Caching.Distributed;

public interface IHybridCacheSerializerFactory
{
    bool TryCreateSerializer<T>([NotNullWhen(true)] out IHybridCacheSerializer<T>? serializer);
}

By default, we will register a specific serializer for string, and a single factory that uses System.Text.Json, however external library implementations are possible, for example:

namespace Microsoft.Extensions.Caching.Distributed;

[SuppressMessage("ApiDesign", "RS0016:Add public types and members to the declared API", Justification = "demo code only")]
public static class ProtobufDistributedCacheServiceExtensions
{
    public static IServiceCollection AddHybridCacheSerializerProtobufNet(this IServiceCollection services)
    {
        ArgumentNullException.ThrowIfNull(services);
        services.AddSingleton<IHybridCacheSerializerFactory, ProtobufNetSerializerFactory>();
        return services;
    }

    private sealed class ProtobufNetSerializerFactory : IHybridCacheSerializerFactory
    {
        public bool TryCreateSerializer<T>([NotNullWhen(true)] out IHybridCacheSerializer<T>? serializer)
        {
            // in real implementation, would use library rules
            if (Attribute.IsDefined(typeof(T), typeof(DataContractAttribute)))
            {
                serializer = new ProtobufNetSerializer<T>();
                return true;
            }
            serializer = null;
            return false;
        }
    }
    internal sealed class ProtobufNetSerializer<T> : IHybridCacheSerializer<T>
    {
        // in real implementation, would use library serializer
        public T Deserialize(ReadOnlySequence<byte> source) => throw new NotImplementedException();

        public void Serialize(T value, IBufferWriter<byte> target) => throw new NotImplementedException();
    }
}

The internal implementation of HybridCache would look up the serializer for T as needed, caching it locally to avoid repeatedly invoking the factory API.
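That per-type caching might be sketched like this, assuming the proposed factory API; the class name and field layout are hypothetical:

```csharp
using System.Collections.Concurrent;
using Microsoft.Extensions.Caching.Distributed;

internal sealed class DefaultHybridCache // hypothetical internal implementation
{
    private readonly ConcurrentDictionary<Type, object> _serializers = new();
    private readonly IHybridCacheSerializerFactory[] _factories; // registered order matters

    public DefaultHybridCache(IEnumerable<IHybridCacheSerializerFactory> factories)
        => _factories = factories.ToArray();

    private IHybridCacheSerializer<T> GetSerializer<T>() =>
        (IHybridCacheSerializer<T>)_serializers.GetOrAdd(typeof(T), _ =>
        {
            foreach (var factory in _factories)
            {
                if (factory.TryCreateSerializer<T>(out var serializer))
                {
                    return serializer; // first matching factory wins; cached thereafter
                }
            }
            throw new InvalidOperationException($"No serializer configured for {typeof(T)}");
        });
}
```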

Additional functionality

The internal implementation of HybridCache should also:

  • hold the necessary state to serve concurrent requests for the same key from the same incomplete task, similar to the output-cache implementation
  • hold the necessary state to support L1/L2 caching
  • optionally, support L1 invalidation by a new optional invalidation API

Note that it is this additional state for stampede and L1/L2 scenarios (and the serializer choice, etc) that makes it impractical to provide this feature simply as extension methods on the existing IDistributedCache.

The new invalidation API is anticipated to be something like:

namespace Microsoft.Extensions.Caching.Distributed;

public interface IDistributedCacheInvalidation : IDistributedCache
{
    event Func<string, ValueTask> CacheKeyInvalidated;
}

(the exact shape of this API is still under discussion)

When this is detected, the event would be subscribed to perform L1 cache invalidation from the backend.
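Wiring this up might look like the following sketch, assuming an IMemoryCache-based L1; the exact API shape is, as noted, still under discussion:

```csharp
// sketch only: subscribe backend invalidation to L1 eviction
if (distributedCache is IDistributedCacheInvalidation invalidation)
{
    invalidation.CacheKeyInvalidated += key =>
    {
        l1Cache.Remove(key); // evict the now-stale L1 entry (IMemoryCache)
        return default;      // ValueTask completed synchronously
    };
}
```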

Additional things to be explored for HybridCacheOptions:

  • options for L1 / L2 caching; perhaps enabled by default if we have IDistributedCacheInvalidation ?
  • eager pre-fetch, i.e. "you've asked for X, and the L1 value is still valid, but only just; I'll give you the L1 value, but I'll kick off a fetch against the backend, so there is not a delay when it expires shortly" (disabled by default, due to concerns over lambdas and captured state mutation)
  • compression (disabled by default, for simple compatibility with existing data)
  • ...?

Additional modules to be enhanced

To validate the feature set, and to provide the richest experience:

  • Microsoft.Extensions.Caching.StackExchangeRedis should gain support for IBufferDistributedCache and IDistributedCacheInvalidation - the latter using the "server-assisted client-side caching" feature in Redis
  • Microsoft.Extensions.Caching.SqlServer should gain support for IBufferDistributedCache, if this can be beneficial regarding allocations
  • guidance should be offered to the Microsoft.Extensions.Caching.Cosmos owners, and if possible: Alachisoft.NCache.OpenSource.SDK

Open issues

  • does the approach sound agreeable?
  • naming
  • where (in terms of packages) does the shared implementation go? In particular, it may need access to System.Text.Json, possibly an L1 implementation (which could be System.Runtime.Caching, Microsoft.Extensions.Caching.Memory, this new one, or something else), and possibly compression; maybe a new Microsoft.Extensions.Caching.Distributed package? But if so, should it be in-box with .NET, or just NuGet? Or somewhere else?
  • the exact choice of L1 cache (note: this should be an implementation detail; we don't need L1+L2 for MVP)
  • how exactly to configure the serializer
  • options for eager pre-fetch TTL and enable/disable L1+L2, via TypedDistributedCacheOptions
  • should we add tagging support at this juncture?

Metadata

Labels: Epic (groups multiple user stories under a theme), area-networking
