
Data Persistence

Leonard Sperry edited this page Dec 22, 2024 · 5 revisions

V10.2

HaKafkaNet is designed to be ephemeral and does not require persistent storage. You can tear down your entire application stack and it will still function. That said, there are benefits to holding onto some data. This page describes how data from Kafka and the cache is used, and the behavior to expect if you do tear down your entire application stack.

Kafka

HaKafkaNet uses Kafka as described in Getting Started.

At startup, the framework resumes where it left off, based on the GroupID set in configuration: Kafka tracks committed offsets per consumer group, so a restart with the same GroupID picks up from the last committed position.

This means that if you rebuild your Kafka instance, those offsets are lost, and the framework will only be able to respond to events that occur after the rebuild.
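The resume behavior follows from how Kafka consumer groups work: offsets are committed per GroupID, and a consumer that reconnects with the same GroupID continues from the last committed offset. A minimal sketch using the Confluent.Kafka client illustrates this; the group name, topic name, and broker address here are hypothetical stand-ins, not HaKafkaNet's actual configuration:

```csharp
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092", // assumed broker address
    // The GroupId ties this consumer to its committed offsets.
    // Reusing the same GroupId after a restart resumes from where it left off.
    GroupId = "my-hakafkanet-group", // hypothetical value
    // Only applies when no committed offset exists for this group,
    // e.g. after the Kafka instance (and its stored offsets) was rebuilt.
    AutoOffsetReset = AutoOffsetReset.Latest
};

using var consumer = new ConsumerBuilder<string, string>(config).Build();
consumer.Subscribe("home_assistant_states"); // hypothetical topic name

// Blocks up to 5 seconds waiting for the next message after the committed offset.
var result = consumer.Consume(TimeSpan.FromSeconds(5));
```

Because the offsets live in Kafka itself, rebuilding the Kafka instance discards them, which is why the framework can then only see new events.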

Distributed Cache

You can use any IDistributedCache implementation of your choosing. Redis is recommended, and the provided Docker files include it.

When the framework receives a new message, it looks at the associated entity and tries to retrieve that entity from the cache. It uses the cached state to determine the timing of the event, as described in Event Timings, and to set the Old property of the state change.
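As a concrete example, wiring up Redis as the IDistributedCache implementation in a typical ASP.NET Core host can be sketched with the standard Microsoft.Extensions.Caching.StackExchangeRedis package; the connection string and instance name below are assumptions, so adjust them to match your own Redis container:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Registers a Redis-backed IDistributedCache implementation.
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "localhost:6379"; // assumed Redis address
    options.InstanceName = "hakafkanet";      // hypothetical key prefix
});

var app = builder.Build();
```

Any other IDistributedCache registration (in-memory, SQL Server, etc.) would work the same way from the framework's perspective, with the durability trade-offs described below.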

If you use an in-memory cache, or if you rebuild your Redis instance, then at startup every event will have a timing of either PreStartupNotCached (the first time that entity is seen) or PostStartup. Additionally, traces and captured logs from previous runs will be lost.

By default, all cache entries have a sliding expiration of 30 days.
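Sliding expiration means the 30-day clock resets each time an entry is read or written, so actively updated entities stay cached indefinitely while stale ones eventually expire. A sketch of what such a cache write looks like with the standard IDistributedCache API (an illustration of the expiration semantics, not HaKafkaNet's internal code; the method and parameter names are hypothetical):

```csharp
using Microsoft.Extensions.Caching.Distributed;

static async Task CacheEntityAsync(IDistributedCache cache, string entityId, byte[] state)
{
    var options = new DistributedCacheEntryOptions
    {
        // The entry expires only after 30 days with no reads or writes.
        SlidingExpiration = TimeSpan.FromDays(30)
    };
    await cache.SetAsync(entityId, state, options);
}
```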
