
Getting Started


V11.1.3

HaKafkaNet is the coming together of three things:

  • Home Assistant
  • Kafka
  • dotnet

The first two need to be in place before we can build out our automations. For each, a file has been provided in the infrastructure folder of this repository. After that, we must choose our development environment and deployment model. Docker is the recommended tool for managing both your Kafka and HaKafkaNet deployments.

Requirements

  • Kafka
  • An IDistributedCache implementation
  • A static IP or DNS name set up for both Home Assistant and your HaKafkaNet instance on your network.
  • Kafka integration configured in Home Assistant

A docker compose file is provided for easily spinning up Kafka and Redis.

Optional, but recommended

  • Portainer or some other mechanism for managing docker containers.
  • Redis or some other persistent storage for use with IDistributedCache. See DataPersistence for additional details, including how to use an in-memory cache.

1. Set up Kafka

If you already have a Kafka instance and, optionally, Redis, you can skip this step.

  • Copy the docker-compose.yml to your environment.
  • Modify the KAFKA_CFG_ADVERTISED_LISTENERS value to match where your Kafka will be running (see the sketch below).
  • docker compose up -d
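
For reference, the advertised listener setting in the compose file looks something like the following sketch; the service name, image, and IP address here are assumptions for illustration, so check them against the provided docker-compose.yml:

services:
  kafka:
    image: bitnami/kafka:latest   # assumed image
    environment:
      # Replace 192.168.1.10 with the host where clients will reach Kafka.
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,EXTERNAL://192.168.1.10:9094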

Four Docker containers will be created. However, only the first one is absolutely required.

  • Kafka - A single node Kafka instance
  • Init Kafka - A temporary container that creates the needed Kafka topic
  • Redis - For the recommended IDistributedCache implementation
  • Kafka UI - An open-source dashboard for inspecting/editing your Kafka instance

IMPORTANT: Some users have reported that the script that initializes the Kafka topic does not set the "Cleanup Policy" correctly. In your Kafka UI instance, inspect the topic settings and ensure that the Cleanup Policy is set to "Compact,Delete". Enabling "Compact" tells Kafka to keep only the most recent message per entity. The "Delete" policy removes messages older than the configured retention period; 7 days is a reasonable default, but use whatever timeframe you're comfortable with.
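
If you would rather fix this from the command line than through Kafka UI, Kafka's stock kafka-configs.sh tool can alter the topic; the broker address and topic name below are assumptions, so substitute your own:

# Assumes Kafka is reachable at localhost:9092 and the topic is named "home_assistant".
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name home_assistant \
  --add-config 'cleanup.policy=[compact,delete],retention.ms=604800000'
# retention.ms=604800000 is 7 days expressed in milliseconds.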

2. Set up Kafka integration in Home Assistant

As with step one, a file has been created for you. In addition to the Kafka integration configuration, it contains some other items to aid your integration; only the Kafka integration itself is required, and you could put it directly into your configuration.yaml file if you choose. To use the provided file, the Packages strategy should be used.

  • In your Home Assistant instance, open the file editor of your choice.
  • Edit your configuration.yaml file and add the following if you do not already have it:
homeassistant:
  packages: !include_dir_named packages
  • In the same directory as your configuration.yaml file, create a packages directory if you do not already have one.
  • Find the hakafkanet.yaml file in the infrastructure folder of this repository and copy it into your packages directory in Home Assistant.
  • Modify the IP address to match your Kafka and HaKafkaNet instances.
  • You may need to use port 9092 or 9094 depending on your setup.
  • Under the rest_command node, modify the url nodes to point to your HaKafkaNet instance.
  • Restart Home Assistant

At this point, if you used the docker-compose.yml file for setting up your Kafka, you should also have a Kafka UI instance. Navigating to it in your browser will allow you to see Kafka messages streaming into the topic you have configured. The default port is 8080.

The provided file includes several defaults for configuring your Kafka integration in Home Assistant.

  • Reference the Apache Kafka integration documentation.
  • It is recommended to set an include_domains filter; otherwise you could easily produce hundreds of thousands of events, or more, every week. Include all domains for all entities that you plan to respond to or inspect in your automations (a hedged example follows this list). The following are the minimum recommended domains to include:
    • light
    • switch
    • event - required for most scene controllers
    • sun - required for built-in sun-based automations
    • binary_sensor - most motion, presence, and contact sensors
    • sensor - other sensors like humidity and power meters
    • input_button - common helper entities
    • input_boolean - common helper entities
    • Other common domains include person, device_tracker, zone, schedule, calendar, and input_number. If your automation never triggers, it could be because you haven't added the right domain.
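
As a hedged illustration of such a filter, the Kafka integration section of the package file could look something like this; the IP address, port, and topic name are placeholder assumptions, so defer to the provided hakafkanet.yaml for the real values:

apache_kafka:
  ip_address: 192.168.1.10   # your Kafka host
  port: 9094                 # or 9092, depending on your setup
  topic: home_assistant      # placeholder topic name
  filter:
    include_domains:
      - light
      - switch
      - event
      - sun
      - binary_sensor
      - sensor
      - input_button
      - input_boolean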

Note: After getting your system set up, it is recommended to implement OpenTelemetry to inspect the volume of data sent to your system.

3. Choose a setup for your dev environment

You have three options:

  • Clone this repository and use the example app. This is not the recommended approach.
  • Create your own repository and add this repository as a submodule. This will allow you to get all the latest changes but could come with some instability. This is how the author has their environment set up while actively developing HaKafkaNet.
  • Create your own web app and use the nuget package. This will give you the most stable environment.
    • dotnet new web -o <your output directory>
    • cd <your output directory>
    • dotnet add package HaKafkaNet
    • dotnet add package Microsoft.Extensions.Caching.StackExchangeRedis (or a distributed cache of your choosing)

4. Configure your environment

Please see Configuration for options.

To complete configuration, you will need a long-lived access token for Home Assistant, which you can get from the Home Assistant UI or by following the directions here: Long Lived Access Tokens

You must provide an IDistributedCache implementation. The example code uses Redis.

Optional Log Tracing: To enable log tracing, NLog must also be configured. The example app uses the NLog.Extensions.Logging nuget package and configures NLog via the appsettings.json file. There are other ways to configure it. See NLog and its documentation for additional details.
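
As a minimal sketch of what that "NLog" section in appsettings.json can look like (the console target and layout here are illustrative assumptions, not copied from the example app):

"NLog": {
  "targets": {
    "console": {
      "type": "Console",
      "layout": "${longdate}|${level:uppercase=true}|${logger}|${message}"
    }
  },
  "rules": [
    {
      "logger": "*",
      "minLevel": "Info",
      "writeTo": "console"
    }
  ]
}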

Example Program.cs

Note: if this documentation is out of date, see Program.cs in the example app.

using HaKafkaNet;
using Microsoft.AspNetCore.DataProtection;
using NLog.Web;
using StackExchange.Redis;

var builder = WebApplication.CreateBuilder(args);

//builder.Host.UseNLog(); // enables log tracing

var services = builder.Services;

HaKafkaNetConfig config = new HaKafkaNetConfig();
builder.Configuration.GetSection("HaKafkaNet").Bind(config);

// provide an IDistributedCache implementation
var redisUri = builder.Configuration.GetConnectionString("RedisConStr");
services.AddStackExchangeRedisCache(options => 
{
    options.Configuration = redisUri;
    /* optionally prefix keys */
    options.InstanceName = "ExampleApp.";
});

services.AddHaKafkaNet(config);

// add your own services as needed

var app = builder.Build();

// if you want to use appsettings.json for your nlog configuration
// call this line AFTER you call builder.Build()
// call this line BEFORE calling app.StartHaKafkaNet()
//NLog.LogManager.Configuration = new NLogLoggingConfiguration(builder.Configuration.GetSection("NLog"));
// this ensures that LogManager.Configuration is not null when HaKafkaNet wires up log tracing.

await app.StartHaKafkaNet();

app.MapGet("/", () => Results.Redirect("hakafkanet"));

app.Run();

If you set up with the Program.cs above, you will be presented with the dashboard; you can also navigate directly to ~/hakafkanet.

Detailed information about the UI can be found here

5. Create your automations
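
Automation authoring is covered in depth elsewhere in this wiki. As a rough, hypothetical sketch of what one can look like (the IAutomation member names here are assumptions drawn from the example app; verify them against the automation documentation before relying on them):

using HaKafkaNet;

// Hypothetical sketch only; verify IAutomation's members against the wiki.
// Assumes ImplicitUsings (the default for `dotnet new web`) for
// IEnumerable, Task, and CancellationToken.
class HallwayMotionLight : IAutomation
{
    // The entity id is a made-up example.
    public IEnumerable<string> TriggerEntityIds() => ["binary_sensor.hallway_motion"];

    public Task Execute(HaEntityStateChange stateChange, CancellationToken ct)
    {
        // Inspect stateChange and call Home Assistant services here.
        return Task.CompletedTask;
    }
}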

6. Set up deployment

The example app provides a Dockerfile and docker-compose.yml. Copy them to your environment and modify them appropriately. Then, from the command line, in the same directory as the docker-compose.yml file, run the following commands:

  • docker compose build
  • docker compose up -d

7. Optional - Refine your setup
