Tansu

Tansu is a drop-in replacement for Apache Kafka with PostgreSQL, S3 or memory storage engines, without the cost of broker-replicated storage for durability. Licensed under the GNU AGPL. Written in 100% safe 🦺 async 🚀 Rust 🦀

Features:

  • Kafka API compatible
  • Elastic stateless brokers: no more planning and reassigning partitions to a broker
  • Embedded JSON schema registration and validation of messages
  • Consensus-free, without the overhead of Raft or ZooKeeper
  • All brokers are the leader and ISR of any topic partition
  • All brokers are the transaction and group coordinator
  • No network replication or duplicate data storage charges
  • Spin up a broker for the duration of a Kafka API request: no more idle brokers
  • Available with PostgreSQL, S3 or memory storage engines

For data durability, Tansu relies on the storage engine itself (S3 or PostgreSQL) rather than on broker replication.

Configuration

The storage-engine parameter is an S3 URL that specifies the bucket to be used. The following configures an S3 storage engine using the "tansu" bucket (full context is in compose.yaml and .env):

Edit .env so that STORAGE_ENGINE is defined as:

STORAGE_ENGINE="s3://tansu/"
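The same variable selects the other storage engines. A sketch of all three URL forms: the s3:// and postgres:// values appear in this guide, while the memory:// form is an assumption based on the engine list above, so check the project documentation before relying on it:

# S3 (minio) storage engine
STORAGE_ENGINE="s3://tansu/"

# PostgreSQL storage engine
STORAGE_ENGINE="postgres://postgres:postgres@db"

# In-memory storage engine (assumed URL form; data is lost on restart)
STORAGE_ENGINE="memory://tansu/"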

On first startup, you'll need to create a bucket, an access key and a secret in minio.

Bring up minio on its own, without tansu:

docker compose up -d minio
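The console steps are described next; if you prefer the command line, the same setup can be sketched with MinIO's mc client. The alias and bucket commands are standard; the service-account command and the shape of its output are assumptions to verify against mc's own help:

# Point mc at the local minio (the S3 API is on port 9000, not the console port)
mc alias set local http://localhost:9000 minioadmin minioadmin

# Create the "tansu" bucket
mc mb local/tansu

# Create an access key and secret (a service account) for the minioadmin user
mc admin user svcacct add local minioadmin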

The MinIO console should now be running on http://localhost:9001; log in with the default credentials (user "minioadmin", password "minioadmin"). Follow the bucket creation instructions to create a bucket called "tansu", then create an access key and secret. Whichever route you took, use the newly created access key and secret to update the following environment variables in .env:

# Your AWS access key:
AWS_ACCESS_KEY_ID="access key"

# Your AWS secret:
AWS_SECRET_ACCESS_KEY="secret"

# The endpoint URL of the S3 service:
AWS_ENDPOINT="http://minio:9000"

# Allow HTTP requests to the S3 service:
AWS_ALLOW_HTTP="true"
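Before starting the broker, you can sanity-check the credentials and bucket with the AWS CLI, if you have it installed. A sketch, run from the host (hence localhost:9000 rather than the in-compose minio:9000); an empty listing with a zero exit status means the key, secret and bucket all line up:

# List the (initially empty) tansu bucket through the minio S3 API
AWS_ACCESS_KEY_ID="access key" \
AWS_SECRET_ACCESS_KEY="secret" \
aws --endpoint-url http://localhost:9000 s3 ls s3://tansu/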

Once this is done, you can start tansu with:

docker compose up -d tansu

Using the regular Apache Kafka CLI, you can create topics, produce and consume messages with Tansu:

kafka-topics \
  --bootstrap-server localhost:9092 \
  --partitions=3 \
  --replication-factor=1 \
  --create --topic test

Describe the test topic:

kafka-topics \
  --bootstrap-server localhost:9092 \
  --describe \
  --topic test

Note that node 111 is the leader and ISR for every topic partition: this node represents whichever broker happens to handle your request, because all brokers are node 111.

Producer:

echo "hello world" | kafka-console-producer \
    --bootstrap-server localhost:9092 \
    --topic test

Group consumer using test-consumer-group:

kafka-console-consumer \
  --bootstrap-server localhost:9092 \
  --group test-consumer-group \
  --topic test \
  --from-beginning \
  --property print.timestamp=true \
  --property print.key=true \
  --property print.offset=true \
  --property print.partition=true \
  --property print.headers=true \
  --property print.value=true

Describe the test-consumer-group consumer group:

kafka-consumer-groups \
  --bootstrap-server localhost:9092 \
  --group test-consumer-group \
  --describe

PostgreSQL

To switch between the minio and PostgreSQL examples, first shut down Tansu:

docker compose down tansu

Switch to the PostgreSQL storage engine by updating .env:

# minio storage engine
# STORAGE_ENGINE="s3://tansu/"

# PostgreSQL storage engine -- NB: @db and NOT @localhost :)
STORAGE_ENGINE="postgres://postgres:postgres@db"

Bring Tansu back up:

docker compose up -d tansu
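To confirm that the broker is writing to PostgreSQL, you can peek inside the database once some topics exist. A sketch assuming the compose service is named db with the postgres user and password from the URL above; the table names are Tansu internals and will vary:

# List the tables Tansu has created in the default database
docker compose exec db psql -U postgres -c '\dt'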

Using the regular Apache Kafka CLI, you can again create topics, produce and consume messages with Tansu:

kafka-topics \
  --bootstrap-server localhost:9092 \
  --partitions=3 \
  --replication-factor=1 \
  --create --topic test

Producer:

echo "hello world" | kafka-console-producer \
    --bootstrap-server localhost:9092 \
    --topic test

Consumer:

kafka-console-consumer \
  --bootstrap-server localhost:9092 \
  --group test-consumer-group \
  --topic test \
  --from-beginning \
  --property print.timestamp=true \
  --property print.key=true \
  --property print.offset=true \
  --property print.partition=true \
  --property print.headers=true \
  --property print.value=true

Or using librdkafka to produce:

echo "Lorem ipsum dolor..." | \
  ./examples/rdkafka_example -P \
  -t test -p 1 \
  -b localhost:9092 \
  -z gzip

Consumer:

./examples/rdkafka_example \
  -C \
  -t test -p 1 \
  -b localhost:9092
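kcat (formerly kafkacat), which is built on librdkafka, also works. A quick sketch, assuming kcat is installed:

# Produce a message
echo "hello again" | kcat -b localhost:9092 -t test -P

# Consume from the beginning of the topic
kcat -b localhost:9092 -t test -C -o beginning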

Feedback

Please raise an issue if you encounter a problem.

License

Tansu is licensed under the GNU AGPL.
