A fast Redis library for Python, written in Rust using PyO3.
```
pip install --user zangy
```
Building from source requires nightly Rust.
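A source build might look roughly like the following, assuming the project's Python build backend drives the Rust compilation (as PyO3 projects commonly do); the exact commands are an assumption, so check the repository's build instructions:

```
# Assumed commands for a from-source install; adjust to the repo's actual tooling.
rustup toolchain install nightly
rustup override set nightly   # use nightly Rust for this checkout
pip install .                 # run from a clone of the zangy repository
```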
zangy aims to be the fastest Python Redis library. It achieves this by using PyO3 to build the library as a native shared object, which is pretty much equivalent to writing it in C, but less of a pain to compile and identical in speed.
Because it is written entirely in Rust, zangy can't do lifetime-based connection pooling; instead, it creates all connections eagerly at startup rather than lazily. Commands are distributed over the pool round robin. Internally, redis-rs performs the Redis operations and tokio spawns tasks outside the GIL.
Because it builds on tokio and Rust-level tasks, zangy delivers the most when there is a lot of concurrent work to do.
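For example, a batch of commands can be dispatched concurrently and the pool will spread them over its connections. Here is a minimal sketch, assuming the `create_pool` signature from the usage example below and the `.set` alias mentioned at the end of this README:

```python
import asyncio

import zangy

async def main():
    # 10 connections and 2 pubsub connections, mirroring the benchmark setup
    # and the argument order of the usage example below.
    pool = await zangy.create_pool("redis://localhost:6379", 10, 2)

    # Fire off many SETs at once; zangy spreads them round robin over the
    # pool and runs them on tokio tasks outside the GIL.
    await asyncio.gather(*(pool.set(f"key:{i}", str(i)) for i in range(10_000)))

asyncio.run(main())
```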
Yes! It beats similar Python libraries by a fair margin. Tokio, freedom from the GIL, and the speed of Rust show especially when setting 1 million keys in parallel.
Benchmark sources can be found in the `bench` directory.
The benchmarks below were done with Redis 7.2.5, Python 3.12.4, redis-py 5.0.7, and the latest zangy master, using a pool with 10 connections:
| Task | redis-py | zangy |
|---|---|---|
| 1,000,000 sequential GET | 1min 27s | 54s |
| 1,000,000 sequential SET | 1min 25s | 58s |
| 1,000,000 parallel SET | did not terminate within 45 minutes | 9s |
TL;DR: zangy is faster in every regard, but it really crushes in genuinely concurrent scenarios.
The API is subject to change.
```python
import zangy

# Create a pool with 2 connections and 2 pubsub connections
pool = await zangy.create_pool("redis://localhost:6379", 2, 2)

# Generic redis commands (disadvised)
await pool.execute("SET", "a", "b")

# Individual commands
value = await pool.get("a")

# Wait for pubsub messages and echo them back
with pool.pubsub() as pubsub:
    await pubsub.subscribe("test1")

    async for (channel, payload) in pubsub:
        print(channel, payload)
        await pool.publish("test2", payload)
```
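The snippet above uses top-level `await`, which works in an async REPL; in a regular script the same calls need to run inside an event loop. A minimal sketch using `asyncio.run`, assuming nothing beyond the calls shown above (the `main` function name is just illustrative):

```python
import asyncio

import zangy

async def main():
    pool = await zangy.create_pool("redis://localhost:6379", 2, 2)
    await pool.execute("SET", "a", "b")
    print(await pool.get("a"))

asyncio.run(main())
```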
Aliases for almost all operations exist on the pool (`.set`, `.set_ex`, `.zrange`, etc.).
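For illustration, a couple of alias calls next to the generic `execute` form; the exact argument order of these aliases is an assumption here, so check the method signatures:

```python
# Alias forms (exact signatures are assumptions)
await pool.set("a", "b")                      # SET a b
values = await pool.zrange("scores", 0, -1)   # ZRANGE scores 0 -1

# Equivalent generic form (disadvised)
await pool.execute("SET", "a", "b")
```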
- Single connections. Just use a pool with 1 member.