
High-throughput log ingestion service handling 50k+ EPS.

Kimosabey/logstream-ai

LogStream AI

High-Throughput Log Ingestion & Observability with Redis Shock-Absorber

LogStream AI is an enterprise-grade log ingestion platform designed to handle 10,000+ logs per second. It uses a "Shock Absorber" architecture built on Redis (BullMQ) to decouple high-concurrency ingestion from slow database writes, keeping application-facing latency under 10 ms while ensuring zero data loss during traffic spikes.
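
The decoupling described above can be sketched as a fast path that only validates and enqueues, then acknowledges immediately. This is a minimal illustration, not the project's actual handler: `handleIngest` and its `enqueue` callback are hypothetical names, with the Redis push (e.g. a BullMQ `queue.add`) abstracted behind the callback.

```typescript
// Hypothetical fast path: validate, enqueue to Redis, return 202 Accepted.
// The client never waits on MongoDB; persistence happens in the worker.
type IngestResult = { status: number; queued: boolean };

async function handleIngest(
  body: unknown,
  enqueue: (log: Record<string, unknown>) => Promise<void>, // e.g. BullMQ queue.add("log", log)
): Promise<IngestResult> {
  // Reject malformed payloads up front, before touching Redis.
  if (typeof body !== "object" || body === null || !("message" in body)) {
    return { status: 400, queued: false };
  }
  await enqueue(body as Record<string, unknown>); // O(1) Redis push: the sub-10ms part
  return { status: 202, queued: true }; // 202: accepted for later processing, not yet persisted
}
```

Returning 202 rather than 200 signals to callers that the log is buffered, not yet durable in MongoDB.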


🚀 Quick Start

Launch the observability stack (Redis + MongoDB + Dashboard) in one command:

```sh
# 1. Start infrastructure (Redis + MongoDB)
docker-compose up -d

# 2. Start services (ingestion API + worker + dashboard)
npm install && npm run start:all
```

Detailed Setup: See GETTING_STARTED.md.


📸 Demo & Architecture

Real-Time Log Dashboard

Dashboard: Live log analytics with severity-based visualization and instant search.

System Architecture

Architecture: Event-driven pipeline: API -> Redis (Buffer) -> Worker (Batch) -> MongoDB.

The Ingestion Journey

Workflow: Scaling to 10k RPS: how BullMQ absorbs the ingestion spike.

Deep Dive: See ARCHITECTURE.md for the BullMQ and Batching logic.


✨ Key Features

  • ⚡ Sub-10ms Ingestion: Redis-backed write paths ensure the client never waits for DB operations.
  • 🛡️ Shock Absorber Pattern: BullMQ manages high-volume bursts, preventing MongoDB saturation.
  • 📉 98% IOPS Reduction: Intelligent batching logic writes thousands of logs in single bulk operations.
  • 📊 Live Search: Instant Next.js log viewer with severity filtering and timestamp sorting.
  • 🐳 Fully Containerized: One-click deployment for local and cloud infrastructure.
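
The IOPS-reduction claim follows from the batching parameters quoted later (flush at 1,000 logs or 500 ms). A back-of-envelope check, using illustrative rates that are assumptions rather than measured numbers:

```typescript
// Back-of-envelope for the IOPS claim. Rates are illustrative assumptions:
// at a sustained 10,000 logs/sec with a 1,000-log / 500 ms flush policy,
// the size trigger fires first, so bulk writes dominate.
const logsPerSecond = 10_000;
const batchSize = 1_000;
const flushIntervalMs = 500;

const writesFromSize = logsPerSecond / batchSize; // 10 bulk writes/s if size-triggered
const writesFromTimer = 1000 / flushIntervalMs;   // 2 bulk writes/s if timer-triggered
const bulkWritesPerSecond = Math.max(writesFromSize, writesFromTimer);

// Fraction of write operations saved vs. one insert per log.
const reduction = 1 - bulkWritesPerSecond / logsPerSecond;
console.log(`${bulkWritesPerSecond} bulk writes/s, ${(reduction * 100).toFixed(1)}% fewer IOPS`);
```

At these rates the reduction is ~99.9%, comfortably above the 98% quoted; lower traffic makes the timer the binding trigger and shrinks batch sizes, which is where a more conservative 98% figure plausibly comes from.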

🏗️ The Protective Journey

How a log entry is handled under extreme load:

  1. Emit: App sends a log via POST to the LogStream API.
  2. Queue: API instantly pushes the log to Redis (BullMQ) and returns HTTP 202.
  3. Buffer: Logs accumulate in the high-speed Redis memory buffer.
  4. Batch: The background worker wakes up after 500ms or 1000 logs.
  5. Persist: A single bulk write operation commits the batch to MongoDB.
  6. Broadcast: Real-time updates are pushed to the dashboard via WebSockets/Actions.

📚 Documentation

| Document | Description |
| --- | --- |
| System Architecture | Redis patterns, BullMQ config, and batching math. |
| Getting Started | Local installation, environment, and benchmark scripts. |
| Failure Scenarios | Handling Redis OOM, worker crashes, and DB recovery. |
| Interview Q&A | "Why Redis over direct DB?", "How to scale BullMQ?" |

🔧 Tech Stack

| Component | Technology | Role |
| --- | --- | --- |
| Ingestion API | Node.js (Express) | Fast, non-blocking log intake. |
| Worker Engine | TypeScript | BullMQ processor and batch logic. |
| Message Bus | Redis | The "Shock Absorber" buffer. |
| Storage | MongoDB | Schema-less log repository. |
| Dashboard | Next.js 14 | Real-time observability UI. |

👤 Author

Harshan Aiyappa
Senior Full-Stack Hybrid AI Engineer
Voice AI • Distributed Systems • Infrastructure

Portfolio GitHub LinkedIn X


📝 License

This project is licensed under the MIT License - see the LICENSE file for details.