LogStream AI is an enterprise-grade log ingestion platform designed to handle 10,000+ logs per second. It utilizes a "Shock Absorber" architecture with Redis (BullMQ) to decouple high-concurrency ingestion from slow database writes, guaranteeing <10ms latency for applications while ensuring zero data loss during traffic spikes.
Launch the observability stack (Redis + MongoDB + Dashboard) in one command:
# 1. Start Infrastructure
docker-compose up -d
# 2. Start Services (Ingestion + Worker + UI)
npm install && npm run start:allDetailed Setup: See GETTING_STARTED.md.
Live log analytics with severity-based visualization and instant search.
Event-Driven Pipeline: API -> Redis (Buffer) -> Worker (Batch) -> MongoDB.
Scaling to 10k RPS: How BullMQ manages the ingestion spike.
Deep Dive: See ARCHITECTURE.md for the BullMQ and Batching logic.
- ⚡ Sub-10ms Ingestion: Redis-backed write paths ensure the client never waits for DB operations.
- 🛡️ Shock Absorber Pattern: BullMQ manages high-volume bursts, preventing MongoDB saturation.
- 📉 98% IOPS Reduction: Intelligent batching logic writes thousands of logs in single bulk operations.
- 📊 Live Search: Instant Next.js log viewer with severity filtering and timestamp sorting.
- 🐳 Fully Containerized: One-click deployment for local and cloud infrastructure.
How a log entry is handled under extreme load:
- Emit: App sends a log via POST to the LogStream API.
- Queue: API instantly pushes the log to Redis (BullMQ) and returns HTTP 202.
- Buffer: Logs accumulate in the high-speed Redis memory buffer.
- Batch: The background worker wakes up after 500ms or 1000 logs.
- Persist: A single bulk write operation commits the batch to MongoDB.
- Broadcast: Real-time updates are pushed to the dashboard via WebSockets/Actions.
| Document | Description |
|---|---|
| System Architecture | Redis patterns, BullMQ config, and Batching math. |
| Getting Started | Local installation, Environment, and Benchmark scripts. |
| Failure Scenarios | Handling Redis OOM, Worker crash, and DB recovery. |
| Interview Q&A | "Why Redis over direct DB?", "How to scale BullMQ?". |
| Component | Technology | Role |
|---|---|---|
| Ingestion API | Node.js (Express) | Fast, non-blocking log intake. |
| Worker Engine | TypeScript | BullMQ Processor & Batch logic. |
| Message Bus | Redis | The "Shock Absorber" buffer. |
| Storage | MongoDB | Schema-less log repository. |
| Dashboard | Next.js 14 | Real-time observability UI. |
Harshan Aiyappa
Senior Full-Stack Hybrid AI Engineer
Voice AI • Distributed Systems • Infrastructure
This project is licensed under the MIT License - see the LICENSE file for details.