Data Nadhi is an open-source platform that helps you manage the flow of data starting from your application logs all the way to your desired destinations — databases, APIs, or alerting systems.
Direct. Transform. Deliver.
Flow your logs, trigger your pipelines.
Data Nadhi provides a unified platform to ingest, transform, and deliver data — powered by Temporal, MongoDB, Redis, and MinIO.
It connects easily with your applications using the Data Nadhi SDK, and gives you full control over how data moves across your system (a rough SDK usage sketch follows the list below).
- Direct – Collect logs and data from your applications or external sources.
- Transform – Use Temporal workflows to apply filters, enrichments, or custom transformations.
- Deliver – Send the final processed data to any configured destination — all handled reliably and asynchronously.
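The SDK's API isn't spelled out in this overview, so the snippet below is only a rough sketch of how an application might send data through data-nadhi-sdk. The import path, class name, and constructor parameters are assumptions for illustration, not the published interface; check the SDK repository for the real API.

```python
# Hypothetical sketch only - the real data-nadhi-sdk class and method
# names may differ; see the SDK repository for the actual API.
from data_nadhi_sdk import DataNadhiLogger  # assumed import path

# Point the logger at the data-nadhi-server endpoint (URL and key are placeholders).
logger = DataNadhiLogger(
    server_url="http://localhost:8080",
    api_key="replace-with-your-key",
)

# Each call forwards the event to data-nadhi-server, which hands it to a
# Temporal workflow for transformation and delivery.
logger.info("order_created", extra={"order_id": 123, "amount": 49.90})
```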
Data Nadhi is designed to be modular, developer-friendly, and ready for production.
The platform is built from multiple services and tools working together:
| Component | Description |
|---|---|
| data-nadhi-server | Handles incoming requests from the SDK and passes them to Temporal. |
| data-nadhi-internal-server | Internal service for managing entities, pipelines, and configurations. |
| data-nadhi-temporal-worker | Executes workflow logic and handles transformations and delivery. |
| data-nadhi-sdk | Python SDK for logging and sending data from applications. |
| data-nadhi-dev | Local environment setup using Docker Compose for databases and Temporal. |
| data-nadhi-documentation | Documentation site built with Docusaurus (you’re here now). |
All components are connected through a shared Docker network, making local setup and development simple.
- 🧩 Unified Pipeline – Move data seamlessly from logs to destinations
- ⚙️ Custom Transformations – Define your own transformations using Temporal
- 🔄 Reliable Delivery – Retries, fault tolerance, and monitoring built in
- 🧠 Easy Integration – Simple SDK-based setup for applications
- 💡 Developer Focused – Dev containers and Docker-first setup for consistency
This repository contains the three Temporal workers, each listening to a different task queue; a minimal worker sketch follows the tech stack list below.
- MongoDB – Primary datastore for pipeline and entity configurations
- Redis – Used for caching and quick lookups
- MinIO – S3-compatible object storage for temporarily holding failure logs
- Temporal – Workflow orchestration engine to run the data pipelines
- Python (temporalio) – SDK used to build the Temporal workers
- Docker – For consistent local and production deployment
- Docker Network (datanadhi-net) – Shared network for connecting all services locally
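For readers unfamiliar with temporalio, here is a minimal sketch of what one of these workers conceptually looks like: a process that connects to Temporal, listens on a task queue, and runs workflow and activity code. The queue name, workflow, and activity below are illustrative assumptions, not the repository's actual definitions; use scripts/run-worker.sh to start the real workers.

```python
# Minimal temporalio worker sketch; workflow, activity, and queue names
# here are illustrative, not the ones defined in this repository.
import asyncio
from datetime import timedelta

from temporalio import activity, workflow
from temporalio.client import Client
from temporalio.common import RetryPolicy
from temporalio.worker import Worker


@activity.defn
async def transform_record(record: dict) -> dict:
    # Example enrichment step; real pipelines apply configured transformations.
    record["processed"] = True
    return record


@workflow.defn
class TransformWorkflow:
    @workflow.run
    async def run(self, record: dict) -> dict:
        # Timeouts and retries are what give the pipeline its reliable delivery.
        return await workflow.execute_activity(
            transform_record,
            record,
            start_to_close_timeout=timedelta(seconds=30),
            retry_policy=RetryPolicy(maximum_attempts=3),
        )


async def main():
    client = await Client.connect("temporal:7233")  # assumed Temporal address in the dev setup
    worker = Worker(
        client,
        task_queue="transformation",  # each of the three workers targets its own queue
        workflows=[TransformWorkflow],
        activities=[transform_record],
    )
    await worker.run()


if __name__ == "__main__":
    asyncio.run(main())
```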
- Docker & Docker Compose
- VS Code (with Dev Containers extension)
- Open data-nadhi-temporal-worker in Dev Container
- Set this in the .env file:

```env
# Secret Key for decrypting encrypted Creds
SEC_DB=my-secret-db-key

# MongoDB Configuration
MONGO_URL=mongodb://mongo:27017/datanadhi_dev
MONGO_DATABASE=datanadhi_dev

# Redis Configuration (for caching)
REDIS_URL=redis://redis:6379

# MinIO
MINIO_ENDPOINT=datanadhi-minio:9000
MINIO_ACCESS_KEY=minio
MINIO_SECRET_KEY=minio123
MINIO_BUCKET=failure-logs
```
- Give the worker script execute permission:

```bash
chmod +x scripts/run-worker.sh
```
- Run these in separate terminals:
```bash
./scripts/run-worker.sh default main
./scripts/run-worker.sh default-transform transformation
./scripts/run-worker.sh default-destination destination
```
- Main Website: https://datanadhi.com
- Documentation: https://docs.datanadhi.com
- GitHub Organization: Data-ARENA-Space
This project is open source and available under the GNU Affero General Public License v3.0.
- GitHub Discussions: [Coming soon]
- Discord: Data Nadhi Community
- Issues: GitHub Issues