Robotframework-RealtimeResults is a modular, extensible system for collecting, processing, and visualizing test results, application logs, and metrics in real time. It is designed for use with Robot Framework but also supports ingestion of application logs and custom metrics. The system is suitable for both local development and CI/CD pipelines.
- Realtime Dashboard: Live web dashboard for monitoring Robot Framework test runs, application logs, and metrics.
- Automatic Service Management: CLI automatically starts backend APIs and log tailers as needed.
- Multi-source Log Ingestion: Tail and ingest logs from multiple sources/files, each with its own label and timezone.
- Metric Tracking: Ingest and store custom metrics alongside logs and test events.
- Flexible Storage: Supports SQLite and PostgreSQL; Loki integration (planned).
- Pluggable Sinks: Easily extend with new sinks (e.g., HTTP, Loki, custom).
- Docker Support: Run all components using Docker (tested on macOS).
- Configurable via Wizard: Interactive setup wizard for easy configuration.
- REST API: FastAPI-based endpoints for event ingestion and dashboard queries.
- Extensible: Modular codebase for adding new readers, sinks, or event types.
```
[ Robot Framework Run ]
   │
   ├──► Listener writes to (SQLite / FastAPI Ingest) ──► Event Store
   │                                   ▲                      │
   │                                   │                      │
   └────► [ Log Tailer(s) ] ───────────┘                      │
                                                              │  Reads from (or writes to,
                                                              │  in case of in-memory mode)
                                                              ▼
                                                     [ FastAPI Viewer ]
                                                              │
                                                              │  Serves data to Dashboard
                                                              ▼
                                                      [ Dashboard UI ]
```
- Captures test events (suite/test start/end, log messages) in real time.
- Sends events to a configured sink (HTTP, SQLite, or Loki).
- See `producers/listener/listener.py`.
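The listener mechanism can be sketched roughly as follows. This is a minimal illustration, not the project's actual implementation: the endpoint path, payload fields, and class name are assumptions; see `producers/listener/listener.py` for the real code.

```python
# Minimal sketch of a Robot Framework listener (API v3) that forwards
# test events to an HTTP ingest endpoint. Endpoint path and payload
# fields are illustrative assumptions.
import json
import urllib.request


class RealtimeListener:
    ROBOT_LISTENER_API_VERSION = 3

    def __init__(self, ingest_url="http://127.0.0.1:8001/event"):
        self.ingest_url = ingest_url

    def _post(self, payload):
        req = urllib.request.Request(
            self.ingest_url,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        try:
            urllib.request.urlopen(req, timeout=2)
        except OSError:
            pass  # never fail the test run because the sink is down

    def start_test(self, data, result):
        self._post({"event_type": "start_test", "name": data.name})

    def end_test(self, data, result):
        self._post({"event_type": "end_test", "name": data.name,
                    "status": result.status})
```

The key design point a real listener shares with this sketch: posting must be non-blocking and failure-tolerant, so a dead backend can never break the test run itself.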
- Tails one or more application log files and sends parsed log lines to the ingest API.
- Supports per-source configuration (label, event type, timezone, poll interval).
- Groups multi-line entries and parses timestamps, log levels, and message content using regex patterns.
- See `producers/log_producer/log_tails.py`.
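To illustrate the regex parsing and multi-line grouping described above, here is a simplified sketch. The pattern and field names are assumptions; the project's actual patterns live in `log_line_parser.py`.

```python
# Illustrative log-line parsing: extract timestamp, level, and message;
# lines that do not match the pattern are treated as continuations of
# the previous entry (multi-line grouping).
import re

LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+"
    r"(?P<level>DEBUG|INFO|WARN|WARNING|ERROR)\s+"
    r"(?P<msg>.*)$"
)


def parse_lines(lines):
    entries = []
    for line in lines:
        match = LINE_RE.match(line)
        if match:
            entries.append(match.groupdict())
        elif entries:
            # No timestamp/level prefix: append to the previous entry,
            # e.g. a stack-trace line.
            entries[-1]["msg"] += "\n" + line
    return entries
```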
- Periodically scrapes CPU, memory, or other system metrics.
- Stores them as structured metric events in the database.
- See `producers/metrics/metric_scraper.py`.
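The scrape-and-store cycle can be sketched like this. The event fields are assumptions, not the project's actual schema; the metric reader and publisher are injected so the sketch stays backend-agnostic.

```python
# Illustrative scrape loop: read metrics, publish each one as a
# structured event, sleep for the poll interval, repeat.
import time


def scrape_loop(read_metrics, publish, interval=5.0, max_iterations=None):
    """read_metrics() returns {name: value}; publish(event) stores one event."""
    iterations = 0
    while max_iterations is None or iterations < max_iterations:
        timestamp = time.time()
        for name, value in read_metrics().items():
            publish({
                "event_type": "metric",   # assumed field names
                "name": name,
                "value": value,
                "timestamp": timestamp,
            })
        iterations += 1
        time.sleep(interval)
```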
- Viewer API: Serves the dashboard, test events, and application logs.
  - Endpoints: `/events`, `/logs`, `/events/clear`, `/dashboard`
  - See `api/viewer/main.py`.
- Ingest API: Accepts incoming logs, metrics, and test events.
  - Endpoints: `/logs`, `/metric`, `/event`, `/event/log_message`
- HTML+JS-based dashboard served at `/dashboard`.
- Displays live results, failures, logs, and metrics.
- Accessible at `/dashboard` on the viewer backend.
- See `dashboard/index.html`.
- SQLite Sink: Persistent storage for listener events and logs.
- Async SQLite Sink: Async variant for log/metric ingestion.
- Async Postgres Sink: Async variant for log/metric ingestion.
- HTTP Sink: For sending events to remote APIs.
- Loki Sink: (Planned) Integration with Grafana Loki for log aggregation.
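To make the sink idea concrete, here is a rough, hypothetical sketch of a synchronous SQLite sink. The table layout and method name are assumptions, not the project's actual schema.

```python
# Hypothetical SQLite sink: each event is serialized to JSON and
# appended to an events table.
import json
import sqlite3


class SqliteSink:
    def __init__(self, db_path="eventlog.db"):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS events "
            "(id INTEGER PRIMARY KEY, payload TEXT)"
        )

    def handle_event(self, event: dict) -> None:
        self.conn.execute(
            "INSERT INTO events (payload) VALUES (?)",
            (json.dumps(event),),
        )
        self.conn.commit()
```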
```
pip install robotframework-realtimeresults
```

Or install from source:

```
git clone https://github.com/alebr001/robotframework-realtimeresults
cd robotframework-realtimeresults
pip install poetry
poetry install
poetry run rt-robot tests/
```

Run your Robot Framework tests with real-time results:

```
rt-robot tests/
```

- The wrapper can auto-start services if they are not running (`enable_autoservices = true`).
- If no config file is found, an interactive setup wizard will guide you.
- Always start the ingest service first (it initializes the database).
When services are started via the CLI and `rt-robot` is used, backend PIDs are stored in `backend.pid`. Stop them with:

```
rt-robot --killbackend
```

To start the services manually:

```
# Terminal 1
rt-robot --runservice api.ingest.main:app --configfile config.json

# Terminal 2
rt-robot --runservice api.viewer.main:app --configfile config.json

# Terminal 3
rt-robot --runservice producers/log_producer/log_tails.py

# Terminal 4 (optional)
rt-robot --runservice producers/metrics/metric_scraper.py
```

- Docker:

```
docker compose up
```

When running with Docker:

- Use `0.0.0.0` as the backend hosts.
- Use `127.0.0.1` as the client host for Ingest (the Viewer can stay on `0.0.0.0`).
- Use `postgresql` as `database_url`.
- Set `enable_autoservices: false`.
```
rt-robot --config config.json tests/
```

Example configuration (`config.json`):

```json
{
    "listener_sink_type": "http",
    "database_url": "sqlite:///eventlog.db, postgresql://realtime:realtimepass@db:5432/realtime_db, etc",
    "viewer_backend_host": "127.0.0.1",
    "viewer_backend_port": 8002,
    "ingest_backend_host": "127.0.0.1",
    "ingest_backend_port": 8001,
    "enable_autoservices": true,
    "source_log_tails": [
        {
            "path": "../logs/app.log",
            "label": "app",
            "poll_interval": 1.0,
            "event_type": "app_log",
            "log_level": "INFO",
            "tz_info": "Europe/Amsterdam"
        }
    ],
    "log_level": "INFO",
    "log_level_listener": "",
    "log_level_backend": "",
    "log_level_cli": ""
}
```

Example configuration for a Docker setup:

```json
{
    "database_url": "postgresql://realtime:realtimepass@postgres:5432/realtime_db",
    "enable_auto_services": false,
    "_comment": "use this for binding the services",
    "ingest_backend_host": "0.0.0.0",
    "ingest_backend_port": 8001,
    "viewer_backend_host": "0.0.0.0",
    "viewer_backend_port": 8002,
    "_comment2": "use this to connect to the services",
    "ingest_client_host": "127.0.0.1",
    "ingest_client_port": 8001,
    "viewer_client_host": "0.0.0.0",
    "viewer_client_port": 8002,
    "source_log_tails": [
        {
            "path": "results/debug.log",
            "label": "rf-debug",
            "poll_interval": 1.0,
            "event_type": "rf-debug",
            "log_level": "INFO",
            "tz_info": "Europe/Amsterdam"
        }
    ],
    "listener_sink_type": "http",
    "log_level": "DEBUG",
    "log_level_listener": "",
    "log_level_backend": "",
    "log_level_cli": ""
}
```

Viewer API endpoints:

- `GET /events`
- `GET /logs`
- `GET /events/clear`
- `GET /dashboard`

Ingest API endpoints:

- `POST /log`
- `POST /metric`
- `POST /event`
- `POST /event/log_message`
Go to: http://localhost:8002/dashboard
Displays:
- Real-time test status (PASS/FAIL/SKIP)
- Log messages
- Metrics (in future release)
- Failure stack traces
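Outside the dashboard, the same data can also be pulled directly from the viewer API. A small stdlib helper, assuming the default viewer host and port (`127.0.0.1:8002`) from the example configuration:

```python
# Fetch the current event list from the viewer API (GET /events) and
# decode the JSON response. Base URL is the example config's default.
import json
import urllib.request


def fetch_events(base_url="http://127.0.0.1:8002"):
    with urllib.request.urlopen(f"{base_url}/events", timeout=5) as resp:
        return json.loads(resp.read())
```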
- Add new sinks by subclassing `EventSink` (in `shared`) or `AsyncEventSink` (in `api/ingest`).
- Extend log parsing in `log_line_parser.py`.
- Add new event types by updating `sql_definitions.py`.
```
.
├── api/
│   ├── ingest/
│   │   └── sinks/
│   └── viewer/
│       └── readers/
├── dashboard/
├── producers/
│   ├── listener/
│   ├── log_producer/
│   └── metrics/
├── shared/
│   ├── helpers/
│   │   └── cli.py  (entrypoint)
│   └── sinks/
└── pyproject.toml
```
- Python 3.9+
- Windows, Linux, macOS
- Grafana Loki integration for log aggregation.
- TestID support
- Advanced dashboard filtering and tag support.
- Metric visualization.
- Optional authentication for APIs.
MIT
Contributions and feedback are welcome! Please open issues or pull requests on GitHub.