UI Coverage Scenario Tool is an innovative, no-overhead solution for tracking and visualizing UI test coverage — directly on your actual application, not static snapshots. The tool collects coverage during UI test execution and generates an interactive HTML report. This report embeds a live iframe of your application and overlays coverage data on top, letting you see exactly what was tested and how.
- Live application preview: The report displays a real iframe of your app, not static screenshots. You can explore any page and see which elements were interacted with, what actions were performed, and how often.
- Flexible frame filters: Focus only on what matters — filter elements by specific actions (`CLICK`, `FILL`, `VISIBLE`, etc.) or action groups. Ideal for analyzing specific scenarios or regression areas.
- Custom highlight & badge colors: Easily change the highlight and badge colors used in the iframe for different action types or UI states. Great for tailoring the report to your team's visual style or accessibility needs.
- No framework lock-in: Works with any UI testing framework (Playwright, Selenium, etc.) by simply logging actions via the `track_element()` method.
- Element-level statistics: View detailed statistics per selector: action type, action count, and a timeline graph of coverage.
- Global history overview: Track historical trends of total coverage and action types across time.
- Per-element timeline: Dive deep into the history of interactions for each element — when and how it was used.
- Full element index: Searchable table of all elements interacted with during tests, even if you're not sure where they are in the UI.
- Page graph visualization: The tool can build a graph of the pages involved in your tests, along with the transitions between them.
- Multi-app support: Testing multiple domains? No problem. Just list your apps in the config — the report will let you switch between them.
- Features
- Links
- Preview
- About the Tools
- Why Two Tools?
- Installation
- Embedding the Agent Script
- Usage
- Configuration
- Command-Line Interface (CLI)
You can view an example of a coverage report generated by the tool here.
If you have any questions or need assistance, feel free to ask @Nikita Filonov.
There are two separate tools, each with its own purpose, strengths, and philosophy:
🟢 ui-coverage-tool — Simple & Instant Coverage

This is the original tool. It’s designed to be:
- Extremely simple and fast to integrate
- Ideal for quick visibility into which elements your UI tests are interacting with
- Perfect for prototyping or smoke-checks, where deep scenario structure isn’t needed
Think of ui-coverage-tool as the lightweight, no-frills solution for getting instant test coverage insights with minimal setup.
🔵 ui-coverage-scenario-tool — Scenario-Based & Insightful

This is the advanced version of the original tool, built on top of all its features — and more:
- Includes everything from `ui-coverage-tool`
- Adds scenario-level structure, so your coverage report shows:
  - Which scenarios were executed
  - Which elements were used in each scenario
  - Which scenarios interacted with a given element
- Lets you link scenarios to TMS test cases or documentation (e.g. via URLs)
- Offers additional options like:
  - Iframe zoom settings
  - Scenario metadata
  - Advanced filtering and analysis
If your team needs deeper visibility into business processes and scenario coverage, ui-coverage-scenario-tool is the way to go.
While `ui-coverage-scenario-tool` is more powerful, the original `ui-coverage-tool` still has a place. They serve different purposes:
| Tool | Best For | Strengths |
|---|---|---|
| `ui-coverage-tool` | Quick setup, lightweight testing environments | Easy to integrate, minimal overhead |
| `ui-coverage-scenario-tool` | Structured E2E scenarios, business test cases | Rich detail, scenario linkage, deeper insight |
Keeping them separate allows users to choose based on project needs, team maturity, and desired complexity.
Requires Python 3.11+
```bash
pip install ui-coverage-scenario-tool
```
To enable live interaction and visual highlighting in the report, you must embed the coverage agent into your application.
Add this to your HTML:
```html
<script src="https://nikita-filonov.github.io/ui-coverage-scenario-report/agent.global.js"></script>
```
That’s it. No other setup required. Without this script, the coverage report will not be able to highlight elements.
Below are examples of how to use the tool with two popular UI automation frameworks: Playwright and Selenium. In both cases, coverage data is automatically saved to the `./coverage-results` folder after each call to `track_element`.
```python
from playwright.sync_api import sync_playwright

# Import the main components of the tool:
# - UICoverageTracker — the main class for tracking coverage
# - SelectorType — type of selector (CSS, XPATH)
# - ActionType — type of action (CLICK, FILL, CHECK_VISIBLE, etc.)
from ui_coverage_scenario_tool import UICoverageTracker, SelectorType, ActionType

# Create an instance of the tracker.
# The `app` value should match the name in your UI_COVERAGE_SCENARIO_APPS config.
tracker = UICoverageTracker(app="my-ui-app")

with sync_playwright() as playwright:
    browser = playwright.chromium.launch()
    page = browser.new_page()
    page.goto("https://my-ui-app.com/login")

    # Start a new scenario with metadata:
    # - url: a link to the test case in TMS or documentation
    # - name: a descriptive scenario name
    tracker.start_scenario(
        url="http://tms.com/test-cases/1",
        name="Successful login"
    )

    username_input = page.locator("#username-input")
    username_input.fill('user@example.com')

    # Track this interaction with the tracker
    tracker.track_element(
        selector='#username-input',      # The selector (CSS)
        action_type=ActionType.FILL,     # The action type: FILL
        selector_type=SelectorType.CSS   # The selector type: CSS
    )

    login_button = page.locator('//button[@id="login-button"]')
    login_button.click()

    # Track the click action with the tracker
    tracker.track_element(
        selector='//button[@id="login-button"]',  # The selector (XPath)
        action_type=ActionType.CLICK,             # The action type: CLICK
        selector_type=SelectorType.XPATH          # The selector type: XPath
    )

    # End the current scenario.
    # This finalizes and saves the coverage data for this test case.
    tracker.end_scenario()
```
Quick summary:
- Call `tracker.start_scenario()` to begin a new scenario.
- Use `tracker.track_element()` after each user interaction.
- Provide the selector, action type, and selector type.
- The tool automatically stores tracking data as JSON files.
- Once the scenario is complete, call `tracker.end_scenario()` to finalize and save it. The full lifecycle is distilled in the sketch below.
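Stripped of framework specifics, the lifecycle boils down to this minimal sketch (the app key, scenario name, and selector are placeholders):

```python
from ui_coverage_scenario_tool import UICoverageTracker, SelectorType, ActionType

# "my-ui-app" is a placeholder; it must match an app key from your configuration
tracker = UICoverageTracker(app="my-ui-app")

tracker.start_scenario(url=None, name="my-scenario")  # begin grouping interactions
tracker.track_element(                                # one call per user interaction
    selector="#some-element",                         # hypothetical selector
    action_type=ActionType.CLICK,
    selector_type=SelectorType.CSS,
)
tracker.end_scenario()                                # finalize and save as JSON
```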
```python
from selenium import webdriver

from ui_coverage_scenario_tool import UICoverageTracker, SelectorType, ActionType

driver = webdriver.Chrome()

# Initialize the tracker with the app key
tracker = UICoverageTracker(app="my-ui-app")

# Start a new scenario
tracker.start_scenario(url="http://tms.com/test-cases/1", name="Successful login")

driver.get("https://my-ui-app.com/login")

username_input = driver.find_element("css selector", "#username-input")
username_input.send_keys("user@example.com")

# Track the fill action
tracker.track_element('#username-input', ActionType.FILL, SelectorType.CSS)

login_button = driver.find_element("xpath", '//button[@id="login-button"]')
login_button.click()

# Track the click action
tracker.track_element('//button[@id="login-button"]', ActionType.CLICK, SelectorType.XPATH)

# End the current scenario
tracker.end_scenario()
```
This setup shows how to integrate `ui-coverage-scenario-tool` into a Python Playwright project using a custom tracker fixture. The `UICoverageTracker` is injected into each test and passed to page objects for interaction tracking.
We define a tracker fixture that:
- Creates a new `UICoverageTracker` per test
- Starts a scenario using the test's name from `request.node.name`
- Yields the tracker into the test
- Ends the scenario automatically after the test completes
`./tests/conftest.py`

```python
from typing import Generator, Any

import pytest

from ui_coverage_scenario_tool import UICoverageTracker


@pytest.fixture
def ui_coverage_tracker(request) -> Generator[UICoverageTracker, Any, None]:
    # Instantiate the UI coverage tracker with your app name
    tracker = UICoverageTracker(app="ui-course")

    # Start a new scenario using the test name for traceability
    tracker.start_scenario(
        url=None,  # Optional external URL (e.g., link to TMS); can be set dynamically
        name=request.node.name  # Use pytest's node name (test function name)
    )

    # Provide the tracker to the test and any dependent components
    yield tracker

    # End the scenario after the test has run
    tracker.end_scenario()
```
This fixture ensures a new, isolated tracker per test, which helps maintain clean test boundaries and supports parallel execution.
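If you want `url` to point at a real TMS case instead of `None`, one option is to read it from a pytest marker. This is a sketch of one possible convention, not part of the tool's API: the `tms` marker name and its usage are assumptions to adapt to your project.

```python
from typing import Generator, Any

import pytest

from ui_coverage_scenario_tool import UICoverageTracker


@pytest.fixture
def ui_coverage_tracker(request) -> Generator[UICoverageTracker, Any, None]:
    tracker = UICoverageTracker(app="ui-course")

    # Hypothetical convention: tests are decorated with
    # @pytest.mark.tms("http://tms.com/test-cases/1")
    tms_marker = request.node.get_closest_marker("tms")
    tms_url = tms_marker.args[0] if tms_marker and tms_marker.args else None

    tracker.start_scenario(url=tms_url, name=request.node.name)
    yield tracker
    tracker.end_scenario()
```

If you adopt this convention, remember to register the `tms` marker in your pytest configuration to avoid unknown-marker warnings.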
Here, we define a `LoginPage` class that performs a user action and tracks it via the provided `UICoverageTracker`.
`./pages/login_page.py`

```python
from playwright.sync_api import Page

from ui_coverage_scenario_tool import ActionType, SelectorType, UICoverageTracker


class LoginPage:
    def __init__(self, page: Page, tracker: UICoverageTracker):
        self.page = page
        self.tracker = tracker

        # Track that the test has opened this page.
        # Useful for identifying which pages were actually visited during test execution.
        self.tracker.track_page(
            url="/auth/login",  # Logical or real URL of the page
            page="LoginPage",   # Human-readable name of the page
            priority=0          # Used to indicate order on the pages graph
        )

    def click_login_button(self):
        # Perform the UI interaction
        self.page.click('#login')

        # Track the interaction using the coverage tool
        self.tracker.track_element(
            selector='#login',               # The CSS selector that was used
            action_type=ActionType.CLICK,    # Type of user action
            selector_type=SelectorType.CSS   # Type of selector used (CSS in this case)
        )

        # Track the navigation that follows this interaction.
        # Helps build a picture of the flow between pages.
        self.tracker.track_transition(from_page="LoginPage", to_page="DashboardPage")
```
This makes interaction tracking an integral part of your UI logic and encourages traceable, observable behavior within your components and flows. By logging page visits, element interactions, and navigation transitions, your test coverage becomes more transparent, measurable, and auditable.
Here’s a sample test that uses both the `page` fixture (from Playwright) and the `ui_coverage_tracker` fixture you defined:
`./tests/test_important_feature.py`

```python
from playwright.sync_api import Page

from pages.login_page import LoginPage
from ui_coverage_scenario_tool import UICoverageTracker


def test_login(page: Page, ui_coverage_tracker: UICoverageTracker):
    # Pass both the Playwright page and tracker to your page object
    login_page = LoginPage(page, ui_coverage_tracker)

    # Perform the action — tracking happens automatically within the method
    login_page.click_login_button()
```
The test itself stays clean and focused. Thanks to fixtures, all setup and teardown logic is handled automatically.
- Pytest idiomatic — Uses `@pytest.fixture` for clean, composable test setup.
- Per-test isolation — Every test has its own `UICoverageTracker` instance and scenario context.
- Explicit injection — The tracker is passed explicitly to page objects, making dependencies easy to trace and mock.
- Supports concurrency — No global state is used, so tests can run in parallel safely.
- Minimal boilerplate — No need for additional lifecycle hooks or monkeypatching.
After every call to `tracker.track_element(...)`, the tool automatically stores coverage data in the `./coverage-results/` directory as JSON files. You don’t need to manually manage the folder — it’s created and populated automatically.
```
./coverage-results/
├── 0a8b92e9-66e1-4c04-aa48-9c8ee28b99fa-element.json
├── 0a235af0-67ae-4b62-a034-a0f551c9ebb5-element.json
└── ...
```
When you call `tracker.start_scenario(...)`, a new scenario automatically begins. All subsequent actions, such as `tracker.track_element(...)`, will be logged within the context of this scenario. To finalize and save the scenario, you need to call `tracker.end_scenario()`. This method ends the scenario and saves it to a JSON file.
```
./coverage-results/
├── 0a8b92e9-66e1-4c04-aa48-9c8ee28b99fa-scenario.json
├── 0a235af0-67ae-4b62-a034-a0f551c9ebb5-scenario.json
└── ...
```
Once your tests are complete and coverage data has been collected, generate a final interactive report using this command:
```bash
ui-coverage-scenario-tool save-report
```
This will generate:
- `index.html` — a standalone HTML report that you can:
  - Open directly in your browser
  - Share with your team
  - Publish to GitHub Pages / GitLab Pages
- `coverage-report.json` — a structured JSON report that can be used for:
  - Printing a coverage summary in CI/CD logs (see the sketch below)
  - Sending metrics to external systems
  - Custom integrations or dashboards
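For example, a CI step could load `coverage-report.json` and print a short summary. Since the report schema isn't documented here, this sketch only inspects the top-level structure; adapt the field access once you know what your report contains:

```python
import json
from pathlib import Path

report = json.loads(Path("coverage-report.json").read_text())

# The schema is not documented here, so print only top-level keys
# and the size of any list values as a rough summary for CI logs.
for key, value in report.items():
    size = f" ({len(value)} entries)" if isinstance(value, list) else ""
    print(f"{key}{size}")
```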
Important! The `ui-coverage-scenario-tool save-report` command must be run from the root of your project, where your config files (`.env`, `ui_coverage_scenario_config.yaml`, etc.) are located. Running it from another directory may result in missing data or an empty report.
Signature: `start_scenario(url: str | None, name: str)`

What it does: Begins a new UI coverage scenario. This groups all tracked interactions under a single logical test case.

When to use: Call this at the beginning of each test, typically in a fixture or setup block.

Parameters:
- `url`: (Optional) External reference to a test case or issue (e.g., link to TMS or ticket)
- `name`: A unique name for the scenario — for example, use `request.node.name` in `pytest` to tie it to the test function
Signature: `end_scenario()`

What it does: Closes the current scenario and finalizes the coverage data collected for that test case.

When to use: Call this at the end of each test, usually in teardown logic or after `yield` in a fixture.
Signature: `track_page(url: str, page: str, priority: int)`

What it does: Marks that a particular page was opened during the test. Useful for identifying what screens were visited and when.

When to use: Call once in the constructor of each Page Object, or at the point where the test navigates to that page.

Parameters:
- `url`: Logical or actual route (e.g. `/auth/login`)
- `page`: Readable identifier like `"LoginPage"`
- `priority`: Optional number to order or weigh pages in reports
Signature: `track_element(selector: str, action_type: ActionType, selector_type: SelectorType)`

What it does: Tracks interaction with a specific UI element (e.g., click, fill, select).

When to use: Call it immediately after performing the user action — so that the test log reflects actual UI behavior.

Parameters:
- `selector`: The selector used in the action (e.g. `#login`)
- `action_type`: The type of action (`CLICK`, `FILL`, etc.)
- `selector_type`: Type of selector (`CSS`, `XPATH`)
Signature: `track_transition(from_page: str, to_page: str)`

What it does: Marks a transition between two logical pages or views.

When to use: After an action that leads to navigation (e.g., after a login button click that brings you to the dashboard).

Parameters:
- `from_page`: Page before the transition
- `to_page`: Page after the transition
These methods work together to give a complete picture of what pages, elements, and flows are covered by your tests — which can be visualized or analyzed later.
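Put together, a single scenario typically records a page visit, the element interactions on it, and the resulting navigation. A condensed sketch using the documented methods (the page names, selector, and TMS URL are placeholders):

```python
from ui_coverage_scenario_tool import UICoverageTracker, ActionType, SelectorType

tracker = UICoverageTracker(app="my-ui-app")

tracker.start_scenario(url="http://tms.com/test-cases/1", name="Successful login")

# The test opened the login page (placeholder names)
tracker.track_page(url="/auth/login", page="LoginPage", priority=0)

# The user clicked the login button
tracker.track_element(
    selector="#login",
    action_type=ActionType.CLICK,
    selector_type=SelectorType.CSS,
)

# The click navigated to the dashboard
tracker.track_transition(from_page="LoginPage", to_page="DashboardPage")

tracker.end_scenario()
```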
You can configure the UI Coverage Tool using a single file: either a YAML, JSON, or `.env` file. By default, the tool looks for configuration in:
- `ui_coverage_scenario_config.yaml`
- `ui_coverage_scenario_config.json`
- `.env` (for environment variable configuration)
All paths are relative to the current working directory, and configuration is automatically loaded via `get_settings()`.
Important! Files must be in the project root.
All settings can be declared using environment variables. Nested fields use dot notation, and all variables must be prefixed with `UI_COVERAGE_SCENARIO_`.
Example: `.env`

```
# Define the applications that should be tracked. Multiple apps can be added
# as comma-separated objects in the JSON list.
UI_COVERAGE_SCENARIO_APPS='[
  {
    "key": "my-ui-app",
    "url": "https://my-ui-app.com/login",
    "name": "My UI App",
    "tags": ["UI", "PRODUCTION"],
    "repository": "https://github.com/my-ui-app"
  }
]'

# The directory where the coverage results will be saved.
UI_COVERAGE_SCENARIO_RESULTS_DIR="./coverage-results"

# The file that stores the history of coverage results.
UI_COVERAGE_SCENARIO_HISTORY_FILE="./coverage-history.json"

# The retention limit for the coverage history. It controls how many historical results to keep.
UI_COVERAGE_SCENARIO_HISTORY_RETENTION_LIMIT=30

# Optional file paths for the HTML and JSON reports.
UI_COVERAGE_SCENARIO_HTML_REPORT_FILE="./index.html"
UI_COVERAGE_SCENARIO_JSON_REPORT_FILE="./coverage-report.json"
```
Example: `ui_coverage_scenario_config.yaml`

```yaml
apps:
  - key: "my-ui-app"
    url: "https://my-ui-app.com/login"
    name: "My UI App"
    tags: [ "UI", "PRODUCTION" ]
    repository: "https://github.com/my-ui-app"

results_dir: "./coverage-results"
history_file: "./coverage-history.json"
history_retention_limit: 30
html_report_file: "./index.html"
json_report_file: "./coverage-report.json"
```
Example: ui_coverage_scenario_config.json
{
"apps": [
{
"key": "my-ui-app",
"url": "https://my-ui-app.com/login",
"name": "My UI App",
"tags": [
"UI",
"PRODUCTION"
],
"repository": "https://github.com/my-ui-app"
}
],
"results_dir": "./coverage-results",
"history_file": "./coverage-history.json",
"history_retention_limit": 30,
"html_report_file": "./index.html",
"json_report_file": "./coverage-report.json"
}
| Key | Description | Required | Default |
|---|---|---|---|
| `apps` | List of applications to track. Each must define `key`, `name`, and `url`. | ✅ | — |
| `apps[].key` | Unique internal identifier for the app. | ✅ | — |
| `apps[].url` | Entry point URL of the app. | ✅ | — |
| `apps[].name` | Human-friendly name for the app (used in reports). | ✅ | — |
| `apps[].tags` | Optional tags used in reports for filtering or grouping. | ❌ | — |
| `apps[].repository` | Optional repository URL (will be shown in the report). | ❌ | — |
| `results_dir` | Directory to store raw coverage result files. | ❌ | `./coverage-results` |
| `history_file` | File to store historical coverage data. | ❌ | `./coverage-history.json` |
| `history_retention_limit` | Maximum number of historical entries to keep. | ❌ | `30` |
| `html_report_file` | Path to save the final HTML report (if enabled). | ❌ | `./index.html` |
| `json_report_file` | Path to save the raw JSON report (if enabled). | ❌ | `./coverage-report.json` |
Once configured, the tool automatically:
- Tracks test coverage during UI interactions.
- Writes raw coverage data to `coverage-results/`.
- Stores optional historical data and generates an HTML report at the end.
No manual data manipulation is required – the tool handles everything automatically based on your config.
The UI Coverage Tool provides several CLI commands to help with managing and generating coverage reports.
Generates a detailed coverage report based on the collected result files. This command will process all the raw coverage data stored in the `coverage-results` directory and generate an HTML report.
Usage:
```bash
ui-coverage-scenario-tool save-report
```
- This is the main command to generate a coverage report. After executing UI tests and collecting coverage data, use this command to aggregate the results into a final report.
- The report is saved as an HTML file, typically named `index.html`, which can be opened in any browser.
This is an internal command mainly used during local development. It updates the report template for the generated coverage reports. It is typically used to ensure that the latest report template is available when you generate new reports.
Usage:
```bash
ui-coverage-scenario-tool copy-report
```
- This command updates the internal template used by the `save-report` command. It's useful if the template structure or styling has changed and you need the latest version for your reports.
- This command is typically only used by developers working on the tool itself.
Prints the resolved configuration to the console. This can be useful for debugging or verifying that the configuration file has been loaded and parsed correctly.
Usage:
```bash
ui-coverage-scenario-tool print-config
```
- This command reads the configuration file (`ui_coverage_scenario_config.yaml`, `ui_coverage_scenario_config.json`, or `.env`) and prints the final configuration values to the console.
- It helps verify that the correct settings are being applied and is particularly useful if something is not working as expected.
- Ensure that `start_scenario()` is called before the test.
- Ensure that `end_scenario()` is called after the test.
- Ensure that `track_page()`, `track_element()`, and `track_transition()` are called during your test.
- Make sure you run `ui-coverage-scenario-tool save-report` from the root directory.
- Make sure the configuration is set up correctly.
- Check that the `coverage-results` directory contains `.json` files (see the quick check below).
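For the last check, a quick way to verify that results were collected is to count the files before generating the report; a minimal sketch assuming the default `./coverage-results` location:

```python
from pathlib import Path

# Count the raw result files written by the tracker
results = list(Path("./coverage-results").glob("*.json"))
print(f"Found {len(results)} coverage result files")

if not results:
    print("No coverage data collected; check that track_element() is being called")
```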