## Description
### Proposed sub-tasks
#### Jaeger-Query
Owners: @albertteoh
- Add metrics query API spec #2946: Add metrics query API spec
- Add metrics reader interface and gen proto #2977: Add Metrics Reader interface
- Add prom and m3 storage implementation skeleton #2983, Implement Prometheus metrics reader constructor #2988, Implement metrics reader #3004: Add M3 reader implementation
- Add Prometheus metrics reader factory and config #3049: Add factory and flags
- Add TLS support for Prometheus Reader #3055: Add TLS support
- Add MetricsQueryService gRPC handler #3091: Add gRPC handler
- Add HTTP handler for metrics querying #3095: Add HTTP handler
- Refactor QueryService tests #3060, Add metrics query capability to query service #3061: Update query service
- Hook up MetricsQueryService to main funcs #3079: Hookup metrics query to "main"
- Add ATM dev environment docker-compose and API doc #3171: Explore options to make monitor tab work for all-in-one (or create issue)
#### Jaeger-UI
Owners: @th3M1ke
- Approve UX & UI design (this issue)
- Atm monitoring UI jaeger-ui#815: Create Monitoring Tab for Jaeger UI
#### Documentation
Owners: @albertteoh
- Add metrics storage support to auto-generated CLI documentation documentation#539: Add usage documentation for metrics query API (noting that this is "experimental")
- Add usage documentation for Monitor tab UI documentation#553: Add usage documentation for Monitor tab UI
## Requirement - what kind of business use case are you trying to solve?
The main proposal is documented in: #2736.
The motivation is to help identify interesting traces (high QPS, slow, or erroneous) without knowing the service or operations up-front.
Use cases include:
- Post deployment sanity checks across the org, or on known dependent services in the request chain.
- Monitoring and root-causing when alerted of an issue.
- Better onboarding experience for new users of Jaeger UI.
- Long-term trend analysis of QPS, errors and latencies.
- Capacity planning.
## Proposal - what do you suggest to solve the problem or improve the existing situation?
Add a new "Monitoring" tab situated after "Compare" containing service-level request rates, error rates, latency and impact (= latency * request rate
to avoid "false positives" from low QPS endpoints with high latencies).
The data will be sourced from jaeger-query's new metrics endpoints.
Because the jaeger-query metrics endpoints must be explicitly enabled (opt-in), the Monitor tab will have a sensible empty state, perhaps with a link to documentation on how to enable metrics querying capabilities.
### Workflow
The screen will open to a per-service set of metrics sorted, by default, on Impact. Columns are user-configurable, with other latency percentiles among the available options. A search box will allow filtering on service names.
The user need only supply the time period over which to fetch metrics (similar to Find Traces), defaulting to a 1-hour lookback.
Note the user is not required to define the step size (the period between data points), at least in this iteration, to keep the user experience as simple as possible. Instead, we propose to derive the step size from a sensible heuristic based on the query period and/or the width of the chart. For example:
- search period < 30m -> 15s step
- search period < 1h -> 1m step
- etc.
There are two possible actions from this tab:
- Click on a service to drill down to per-operation metrics.
- Click on "View all traces" to go to the Search tab with the service pre-populated and Operation filter set to "all".
### Service metrics page
If drilling down into the service-level metrics, the page will show a summary of the service's RED metrics at the top, along with the equivalent per-operation metrics presented as in the per-service view above. Similarly, there will be a search box to filter on operations, and the user has the option to "View all traces" for a given operation.
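The per-operation table and its search box could be modelled as below. The row fields and names are hypothetical, chosen only to reflect the RED metrics (rate, errors, duration) the proposal describes:

```go
package main

import (
	"fmt"
	"strings"
)

// operationRow models one row of the per-operation RED metrics table
// (field names are hypothetical, for illustration only).
type operationRow struct {
	Operation string
	CallsPerS float64 // Rate
	ErrorRate float64 // Errors
	P95Ms     float64 // Duration
}

// filterOperations mirrors the proposed search box: case-insensitive
// substring match on the operation name.
func filterOperations(rows []operationRow, query string) []operationRow {
	q := strings.ToLower(query)
	var out []operationRow
	for _, r := range rows {
		if strings.Contains(strings.ToLower(r.Operation), q) {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	rows := []operationRow{
		{"HTTP GET /dispatch", 12.3, 0.01, 220},
		{"SQL SELECT", 40.1, 0.00, 15},
		{"HTTP POST /user", 3.2, 0.05, 340},
	}
	for _, r := range filterOperations(rows, "http") {
		fmt.Println(r.Operation)
	}
}
```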
### Search tab
The search tab will be the final stage in the workflow (unless, of course, the user navigates back to a previous state), pre-populated with the service and/or operation as well as the search period.
The search period will be sticky between each of these screens to maintain consistency in search results.
### Demo
Courtesy of @Danafrid.
Screen.Recording.2021-04-14.at.11.52.58.mov
## Any open questions to address
- Any suggestions on charting libraries to use for the larger detailed charts and the smaller row-level graphs in the table views?
- Any requirement to maintain consistency with the trace statistics table view?
- What is the preferred behaviour when a large number of services/operations are returned?
  - Show the top n results ordered by Impact by default? What if the user sorts on a different metric, like errors: just sort the current n results, or refetch from jaeger-query?
  - Show everything?
  - Paginate? (probably best avoided, as it would require maintaining state in the UI or jaeger-query)