feat(tracing): Improve structure for tracing errors #585


Merged 5 commits into master on May 25, 2023

Conversation

@jan-auer (Member) commented May 24, 2023

Changes how the tracing integration tracks errors with messages and provides more structured data for fields passed via tracing.

First, this introduces the ability to set Sentry tags via a tags.* prefix. The prefix is stripped and no longer occurs in the tracing context. The following example creates a tag called foo:

tracing::error!(tags.foo = "bar", "this is an error");
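The prefix handling could be sketched roughly like this. This is a simplified, stand-alone illustration, not the integration's actual code; the function name route_field is made up:

```rust
/// Hypothetical sketch of the tags.* routing described above: field
/// names carrying the prefix become Sentry tags (with the prefix
/// stripped), everything else stays in the tracing fields context.
fn route_field(name: &str) -> (bool, String) {
    match name.strip_prefix("tags.") {
        // "tags.foo" becomes the Sentry tag "foo"
        Some(tag) => (true, tag.to_string()),
        // other fields keep their full name
        None => (false, name.to_string()),
    }
}
```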

Next, the custom context is renamed to Rust Tracing Fields. This is closer to the original naming in the tracing crate and further disambiguates this data from Sentry tags.

The top-level exception passed to tracing is now marked with a tracing mechanism to annotate in the UI how the exception was captured.

Finally, if both a message and an error are passed to tracing, the message is moved to a synthetic exception at the top of the stack. Most importantly, this allows proper grouping and display in the issue stream. Additionally, if attach_stacktrace is enabled, the stack trace is placed on the synthetic exception rather than the thread interface, so it is shown inline with the chained exceptions.
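The restructuring can be sketched with simplified stand-in types. These are not the sentry crate's protocol structs, and attach_message is a made-up name, purely to illustrate the logic:

```rust
/// Simplified stand-in for a Sentry exception record.
#[derive(Debug, PartialEq)]
struct Exception {
    ty: String,      // exception "type"; has no meaning on synthetic entries
    value: String,
    synthetic: bool,
}

/// If both an error chain and a message are present, put the message
/// into a synthetic exception on top of the chain (Sentry lists the
/// most recent exception last).
fn attach_message(mut chain: Vec<Exception>, message: Option<String>) -> Vec<Exception> {
    if let Some(msg) = message {
        chain.push(Exception {
            ty: "tracing::error!".to_string(),
            value: msg,
            synthetic: true,
        });
    }
    chain
}
```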

Example Rendering

Due to the synthetic exception, the title of the issue is now the location of the capture. The captured message is shown underneath. This is also the case in the title bar on issue details:

[Screenshot 2023-05-24 at 14:22:05]

The exception stack now shows an entry for tracing::error! with the logged message, marked with the tracing mechanism. The stack trace is captured through the backtrace integration and points to the capture:

[Screenshot 2023-05-24 at 14:22:20]

If the source errors had backtraces, those would be shown underneath.

Background

The tracing integration converts fields containing a dyn Error into Sentry exception records. This also works with error sources and creates a stack of nested exceptions on Sentry. Additionally, the formatted message from the error! invocation is added as a message. Finally, if attach_stacktrace is enabled on the client, the integration also adds a stacktrace.
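The source-chain walk can be illustrated with plain std Rust. This is a sketch, not the integration's code; the Wrapped type and chain_values helper are made up for the example:

```rust
use std::error::Error;
use std::fmt;

/// Minimal error type with an optional source, for demonstration.
#[derive(Debug)]
struct Wrapped {
    msg: &'static str,
    source: Option<Box<dyn Error>>,
}

impl fmt::Display for Wrapped {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str(self.msg)
    }
}

impl Error for Wrapped {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        self.source.as_deref()
    }
}

/// Collect an error and all of its sources, roughly how the integration
/// builds the stack of nested exception records.
fn chain_values(err: &dyn Error) -> Vec<String> {
    let mut values = vec![err.to_string()];
    let mut current = err.source();
    while let Some(source) = current {
        values.push(source.to_string());
        current = source.source();
    }
    values
}
```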

For both grouping and rendering, this is suboptimal for the following reasons:

  • The logged message usually contains a more specific or useful description than the logged error. However, Sentry does not render the attached message in the issue stream, which makes it hard to disambiguate places where the same error is tracked.
  • Neither the message nor the stack trace contributes to grouping if there is an exception list. Since the exceptions themselves are generic, unrelated problems would be grouped together.
  • Likewise, the message does not end up in the issue title.
  • The Sentry UI currently does not even display the exception list if there is a message and a stack trace on the issue. This will have to be fixed in the product, however.

A synthetic exception in Sentry is an exception record that was created from something that's not a language-level exception. It can carry a stack trace and a value, but unlike regular exceptions, the "type" has no meaning and is not used for grouping or the issue title. Instead, Sentry will use the source location from the stack trace for both.

With this, we get the behavior that is most likely desirable in such a scenario:

  • A chain of exceptions is shown with the logged message and stack trace on top, pointing to the log location.
  • Grouping does not use the synthetic "tracing::error!" type; it uses the source location.
  • Issue details show a descriptive location and the message.

@jan-auer (Member Author)

I'm planning to follow up with a separate PR that improves docs around tracking dyn Error with tracing for Sentry.

codecov bot commented May 24, 2023

Codecov Report

Merging #585 (17c0a85) into master (9ca275f) will decrease coverage by 0.28%.
The diff coverage is 47.29%.

❗ Current head 17c0a85 differs from pull request most recent head ce46b16. Consider uploading reports for the commit ce46b16 to get more accurate results

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #585      +/-   ##
==========================================
- Coverage   70.75%   70.48%   -0.28%     
==========================================
  Files          67       67              
  Lines        6744     6809      +65     
==========================================
+ Hits         4772     4799      +27     
- Misses       1972     2010      +38     

Comment on lines +197 to +198
if client.options().attach_stacktrace {
thread = sentry_backtrace::current_thread(true);
jan-auer (Member Author):

I'm not happy with these two changes. To attach the stack trace, I need to depend on sentry_backtrace and also grab the client to check its options.

The backtrace integration could detect this case (attach_stacktrace enabled + top-level exception is synthetic) and write there. Would that be a better alternative? It composes better, but I'm not sure if we're happy with these semantics.
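The alternative floated here, where the backtrace integration detects the case itself, might look roughly like this. The types and the place_stacktrace name are stand-ins, purely illustrative, not the actual SDK structures:

```rust
/// Simplified stand-ins for the relevant pieces of an event.
struct Exception { synthetic: bool, stacktrace: Option<String> }
struct Event { exceptions: Vec<Exception>, thread_stacktrace: Option<String> }

/// Sketch: if attach_stacktrace is enabled and the top-level (last)
/// exception is synthetic, attach the trace there instead of to the
/// thread interface.
fn place_stacktrace(event: &mut Event, attach_stacktrace: bool, trace: String) {
    if !attach_stacktrace {
        return;
    }
    match event.exceptions.last_mut() {
        Some(exc) if exc.synthetic => exc.stacktrace = Some(trace),
        _ => event.thread_stacktrace = Some(trace),
    }
}
```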

Member replied:

I believe it's a fair thing to do. However, turning on attach_stacktrace is a rather blunt hammer and may have high overhead if you just want to log random messages, not necessarily errors.

As I said in another comment, another alternative would be to add the "Rust Tracing Location" as it is now as the stack trace here. That is definitely better than it being hidden somewhere in a random context.

jan-auer (Member Author) replied:

I was thinking about adding the tracing location, too. However, it doesn't have enough information to create even a single stack frame that is eligible for grouping, most notably because of the missing function name.

We could also pass this as an option to the tracing integration and pass it down to the converter function, so you can control this separately. At least in Relay, we've enabled stack traces for everything (they are just added for error messages anyway; otherwise we're not tracking stuff in Sentry), so the behavior is intended. This is a good consideration for what attach_stacktrace is actually supposed to mean; and I think in this specific case (we have an exception entry) it seems exactly as intended.

@jan-auer jan-auer marked this pull request as ready for review May 24, 2023 13:09
@jan-auer jan-auer requested a review from Swatinem May 24, 2023 13:09
@Swatinem (Member)

Some questions from my end:

> Neither the message nor the stack trace contribute to grouping if there is an exception list. The exceptions are generic, however, which means unrelated problems would be grouped together.

While the issue title that ends up in Sentry is crap, the separate grouping depending on the chained errors is actually a good thing, at least for the use-case in symbolicator, where we get different groups for different errors with broken debug files. If all of those were grouped as a single issue, that would be worse. Though I have to look up what mechanism we end up using to capture these in symbolicator.

> A synthetic exception in Sentry is an exception record that was created from something that's not a language-level exception. It can carry a stack trace and a value, but unlike regular exceptions, the "type" has no meaning and is not used for grouping or issue title. Instead, Sentry will use the source location from the stack trace for both.

Depending on the answer to the question about sentry-backtrace: we could also just put the tracing location in there as a single-frame stack trace, in case we don't have a proper stack trace.

@jan-auer (Member Author) commented May 25, 2023

> separate grouping depending on the chained errors is actually a good thing

Agreed, and this is also what's happening with the changes in this PR. Even more so, it now groups by the stack trace (i.e., the log location) plus the chained exceptions (and their stack traces, if they have any).

Edit: I've attached the matching grouping components from the above error for comparison:

[Image: grouping components]

@jan-auer jan-auer merged commit 8586466 into master May 25, 2023
@jan-auer jan-auer deleted the feat/tracing-exc-message branch May 25, 2023 12:29