Add integration test for Open Telemetry #38

Closed
BrynCooke opened this issue Oct 20, 2021 · 6 comments · Fixed by #276
Labels
enhancement An enhancement to an existing feature

Comments

@BrynCooke (Contributor)

Describe the solution you'd like
OpenTelemetry should always work.

Additional context
We recently shipped an uncaught regression because there were no integration tests for OTel.
We should take the time to add a test for each type of collector.

@o0Ignition0o (Contributor)

I wrote a variant of https://docs.rs/tracing-test/0.1.0/tracing_test/ at a former company and then switched to it. I'd love to have a stab at this and try to provide us with nice regression-test primitives.

@o0Ignition0o o0Ignition0o transferred this issue from another repository Nov 4, 2021
@Geal (Contributor) commented Nov 29, 2021

That would be useful; I just saw a regression while debugging #180, where trace ids change in the middle of a query.

@o0Ignition0o (Contributor) commented Dec 15, 2021

#276 is a first step towards integration testing. We now have a tree-like structure that represents the spans we're working on. Here's a simple example:

this code:

#[tokio::main]
async fn main() {
    let res = do_stuff().await;

    assert_eq!(res, 104);
}


#[tracing::instrument(name = "do_stuff", level = "info")]
async fn do_stuff() -> u8 {
    let number = do_stuff_2(42).await;

    tokio::task::spawn_blocking(|| async { tracing::warn!("in a separate context!") })
        .await
        .unwrap()
        .await;

    tracing::info!("here i am!");

    tracing::info!(number);

    number * 2
}

#[tracing::instrument(
    name = "do_stuff2",
    target = "my_crate::an_other_target",
    level = "info"
)]
async fn do_stuff_2(number: u8) -> u8 {
    tracing::info!("here i am again!");

    number + 10
}

outputs this structure:

{
  "id": 0,
  "target": "root",
  "records": [
    [],
    null
  ],
  "children": {
    "integration_tests::do_stuff": {
      "id": 1,
      "target": "integration_tests::do_stuff",
      "records": [
        [],
        {
          "name": "do_stuff",
          "target": "integration_tests",
          "level": "INFO",
          "module_path": "integration_tests",
          "fields": {
            "names": []
          }
        }
      ],
      "children": {
        "my_crate::an_other_target::do_stuff2": {
          "id": 2,
          "target": "my_crate::an_other_target::do_stuff2",
          "records": [
            [
              [
                "number",
                {
                  "Value": 42
                }
              ]
            ],
            {
              "name": "do_stuff2",
              "target": "my_crate::an_other_target",
              "level": "INFO",
              "module_path": "integration_tests",
              "fields": {
                "names": [
                  "number"
                ]
              }
            }
          ],
          "children": {}
        }
      }
    }
  }
}

The next steps for me would be:

  • add span IDs (or something similar) to the keys, so that several calls to the same span are displayed separately
  • update our code with the latest ADR best practices (especially `instrument(skip_all)`) and generate snapshots based on this
  • add functions that let us assert the spans we get "kinda" match a shape: we probably don't care exactly what a span's contents look like, but we do care that a specific span has a specific field/value, and that spans are nested the right way

This should cover most of our use cases and open up regression testing. Maybe this bit of code could eventually live somewhere else, in a separate crate?

@o0Ignition0o (Contributor)

I still need to flesh things out a bit, but this PR turns this:

    #[test_span(tokio::test)]
    async fn async_tracing_macro_works() {
        let res = do_stuff().await;

        assert_eq!(res, 104);

        let res2 = do_stuff().await;

        assert_eq!(res2, 104);

        let logs = get_logs();

        assert!(logs.contains_message("here i am!"));
        assert!(logs.contains_value("number", RecordedValue::Value(52.into())));
        assert!(logs.contains_message("in a separate context!"));

        insta::assert_json_snapshot!(logs);
        insta::assert_json_snapshot!(get_span());
    }

into logs and spans that are ready to be snapshotted 📸

Here's what it looks like on a basic router request.

Team, how do we feel about this? Would this be a good first step, or is it a terrible idea and we should use another approach?

@Geal (Contributor) commented Dec 16, 2021

Looks great! Why are the logs separated? When looking at traces in Jaeger, the logs are attached to spans; is it possible to do that here too?

@o0Ignition0o (Contributor) commented Dec 16, 2021

Oh indeed, I think we can! Lemme check.

EDIT: I'd have to make sure log events always apply to the current span, but I can definitely test that!

@abernix abernix added 2022-01 and removed 2021-12 labels Jan 6, 2022
@abernix abernix added the enhancement An enhancement to an existing feature label Feb 4, 2022