
feat(trace-viewer): Added test results to status line #35742


Draft · wants to merge 7 commits into main

Conversation

andreaslarssen

This adds test results to the status line, showing how many tests passed, failed, and were skipped.
It also changes the UI logic so that the same UI is shown both while tests are running and after they finish.

Fixes #34444
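For illustration, the formatting logic behind such a status line could be sketched as follows (a hypothetical sketch: the `Stats` shape and `formatStatusLine` name are illustrative, not the PR's actual code):

```typescript
// Hypothetical sketch of status-line text, not the trace-viewer's real implementation.
interface Stats {
  passed: number;
  failed: number;
  skipped: number;
  total: number;
}

// Builds the status-line text; while running, it prefixes the finished/total progress.
function formatStatusLine(stats: Stats, isRunning: boolean): string {
  const { passed, failed, skipped, total } = stats;
  const finished = passed + failed + skipped;
  const counts = `${passed} passed, ${failed} failed, ${skipped} skipped`;
  return isRunning ? `Running ${finished}/${total} (${counts})` : counts;
}
```

Showing the same text during and after the run is what lets the UI stay identical in both states, as the description suggests.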


@pengooseDev
Contributor

  1. It looks like the current code formatter is set to Prettier. Please press Cmd + , and switch it to ESLint!
    1-1. The current code needs to be updated to follow the existing JSX syntax conventions.

  2. Regression tests must not fail. If there are failing tests and the changes are intentional, check the test code and update it accordingly.
    2-1. That means you need to identify the cause of the current test failures. Check the failing tests in the following files to understand which changes caused them and why, then fix accordingly:
      - playwright/tests/playwright-test/{test}.tsx (if a line is unnecessary, it should be removed)
      - packages/trace-viewer/src/ui/uiModeView.css, lines +100 ~ +103

  3. You need to agree to @microsoft-github-policy-service agree. Simply leave a comment with that exact phrase.

@pengooseDev
Contributor

  1. As discussed in the issue, the test progress and status icon should be displayed on a single line, just like the previous UI.
(Screenshots: Current vs. Should be.)

  2. I believe removing the "complete" text could help save some space. The UI already communicates the state clearly enough. :)

  3. After the test run ends and the running icon disappears, the test result UI shifts to the left. It would be better if the UI remained consistent regardless of the test status, perhaps by keeping the icon or adding a fixed width.

@andreaslarssen
Author

@pengooseDev Sorry, this wasn't ready for review just yet. I didn't expect the tests to run in draft mode; I was just playing around with it.

Wanted to ask you how to get HMR to work. Not sure I understood the docs.

@pengooseDev
Contributor

@andreaslarssen

Just test:

PW_HMR=1 npm run watch # wait for the build to finish
PW_HMR=1 npm run ctest -- --ui

etc.

PW_HMR=1 npx playwright show-trace
PW_HMR=1 npx playwright codegen
PW_HMR=1 npx playwright show-report

@andreaslarssen
Author

@pengooseDev I'm sorry, I'm not familiar with that syntax. Is this for the terminal? I tried that and it wasn't recognized.

@pengooseDev
Contributor

@andreaslarssen It is for the terminal :) Docs


@andreaslarssen
Author

@microsoft-github-policy-service agree

@andreaslarssen
Author

@pengooseDev What's the script for running the tests locally?


@pengooseDev
Contributor

All the information is included in the Docs and the script field of the package.json. :)
If a specific test fails, you can press cmd + f to search for and check the test directly!


Test results for "tests 1"

10 failed
❌ [playwright-test] › ui-mode-test-run.spec.ts:88:5 › should show running progress @macos-latest-node18-1
❌ [playwright-test] › ui-mode-test-watch.spec.ts:257:5 › should queue watches @macos-latest-node18-1
❌ [playwright-test] › ui-mode-test-run.spec.ts:88:5 › should show running progress @ubuntu-latest-node18-1
❌ [playwright-test] › ui-mode-test-watch.spec.ts:257:5 › should queue watches @ubuntu-latest-node18-1
❌ [playwright-test] › ui-mode-test-run.spec.ts:88:5 › should show running progress @ubuntu-latest-node20-1
❌ [playwright-test] › ui-mode-test-watch.spec.ts:257:5 › should queue watches @ubuntu-latest-node20-1
❌ [playwright-test] › ui-mode-test-run.spec.ts:88:5 › should show running progress @ubuntu-latest-node22-1
❌ [playwright-test] › ui-mode-test-watch.spec.ts:257:5 › should queue watches @ubuntu-latest-node22-1
❌ [playwright-test] › ui-mode-test-run.spec.ts:88:5 › should show running progress @windows-latest-node18-1
❌ [playwright-test] › ui-mode-test-watch.spec.ts:257:5 › should queue watches @windows-latest-node18-1

4 flaky
⚠️ [firefox-library] › library/inspector/cli-codegen-1.spec.ts:986:7 › cli codegen › should not throw csp directive violation errors @firefox-ubuntu-22.04-node18
⚠️ [firefox-page] › page/page-evaluate.spec.ts:424:3 › should throw for too deep reference chain @firefox-ubuntu-22.04-node18
⚠️ [playwright-test] › ui-mode-test-watch.spec.ts:146:5 › should watch all @ubuntu-latest-node20-1
⚠️ [webkit-page] › page/page-screenshot.spec.ts:345:5 › page screenshot › should work while navigating @webkit-ubuntu-22.04-node18

39077 passed, 803 skipped
✔️✔️✔️

Merge workflow run.

@pengooseDev
Contributor

@andreaslarssen
We are almost there.

We need to update the tests in playwright/tests/playwright-test/ui-mode-test-run.spec.ts and ui-mode-test-watch.spec.ts.
Before modifying the tests, make sure to understand their original intent and update them accordingly.

Additionally, since a new feature has been added, we should add regression tests to cover it.

Example of a currently failing test:

test('should show running progress', async ({ runUITest }) => {
  const { page } = await runUITest({
    'a.test.ts': `
      import { test, expect } from '@playwright/test';
      test('test 1', async () => {});
      test('test 2', async () => new Promise(() => {}));
      test('test 3', async () => {});
      test('test 4', async () => {});
    `,
  });

  await page.getByTitle('Run all').click();
  await expect(page.getByTestId('status-line')).toHaveText('Running 1/4 passed (25%)');
  await page.getByTitle('Stop').click();
  await expect(page.getByTestId('status-line')).toHaveText('1/4 passed (25%)');
  await page.getByTitle('Reload').click();
  await expect(page.getByTestId('status-line')).toBeHidden();
});

test('should queue watches', async ({ runUITest, writeFiles, createLatch }) => {
  const latch = createLatch();
  const { page } = await runUITest({
    'a.test.ts': `import { test } from '@playwright/test'; test('test', () => {});`,
    'b.test.ts': `import { test } from '@playwright/test'; test('test', async () => {
      ${latch.blockingCode}
    });`,
    'c.test.ts': `import { test } from '@playwright/test'; test('test', () => {});`,
    'd.test.ts': `import { test } from '@playwright/test'; test('test', () => {});`,
  });

  await expect.poll(dumpTestTree(page)).toBe(`
    ▼ ◯ a.test.ts
        ◯ test
    ▼ ◯ b.test.ts
        ◯ test
    ▼ ◯ c.test.ts
        ◯ test
    ▼ ◯ d.test.ts
        ◯ test
  `);

  await page.getByTitle('Watch all').click();
  await page.getByTitle('Run all').click();

  await expect(page.getByTestId('status-line')).toHaveText('Running 1/4 passed (25%)');

  await writeFiles({
    'a.test.ts': `import { test } from '@playwright/test'; test('test', () => {});`,
    'b.test.ts': `import { test } from '@playwright/test'; test('test', () => {});`,
    'c.test.ts': `import { test } from '@playwright/test'; test('test', () => {});`,
  });

  // Now watches should not kick in.
  await new Promise(f => setTimeout(f, 1000));
  await expect(page.getByTestId('status-line')).toHaveText('Running 1/4 passed (25%)');

  // Allow test to finish and new watch to kick in.
  latch.open();

  await expect(page.getByTestId('status-line')).toHaveText('3/3 passed (100%)');
});

@andreaslarssen
Author

@pengooseDev Yeah, I'm working on this in between other stuff, so it's going to take some time. I took a quick look at one of the failing tests yesterday, but couldn't figure out (just by looking at the code) why it would fail. I'm struggling to run the test files locally, and that affects my efficiency.

@pengooseDev
Contributor

What do you think about matching the text color with the test icon color, as shown in the image below?
I believe that with this simple change, we can offer a more intuitive UI. :)

(Screenshots: Current vs. Suggestion.)
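One way to realize this suggestion, as a hypothetical CSS sketch (the class names are illustrative, not the actual uiModeView.css selectors; the `--vscode-charts-*` theme variables are the ones VS Code-styled UIs commonly expose):

```css
/* Hypothetical sketch: selectors are illustrative, not the PR's actual code. */
.test-count-passed  { color: var(--vscode-charts-green); }
.test-count-failed  { color: var(--vscode-charts-red); }
.test-count-skipped { color: var(--vscode-charts-yellow); }
```

Reusing the theme variables that already color the status icons would keep the text and icons in sync across light and dark themes.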

@andreaslarssen
Author

Yeah, looks good, will fix.

@pengooseDev
Contributor

1. Icon size

nit: It looks like the size of the passed icon is different from that of the failed and skipped icons.
If possible, it would be great to fix the 1px vertical gap as well.

(Screenshot: Current.)

2.

@pengooseDev Yeah, I'm working on this in between other stuff, so it's going to take some time. I took a quick look at one of the failing tests yesterday, but couldn't figure out (just by looking at the code) why it would fail. I'm struggling to run the test files locally, and that affects my efficiency.

You can run local tests using the following terminal commands:

Run all core tests

npm ci
npm run build
npm run ttest

Run a specific test

npm run ttest -- --grep "{name}"

Example:

npm run ttest -- --grep "should queue watches"

👍

@pengooseDev
Contributor

When writing commit messages, please make sure to follow the convention!

Example:

git commit -m "test: add regression tests"

@andreaslarssen
Author

andreaslarssen commented Apr 30, 2025

When writing commit messages, please make sure to follow the convention!

Example:

git commit -m "test: add regression tests"

Not sure I fully understand the convention. The examples seem to indicate that every commit is a complete task?
What message would you write, for example, for giving the icons text colors?

I also checked the commits of other PRs, and they didn't seem to follow the convention either, so I thought it might be the PR title, and not the commit messages, that should follow the convention.

@andreaslarssen
Author

@pengooseDev Also: I changed the UI a bit so that you don't have to wait until the tests have finished before seeing the test results. I don't see why you would need to wait until after all the tests have run to see that. It also turned out there was a duplicate test id (both the progress line and the test result line had the test id 'status-line'), and this change fixed that.

The only downside, I guess, is that this change is (probably) why the last tests are failing. So before fixing them, I just wanted to check whether you think the UI changes I made are ok, or whether I'm going to be asked to change them back after finishing the PR.


Successfully merging this pull request may close these issues.

[Feature]: Add skipped tests to run report in UI
2 participants