Component tests failing intermittently with uncaught Vite "failed to fetch dynamically imported module" error in CI #25913

Open
synaptiko opened this issue Feb 22, 2023 · 166 comments
Labels: CI, CT, npm: @cypress/vite-dev-server, prevent-stale, Triaged

@synaptiko

synaptiko commented Feb 22, 2023

Current behavior

We started migrating our project from CRA to Vite; we have almost 300 component and 40 E2E Cypress tests in place. Unfortunately, after fixing all the other issues, we are still not able to stabilize our tests, since there are always 1 or 2 failing randomly with "Failed to fetch dynamically imported module" errors.

We noticed that it's somehow related to the load on CI. Under some conditions more tests fail like this; at other times the run succeeds. But it's random. We checked our tests and we are pretty sure it's not caused by any logic we have.

We've checked some of the existing issues on Cypress & Vite and tried various workarounds, but no luck with any of them.

What we think is happening is that Cypress is not waiting for Vite to "boot up" properly. Retries don't help with it; only when a new spec is run does it work.

Note: it only happens with component tests. For E2E tests we had similar stability issues, but we solved them by building the production version and then just serving it with vite preview. This made the integration tests faster and very stable; previously they were also timing out.

Note 2: we have a lot of components and a lot of dependencies in our project, and we also use MUI as our base. With CRA we were able to have stable tests; it was just around two times slower. That's why we want to use Vite now.

Note 3: we are running "Component tests" in parallel, currently in 4 groups.

Desired behavior

No random problems with Cypress + Vite combo.

Test code to reproduce

Unfortunately we can't provide this since our project is private. And I'm afraid that it's related to the project's complexity, which is why we can't easily create another one for reproduction.

Cypress Version

v12.6.0

Node version

v16.18.1

Operating System

Docker Hub image: cypress/included, v12.6.0

Debug Logs

No response

Other

No response

@mschaefer-gresham

mschaefer-gresham commented Feb 22, 2023

We are having the exact same issue with component tests (I was just coming here to open a new issue). We just migrated from webpack to Vite. The error we get is:

The following error originated from your test code, not from Cypress.

> Failed to fetch dynamically imported module: http://localhost:3000/__cypress/src/cypress/support/component.ts

When Cypress detects uncaught errors originating from your test code it will automatically fail the current test. Cypress could not associate this error to any specific test. We dynamically generated a new test to display this failure.

We are getting this despite the fact that the file does exist. And this occurs randomly.

Vite: 4.1.1
Cypress: 12.5.1
Node: 19.3.0

One related issue: our cypress directory is on the same level as our src directory. When running locally, Cypress correctly expects component.ts to be where we have it in the cypress directory. But when we run the tests in docker, Cypress expects component.ts to be in a cypress directory under the src directory (see the error above). Even if we use the 'supportFolder' config setting, Cypress still looks for it in the src directory (src is the Vite root folder). So I just copied component.ts to the location Cypress expects it to be in (but it still fails randomly despite this).

So locally this is not reproducible for us (so far). This only occurs in a docker container.

We are also running the tests in parallel.

@synaptiko
Author

Update: we found an ugly workaround by using https://docs.cypress.io/guides/guides/module-api#cypressrun and writing our own parallelization mechanism and runner which wraps Cypress. We check for errors and retry the failed tests.

It works, but it requires some "balance" in the number of workers. We also made the first group run alone, and only when it's successful do we run the other groups in parallel. This seems to help a lot. There are still random failures from time to time, but because we retry them a few times, it eventually settles down.

It looks to be somehow related to Vite's "background" compilation & deps optimization, because we observed that it usually gets stable once some of Vite's log messages appear.

@lmiller1990
Contributor

Is this only in Docker? We use Vite heavily internally - everything is really stable. I hope someone can reproduce it reliably.

I wonder if adding entries to https://vitejs.dev/config/dep-optimization-options.html#optimizedeps-entries helps?
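For anyone who wants to try that, a minimal sketch of where it would go (the framework value and the globs below are just examples -- point them at wherever your support file and specs actually live):

// cypress.config.js -- sketch only; adjust to your project
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  component: {
    devServer: {
      framework: 'react', // example value
      bundler: 'vite',
      viteConfig: {
        optimizeDeps: {
          // Entry points Vite should crawl up front so their dependencies
          // are pre-bundled before the first spec is requested.
          entries: ['cypress/support/component.ts', 'src/**/*.cy.tsx'],
        },
      },
    },
  },
});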

@mschaefer-gresham

mschaefer-gresham commented Feb 24, 2023 via email

@synaptiko
Author

Is this only in Docker?

Yes, it seems to be only happening on CI, which means Docker. And we tried optimizeDeps.entries but it didn't seem to really help.

It is somehow related to CI being under load. We observed a few more different errors happening. It just seems that the loading of some resources is either failing or incomplete, causing various strange issues (sometimes with loading the import, sometimes errors on the Cypress level, sometimes in the app, usually something about a missing context). But all of these are usually resolved after 1-3 retries, depending on the CI load.

@mschaefer-gresham

mschaefer-gresham commented Feb 24, 2023

We are experiencing more or less exactly the same thing, in particular the missing-context error from time to time and the failure to load component.ts.

@lmiller1990
Contributor

lmiller1990 commented Feb 27, 2023

Maybe related: #18124 (comment)

The OP says "Latest Cypress + Vite" - is this suggesting this is a regression? If so, knowing which version introduced this regression would be very useful.

I haven't seen this in our internal suite, but it sounds like I may be able to reproduce by adding lots of large dependencies, like MUI? Or by increasing the load, eg Docker?

@mschaefer-gresham

mschaefer-gresham commented Feb 27, 2023

We use MUI. We have split approx. 700 component tests into 8 groups of varying sizes which run in parallel in docker containers. The machine is sufficient for the load. Since writing the first post above we had 7 successful runs, and then today we experienced the same failure across several of the groups. This time it was component.ts in one group, but also random test files in other groups. I tried the suggestion that @synaptiko gave to run one small group first before running the other tests in parallel. I have since had two successful test runs in a row. Hopefully this will stabilize the build more, but I have seen in the past that making a change can improve things temporarily only to regress later.

I have also tried running all the tests sequentially and get the same error btw.

@synaptiko
Author

@lmiller1990

The OP says "Latest Cypress + Vite" - is this suggesting this is a regression? If so, knowing which version introduced this regression would be very useful.

We are transitioning from CRA and a much older Cypress, so I can't really say if it's a regression or not. We just started using the latest versions last week.

it sounds like I may be able to reproduce by adding lots of large dependencies, like MUI? Or by increasing the load, eg Docker?

This correlates with our observations. We have MUI but also Syncfusion and a few other bigger dependencies. Our CI uses quite powerful machines, but we are running a lot of things in parallel and the load changes over the day, which would explain why it happens randomly (though we observed it in roughly ~50% of cases).

@synaptiko
Author

synaptiko commented Feb 27, 2023

For anyone interested, here's our "workaround" solution. We have a script called cypress-ct-runner.mjs (under bin/vite-migration):

#!/usr/bin/env node
import { Chalk } from 'chalk';
import cypress from 'cypress';
import fs from 'fs';
import path, { dirname } from 'path';
import { fileURLToPath } from 'url';

const chalk = new Chalk({ level: 1 });
const groupId = process.argv[2];

if (groupId === undefined) {
  console.error('Please provide a group id parameter');
  process.exit(1);
}

const __dirname = dirname(fileURLToPath(import.meta.url));
const groups = JSON.parse(fs.readFileSync(path.join(__dirname, '..', '..', 'component-test-groups.json')));
const group = groups[groupId - 1];
const maxRetries = 5;
let specs = group.join(',');
let shouldFail = false;
let shouldRetry = true;
let retries = 0;

while (shouldRetry) {
  if (retries > 0) {
    console.log(chalk.bgRed(`Retry number: ${retries}`));
    console.log(chalk.bgRed('Retrying failed tests:'));
    console.log(
      chalk.red(
        specs
          .split(',')
          .map((spec) => `- ${spec}`)
          .join('\n')
      )
    );
    console.log();
  }

  const result = await cypress.run({
    spec: specs,
    browser: 'chrome',
    headless: true,
    testingType: 'component',
  });

  // Collect every failed test from this run, keeping track of its spec file.
  const failedTests = result.runs.reduce((failed, { tests, spec }) => {
    return [...failed, ...tests.filter(({ state }) => state === 'failed').map((test) => ({ ...test, spec }))];
  }, []);

  // The next iteration re-runs only the specs that contained failures.
  specs = failedTests
    .map(({ spec }) => spec.relative)
    .reduce((result, spec) => {
      if (!result.includes(spec)) result.push(spec);
      return result;
    }, [])
    .join(',');

  if (failedTests.length === 0) {
    shouldRetry = false;
  } else if (retries >= maxRetries) {
    shouldRetry = false;
    shouldFail = true;
  } else {
    retries++;
  }
}

if (shouldFail) {
  console.log(chalk.bgRed("Couldn't recover these failing tests:"));
  console.log(
    chalk.red(
      specs
        .split(',')
        .map((spec) => `- ${spec}`)
        .join('\n')
    )
  );
  process.exit(1);
} else {
  console.log(chalk[retries === 0 ? 'bgBlue' : 'bgRed'](`Finished running tests with ${retries} retries`));
}

Under the same path we also have another script which collects the paths of all our component test specs and "groups" them randomly; it looks like this:

#!/usr/bin/env node
const fs = require('fs');
const path = require('path');
const glob = require('glob');

const groupCount = process.argv[2];

if (groupCount === undefined) {
  console.error('Please provide a group count parameter');
  process.exit(1);
}

glob('src/**/*.cytest.tsx', (err, files) => {
  if (err) {
    console.error('An error occurred:', err);
    process.exit(1);
  }

  console.log('Found', files.length, 'files');

  // sort files randomly
  files.sort(() => Math.random() - 0.5);

  // put roughly a third of one group's worth of files into the first group to warm up Vite's cache
  const firstGroupAmount = Math.ceil(files.length / groupCount / 3);
  const groups = [files.splice(0, firstGroupAmount)];

  // ignore the first group in this loop
  for (let i = 0; i < groupCount - 1; i++) {
    groups.push(files.slice((i * files.length) / (groupCount - 1), ((i + 1) * files.length) / (groupCount - 1)));
  }

  console.log(
    'Created',
    groups.length,
    'groups with following distribution:',
    groups.map((group) => group.length)
  );

  fs.writeFileSync(path.join(__dirname, '..', '..', 'component-test-groups.json'), JSON.stringify(groups, null, 2));
});

On CI we have the following setup of steps:

  • Distribute component tests randomly
    • ./bin/vite-migration/distribute-component-tests.js 5
  • Component tests [1]
    • ./bin/vite-migration/cypress-ct-runner.mjs 1
  • Component tests [2-5]
    • ./bin/vite-migration/cypress-ct-runner.mjs 2-5

Our whole CI flow looks like this:
[screenshot of our CI pipeline]

I hope that someone will find this useful until the root cause gets fixed.

@lmiller1990
Contributor

lmiller1990 commented Feb 27, 2023

Wow, what a hack - the fact that you needed to do that really isn't ideal; I hope we can isolate and fix this soon. I wonder if we need to reach out to the Vite team - they'd probably have more insight into the Vite internals than we would.

There seem to be a lot of similar issues in Vite: https://github.com/vitejs/vite/issues?q=is%3Aissue+Failed+to+fetch+dynamically+imported+module

FYI we do the import here:

load: () => import(`${devServerPublicPathRoute}${supportRelativeToProjectRoot}`),

I wonder if we can add some pre-optimization logic during CI mode to force Vite to pre-compile all dependencies, sidestepping this issue entirely (which is sort of what the workarounds here are doing).
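In the meantime, a rough user-land approximation of that idea would be to pre-bundle explicitly when running on CI. A sketch only -- optimizeDeps.force and optimizeDeps.include are standard Vite options, but the framework value and the package names here are placeholders:

// cypress.config.js -- sketch: force dependency pre-bundling up front on CI
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  component: {
    devServer: {
      framework: 'react', // placeholder
      bundler: 'vite',
      viteConfig: {
        optimizeDeps: {
          // On CI, ignore any cached .vite/deps and re-run pre-bundling.
          force: !!process.env.CI,
          // Pre-bundle known heavy deps instead of letting Vite discover them
          // mid-run, which is what triggers the "optimized dependencies changed" reload.
          include: ['react', 'react-dom', '@mui/material'],
        },
      },
    },
  },
});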

@lmiller1990
Contributor

Does anyone know any large OSS React + Vite + MUI projects I could try using to reproduce? I tried moving MUI core to Vite but it's not straightforward.

@matanAlltra

matanAlltra commented Mar 7, 2023

This seems like a duplicate of this thread, so writing here also:

I have tried to run the tests locally, on our GitHub Actions CI, on Cypress.io Cloud, and on currents.dev (a Cypress Cloud competitor). Our project is using React + TypeScript.

Locally:
env: MacBook Pro 2021, macOS 12.5.1 (21G83) (M1 chip)
running locally without docker
result: failing after a few test runs

GitHub Actions:
env: ubuntu-latest-4-cores
docker: GitHub worker, Ubuntu, 4 cores
result: failing after a few test runs

Cypress.io:
env: unknown
docker: unknown
result: success - it seems to work as expected

currents.dev (Cypress alternative):
env: unknown
docker: unknown
result: success - it seems to work as expected

I haven't mentioned it until now, but on the cloud solutions it does seem to work, which leads me to the conclusion that it is something in my environment that I'm doing wrong.

Here is an example of a test I've noticed fails more often:

ApproveRun.spec.cy.tsx

import { ROUTES } from 'consts';
import { IN_PROGRESS } from 'consts/RUN';
import { MemoryRouter, Route, Routes } from 'react-router-dom';
import DynamicPage from 'routes/DynamicPage/DynamicPage';

import { beforeEachInit, run } from './util';

describe('Approve run', () => {
	beforeEach(() => {
		beforeEachInit();
		cy.wrappedMount(
			<MemoryRouter initialEntries={[`/run?runId=${run.id}`]}>
				<Routes>
					<Route path="/:page" element={<DynamicPage />} />
				</Routes>
			</MemoryRouter>,
		);
	});

	it('should approve run action', () => {
		cy.getByTestId('ApprovalHeader__decline-button').click();

		cy.intercept(`${ROUTES.runRoute}/${run.id}`, {
			ok: true,
			run: { ...run, status: IN_PROGRESS },
		});

		cy.getByTestId('JsonFormModal_SubmitButton').click();

		cy.getByTestId('TagCell__turquoise').contains('In progress...');
	});
});

@lmiller1990 regarding the complexity of the test - I'd say it is a pretty complex and heavy test because we render almost the top-level component of a SPA application.
Regarding the imports - we are using absolute paths all across the codebase, so it will be difficult to refactor. I did suspect that absolute path imports may cause the issue, so I tried to refactor the imports in the test to be relative instead of absolute, but it didn't seem to solve the problem. It is pretty difficult to see if it actually helps, though, because the error is not occurring in a consistent way (not on a specific test).

@Murali-Puvvada the bash script is a workaround. Tests that should take 300ms, for example, take 1000-3000ms instead with the cypress run command because it reruns the Chrome instance every time. Also: we don't have Cypress E2E in our application (only component tests).

I've uploaded the debug output from running all the tests to this public file (700 MB log file): https://drive.google.com/file/d/1-2KOb6KV1SyOc_hBi2c2DK40gRtFeeop/view?usp=sharing

Would love any help regarding the issue.

here is the error I'm talking about:
[screenshot of the error]

@KaelWD

KaelWD commented Mar 7, 2023

We're seeing this in Vuetify's CI too: https://github.com/vuetifyjs/vuetify/actions/runs/4354367755/jobs/7609586657
It isn't always the same file; it's usually src/cypress/support/index.ts, but I've seen spec files too, like src/components/VInput/__tests__/VInput.spec.cy.tsx.
We're still on Vite 3.2.5, I'll try updating to 4.1.4 and see if that helps.

@matanAlltra

matanAlltra commented Mar 7, 2023

@KaelWD
I'm using Vite 4 and it doesn't seem to help

@chojnicki

chojnicki commented Mar 8, 2023

@lmiller1990 For us it's in docker and component testing. For months now... But what's weird:

It only happens on CI/CD (GH Actions), where we use the exact same docker container as on the devs' computers. On a PC (docker) it always works.
We have around 200 tests, and in around 1/4 of deploys it crashes on a random test.
It always fails on just a single test, usually in the middle of the list, so around the 100th test. But not always.
Sometimes it works for days without any issue, and then fails multiple times per day.
If I repeat the workflow, it passes fully, but if it crashes again it is usually on the same test.
It's really annoying because it slows deployment heavily if I have to repeat testing in CI/CD multiple times when I know the tests are just fine.

I was using Vite from the beginning and have had this issue for months, trying multiple Cypress, browser and Vite versions and any workaround I could find. Everything is up to date.

It's hard to debug because, like I said, it never happens locally, just on GH. I was suspecting a memory leak, but Cypress/docker runs with stable RAM usage, low for GH limits.

What I tried recently was configuring retry "attempts" for tests, so even if one fails, it should work fine the second time. But this is a useless option in this scenario, because when "Failed to fetch dynamically imported module" happens, Cypress just stops running that test and the second attempt never happens.

Recently I added 2 packages that were constantly being re-optimized to optimizeDeps.entries. And I think that helped a little, but random failed tests still occur.

Now I'm experiencing "failed to fetch" not for the component file, but for the Cypress index file:
Failed to fetch dynamically imported module: http://localhost:3000/__cypress/src/cypress/support/index.js
Still on a single test, and after a rerun everything works.

@matanAlltra

matanAlltra commented Mar 8, 2023

@chojnicki for me it's happening a lot, almost constantly. Currently the only workaround I've found was this bash script, which runs the specs separately, one by one, and if one fails it retries up to two more times. It's significantly slower, but it does the job of only failing when a test actually fails. (We're running this bash script in GH Actions too.)

#!/bin/bash
ARGS="$@"
any_failed=0

for file in $( find . -type f -name '*.spec.cy.tsx' ); do
    attempts=0
    passed=0
    while [ $attempts -lt 3 ]; do # Retry each spec up to 3 times
        attempts=$((attempts+1))
        echo "Running $file (attempt $attempts)..."
        yarn cypress run $ARGS --component --browser chrome --spec "$file" && passed=1 && break
    done

    if [ $passed -eq 0 ]; then # All attempts for this spec failed
        any_failed=1
        break
    fi
done

if [ $any_failed -eq 1 ]; then
    exit 1
fi

@lmiller1990
Contributor

lmiller1990 commented Mar 8, 2023

We have an internal repro 🔥 FINALLY!

It's a Cypress org private project, but I'm dropping the link here so someone at Cypress can look at it internally... https://cloud.cypress.io/projects/d9sdrd/runs/4193/test-results/2f924501-7fb2-4c80-b69f-819010c67c87

Now that we've got a repro I can dig into it... for us it's CI only too, so it sounds like a resources/race condition.

@lmiller1990
Contributor

Related: vitejs/vite#11804

@matanAlltra

matanAlltra commented Mar 8, 2023

FYI it seems I don't have access to the link @lmiller1990

@chojnicki

@lmiller1990 hope that repro you got will help, but if not, I think I could get you/Cypress access to our repo too.

@lmiller1990
Contributor

lmiller1990 commented Mar 8, 2023

@chojnicki thanks a lot, I'll ping you if we need access; I think our reproduction should be enough. Is your reproduction consistent? CI only?

@matanAlltra sorry, this reproduction repo is in our Cypress org but it's private; I can't make it public right now, but someone on our team will be able to see it and debug it. If anyone else can share a public repo with this issue, that would sure help, too!

@OliJeffery

I was encountering this and I think the message is a red herring. In switching between branches, I'd uninstalled @pinia/testing, which I frequently use on component tests. Once I reinstalled that, it worked.

@trevordavidlawrence

trevordavidlawrence commented Dec 8, 2023

Chiming in here in the hopes of saving others some time.

We had been having intermittent issues with component tests failing in CI. The errors always came near a "[vite] ✨ new dependencies optimized" line in the job log.

I was able to reproduce the issue locally by clearing node_modules/.vite/deps before running the tests. Adding:

optimizeDeps: {
  entries: ['cypress/**/*'],
},

to our vite.config.js solved those issues. Our test files (.cy.js) are all located within that directory.

That was fine for a while, but I just now had to address another similar issue, this time with a new dependency (vue-router) that was not previously imported directly by our tested components. I eventually figured out that the issue was due to the fact that we cache the node_modules directory in between jobs (we use GitLab CI/CD). I don't have a full understanding of the Vite internals, but I think it saw that (now stale) .vite/deps data and assumed it didn't need to optimize (pre-optimize?) the Cypress test files again. I added a simple rm -rf node_modules/.vite/deps to the Cypress test job, and that fixed the issue.

@OliJeffery

I experienced this too - for me it was because I was switching between branches with different dependencies, and one of them somehow hadn't been reinstalled in between changing branches (in my case @pinia/testing). So while Cypress was reporting that it couldn't import components.js, it wasn't because that file was missing; it was because that file was erroring due to the missing import of @pinia/testing within it. Hope this helps people who end up here in the future (or me, when this happens again and I see my own comment).

@asteed21

asteed21 commented Feb 5, 2024

I was also unable to see any impact from manipulating the optimizeDeps option in the Vite config, but I did want to document what worked for us in case it's useful to someone else, as it turned out to be tangential to the issue as originally documented. The issue for us turned out to be tied to CircleCI and the use of a static port in our Vite config. Our CI config used the same executor to run both E2E tests and component tests in parallel after a build/install job. The static port was required for the E2E tests to serve reliably for the cypress run, but the Cypress component tests were also picking up the static port from the default Vite config.

Though I initially thought the executors would be isolated, this post made me reconsider. Explicitly overriding the port in our cypress config file fixed this issue and prevented the error from occurring due to host conflicts on the same port.

import viteConfig from './vite.config.ts';

// override default config
const customViteConfig = {
  ...viteConfig({ mode: 'dev' }),
  server: {
    port: 3001,
  },
};

...

// add configuration for component dev server
  component: {
    devServer: {
      framework: 'react',
      bundler: 'vite',
      viteConfig: customViteConfig,
    },
... 

@duartmig

I am facing the same issue.

@marcel-foobar

marcel-foobar commented May 23, 2024

Fixed by deleting node_modules and then running npm i - the oldest trick in the book, excluding restarting the device.

@leho-dev

leho-dev commented Jul 5, 2024

In my case, I ran cypress open, ran all the component tests, and then CI started working.

@adamscybot

adamscybot commented Jul 6, 2024

Something interesting is happening here. After I claimed victory on this issue, we did indeed continue error-free for many months (thousands of runs), which I think is solid evidence of optimizeDeps having something to do with it, if only in part.

However. Yesterday, due to some other requirement, I upgraded Cypress from v12 to v13, and at the same time I upgraded Vite from v3 to v5. It is probably incidental, but it is also worth noting that this also involved an upgrade of the Cypress docker image, which would have also resulted in a Chrome upgrade.

It was only once, and I hardcore logged off shortly after in total denial as Friday evening drew to a close -- but after doing this, the issue re-emerged for the first time in months on a single CI run. Deep down, I know the curse has returned.

Perhaps an ever-present race condition has suddenly reactivated due to the subtle execution speed differences all these upgrades made, or perhaps there was a relevant behavioural change that affected this issue amongst those changes.

I think it would be useful for everyone to post their exact Vite and Cypress versions and their optimizeDeps settings when posting their analysis. If you've already posted, it would be super helpful to just say "I'm using Cypress X with Vite X and this optimizeDeps setting and it's happening for me". Perhaps also note what docker image it runs in on CI (or whether it's on bare metal).

Monday will probably be interesting, but England just got to the semi-finals so I'm pretending not to be affected by the blaze of inevitable red all over CI.

This is a total stretch, but I'm also someone who is loading service workers (MSW) as part of the test story. I doubt it's related, but I would be interested to know in the rare event there's some consistency between that cause and this effect.

@adamscybot

adamscybot commented Aug 13, 2024

This finally flared again enough for me to attend to it. The recurrence was somehow related to a Vite 5 upgrade. Vite 5 now has a new option server.warmup which can get the cache ready in advance.

We removed configuration around optimizeDeps and tried a (brutal) wildcard glob in warmup:

// ...etc
component: {
  // ...etc
  devServerPublicPathRoute: '',
  devServer: {
    bundler: 'vite',
    framework: 'react',
    viteConfig: {
      // ...etc
      server: {
        warmup: {
          clientFiles: ['**/*'],
        },
      },
    },
  },
},

The problem is now gone again. If you are on Vite 5, I recommend trying this. If you are not, I recommend upgrading Vite (you also need at least Cypress 13.10 to be compatible) and then trying it.

This glob pattern can probably be reduced to just my test files, but I'm just getting this information out there. We also haven't done enough testing to know whether it was actually the removal of optimizeDeps that is the key ingredient.

Either way, we definitely need to be clear about what Vite versions we have when reporting back.

@moritz-baecker-integra

Got the same problem on CI/CD pipeline:

Vite: 5.2.0
Cypress: 13.13.0

Had no entries in optimizeDeps.
Will evaluate if server.warmup.clientFiles works

@apdrsn

apdrsn commented Aug 27, 2024

Got the same problem on CI/CD pipeline:

Vite: 5.2.2
Cypress: 13.13.2

Had no entries in optimizeDeps as it gives a lot of errors.

Another observation:
During the test run in the pipeline, this message appears three times:

[TypeScript] Found 0 errors. Watching for file changes.

Adding

viteConfig: {
  // ...etc
  server: {
    warmup: {
      clientFiles: ['**/*'],
    },
  },
},

gives a lot of these errors:
[vite] Pre-transform error: Failed to resolve import ...

@jennifer-shehane
Member

We recently released the experimentalJustInTimeCompile flag for component testing in 13.14.0. We have not completely evaluated whether that flag fixes this issue, but it does address performance issues in some cases when running tests in CT, so I wanted to mention it as something those in this thread might want to try out.
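For reference, it is set in the component config, roughly like this (a sketch only -- the devServer values below are placeholders; see the experiments docs for details):

// cypress.config.js -- minimal sketch of enabling the experiment for CT
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  component: {
    // Compile specs just-in-time as they run instead of eagerly up front.
    experimentalJustInTimeCompile: true,
    devServer: { framework: 'react', bundler: 'vite' }, // placeholders
  },
});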

@adamscybot

We recently released the experimentalJustInTimeCompile flag for component testing in 13.14.0. We have not completely evaluated whether that flag fixes this issue, but it does address performance issues in some cases when running tests in CT, so I wanted to mention it as something those in this thread might want to try out.

Ah yes! I had seen this just today also. The warmup trick is still working for me, but I will give this a go as well.

@GAK-NOX

GAK-NOX commented Oct 9, 2024

We're still having this problem, after adding the experimentalJustInTimeCompile flag and the optimizeDeps stuff. It's random and intermittent, which somehow makes it even worse. The warmup stuff requires us to upgrade to Vite 5, which I don't want to do just yet, so I'm stumped as to what to try next.

npm ls vite
└── vite@4.5.3

npm ls cypress
└── cypress@13.14.0

vite.config.js:
  // ...
  optimizeDeps: {
    entries: ['cypress/**/*'],
  },

cypress.config.ts:
  // ...
  experimentalJustInTimeCompile: true,

Error:

 Running:  MyTest.cy.tsx                                          (1 of 3)
  Estimated: 0 seconds
Browserslist: caniuse-lite is outdated. Please run:
  npx update-browserslist-db@latest
  Why you should do it regularly: https://github.com/browserslist/update-db#readme
[TypeScript] Found 0 errors. Watching for file changes.
[TypeScript] Found 0 errors. Watching for file changes.
5:16:42 AM [vite] ✨ new dependencies optimized: cypress/react18, @emotion/cache, @mui/material/styles, @mui/material, notistack, @emotion/react, tss-react, react-router-dom, tss-react/mui, react-i18next, react-draggable, underscore, html2canvas, i18next, dayjs, highcharts/highstock, highcharts/modules/draggable-points, webgl-plot, graphql-tag, @segment/analytics-next, @mui/icons-material/Close, @mui/icons-material/CheckCircleOutlined, @mui/icons-material/CancelOutlined, @mui/icons-material/InfoOutlined, @mui/material/Card, @mui/material/Collapse, @mui/material/CardActions, @mui/icons-material/ExpandMore, @mui/icons-material/AccountCircle, @mui/material/IconButton, @mui/icons-material/CloseOutlined, @mui/icons-material/CheckCircleOutline, @mui/icons-material/ArrowBack, dayjs/plugin/utc, dayjs/plugin/timezone, dayjs/plugin/advancedFormat, dayjs/plugin/duration, react-dom, @apollo/client, @apollo/client/utilities, @apollo/client/link/context, @apollo/client/link/retry, @apollo/client/link/subscriptions, graphql-ws, @growthbook/growthbook-react, i18next-http-backend, i18next-browser-languagedetector, oidc-client-ts, dexie, highcharts-react-official, highcharts/modules/xrange, highcharts/modules/pattern-fill, @mui/icons-material/Height, @mui/material/Tooltip
5:16:42 AM [vite] ✨ optimized dependencies changed. reloading
  1) An uncaught error was detected outside of a test
  0 passing (879ms)
  1 failing
  1) An uncaught error was detected outside of a test:
     TypeError: The following error originated from your test code, not from Cypress.
  > Failed to fetch dynamically imported module: http://localhost:3000/__cypress/src/cypress/support/component.ts
When Cypress detects uncaught errors originating from your test code it will automatically fail the current test.
Cypress could not associate this error to any specific test.
We dynamically generated a new test to display this failure.

The other 2 runs did not fail, only the first one.

@GAK-NOX

GAK-NOX commented Oct 9, 2024

The other 2 runs did not fail, only the first one.

So, I decided to run a stress test where the same 3 spec files, with a total of 9 tests, are run in parallel on 18 different runners, and the results are... interesting.

• 8 pass all 9 tests.
• 1 passes 7 tests but skips 1 spec file; I think that's a Cypress Cloud thing. I'll try without the record flag next.
• 9 fail, all with the same error as in my original comment, just after a different number of tests: > Failed to fetch dynamically imported module: http://localhost:3000/__cypress/src/cypress/support/component.ts

It's essentially a 50/50 occurrence, which is quite annoying. I don't really know what to try next; I want to try telling Vite not to use dynamic imports for the Cypress modules, if I can configure it that way, and see if that helps.

I was also able to see this error occur when running locally on my dev machine. Strange, I thought this was only happening in pipeline/CI environments.

[screenshot of the error occurring locally]

@GAK-NOX

GAK-NOX commented Oct 10, 2024

A monumentally stupid way to fix this for us is to just retry the Cypress test run a couple of times if the failure is due to this dynamic import. Here is the code snippet we use in our pipeline; maybe it can help some of you. Again, this is stupid, and the bash script can probably be improved, but it at least works.

package.json, under "scripts":

"test:cypress-component-retry": "bash -c 'RETRY_NUM=3;RETRY_EVERY=1;NUM=$RETRY_NUM;TEST_FAIL=0;until (set -o pipefail; npm run test:cypress-component | tee outfile.txt); do cat outfile.txt | grep -q \"Failed to fetch dynamically imported module\"; if [ $? -eq 0 ]; then 1>&2 echo \"Dynamic import failure ... retrying $NUM more times\"; sleep $RETRY_EVERY; NUM=$((NUM-1)); if [ $NUM -eq 0 ]; then 1>&2 echo \"Command was not successful after $RETRY_NUM tries\"; exit 1; fi; else TEST_FAIL=1; break; fi; done; exit $TEST_FAIL;'",

You can probably get ChatGPT to reformat and explain the script to you, so I won't do that here. I'm adding bash at the beginning because for some reason our image uses dash instead of bash, which doesn't have pipefail.

@ShellyDCMS

@GAK-NOX, thanks! I was waiting for a solution for weeks; this prolongs the build, but it solves the issue.

@FelixNumworks

I'm still having this problem. I'm not really satisfied with the last solution (retrying the run), so I explored the "warmup files" approach suggested earlier by @adamscybot.

The solution provided was:

viteConfig: {
  server: {
    warmup: {
      clientFiles: ['**/*'],
    },
  },
},

However, as @apdrsn mentioned, this leads to numerous errors on my end:
[vite] Pre-transform error: Failed to resolve import ...

The Vite documentation also advises against warming up too many files, so I was reluctant to use a wildcard to preload everything.

Instead, I refined this solution by warming up only the specific file that was failing to load. This approach seems cleaner and has been working well so far:

viteConfig: {
  server: {
    warmup: {
      clientFiles: ['**/cypress/support/component.ts'],
    },
  },
},

Note that this still requires Vite 5 as the warmup config isn't available in earlier versions.

@adamscybot

adamscybot commented Nov 1, 2024

Nice! Not many people have reported back on the warmup trick, but there's also an absence of negative reports. It's still working for us. I suspect, though, that we are still tweaking around a still-existing race condition.

Glad to see it working for someone.

@moritz-baecker-integra

Nice! Not many people have reported back on the warmup trick, but there's also an absence of negative reports. It's still working for us. I suspect, though, that we are still tweaking around a still-existing race condition.

Glad to see it working for someone.

I reported back positively back then as well.
Since we started using the warmup method, it seems to work for us.

Got the same problem on CI/CD pipeline:

Vite: 5.2.0 Cypress: 13.13.0

Had no entries in optimizeDeps. Will evaluate if server.warmup.clientFiles works

After some time I can confirm that it works for me, but I'm afraid that this could change in the future with bigger components, etc.

@adamscybot

adamscybot commented Nov 1, 2024

Thanks @moritz-baecker-integra. Excellent.

Due to my experience with previous attempts breaking down, I won't claim the smoking gun has definitely been found this time 🙃. But I think it's clear that anyone experiencing this should probably look to go down this road and report back.

Whether this is the "root cause" or not is up for debate. Though I do note that the warmup option, generally speaking, is perhaps a Cypress-relevant optimization that should be utilised anyway in the core Cypress Vite binding package. It feels to me that, at the least, the support file should be included by default here -- since it fits the bill of what is intended in the Vite docs:

Vite allows you to warm up files that you know are frequently used, e.g. big-utils.js, using the server.warmup option. This way big-utils.js will be ready and cached to be served immediately when requested.

This is possibly a good idea in isolation anyway, but it could also have positive effects on this difficult issue as well. I'm not sure how practical this is, but you might argue that all tests in the queue (the given suite) could be added as well, as we know they are about to be used. But with that one it's a debate about suite startup cost vs individual test startup cost. The support file seems more cut and dried.

I therefore want to grab the attention of the Cypress team. Not sure who is relevant, but pinging @AtofStryker as it looks like he did the Vite 5 integration work. Please redirect if wrong.

The upgrade to Vite 5 and Cypress 13.10 is obviously a bit of work for some people -- though I note in my case (with many many tests) the work needed was quite minimal.

I'll also add that I developed a version of the crude "retry when the cursed log line is detected" solution from a bit further up some time ago, out of desperation. Eventually even that broke, with the retry count set to three. It did buy us an additional few months though.

@duartmig

duartmig commented Nov 2, 2024

In our case, to fix the CI/CD instability with Cypress component tests, we applied the following workaround: adding specific project dependencies to optimizeDeps.include in the viteConfig section of cypress.config.ts. This pre-bundles these dependencies before Cypress component tests run, resolving issues where tests would otherwise fail to start in the first spec file. Pre-bundling ensures these resources are loaded in advance, which seems to improve stability in the CI/CD pipeline.
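Roughly, the shape of that change (the package names and framework below are just examples, not our exact list):

// cypress.config.ts -- sketch of pre-bundling heavy deps for component tests
import { defineConfig } from 'cypress';

export default defineConfig({
  component: {
    devServer: {
      framework: 'react', // example
      bundler: 'vite',
      viteConfig: {
        optimizeDeps: {
          // Pre-bundle these up front so Vite doesn't discover them mid-run
          // and reload the dev server while specs are executing.
          include: ['react', 'react-dom', '@mui/material', '@emotion/react'],
        },
      },
    },
  },
});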

@SimeonC

SimeonC commented Nov 5, 2024

The only thing for us that completely killed this issue was when we switched to a non-shared CI runner. Our Infra team kindly gave us a dedicated machine so we don't share CPU/Memory with other CI jobs and that seems to have completely stopped the issue for us.

I don't know how much this helps, but to me it points to an issue in Vite rather than Cypress?
