Split Cypress specs across parallel CI machines for speed without using any external services
Add this plugin as a dev dependency and include it in your Cypress config file.
# install using NPM
$ npm i -D cypress-split
# install using Yarn
$ yarn add -D cypress-split
Call this plugin from the setupNodeEvents method in your Cypress config object:
// cypress.config.js
const { defineConfig } = require('cypress')
// https://github.com/bahmutov/cypress-split
const cypressSplit = require('cypress-split')
module.exports = defineConfig({
e2e: {
setupNodeEvents(on, config) {
cypressSplit(on, config)
// IMPORTANT: return the config object
return config
},
},
})
Important: return the config object from the setupNodeEvents function.
If you are using an older Cypress version with a plugins file, call the plugin from there instead:
// cypress/plugins/index.js
// https://github.com/bahmutov/cypress-split
const cypressSplit = require('cypress-split')
module.exports = (on, config) => {
cypressSplit(on, config)
// IMPORTANT: return the config object
return config
}
Now update your CI script.
GitLab CI: run several containers and start Cypress with the --env split=true parameter:
stages:
- build
- test
install:
image: cypress/base:16.14.2-slim
stage: build
script:
- npm ci
test:
image: cypress/base:16.14.2-slim
stage: test
parallel: 3
script:
- npx cypress run --env split=true
All specs will be split into 3 groups automatically. For caching details, see the full example in gitlab.com/bahmutov/cypress-split-gitlab-example.
CircleCI: set the parallelism value and run Cypress with the --env split=true parameter:
parallelism: 3
command: npx cypress run --env split=true
See the full example in bahmutov/cypress-split-example
GitHub Actions: use a matrix strategy to run several copies of the job in parallel:
# run 3 copies of the current job in parallel
strategy:
fail-fast: false
matrix:
containers: [1, 2, 3]
steps:
- name: Run split Cypress tests 🧪
uses: cypress-io/github-action@v5
# pass the machine index and the total number
env:
SPLIT: ${{ strategy.job-total }}
SPLIT_INDEX: ${{ strategy.job-index }}
Note that we need to pass the SPLIT and SPLIT_INDEX numbers from the strategy context to the plugin via environment variables. See the full example in bahmutov/cypress-split-example
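Inside setupNodeEvents, these values have to be turned into numbers whether they arrive as OS environment variables or as Cypress env options. A minimal sketch of that resolution logic (resolveSplitParams is a hypothetical illustration, not the plugin's actual code):

```javascript
// Hypothetical sketch: resolve the total number of machines and this
// machine's index from OS environment variables (SPLIT / SPLIT_INDEX)
// or from Cypress env options (split / splitIndex)
function resolveSplitParams(processEnv = {}, cypressEnv = {}) {
  const total = Number(processEnv.SPLIT ?? cypressEnv.split)
  const index = Number(processEnv.SPLIT_INDEX ?? cypressEnv.splitIndex)
  if (Number.isInteger(total) && Number.isInteger(index)) {
    return { total, index }
  }
  // nothing usable was passed; the plugin would fall back to
  // CI auto-detection or run all specs on a single machine
  return undefined
}

// values as GitHub Actions would pass them via the job environment
console.log(resolveSplitParams({ SPLIT: '3', SPLIT_INDEX: '1' }, {}))
// → { total: 3, index: 1 }
```

Either source works because both are normalized to the same pair of integers before the spec list is split.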
Sample Jenkinsfile to run the specs in parallel:
pipeline {
agent {
// this image provides everything needed to run Cypress
docker {
image 'cypress/base:10'
}
}
stages {
// first stage installs node dependencies and Cypress binary
stage('build') {
steps {
// there are a few default environment variables on Jenkins
// on local Jenkins machine (assuming port 8080) see
// http://localhost:8080/pipeline-syntax/globals#env
echo "Running build ${env.BUILD_ID} on ${env.JENKINS_URL}"
sh 'npm ci'
sh 'npm run cy:verify'
}
}
stage('start local server') {
steps {
// start local server in the background
// we will shut it down in "post" command block
sh 'nohup npm run start &'
}
}
// this stage runs end-to-end tests, and each agent uses the workspace
// from the previous stage
stage('cypress parallel tests') {
environment {
// Because parallel steps share the workspace they might race to delete
// screenshots and videos folders. Tell Cypress not to delete these folders
CYPRESS_trashAssetsBeforeRuns = 'false'
}
// https://jenkins.io/doc/book/pipeline/syntax/#parallel
parallel {
// start several test jobs in parallel, and they all
// will use Cypress Split to load balance any found spec files
stage('set A') {
steps {
echo "Running build ${env.BUILD_ID}"
sh "npx cypress run --env split=2,splitIndex=0"
}
}
// second thread runs the same command
stage('set B') {
steps {
echo "Running build ${env.BUILD_ID}"
sh "npx cypress run --env split=2,splitIndex=1"
}
}
}
}
}
post {
// shutdown the server running in the background
always {
echo 'Stopping local server'
sh 'pkill -f http-server'
}
}
}
If you are running N containers in parallel, pass the zero-based index and the total number to the plugin using the environment variables SPLIT_INDEX and SPLIT, or via the Cypress env option:
# using process OS environment variables
job1: SPLIT=3 SPLIT_INDEX=0 npx cypress run
job2: SPLIT=3 SPLIT_INDEX=1 npx cypress run
job3: SPLIT=3 SPLIT_INDEX=2 npx cypress run
# using Cypress env option
job1: npx cypress run --env split=3,splitIndex=0
job2: npx cypress run --env split=3,splitIndex=1
job3: npx cypress run --env split=3,splitIndex=2
This plugin finds the Cypress specs using find-cypress-specs and then splits the list into chunks using the machine index and the total number of machines. On some CIs (GitLab, Circle), the machine index and the total number of machines are available in the environment variables. On other CIs, you have to be explicit and pass these numbers yourself.
// it works something like this:
setupNodeEvents(on, config) {
const allSpecs = findCypressSpecs()
// allSpecs is a list of specs
const chunk = getChunk(allSpecs, k, n)
// chunk is a subset of specs for this machine "k" of "n"
// set the list as the spec pattern
// for Cypress to run
config.specPattern = chunk
return config
}
Suppose you want to run only some of the specs, for example just the changed specs. You would compute the list of specs and then call the Cypress run command with the --spec parameter:
$ npx cypress run --spec "spec1,spec2,spec3"
You can still split these specs across several machines using cypress-split; just move the --spec list (or duplicate it) into a process or Cypress env variable spec:
# using process environment variables split all specs across 2 machines
$ SPEC="spec1,spec2,spec3" SPLIT=2 SPLIT_INDEX=0 npx cypress run --spec "spec1,spec2,spec3"
$ SPEC="spec1,spec2,spec3" SPLIT=2 SPLIT_INDEX=1 npx cypress run --spec "spec1,spec2,spec3"
# using Cypress "env" option
$ npx cypress run --env split=2,splitIndex=0,spec="spec1,spec2,spec3"
$ npx cypress run --env split=2,splitIndex=1,spec="spec1,spec2,spec3"
# for CIs with automatic index detection
$ npx cypress run --env split=true,spec="spec1,spec2,spec3"
To see diagnostic log messages from this plugin, set the environment variable DEBUG=cypress-split.
Author: Gleb Bahmutov <gleb.bahmutov@gmail.com> © 2023
- @bahmutov
- glebbahmutov.com
- blog
- videos
- presentations
- cypress.tips
- Cypress Tips & Tricks Newsletter
- my Cypress courses
License: MIT - do anything with the code, but don't blame me if it does not work.
Support: if you find a problem, open an issue in this repository. Consider sponsoring my open-source work.