Feature Description
As discussed in #18302 and #14639, Gitea needs a visual testing framework to prevent changes from breaking existing parts of the UI.
I will continue to update this issue over time and aim to run a vote in the next few days if a clear contender doesn't emerge.
Requirements:
- Need to render JS, HTML, Vue, and CSS output and detect differences between the accepted and proposed versions of the UI (a sketch of such a check follows this list)
- Need a way to store the accepted state of the UI (i.e. baseline screenshots)
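To make the requirements a bit more concrete, here is a minimal sketch of what such a check could look like using Playwright's built-in screenshot assertions. It is only an illustration; the URL, file names, and the `.navbar` selector are placeholders, not decisions.

```ts
// tests/visual/dashboard.spec.ts — hypothetical file, assumes a Gitea instance on localhost:3000
import { test, expect } from '@playwright/test';

test('dashboard matches the accepted screenshot', async ({ page }) => {
  await page.goto('http://localhost:3000/');

  // Full-page comparison: the first run records the baseline image,
  // subsequent runs fail if the rendered page drifts from it.
  await expect(page).toHaveScreenshot('dashboard.png', { fullPage: true });

  // Element-level comparison, useful for screencapping specific UI parts.
  await expect(page.locator('.navbar')).toHaveScreenshot('navbar.png');
});
```

Cypress and others can do the same, but generally through a plugin or an external service such as Percy, which is worth capturing in the comparison table below.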
Considerations:
- Cross-platform and mobile testing would be ideal, but might add too many permutations
- The ability to screencap specific UI elements would also be helpful
- Whatever we select, we need to figure out where to store the screenshot files:
  - We can't really commit images directly to the repo because it would simply be too much data
  - One option is Git LFS, but we don't really want to add a new dev dependency
  - Some of the testing frameworks have a "Dashboard" that should host the screenshot images automatically in a database (e.g. Cypress). Need to verify this.
  - Can we use git notes to link to an S3/MinIO bucket that contains the test images? Has anyone tried something like this? (See the sketch after this list.)
- Does this play well with our JS testing framework (i.e. Jest)?
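Nobody has confirmed the git-notes idea yet, so here is a rough sketch of how it could work, purely as a starting point for discussion. Everything specific in it is an assumption: the `visual-baselines` bucket, the `tests/visual/baselines` directory, the environment variable names, and the `visual-tests` notes ref are all placeholders.

```ts
// upload-baselines.ts — hypothetical helper, not part of the current codebase.
// Uploads accepted screenshots to an S3/MinIO bucket and records their object
// keys in a git note, so the repo itself never stores image data.
import { execFileSync } from 'node:child_process';
import { readdirSync, readFileSync } from 'node:fs';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  endpoint: process.env.MINIO_ENDPOINT, // e.g. http://localhost:9000
  region: 'us-east-1',
  forcePathStyle: true, // required for MinIO
  credentials: {
    accessKeyId: process.env.MINIO_ACCESS_KEY ?? '',
    secretAccessKey: process.env.MINIO_SECRET_KEY ?? '',
  },
});

async function main() {
  const commit = execFileSync('git', ['rev-parse', 'HEAD']).toString().trim();
  const dir = 'tests/visual/baselines'; // wherever the accepted screenshots live
  const keys: string[] = [];

  for (const file of readdirSync(dir)) {
    const key = `${commit}/${file}`;
    await s3.send(new PutObjectCommand({
      Bucket: 'visual-baselines',
      Key: key,
      Body: readFileSync(`${dir}/${file}`),
    }));
    keys.push(key);
  }

  // Attach the object keys to the commit under a dedicated notes ref, so
  // `git notes --ref=visual-tests show <commit>` tells CI where to fetch
  // the baselines from.
  execFileSync('git', [
    'notes', '--ref=visual-tests', 'add', '-f',
    '-m', keys.join('\n'), commit,
  ]);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

The repo would then only carry a small text note per commit, and CI could pull the actual images from the bucket on demand.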
Options
| Tool | Description | License | Visual Test | JS Assertions | Pros | Cons |
|---|---|---|---|---|---|---|
| Cypress.io | | | Y | Y | | |
| Playwright | | | Y | Y | | |
| | End-to-end testing, simplified | | N | | | No visual testing |
| Percy | | | | | | |
| BackStopJS | | | Y | ? | | |
TBD
I haven't included Selenium here, as I figured that would require us to build our own testing framework, which, given all of the options above, I don't think is necessary. If someone has examples of a good implementation, I can include it.
Questions
- Can we use the existing integration test framework to emulate an instance of Gitea, or do we need to spin up a new Gitea instance in CI to run the visual tests? For example, we could have a separate Drone pipeline run in parallel that starts an instance of Gitea and runs the front-end tests.
- How many permutations do we need / can we afford? For example: Firefox, Chrome, Safari, and mobile, plus should we test on Windows/Ubuntu/Alpine? Will need to follow up with some tests. (See the config sketch after this list.)
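If we end up with Playwright, both questions above map fairly directly onto its config file: browser and device permutations are declared as "projects", and the `webServer` option can start (or reuse) a running Gitea instance before the tests execute. This is only a sketch; the start command, config path, port, and the exact project list are assumptions.

```ts
// playwright.config.ts — illustrative only; command, paths, and ports are placeholders.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: 'tests/visual',

  // Start (or reuse) a Gitea instance for the tests instead of wiring up a
  // separate service by hand in the CI pipeline.
  webServer: {
    command: './gitea web --config tests/visual/app.ini',
    url: 'http://localhost:3000',
    reuseExistingServer: true,
    timeout: 120_000,
  },

  // Each project is one browser/device permutation; CI cost grows linearly
  // with this list, which is the trade-off raised above.
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
    // Mobile permutations, if we decide we can afford them:
    { name: 'mobile-chrome', use: { ...devices['Pixel 5'] } },
    { name: 'mobile-safari', use: { ...devices['iPhone 12'] } },
  ],
});
```

OS permutations (Windows/Ubuntu/Alpine) would still have to come from the CI matrix rather than from this file.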
Roadmap
TBD