Fix and change a11y tests #1687
Conversation
cc @trallard
Added only a minor comment, but this LGTM. This approach also seems a lot more maintainable than the previous one, so this is a +1.
Will go ahead and approve, then wait a day or so before merging to allow for any other comments.
this looks like a nice improvement, thanks @gabalafou!
Long-term, I might consider splitting out the mapping of pagename -> expected/tolerated violation into a dict or other data structure that is external to the function, or maybe even storing it as a separate JSON or YAML file. I usually find that easier to maintain than the current design, where the various exceptional url_pathnames are written into the function code body.
That said, the landmark-unique situation is complicated and might not be easy to abstract out in the way I've just described. So I think for now let's go with what you've got here.
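A minimal sketch of what that suggestion could look like, assuming a hypothetical `expected_violations.yml` file kept next to the test module; the file name, page paths, and rule IDs below are illustrative, not taken from this PR:

```python
from pathlib import Path

import yaml

# expected_violations.yml might map url_pathname -> tolerated Axe-core rule IDs:
#   /examples/pydata.html:
#     - color-contrast
#   /user_guide/theme-elements.html:
#     - landmark-unique
EXPECTED = yaml.safe_load(
    (Path(__file__).parent / "expected_violations.yml").read_text()
)


def tolerated_rules(url_pathname: str) -> set[str]:
    """Axe-core rule IDs that are tolerated on a given page."""
    return set(EXPECTED.get(url_pathname, []))
```

Keeping the mapping in a data file would let contributors add or remove tolerated violations without touching the test logic, at the cost of the extra indirection noted above for cases like landmark-unique.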
Co-authored-by: Tania Allard <taniar.allard@gmail.com>
* Fix and change a11y tests
* Apply suggestions from code review
Co-authored-by: Tania Allard <taniar.allard@gmail.com>
* fix lint
---------
Co-authored-by: Daniel McCloy <dan@mccloy.info>
Co-authored-by: Tania Allard <taniar.allard@gmail.com>
Some of the passing accessibility tests broke because of the way I had implemented the data regression fixtures for pytest-regressions in #1501.
Specifically, some of the data regression fixture files contained the Axe-core version, so when Axe-core was updated, those fixtures became out of sync with the test run.
I decided to scrap the data regression fixture files and instead define a function in the test file that filters out violations for the Axe-core rules we want to ignore. Using a function is ultimately more flexible.
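A rough sketch of that filtering idea, not the exact function in this PR; the rule IDs and the path check are placeholders, and the real handling of landmark-unique is more involved:

```python
def filter_ignored_violations(violations: list[dict], url_pathname: str) -> list[dict]:
    """Return only the Axe-core violations the test should still fail on."""
    # Rule IDs tolerated on every page (illustrative, not the real list).
    ignored = {"color-contrast"}

    # Page-specific exceptions written into the function body, which is the
    # design discussed in the review comment above.
    if url_pathname.endswith("/admonitions.html"):
        ignored |= {"landmark-unique"}

    return [v for v in violations if v["id"] not in ignored]
```

The test can then assert that nothing unexpected remains, e.g. `assert filter_ignored_violations(results["violations"], url_pathname) == []`, and the assertion no longer depends on the Axe-core version string the old fixture files recorded.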