
Add regression tests for voice commands #449

Closed
@pokey

Description


We'd like to ensure that voice commands don't break. Probably the most comprehensive way to do this would be to convert all of our recorded Cursorless VSCode tests into voice command tests. We'd basically take the spokenForm field of each test, call mimic on it, and ensure that the resulting output matches the command in the test. We'd also need some bulk processing to fix the spoken forms that didn't come from knausj.
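A minimal sketch of that core check, assuming an illustrative fixture shape (the schema here and the `mimic` stand-in are invented for illustration, not the real Cursorless recorded-test format or Talon API):

```python
# Sketch of the core regression check: speak a recorded test's spokenForm
# via mimic and assert that the resulting payload matches the recorded
# command. `mimic` is a plain callable stand-in for Talon's actions.mimic;
# the fixture shape below is illustrative, not the real recorded-test schema.

def run_spoken_form_test(fixture: dict, mimic) -> None:
    spoken_form = fixture["spokenForm"]
    expected = fixture["command"]
    actual = mimic(spoken_form)  # payload that would have been sent to VSCode
    assert actual == expected, f"{spoken_form!r}: {actual} != {expected}"

# Illustrative fixture (invented values)
fixture = {
    "spokenForm": "take air",
    "command": {"action": "setSelection", "targets": [{"mark": "a"}]},
}

# Stand-in for a real Talon mimic call that captures the outgoing payload
def fake_mimic(spoken: str) -> dict:
    return {"action": "setSelection", "targets": [{"mark": "a"}]}

run_spoken_form_test(fixture, fake_mimic)
```

In a real run the payload would be captured by overriding the Talon action that sends commands to VSCode, rather than returned from a stub.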

This effort would require talonvoice/talon#375 in order to run in CI, but we'll start by getting some tests running locally to serve as a starting point for a conversation around our needs for a Talon CI runner.

See also knausj test setup, esp:

It may also be worth having a look at contract / hypothesis testing.

Approach

  • Add a Cursorless extension VSCode command cursorless.generate_spoken_form_tests that iterates over recorded test cases and populates a tmp directory with spoken form tests, returning the path to that directory. Might look at `export async function updateDefaults(spokenFormInfo: CheatsheetInfo)` for inspiration
  • Add a Talon tag user.cursorless_default_vocabulary that switches all spoken forms to their default forms (e.g. stock knausj alphabet, no modifier overrides)
  • Add Talon action that does the following:
    1. Calls the cursorless.generate_spoken_form_tests VSCode action
    2. Enables user.cursorless_default_vocabulary
    3. Enables a tag that overrides the cursorless command actions from https://github.com/cursorless-dev/cursorless/blob/main/cursorless-talon/src/command.py to just assert that the payload about to be sent matches expectation, and then doesn't send the action
    4. Iterates through the generated tests and runs them using mimic on the spoken form, setting up the payload checker with the expected payload. Possibly we should run step 3 once per test case
    5. Disable both tags above
  • [/] Possibly add spoken form for above action to cursorless-talon-developer
  • For tests that fail due to our own custom spoken forms, we will normalize them in the PR so that they're correct in the actual recorded test. We can do that with a custom transformation that keeps a dictionary mapping our spoken forms to the defaults, adding entries to the dict as tests fail. We could call it normalizeSpokenForms and check it into version control so that we can reuse it in the future if spoken forms change for some reason
  • [/] Make it so that "cursorless record" enables the user.cursorless_default_vocabulary tag so that we don't end up recording tests with bad spoken forms in the future
  • [/] Add checkbox to PR template saying "I have run spoken form tests"
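The five-step Talon action above could be sketched roughly as follows. All names here (PayloadChecker, enable_tag, the second tag name) are invented for illustration; a real implementation would live in Talon actions using the Module/Context APIs rather than plain Python:

```python
# Rough sketch of steps 1-5: enable the two tags, run each generated test
# through mimic with a payload checker installed, then disable the tags.
# PayloadChecker, enable_tag/disable_tag, and the payload-checker tag name
# are all invented for this sketch.

class PayloadChecker:
    """Replaces the real 'send to VSCode' action: assert, don't send."""

    def __init__(self):
        self.expected = None
        self.failures = []

    def expect(self, payload: dict) -> None:
        self.expected = payload

    def send(self, payload: dict) -> None:
        # Called in place of the real command action from command.py
        if payload != self.expected:
            self.failures.append((self.expected, payload))

def run_spoken_form_tests(tests, checker, mimic, enable_tag, disable_tag):
    enable_tag("user.cursorless_default_vocabulary")
    enable_tag("user.cursorless_payload_checker")  # hypothetical tag name
    try:
        for test in tests:
            checker.expect(test["command"])
            mimic(test["spokenForm"])  # ends up calling checker.send(...)
    finally:
        disable_tag("user.cursorless_payload_checker")
        disable_tag("user.cursorless_default_vocabulary")
    return checker.failures
```

The try/finally mirrors step 5: both tags get disabled even if a test blows up mid-run, so a failed run doesn't leave the user stuck with the default vocabulary enabled.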
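The normalizeSpokenForms transformation could start as simply as the sketch below; the dictionary entries are invented examples, and the real mapping would grow as tests fail:

```python
# Sketch of normalizeSpokenForms: a checked-in map from our custom spoken
# forms to the knausj defaults, applied word-by-word to a recorded test's
# spoken form. The entries are invented examples, not real overrides.

NORMALIZE_SPOKEN_FORMS = {
    "grab": "take",   # invented custom alias for a default spoken form
    "toss": "chuck",  # invented custom alias for a default spoken form
}

def normalize_spoken_form(spoken: str) -> str:
    """Replace each custom word with its default spoken form."""
    return " ".join(
        NORMALIZE_SPOKEN_FORMS.get(word, word) for word in spoken.split()
    )

# e.g. normalize_spoken_form("grab air") -> "take air"
```

A word-by-word map won't cover multi-word spoken forms; if those come up, the dictionary could map phrases instead, at the cost of a slightly more involved replacement pass.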

Metadata

Labels

  • code quality: Improvements to code quality
  • talon: Related to cursorless-talon
