
PoC: A runner to load specs-based test cases from a central service and run it as part of the test automation #825


Status: Draft. Wants to merge 1 commit into base branch dev.

Conversation

rayluo (Collaborator) commented May 30, 2025

This PR contains a runner that can load specs-based test cases from a central service. The schema of those test cases will be defined elsewhere.

The test runner is then integrated into the existing test automation as ONE test case. Its run will end in one of the following three outcomes.

  • If all specs-based test cases pass, you will see a line like this in the pipeline log:
    tests/test_smile.py . [ xx%]
    That small dot after the file name means the specs-based test cases, as a whole, passed.
    We don't normally need to look for that log line because, if all the traditional existing test cases also pass, the build will simply end with a green status, as it normally would.
  • If some of the specs-based test cases fail, you will see a line like this in the pipeline log:
    tests/test_smile.py F [ xx%]
    That capital F after the file name means some of the specs failed and, as a result, the entire test automation run will be marked as failed.
  • If for whatever reason the centralized specs-based test case service is down, this PR automatically skips the specs-based test cases (see the sketch after this list). You will see a line like this in the pipeline log:
    tests/test_smile.py s [ xx%]
    That lowercase s after the file name means the specs-based test cases were skipped.
    This way, occasional central test service maintenance will not block the existing test automation.
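
A minimal sketch, assuming standard pytest semantics, of how one wrapper test could surface all three markers. The URL, the timeout, and the "cases" schema key below are illustrative assumptions, not this PR's actual code:

import pytest
import requests
import yaml

TESTCASE_URL = "https://example.invalid/specs.yml"  # hypothetical placeholder

def test_specs_based_cases():
    try:
        response = requests.get(TESTCASE_URL, timeout=10)
        response.raise_for_status()  # treat HTTP errors like an outage
    except requests.RequestException:
        pytest.skip("Central spec service is unavailable")  # shown as "s"
    spec = yaml.safe_load(response.text)
    for case in spec.get("cases", []):  # "cases" is an assumed schema key
        ...  # run each case; an AssertionError here marks the wrapper test "F"
    # Reaching the end without an exception marks the wrapper test "."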

The test runner currently supports test cases for managed identity v1. Going forward, this repo will receive further changes to the test runner to support more scenarios, such as CCA. The actual specs-based test cases will continue to be hosted centrally, elsewhere.

try:
    with requests.get(self.testcase_url) as response:
        response.raise_for_status()
        self.test_spec = yaml.safe_load(response.text)
except requests.RequestException:
    # Per the PR description, a service outage skips the specs-based tests
    pytest.skip("Central test case service is unavailable")

Member commented:

We discussed that the tests would be written in plain language (English)?

Interprets testcase file(s) to create and execute test cases using MSAL.

Initially created by the following prompt:
Write a Python implementation that can read content from feature.yml, create variables whose names are defined by the "arrange" mapping's keys and whose values are derived from the "arrange" mapping's values; interpret those values as if they were Python snippets using the MSAL library.

Member commented:

Where is feature.yml?
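
A minimal sketch of what the quoted docstring describes, assuming the "arrange" mapping lives at the top level of feature.yml and that each of its values is a Python expression; this is an inference from the prompt, not this PR's actual interpreter:

import msal
import yaml

def arrange(path="feature.yml"):
    # Load the spec and build one variable per key of the "arrange" mapping
    with open(path) as f:
        spec = yaml.safe_load(f)
    namespace = {"msal": msal}  # snippets may reference the MSAL library
    for name, snippet in spec.get("arrange", {}).items():
        # Each value is evaluated as a Python expression; earlier variables
        # are visible to later snippets because they share one namespace
        namespace[name] = eval(snippet, namespace)
    return namespace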
