
[Test automation] Fix pytest collection errors and schedule daily CI/CD workflow run #737

Open
wants to merge 59 commits into main
Conversation

ConnorHack
Contributor

What does this PR do?

This PR introduces a new cron job scheduled to run daily at 2:30 am EST (a sketch of such a schedule trigger is included below). It also makes the following small changes:

  • Fix an input reference.
  • Remove the ollama installation dependency, which is no longer needed after the test framework refactoring.
  • Update to v4 of the GitHub Actions artifact actions.

This PR also fixes some collection errors that were found when testing.
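For reference, a minimal sketch of what a daily schedule trigger and a v4 artifact upload step could look like is shown below. This is illustrative only: the workflow name, job, and paths are assumptions, not the actual files touched by this PR. Note that GitHub Actions evaluates cron expressions in UTC, so 2:30 am EST corresponds to 07:30 UTC.

```yaml
# Illustrative sketch only -- the workflow name, job, and paths below are
# assumptions, not the actual contents of the workflow changed in this PR.
name: daily-tests

on:
  schedule:
    # GitHub Actions cron is evaluated in UTC; 07:30 UTC == 2:30 am EST
    - cron: "30 7 * * *"
  workflow_dispatch: {}  # allow manual runs for debugging

jobs:
  run-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run pytest
        run: pytest tests/ --junitxml=test-results.xml
      - name: Upload test results
        if: always()
        # v4 of the artifact action, as referenced in the PR description
        uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: test-results.xml
```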

Test Plan

Workflow execution was done within a synced fork of the llama-stack repository.

The pre-commit job was also run.

ConnorHack and others added 30 commits on November 6, 2024 at 09:17
@facebook-github-bot added the "CLA Signed" label (managed by the Meta Open Source bot) on Jan 9, 2025
@sixianyi0721
Contributor

sixianyi0721 commented Jan 10, 2025

Hey @ConnorHack, from the log of this CI run it seems tests were not passing (e.g. the "Run Tests: Loop" step). How should we interpret the result? Can you help me understand how the CI run result would be informative to developers?

@raghotham changed the title from "Fix pytest collection errors and schedule daily CI/CD workflow run" to "[Test automation] Fix pytest collection errors and schedule daily CI/CD workflow run" on Jan 10, 2025
@ConnorHack
Contributor Author

@sixianyi0721

Hey @ConnorHack, from the log of this CI run it seems tests were not passing (e.g. the "Run Tests: Loop" step). How should we interpret the result? Can you help me understand how the CI run result would be informative to developers?

If you're referring to the ChildFailedErrors: these are not actual errors. The (exit code: 5) failure occurs because no tests are collected, so this shouldn't be seen as tests not passing.
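For context, pytest reserves a dedicated exit status for this case: exit code 5 means "no tests were collected," as opposed to exit code 1, which means collected tests failed. A hypothetical CI step that distinguishes the two could look like the sketch below; the step name, test path, and `-k` filter are illustrative assumptions, not the actual workflow contents.

```yaml
- name: Run Tests (illustrative)
  run: |
    # pytest exit codes: 0 = all passed, 1 = failures, 5 = no tests collected.
    # An empty collection therefore shows up as "(exit code: 5)" in the log
    # even though no test actually failed.
    status=0
    pytest llama_stack/providers/tests/ -k "meta_reference and llama_3b" || status=$?
    if [ "$status" -eq 5 ]; then
      echo "No tests collected for this provider/model combination."
    elif [ "$status" -ne 0 ]; then
      exit "$status"
    fi
```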

As for how to interpret the result...
[screenshot of the CI test results]

...I need to take another look at what fixtures are enabled with the provider meta_reference and model llama_3b. Both test_model_registration.py and test_text_inference.py used to be collected and tested with these decorators, but this changed a while ago and I need to investigate why.

So:

Can you help me understand how the CI run result would be informative to developers?

TL;DR: It isn't helpful in its current state, but it will be once I identify which tests should be selected via the pytest parameters provider_id and llama_3b, so that tests are both collected and selected.
