Added Sanity Automation Test for SFT - Takes 5 mins to run #177
base: main
Conversation
Cursor Bugbot has reviewed your changes and found 4 potential issues.
| "\n", | ||
| "# Select a subset of the dataset for demo purposes\n", | ||
| "train_dataset=dataset[\"train\"].select(range(32))\n", | ||
| "eval_dataset=dataset[\"train\"].select(range(24,32))\n", |
Notebook eval dataset overlaps with training dataset
Medium Severity
The notebook's eval_dataset uses range(24, 32) which fully overlaps with train_dataset using range(32) (indices 0–31). This is data leakage — the model is evaluated on examples it trained on, producing artificially inflated metrics. The .py script correctly uses range(32, 40) for eval, so the notebook is inconsistent and wrong.
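A minimal fix, assuming the notebook should mirror the .py script's disjoint split (`dataset` as defined earlier in the notebook):

```python
# Disjoint slices: train on indices 0-31, evaluate on held-out indices 32-39,
# matching the .py script.
train_dataset = dataset["train"].select(range(32))
eval_dataset = dataset["train"].select(range(32, 40))
```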
    ) -> bool:
        # Read the RapidFire log once, then split it into lines for scanning.
        with open(log_path) as f:
            content = f.read()
        lines = content.splitlines()
| "text": [ | ||
| "Experiment exp1-chatqa-lite-1 created with Experiment ID: 5 and Metric Experiment ID: 5 at /home/ubuntu/rapidfireai/rapidfire_experiments/exp1-chatqa-lite-1\n" | ||
| ] | ||
| } |
Notebook outputs are from a different experiment
Low Severity
The notebook's saved cell outputs come from a different experiment named exp1-chatqa-lite-1, but the code now creates exp1-chatqa-sanity-1. The outputs also show "Started 4 worker processes" while the sanity test is designed for 1 GPU. These stale outputs from a prior run are misleading as documentation of expected behavior.
| " logging_steps=2,\n", | ||
| " eval_strategy=\"steps\",\n", | ||
| " eval_steps=4,\n", | ||
| " fp16=True,\n", |
Notebook uses fp16 while script uses bf16
Low Severity
The notebook's RFSFTConfig uses fp16=True while the equivalent .py script uses bf16=True. These are different precision modes that produce different numerical results and have different hardware requirements (bf16 needs Ampere+ GPUs). Since both files represent the same sanity test, this config divergence means the notebook isn't a reliable reference for the script the test actually runs.
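One way to reconcile the two files, assuming `RFSFTConfig` accepts the same `fp16`/`bf16` flags as Hugging Face's `TrainingArguments`, is to pick the precision at runtime:

```python
import torch

# Prefer bf16 on GPUs that support it (Ampere and newer); fall back to fp16
# elsewhere, so the notebook and the .py script share one precision policy.
use_bf16 = torch.cuda.is_available() and torch.cuda.is_bf16_supported()
precision_kwargs = {"bf16": True} if use_bf16 else {"fp16": True}

# config = RFSFTConfig(..., logging_steps=2, eval_strategy="steps",
#                      eval_steps=4, **precision_kwargs)
```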
david-rfai left a comment:
See comments, both mine and Bugbot's, and address them. Also, move `sanity/` under the `tests/` directory in the repo.
Outputs in notebook need to be cleared
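A sketch of one way to clear them programmatically via nbconvert's `ClearOutputPreprocessor` (the `jupyter nbconvert --clear-output --inplace` CLI is equivalent); the notebook path is taken from the file tree under Changes below:

```python
import nbformat
from nbconvert.preprocessors import ClearOutputPreprocessor

nb_path = "sanity/scripts/rf-tutorial-sft-chatqa-sanity-1.ipynb"
nb = nbformat.read(nb_path, as_version=4)
ClearOutputPreprocessor().preprocess(nb, {})  # strip all saved cell outputs
nbformat.write(nb, nb_path)
```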
    SCRIPTS_DIR = SANITY_DIR / "scripts"
    VALIDATIONS_DIR = SCRIPTS_DIR / "validations"
    LOGS_DIR = SANITY_DIR / "logs"
    LOGS_BASE = Path("/home/ubuntu/rapidfireai/logs")  # RapidFire experiment logs (unchanged)
Base this variable off of Path.home()
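A minimal sketch of that change:

```python
from pathlib import Path

# Derive the RapidFire log directory from the running user's home rather than
# hard-coding /home/ubuntu, so the test also works on other machines.
LOGS_BASE = Path.home() / "rapidfireai" / "logs"  # RapidFire experiment logs
```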


Changes
sanity/
├── scripts/
│ ├── rf-tutorial-sft-chatqa-sanity-1.py
│ ├── rf-tutorial-sft-chatqa-sanity-1.ipynb
│ └── validations/
│ └── validate_log_sft.py
├── tests/
│ └── test_rf_sft_sanity.py
└── logs/
└── .gitkeep (test_rf_sft_sanity.log is written here when the test runs)
Performance Impact
No
Note
Medium Risk
Adds a ~5 minute integration test that downloads models/datasets, uses GPUs, and depends on specific log text and paths, which can introduce CI flakiness and environment sensitivity despite being isolated to `sanity/`.

Overview

Adds an end-to-end SFT sanity check under `sanity/` that runs a short RapidFire SFT grid search (4 configs, `max_steps=8`) on a small customer-support dataset slice and includes a companion notebook. Introduces a log validator (`validate_log_sft.py`) and a pytest (`test_rf_sft_sanity.py`) that executes the training script, then asserts the RapidFire `rapidfire.log` contains the expected worker/GPU setup messages, per-run step completion, and graceful worker shutdown; the script forces single-GPU execution via `CUDA_VISIBLE_DEVICES=0` to avoid NCCL/DataParallel issues in CI.

Written by Cursor Bugbot for commit 522f53c. This will update automatically on new commits.
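A minimal sketch of that flow, with hypothetical paths and log snippets standing in for the real ones in `test_rf_sft_sanity.py` and `validate_log_sft.py`:

```python
import os
import subprocess
from pathlib import Path

SCRIPT = Path("sanity/scripts/rf-tutorial-sft-chatqa-sanity-1.py")
RAPIDFIRE_LOG = Path.home() / "rapidfireai" / "logs" / "rapidfire.log"

# Hypothetical stand-ins for the worker/GPU setup, per-run step completion,
# and graceful shutdown messages the real validator asserts on.
EXPECTED_SNIPPETS = ["worker", "completed", "shutdown"]


def validate_log(log_path: Path, expected: list[str]) -> bool:
    """True if every expected snippet appears somewhere in the log."""
    content = log_path.read_text()
    return all(snippet in content for snippet in expected)


def test_rf_sft_sanity() -> None:
    # Force single-GPU execution to sidestep NCCL/DataParallel issues in CI.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES="0")
    subprocess.run(["python", str(SCRIPT)], env=env, check=True, timeout=600)
    assert validate_log(RAPIDFIRE_LOG, EXPECTED_SNIPPETS)
```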