Conversation
…esettings with the help of pyproject.toml and a lock file, using ruff as default code-linter and formatter, customizing seaborn for more eye-candy plots, optimizing discord bot code with asynchronous code, buffer file sharing, LRU-cache implementation and CDN-exploits
…ed source-code link to discord-bot
📝 Walkthrough

Adds uv-based tooling (pyproject.toml, uv flows), new GitHub Actions workflows (main/test/release), refactors analytics for parallel loading and cached image bytes, changes the Discord bot display to a cache-backed generator flow, standardizes quoting across the codebase, and updates documentation and CI configs.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Analytics as analytics.py
    participant DataSrc as Data Sources
    participant Cache as LRU Cache
    participant ImageGen as ImageGen
    Client->>Analytics: plot_marks_by_campus(campus)
    activate Analytics
    Analytics->>Cache: check _get_campus_plot_bytes(campus)
    alt Cache Hit
        Cache-->>Analytics: cached image bytes
    else Cache Miss
        Analytics->>DataSrc: load_data_parallel(path_pattern)
        DataSrc-->>Analytics: CSV data
        Analytics->>ImageGen: _tabulate_to_bytes(filtered_data)
        ImageGen-->>Analytics: PNG byte stream
        Analytics->>Cache: store bytes
    end
    Analytics-->>Client: BytesIO(image_bytes) or None
    deactivate Analytics
```
```mermaid
sequenceDiagram
    participant User as Discord User
    participant Bot as main.py
    participant Cache as URL Cache
    participant Analytics as analytics (generator)
    participant DiscordAPI as Discord API
    User->>Bot: /plot campus_name
    activate Bot
    Bot->>Cache: get_cached_url(cache_key)
    alt Cache Hit
        Cache-->>Bot: cached_url
        Bot->>User: send embed with cached_url + disclaimer
    else Cache Miss
        Bot->>Analytics: run generator_func -> BytesIO
        Analytics-->>Bot: image BytesIO
        Bot->>DiscordAPI: upload file
        DiscordAPI-->>Bot: message with attachment URL
        Bot->>Cache: save_url_to_cache(cache_key, url)
        Bot->>User: send message with attachment + disclaimer
    end
    deactivate Bot
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches

🧪 Generate unit tests (beta)

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 12
🤖 Fix all issues with AI agents
In @.github/workflows/release.yml:
- Around line 19-37: The checkout step uses
fromJson(steps.release.outputs.pr).head_sha which creates a detached HEAD and
prevents git push; change the checkout ref to
fromJson(steps.release.outputs.pr).headBranchName (the PR branch name) in the
actions/checkout@v4 step and update the push command in the "update uv lockfile"
run block to explicitly push back to that branch (use the same headBranchName
value as the push target, e.g., push to origin HEAD:refs/heads/<headBranchName>)
so git push succeeds from a non-detached branch.
In @.github/workflows/test.yml:
- Around line 10-12: Update the workflow token permissions to least privilege by
changing the permissions entries for 'contents' and 'pull-requests' from write
to read; locate the permissions block that currently contains "contents: write"
and "pull-requests: write" and replace them with "contents: read" and
"pull-requests: read" so the test workflow only has read access.
In `@CONTRIBUTING.md`:
- Around line 92-134: Fix typos and awkward wording in CONTRIBUTING.md: correct
misspellings like "exisitng" → "existing" and "criterias" → "criteria",
standardize backtick/code formatting for filenames (data_pipeline.py,
helper_notebook.ipynb, predictions.py, main.py, analytics.py, /services,
/database), reword unclear phrases such as "bot's" to "bot" or "bot's code" and
"There is quite a bit of work to do here, but that will be halted till 3.0.0
release" to a clearer sentence about postponing work until v3.0.0, and fix
punctuation/grammar in the sections describing Roadmap, issue guidelines, and
the Pipeline so the document reads cleanly and consistently.
In `@data_pipeline.py`:
- Around line 19-23: The code assumes div = soup.find("div", id="2025-2026") is
present before calling div.find_all("table"), which will raise AttributeError if
the yearly id changes; update the code around the div variable to guard for a
missing cutoff container (check if div is None), and handle it explicitly (e.g.,
log a clear error/warning, raise a descriptive exception, or return an empty
data list) before calling div.find_all("table") so downstream code (the tables
processing) is not executed on None.
In `@main.py`:
- Around line 330-345: The error messages and usage/help strings currently
reference the wrong command name `!!exam` while the actual command is `!!time`;
update all user-facing strings in this block (the two ctx.send calls that prompt
for a date and handle ValueError) to use `!!time` and adjust examples
accordingly (e.g., `!!time -s DD-MM-YYYY`), ensuring the messages around parsing
date_str and the exam_tracker.set_exam_date call remain unchanged.
- Around line 39-56: The cache check in display() (function display and its use
of anal.get_cached_url) happens after the generator has already been scheduled
by callers, causing wasted executor work on cache hits; change the call pattern
so callers pass an un-started coroutine or a callable (not an already-submitted
Future/task) as generator_func, and move the executor scheduling/creation inside
display() after the cached_url check: if no cache, schedule and await the
generator (e.g., run_in_executor or await generator_func() / await
asyncio.create_task(generator_coroutine)) to produce image_buffer; update all
call sites that currently submit to the executor before calling display() to
instead pass the coroutine/callable so scheduling only occurs on cache miss.
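The lazy-scheduling pattern described above can be sketched as follows. `display`, the module-level `_url_cache` dict, and the `embed:`/`upload:` return strings are illustrative stand-ins for the bot's actual helpers (`anal.get_cached_url`, the Discord upload step), not its real API; the point is that the blocking generator runs only on a cache miss.

```python
import asyncio
import io
from typing import Callable

_url_cache: dict[str, str] = {}  # stand-in for the bot's URL cache backend

async def display(cache_key: str, generator_func: Callable[[], io.BytesIO]) -> str:
    cached = _url_cache.get(cache_key)
    if cached is not None:
        # Cache hit: return immediately, no executor work is scheduled.
        return f"embed:{cached}"
    loop = asyncio.get_running_loop()
    # Only now pay for the blocking plot generation, off the event loop.
    buffer = await loop.run_in_executor(None, generator_func)
    # Placeholder for "upload to Discord and read the attachment URL back".
    url = f"cdn://{cache_key}/{len(buffer.getvalue())}"
    _url_cache[cache_key] = url
    return f"upload:{url}"
```

Call sites then pass the un-started callable (`await display(key, lambda: make_plot(...))`) instead of a Future that was already submitted to the executor.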
- Around line 140-141: The parameter type annotations for command functions
(e.g., async def plot(ctx, *, args: str = None)) use an implicit Optional;
update them to explicit union syntax by changing args: str = None to args: str |
None = None (do the same for the other command functions that declare args with
a default None) so Ruff RUF013 is satisfied and the type is explicit.
In `@predictions.py`:
- Line 33: The code at line where df =
pd.read_csv("data/model_data/cutoff_2025.csv") will raise FileNotFoundError
because the path/data file doesn't exist; fix by either adding the missing CSV
into the repo at data/model_data/cutoff_2025.csv or updating the pd.read_csv
call in predictions.py to point to an existing dataset path (or a configurable
path/env var), and add a defensive check around the read (e.g., os.path.exists
or try/except) to raise a clear error message referencing the variable df and
the predictions.py loader so callers know which file is missing.
In `@README.md`:
- Around line 92-94: The contributor image tag (<img
src="https://contrib.rocks/image?repo=PranavU-Coder/bitsatards_bot">) in
README.md is missing alt text; add a descriptive alt attribute to the img
element (for example: a short phrase like "Contributors to bitsatards_bot on
GitHub") so the image is accessible and satisfies MD045.
- Line 68: The README contains a broken Markdown link " [dataset](!https://...)"
— remove the stray "!" and replace with proper Markdown link syntax so it reads
[dataset](https://www.kaggle.com/datasets/pranavunni/bitsat-cutoff-dataset-2017-2025);
update the line in README.md where the dataset link appears to use that
corrected form.
- Line 79: Replace the solitary hyphen-style horizontal rule '---' with the
project's chosen markdownlint-compliant style (e.g., '***' or '___') to satisfy
MD035 and ensure consistency; update the horizontal rule token '---' to the
selected alternative everywhere in the README (or normalize all horizontal rules
to the same character) so the file conforms to the markdownlint rule.
In `@tests/services_test.py`:
- Around line 16-28: The tests test_set_exam_date_today and test_past declare an
unused fixture parameter mock_repo which triggers Ruff ARG002; remove the unused
parameter or rename it to _mock_repo in the function signatures (e.g., def
test_set_exam_date_today(self, mock_date_class, tracker, _mock_repo):) so the
unused mock_repo is ignored—update both test_set_exam_date_today and test_past
accordingly to eliminate the lint warning while keeping the
`@patch("services.exam_tracker.date")` mock_date_class and existing calls to
tracker.set_exam_date intact.
🧹 Nitpick comments (7)
pyproject.toml (1)
1-23: Move dev-only tooling to optional dependencies per PEP 621.
`pytest`, `pytest-mock`, and `ruff` should be in `[project.optional-dependencies]` under a `dev` group instead of runtime `dependencies`, reducing the production install footprint. This follows PEP 621 best practices.

♻️ Suggested refactor

```diff
 [project]
 name = "bitsatards-bot"
 version = "2.5.0"
 description = "Official project-typesettings for the bitsatards-bot project."
 readme = "README.md"
 requires-python = ">=3.14"
 dependencies = [
     "beautifulsoup4>=4.14.3",
     "discord-py>=2.6.4",
     "lxml>=6.0.2",
     "matplotlib>=3.10.8",
     "numpy>=2.4.2",
     "pandas>=3.0.0",
     "psycopg2-binary>=2.9.11",
-    "pytest>=9.0.2",
-    "pytest-mock>=3.15.1",
     "python-dotenv>=1.2.1",
     "requests>=2.32.5",
-    "ruff>=0.14.14",
     "scikit-learn>=1.8.0",
     "seaborn>=0.13.2",
     "sqlalchemy>=2.0.46",
 ]
+
+[project.optional-dependencies]
+dev = [
+    "pytest>=9.0.2",
+    "pytest-mock>=3.15.1",
+    "ruff>=0.14.14",
+]
```

predictions.py (2)
42-42: Avoid writing the same CSV twice.

`predict_cutoffs` already writes `output_file`; the extra `to_csv` duplicates I/O.

♻️ Suggested cleanup

```diff
-predictions_2026.to_csv("predict/most_likely_case.csv", index=False)
```
5-9: Consider using joblib or safer serialization instead of pickle.

`pickle.load()` can execute arbitrary code. While the current model files are version-controlled and internal, using `joblib.load()` or another format (e.g., ONNX, safetensors) would be safer and more maintainable as the project evolves.
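Note that `joblib.load()` still uses pickle under the hood, so it carries the same arbitrary-code caveat for untrusted input; formats like ONNX avoid it entirely. As a stdlib-only interim hardening, a restricted unpickler can whitelist the globals a model file is allowed to reference. The `ALLOWED` set below is illustrative: a real model file would need its actual classes (e.g., scikit-learn estimators) listed.

```python
import io
import pickle

# Whitelist of (module, name) pairs the pickle stream may reference.
ALLOWED = {
    ("builtins", "dict"),
    ("builtins", "list"),
    # e.g. ("sklearn.linear_model._base", "LinearRegression"),  # hypothetical entry
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Called for every GLOBAL/STACK_GLOBAL opcode; reject anything
        # outside the whitelist instead of importing it.
        if (module, name) not in ALLOWED:
            raise pickle.UnpicklingError(f"blocked global: {module}.{name}")
        return super().find_class(module, name)

def safe_load(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Plain containers load normally; any stream that tries to import a non-whitelisted callable or class is rejected before it runs.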
analytics.py (4)

38-47: Avoid one bad CSV aborting the entire parallel load.

Line 45 will raise on any single `read_csv` failure, stopping the whole load. Consider per-file error handling and filtering invalid reads.

🔧 Suggested hardening for parallel reads

```diff
 def load_data_parallel(path_pattern):
     files = glob.glob(path_pattern)
     if not files:
         print(f"warning: no files found matching {path_pattern}")
         return pd.DataFrame(columns=["campus", "branch", "marks", "year"])

+    def _safe_read_csv(path):
+        try:
+            return pd.read_csv(path)
+        except (OSError, pd.errors.ParserError, UnicodeDecodeError) as exc:
+            print(f"error reading {path}: {exc}")
+            return None
+
     with concurrent.futures.ThreadPoolExecutor() as executor:
-        results = list(executor.map(pd.read_csv, files))
+        results = list(executor.map(_safe_read_csv, files))

-    return pd.concat(results, ignore_index=True)
+    results = [df for df in results if df is not None]
+    if not results:
+        return pd.DataFrame(columns=["campus", "branch", "marks", "year"])
+    return pd.concat(results, ignore_index=True)
```
89-97: Narrow the exception handling for prediction file reads.

Line 96 catches `Exception`, which can hide unrelated bugs. Prefer specific exceptions and log unexpected errors.

🧭 Suggested exception narrowing

```diff
-    except Exception as e:
-        print(f"error loading {filepath}: {e}")
+    except (OSError, pd.errors.ParserError, UnicodeDecodeError) as e:
+        print(f"error loading {filepath}: {e}")
```
114-115: Silence unused `fig` bindings to satisfy Ruff.

`fig` isn't used in these blocks. Prefix with `_` (or use `fig.savefig`) to avoid RUF059 noise.

🧹 Suggested cleanup

```diff
-    fig, ax = plt.subplots(figsize=(10, fig_height))
+    _, ax = plt.subplots(figsize=(10, fig_height))

-    fig, ax = plt.subplots(figsize=(12, 8), dpi=150)
+    _, ax = plt.subplots(figsize=(12, 8), dpi=150)

-    fig, ax = plt.subplots(figsize=(10, 6), dpi=150)
+    _, ax = plt.subplots(figsize=(10, 6), dpi=150)
```

Also applies to: 168-169, 214-215
358-367: Consider bounding URL cache growth.

`URL_CACHE` is unbounded and could grow over long uptimes. Consider a max size or TTL to cap memory.
Before contributing, review the [Roadmap](https://github.com/users/PranavU-Coder/projects/9) first to understand current priorities and exisitng issues that are being worked on.

For raising issues, they must follow these criterias to be considered:

if you want any feature to be implemented please state **explicitly why exactly it is important** under the [issues](https://github.com/PranavU-Coder/bitsatards_bot/issues) tab by creating an issue under the same mentioned, similarly in bug-related issues please state where exactly is the bug occurring and how to reproduce from the developer's side so it can be worked upon to fix it.

Feature Requests: You must explicitly justify why the feature is necessary to be implemented.

Bug Reports: You must provide a clear "Steps to Reproduce" section (check figure below). If the developer cannot replicate the bug based on your description, the issue will be marked as invalid.

Out of Scope (These will not be considered): Do **NOT** raise issues regarding the website's UI or functionality. This codebase is a **MONO-REPO** and is hence independent of the web project; such issues will be closed without review.

[figure]

**PLEASE NOTE**:

(1) any issue with website's ui or functionality shouldn't be raised and as such any issues raised on that request will **NOT** be reviewed and closed as this codebase is independent from the one in which the website is going to be made in.

This bot is currently self-hosted through my old laptop which doubles as a server rather than a third-party cloud-service provider which leads to uptime problems.

With hostel network dependencies, there is no way to resolve downtime-issues specially if I am not near the laptop which is hosting the bot.

(2) discord bot being inactive at some portions of the day, now the reason why I chose to clear this up right now is because **I AM NOT USING ANY THIRD-PARTY CLOUD SERVICE PROVIDER** for the continuous deployment and execution of the discord bot and am rather using my old laptop as a full-time server to run my programs 24/7 out of which this bot is one of them, once I return back to hostel there would be issues of running this bot particularly given how bad my hostel wifi is (since it frequently goes out) and as such if I'm not available when this occurs there is no way for me to fix it immediately.

**Possible Solution**: While orchestration for backup service providers is a goal (using good MLOps principles), it has not yet been implemented.

there is a fix to the second issue by adopting good MLOps principles and orchestrating deployment of a backup service provider when one goes out, however at the time of writing this I'm not skilled enough to navigate through this and implement it.

Please do **NOT** open issues regarding temporary bot inactivity. These are known infrastructure limitations, not code problems.

## Pipeline

The idea of execution-workflow will be greatly discussed in the second-version/iteration of this bot to ensure other open-source maintainers who might not be traditionally from an ML or a DS background can still maintain this codebase actively without much external help/assistance.

The project follows a structured execution workflow designed for maintainability. Even without a massive background in Data Science, contributors can help updating the bot's by interacting with scripts:

### Data-Acquisition && Machine-Learning

**data_pipeline.py**: This scrapes official BITS-admission sites containing cutoff scores for particular years and converts raw data into standardized CSV files which can be then put in for model-training.

**helper_notebook.ipynb**: This contains code for the "model"—currently focused on min-max scaling applications so ... it is more of a statistical-validator than a traditional machine-learning model to process cutoff trends.

**predictions.py**: This bridges between data-science and ML aspect of this repository with the production-aspect. It takes the constraints (user inputs) and applies the model logic to generate results. Note: This script is currently volatile and subject to change as the model logic matures.

### Production

**main.py**: Contains actual code for the bitsatards-discord-bot. It handles events, commands, and integrates with the prediction and database services.

analytics.py: Handles data visualization for the bot's features.

### Databases

/services and /database concerns with the time-tracking feature which is a misc. feature of this bot using postgres as its database and SQLAlchemy as ORM. There is quite a bit of work to do here, but that will be halted till 3.0.0 release.
Fix a few typos/wording issues in contributor guidance.
Minor spelling/wording issues reduce polish in a user-facing doc.
📝 Suggested edits

```diff
-Before contributing, review the [Roadmap](https://github.com/users/PranavU-Coder/projects/9) first to understand current priorities and exisitng issues that are being worked on.
+Before contributing, review the [Roadmap](https://github.com/users/PranavU-Coder/projects/9) first to understand current priorities and existing issues that are being worked on.

-For raising issues, they must follow these criterias to be considered:
+For raising issues, they must follow these criteria to be considered:

-/services and /database concerns with the time-tracking feature which is a misc. feature of this bot using postgres as its database and SQLAlchemy as ORM. There is quite a bit of work to do here, but that will be halted till 3.0.0 release.
+/services and /database concerns with the time-tracking feature which is a misc. feature of this bot using postgres as its database and SQLAlchemy as ORM. There is quite a bit of work to do here, but that will be halted until 3.0.0 release.
```

🧰 Tools
🪛 LanguageTool
[grammar] ~92-~92: Ensure spelling is correct
Context: ...st to understand current priorities and exisitng issues that are being worked on. For r...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[style] ~134-~134: Consider using the more formal “until”.
Context: ...ork to do here, but that will be halted till 3.0.0 release. init_db.py helps in...
(TILL)
🤖 Prompt for AI Agents
In `@CONTRIBUTING.md` around lines 92 - 134, Fix typos and awkward wording in
CONTRIBUTING.md: correct misspellings like "exisitng" → "existing" and
"criterias" → "criteria", standardize backtick/code formatting for filenames
(data_pipeline.py, helper_notebook.ipynb, predictions.py, main.py, analytics.py,
/services, /database), reword unclear phrases such as "bot's" to "bot" or "bot's
code" and "There is quite a bit of work to do here, but that will be halted till
3.0.0 release" to a clearer sentence about postponing work until v3.0.0, and fix
punctuation/grammar in the sections describing Roadmap, issue guidelines, and
the Pipeline so the document reads cleanly and consistently.
```diff
 div = soup.find("div", id="2025-2026")

 data = []
-tables = div.find_all('table')
+tables = div.find_all("table")
```
🧩 Analysis chain

🏁 Script executed:

```shell
# Find the data_pipeline.py file
find . -name "data_pipeline.py" -type f
```

Repository: PranavU-Coder/bitsatards_bot
Length of output: 90

🏁 Script executed:

```shell
cat -n ./data_pipeline.py | head -40
```

Repository: PranavU-Coder/bitsatards_bot
Length of output: 1491

🏁 Script executed:

```shell
# Check the full file to see if there's any error handling
wc -l ./data_pipeline.py
```

Repository: PranavU-Coder/bitsatards_bot
Length of output: 93

🏁 Script executed:

```shell
cat -n ./data_pipeline.py
```

Repository: PranavU-Coder/bitsatards_bot
Length of output: 2263
Add a guard to handle the missing cutoff container before calling `find_all`. If the yearly id changes, `div` becomes `None` and calling `find_all()` on it crashes with an `AttributeError`. The code needs explicit error handling, especially since the comment at line 18 notes that this value must be updated annually.

🔧 Suggested guard

```diff
 div = soup.find("div", id="2025-2026")
+if div is None:
+    raise RuntimeError("cutoff container not found; update URL/id for the target year")
 tables = div.find_all("table")
```

🤖 Prompt for AI Agents
In `@data_pipeline.py` around lines 19 - 23, The code assumes div =
soup.find("div", id="2025-2026") is present before calling
div.find_all("table"), which will raise AttributeError if the yearly id changes;
update the code around the div variable to guard for a missing cutoff container
(check if div is None), and handle it explicitly (e.g., log a clear
error/warning, raise a descriptive exception, or return an empty data list)
before calling div.find_all("table") so downstream code (the tables processing)
is not executed on None.
There are plans to experiment by finding causation and co-relations with cutoffs of other entrance examinations such as: JEE however I don't think I will be able to complete it in time of announcement of this project.

```diff
-The dataset is as follows for to work with: https://www.kaggle.com/datasets/pranavunni/bitsat-cutoff-dataset-2017-2025 for anyone willing to have their own spin-off to this.
+The dataset is as follows for to work with: [dataset](!https://www.kaggle.com/datasets/pranavunni/bitsat-cutoff-dataset-2017-2025) for anyone willing to have their own spin-off to this.
```
Fix the broken dataset link markup.

The `!` inside the URL breaks the link.

🔗 Suggested fix

```diff
-The dataset is as follows for to work with: [dataset](!https://www.kaggle.com/datasets/pranavunni/bitsat-cutoff-dataset-2017-2025) for anyone willing to have their own spin-off to this.
+The dataset is as follows for to work with: [dataset](https://www.kaggle.com/datasets/pranavunni/bitsat-cutoff-dataset-2017-2025) for anyone willing to have their own spin-off to this.
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```suggestion
The dataset is as follows for to work with: [dataset](https://www.kaggle.com/datasets/pranavunni/bitsat-cutoff-dataset-2017-2025) for anyone willing to have their own spin-off to this.
```
🤖 Prompt for AI Agents
In `@README.md` at line 68, The README contains a broken Markdown link "
[dataset](!https://...)" — remove the stray "!" and replace with proper Markdown
link syntax so it reads
[dataset](https://www.kaggle.com/datasets/pranavunni/bitsat-cutoff-dataset-2017-2025);
update the line in README.md where the dataset link appears to use that
corrected form.
> ## Contributing
> This project is developed hoping to be a community effort. If you'd like to improve the model or add features, please check our [CONTRIBUTING.md](./CONTRIBUTING.md) for general guidelines.
>
> ---
Align horizontal rule style with markdownlint (MD035).

🧹 Suggested style alignment

```diff
-> ---
+> ------------------------------------------------------------------------------
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```suggestion
> ------------------------------------------------------------------------------
```
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 79-79: Horizontal rule style
Expected: ------------------------------------------------------------------------------; Actual: ---
(MD035, hr-style)
🤖 Prompt for AI Agents
In `@README.md` at line 79, Replace the solitary hyphen-style horizontal rule
'---' with the project's chosen markdownlint-compliant style (e.g., '***' or
'___') to satisfy MD035 and ensure consistency; update the horizontal rule token
'---' to the selected alternative everywhere in the README (or normalize all
horizontal rules to the same character) so the file conforms to the markdownlint
rule.
<a href="https://github.com/PranavU-Coder/bitsatards_bot/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=PranavU-Coder/bitsatards_bot" />
</a>
Add alt text to contributor image (MD045).

♿ Suggested accessibility fix

```diff
-  <img src="https://contrib.rocks/image?repo=PranavU-Coder/bitsatards_bot" />
+  <img alt="Contributors" src="https://contrib.rocks/image?repo=PranavU-Coder/bitsatards_bot" />
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```suggestion
<a href="https://github.com/PranavU-Coder/bitsatards_bot/graphs/contributors">
  <img alt="Contributors" src="https://contrib.rocks/image?repo=PranavU-Coder/bitsatards_bot" />
</a>
```
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 93-93: Images should have alternate text (alt text)
(MD045, no-alt-text)
🤖 Prompt for AI Agents
In `@README.md` around lines 92 - 94, The contributor image tag (<img
src="https://contrib.rocks/image?repo=PranavU-Coder/bitsatards_bot">) in
README.md is missing alt text; add a descriptive alt attribute to the img
element (for example: a short phrase like "Contributors to bitsatards_bot on
GitHub") so the image is accessible and satisfies MD045.
```diff
 @patch("services.exam_tracker.date")
 def test_set_exam_date_today(self, mock_date_class, tracker, mock_repo):
     today = date(2026, 4, 15)
     mock_date_class.today.return_value = today
     result = tracker.set_exam_date(123, "testuser", 456, today)
     assert "which is btw today, best of luck soldier!" in result

-@patch('services.exam_tracker.date')
+@patch("services.exam_tracker.date")
 def test_past(self, mock_date_class, tracker, mock_repo):
     mock_date_class.today.return_value = date(2026, 4, 15)
     past_date = date(2026, 1, 1)
     result = tracker.set_exam_date(123, "testuser", 456, past_date)
     assert "has already been passed" in result
```
🧩 Analysis chain

🏁 Script executed:

```shell
cd tests && head -30 services_test.py | cat -n
```

Repository: PranavU-Coder/bitsatards_bot
Length of output: 1613

🏁 Script executed:

```shell
# Also search for mock_repo usage throughout the file
rg "mock_repo" tests/services_test.py -n
```

Repository: PranavU-Coder/bitsatards_bot
Length of output: 2219

🏁 Script executed:

```shell
# Check if there are any Ruff configuration files
fd "ruff\.toml|\.ruff\.toml|pyproject\.toml" --type f
```

Repository: PranavU-Coder/bitsatards_bot
Length of output: 96

🏁 Script executed:

```shell
# Check the Ruff configuration to verify ARG002 is enabled
cat ruff.toml
```

Repository: PranavU-Coder/bitsatards_bot
Length of output: 978
Unused `mock_repo` args will trigger Ruff ARG002.

`mock_repo` isn't used in these tests; prefix with `_` or remove to keep `ruff check` clean.

🧹 Suggested fix

```diff
-    def test_set_exam_date_today(self, mock_date_class, tracker, mock_repo):
+    def test_set_exam_date_today(self, mock_date_class, tracker, _mock_repo):

-    def test_past(self, mock_date_class, tracker, mock_repo):
+    def test_past(self, mock_date_class, tracker, _mock_repo):
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
@patch("services.exam_tracker.date")
def test_set_exam_date_today(self, mock_date_class, tracker, _mock_repo):
    today = date(2026, 4, 15)
    mock_date_class.today.return_value = today
    result = tracker.set_exam_date(123, "testuser", 456, today)
    assert "which is btw today, best of luck soldier!" in result

@patch("services.exam_tracker.date")
def test_past(self, mock_date_class, tracker, _mock_repo):
    mock_date_class.today.return_value = date(2026, 4, 15)
    past_date = date(2026, 1, 1)
    result = tracker.set_exam_date(123, "testuser", 456, past_date)
    assert "has already been passed" in result
```
🧰 Tools
🪛 Ruff (0.14.14)
[warning] 17-17: Unused method argument: mock_repo
(ARG002)
[warning] 24-24: Unused method argument: mock_repo
(ARG002)
🤖 Prompt for AI Agents
In `@tests/services_test.py` around lines 16 - 28, The tests
test_set_exam_date_today and test_past declare an unused fixture parameter
mock_repo which triggers Ruff ARG002; remove the unused parameter or rename it
to _mock_repo in the function signatures (e.g., def
test_set_exam_date_today(self, mock_date_class, tracker, _mock_repo):) so the
unused mock_repo is ignored—update both test_set_exam_date_today and test_past
accordingly to eliminate the lint warning while keeping the
`@patch`("services.exam_tracker.date") mock_date_class and existing calls to
tracker.set_exam_date intact.
@coderabbitai review

✅ Actions performed

Review triggered.
Please describe the changes this PR makes and why it should be merged:
Status and versioning classification:
Please move lines that apply to you out of the comment:
Summary by CodeRabbit

- New Features
- Documentation
- Refactor
- Style