
merge: v2.5.0#17

Merged
PranavU-Coder merged 3 commits into master from 2.5-update
Feb 5, 2026

Conversation


@PranavU-Coder PranavU-Coder commented Feb 5, 2026

Please describe the changes this PR makes and why it should be merged:

Status and versioning classification:

Please move lines that apply to you out of the comment:

  • Code changes have been tested locally
  • All tests have passed
  • I know how to update typings and have done so
  • This PR changes the project's interface (inbuilt methods)
  • This PR includes breaking changes, which have been discussed in Migrations #16
  • This PR includes non-code changes, like changes to documentation, README, etc.

Summary by CodeRabbit

  • New Features

    • Added automated test and release workflows.
    • Bot: improved plotting/prediction flows with caching, campus parsing, clearer messages, and better error handling.
  • Documentation

    • Expanded CONTRIBUTING.md and README with detailed contributor guidance, command references, tech stack, and roadmap.
  • Refactor

    • Analytics: parallel data loading, custom fonts, and image outputs now returned as in-memory bytes.
    • CI/tooling migrated to a uv-based flow.
  • Style

    • Project-wide formatting and consistency updates.

…esettings with the help of pyproject.toml and a lock file, using ruff as default code-linter and formatter, customizing seaborn for more eye-candy plots, optimizing discord bot code with asynchronous code, buffer file sharing, LRU-cache implementation and CDN-exploits
@coderabbitai

coderabbitai bot commented Feb 5, 2026

📝 Walkthrough

Walkthrough

Adds uv-based tooling (pyproject.toml, uv flows), new GitHub Actions workflows (main/test/release), refactors analytics for parallel loading/cached image bytes, changes Discord bot display to cache-backed generator flow, standardizes quoting across codebase, and updates documentation and CI configs.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **GitHub Actions Workflows**<br>`.github/workflows/main.yml`, `.github/workflows/test.yml`, `.github/workflows/release.yml` | Adds main.yml orchestrating tests and release jobs; test.yml migrated to workflow_call with uv-based Python/install/sync/test steps; release.yml uses release-please-action@v4 and conditionally updates uv.lock, commits and pushes when needed. |
| **Dependency & Project Config**<br>`pyproject.toml`, `requirements.txt`, `.gitignore` | Adds pyproject.toml with dependencies; clears requirements.txt; updates .gitignore to ignore .python-version, .ruff_cache, .venv, and fonts/. |
| **Documentation & Contributor Guidance**<br>`CONTRIBUTING.md`, `README.md` | Rewrites CONTRIBUTING with branch workflow, uv env steps, lint/test rules, and domain-specific guidance; expands README with command tables, tech stack badges, future plans, and contributor/community notes. |
| **Analytics & Data Processing**<br>`analytics.py`, `data_pipeline.py`, `predictions.py`, `helper_notebook.ipynb` | analytics.py: adds a parallel CSV loader, custom font loader, LRU-cached image byte generators, BytesIO image returns, and URL cache helpers; other files update data paths to data/model_data, standardize quotes, and make minor refactors. |
| **Database Layer**<br>`database/connection.py`, `database/models.py`, `database/repository.py`, `init_db.py` | Formatting/quote standardization in connection/models; repository.py updates set_user_exam to also set username/channel_id and adds a close() method; new init_database() in init_db.py with a main guard. |
| **Services**<br>`services/exam_tracker.py`, `services/reminder.py` | exam_tracker.py formatting and minor signature reflow; reminder.py adds `__del__` to close the repository on garbage collection. |
| **Discord Bot / Application Logic**<br>`main.py` | Reworks the display flow to use a cache-backed generator + URL cache; adds CAMPUSES, parse_campus, send_dm, and improved error handling (on_command_error); refactors plot, plot-branch, select, and predict to use the new pipeline and cache keys. |
| **Tests**<br>`tests/conftest.py`, `tests/database_test.py`, `tests/services_test.py` | Pure stylistic updates: single→double quotes in env/patch calls and dict keys; removes an unused pytest import; test logic unchanged. |
| **CI / Release Related Files**<br>`.github/workflows/...`, `pyproject.toml` (repeated) | Ensures workflows reuse test/release templates and inherit secrets for the release job; the uv-based lockfile update flow is handled in the release workflow. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Analytics as analytics.py
    participant DataSrc as Data Sources
    participant Cache as LRU Cache
    participant ImageGen as ImageGen

    Client->>Analytics: plot_marks_by_campus(campus)
    activate Analytics

    Analytics->>Cache: check _get_campus_plot_bytes(campus)
    alt Cache Hit
        Cache-->>Analytics: cached image bytes
    else Cache Miss
        Analytics->>DataSrc: load_data_parallel(path_pattern)
        DataSrc-->>Analytics: CSV data
        Analytics->>ImageGen: _tabulate_to_bytes(filtered_data)
        ImageGen-->>Analytics: PNG byte stream
        Analytics->>Cache: store bytes
    end

    Analytics-->>Client: BytesIO(image_bytes) or None
    deactivate Analytics
```
```mermaid
sequenceDiagram
    participant User as Discord User
    participant Bot as main.py
    participant Cache as URL Cache
    participant Analytics as analytics (generator)
    participant DiscordAPI as Discord API

    User->>Bot: /plot campus_name
    activate Bot

    Bot->>Cache: get_cached_url(cache_key)
    alt Cache Hit
        Cache-->>Bot: cached_url
        Bot->>User: send embed with cached_url + disclaimer
    else Cache Miss
        Bot->>Analytics: run generator_func -> BytesIO
        Analytics-->>Bot: image BytesIO
        Bot->>DiscordAPI: upload file
        DiscordAPI-->>Bot: message with attachment URL
        Bot->>Cache: save_url_to_cache(cache_key, url)
        Bot->>User: send message with attachment + disclaimer
    end

    deactivate Bot
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 Thump-thump with delight
UV winds blew through workflows tonight,
Data streams raced in parallel beams,
Caches held pictures and bot-born dreams,
Quotes aligned in tidy rows—hop, behold!

🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 16.18%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
| Title check | ❓ Inconclusive | The title 'merge: v2.5.0' is vague and generic, using a pattern that doesn't convey meaningful information about the specific changes in the changeset. | Consider a more descriptive title that summarizes the main change, such as 'refactor: Migrate to uv build system and modernize workflows' or 'feat: Add parallel data loading and cache-backed visualization'. |
✅ Passed checks (1 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
  • 📝 Generate docstrings
  • 🧪 Generate unit tests (beta)
    • Create PR with unit tests
    • Post copyable unit tests in a comment
    • Commit unit tests in branch 2.5-update



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 12

🤖 Fix all issues with AI agents
In @.github/workflows/release.yml:
- Around line 19-37: The checkout step uses
fromJson(steps.release.outputs.pr).head_sha which creates a detached HEAD and
prevents git push; change the checkout ref to
fromJson(steps.release.outputs.pr).headBranchName (the PR branch name) in the
actions/checkout@v4 step and update the push command in the "update uv lockfile"
run block to explicitly push back to that branch (use the same headBranchName
value as the push target, e.g., push to origin HEAD:refs/heads/<headBranchName>)
so git push succeeds from a non-detached branch.
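A hedged sketch of what the suggested release.yml change might look like. The `headBranchName` field on the release-please PR output is taken from the review above; the step id `release` and the commit message are assumptions, not the repo's actual workflow:

```yaml
# assumes a prior step with id: release using google-github-actions/release-please-action@v4
- uses: actions/checkout@v4
  with:
    # check out the PR branch by NAME, not its SHA, so HEAD is not detached
    ref: ${{ fromJson(steps.release.outputs.pr).headBranchName }}

- name: update uv lockfile
  run: |
    uv lock
    git add uv.lock
    git commit -m "chore: update uv.lock" || exit 0
    # push explicitly back to the same PR branch
    git push origin HEAD:refs/heads/${{ fromJson(steps.release.outputs.pr).headBranchName }}
```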

In @.github/workflows/test.yml:
- Around line 10-12: Update the workflow token permissions to least privilege by
changing the permissions entries for 'contents' and 'pull-requests' from write
to read; locate the permissions block that currently contains "contents: write"
and "pull-requests: write" and replace them with "contents: read" and
"pull-requests: read" so the test workflow only has read access.

In `@CONTRIBUTING.md`:
- Around line 92-134: Fix typos and awkward wording in CONTRIBUTING.md: correct
misspellings like "exisitng" → "existing" and "criterias" → "criteria",
standardize backtick/code formatting for filenames (data_pipeline.py,
helper_notebook.ipynb, predictions.py, main.py, analytics.py, /services,
/database), reword unclear phrases such as "bot's" to "bot" or "bot's code" and
"There is quite a bit of work to do here, but that will be halted till 3.0.0
release" to a clearer sentence about postponing work until v3.0.0, and fix
punctuation/grammar in the sections describing Roadmap, issue guidelines, and
the Pipeline so the document reads cleanly and consistently.

In `@data_pipeline.py`:
- Around line 19-23: The code assumes div = soup.find("div", id="2025-2026") is
present before calling div.find_all("table"), which will raise AttributeError if
the yearly id changes; update the code around the div variable to guard for a
missing cutoff container (check if div is None), and handle it explicitly (e.g.,
log a clear error/warning, raise a descriptive exception, or return an empty
data list) before calling div.find_all("table") so downstream code (the tables
processing) is not executed on None.

In `@main.py`:
- Around line 330-345: The error messages and usage/help strings currently
reference the wrong command name `!!exam` while the actual command is `!!time`;
update all user-facing strings in this block (the two ctx.send calls that prompt
for a date and handle ValueError) to use `!!time` and adjust examples
accordingly (e.g., `!!time -s DD-MM-YYYY`), ensuring the messages around parsing
date_str and the exam_tracker.set_exam_date call remain unchanged.
- Around line 39-56: The cache check in display() (function display and its use
of anal.get_cached_url) happens after the generator has already been scheduled
by callers, causing wasted executor work on cache hits; change the call pattern
so callers pass an un-started coroutine or a callable (not an already-submitted
Future/task) as generator_func, and move the executor scheduling/creation inside
display() after the cached_url check: if no cache, schedule and await the
generator (e.g., run_in_executor or await generator_func() / await
asyncio.create_task(generator_coroutine)) to produce image_buffer; update all
call sites that currently submit to the executor before calling display() to
instead pass the coroutine/callable so scheduling only occurs on cache miss.
- Around line 140-141: The parameter type annotations for command functions
(e.g., async def plot(ctx, *, args: str = None)) use an implicit Optional;
update them to explicit union syntax by changing args: str = None to args: str |
None = None (do the same for the other command functions that declare args with
a default None) so Ruff RUF013 is satisfied and the type is explicit.

In `@predictions.py`:
- Line 33: The code at line where df =
pd.read_csv("data/model_data/cutoff_2025.csv") will raise FileNotFoundError
because the path/data file doesn't exist; fix by either adding the missing CSV
into the repo at data/model_data/cutoff_2025.csv or updating the pd.read_csv
call in predictions.py to point to an existing dataset path (or a configurable
path/env var), and add a defensive check around the read (e.g., os.path.exists
or try/except) to raise a clear error message referencing the variable df and
the predictions.py loader so callers know which file is missing.

In `@README.md`:
- Around line 92-94: The contributor image tag (<img
src="https://contrib.rocks/image?repo=PranavU-Coder/bitsatards_bot">) in
README.md is missing alt text; add a descriptive alt attribute to the img
element (for example: a short phrase like "Contributors to bitsatards_bot on
GitHub") so the image is accessible and satisfies MD045.
- Line 68: The README contains a broken Markdown link " [dataset](!https://...)"
— remove the stray "!" and replace with proper Markdown link syntax so it reads
[dataset](https://www.kaggle.com/datasets/pranavunni/bitsat-cutoff-dataset-2017-2025);
update the line in README.md where the dataset link appears to use that
corrected form.
- Line 79: Replace the solitary hyphen-style horizontal rule '---' with the
project's chosen markdownlint-compliant style (e.g., '***' or '___') to satisfy
MD035 and ensure consistency; update the horizontal rule token '---' to the
selected alternative everywhere in the README (or normalize all horizontal rules
to the same character) so the file conforms to the markdownlint rule.

In `@tests/services_test.py`:
- Around line 16-28: The tests test_set_exam_date_today and test_past declare an
unused fixture parameter mock_repo which triggers Ruff ARG002; remove the unused
parameter or rename it to _mock_repo in the function signatures (e.g., def
test_set_exam_date_today(self, mock_date_class, tracker, _mock_repo):) so the
unused mock_repo is ignored—update both test_set_exam_date_today and test_past
accordingly to eliminate the lint warning while keeping the
`@patch`("services.exam_tracker.date") mock_date_class and existing calls to
tracker.set_exam_date intact.
🧹 Nitpick comments (7)
pyproject.toml (1)

1-23: Move dev-only tooling to optional dependencies per PEP 621.

pytest, pytest-mock, and ruff should be in [project.optional-dependencies] under a dev group instead of runtime dependencies, reducing the production install footprint. This follows PEP 621 best practices.

♻️ Suggested refactor
 [project]
 name = "bitsatards-bot"
 version = "2.5.0"
 description = "Official project-typesettings for the bitsatards-bot project."
 readme = "README.md"
 requires-python = ">=3.14"
 dependencies = [
     "beautifulsoup4>=4.14.3",
     "discord-py>=2.6.4",
     "lxml>=6.0.2",
     "matplotlib>=3.10.8",
     "numpy>=2.4.2",
     "pandas>=3.0.0",
     "psycopg2-binary>=2.9.11",
-    "pytest>=9.0.2",
-    "pytest-mock>=3.15.1",
     "python-dotenv>=1.2.1",
     "requests>=2.32.5",
-    "ruff>=0.14.14",
     "scikit-learn>=1.8.0",
     "seaborn>=0.13.2",
     "sqlalchemy>=2.0.46",
 ]
+
+[project.optional-dependencies]
+dev = [
+    "pytest>=9.0.2",
+    "pytest-mock>=3.15.1",
+    "ruff>=0.14.14",
+]
predictions.py (2)

42-42: Avoid writing the same CSV twice.
predict_cutoffs already writes output_file; the extra to_csv duplicates I/O.

♻️ Suggested cleanup
-predictions_2026.to_csv("predict/most_likely_case.csv", index=False)

5-9: Consider using joblib or safer serialization instead of pickle.
pickle.load() can execute arbitrary code. While the current model files are version-controlled and internal, using joblib.load() or another format (e.g., ONNX, safetensors) would be safer and more maintainable as the project evolves.
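Short of switching formats, the stdlib pickle docs also describe a restricted-Unpickler pattern that limits which globals a payload may resolve. A minimal sketch, where the allow-list is illustrative and would need to cover whatever the real model files actually reference:

```python
import io
import pickle

# illustrative allow-list; extend with the types your model files genuinely need
SAFE_GLOBALS = {("builtins", "dict"), ("builtins", "list"), ("builtins", "set")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # refuse to resolve any global not explicitly allowed, which blocks
        # the usual pickle code-execution gadgets (os.system, eval, ...)
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"global {module}.{name} is forbidden")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

This only mitigates the loading side; trained sklearn models pull in many globals, so joblib plus an integrity check, or an export format like ONNX, remains the more practical long-term route.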

analytics.py (4)

38-47: Avoid one bad CSV aborting the entire parallel load.

Line 45 will raise on any single read_csv failure, stopping the whole load. Consider per-file error handling and filtering invalid reads.

🔧 Suggested hardening for parallel reads
 def load_data_parallel(path_pattern):
     files = glob.glob(path_pattern)
     if not files:
         print(f"warning: no files found matching {path_pattern}")
         return pd.DataFrame(columns=["campus", "branch", "marks", "year"])
 
+    def _safe_read_csv(path):
+        try:
+            return pd.read_csv(path)
+        except (OSError, pd.errors.ParserError, UnicodeDecodeError) as exc:
+            print(f"error reading {path}: {exc}")
+            return None
+
     with concurrent.futures.ThreadPoolExecutor() as executor:
-        results = list(executor.map(pd.read_csv, files))
+        results = list(executor.map(_safe_read_csv, files))
 
-    return pd.concat(results, ignore_index=True)
+    results = [df for df in results if df is not None]
+    if not results:
+        return pd.DataFrame(columns=["campus", "branch", "marks", "year"])
+    return pd.concat(results, ignore_index=True)

89-97: Narrow the exception handling for prediction file reads.

Line 96 catches Exception, which can hide unrelated bugs. Prefer specific exceptions and log unexpected errors.

🧭 Suggested exception narrowing
-        except Exception as e:
-            print(f"error loading {filepath}: {e}")
+        except (OSError, pd.errors.ParserError, UnicodeDecodeError) as e:
+            print(f"error loading {filepath}: {e}")

114-115: Silence unused fig bindings to satisfy Ruff.

fig isn’t used in these blocks. Prefix with _ (or use fig.savefig) to avoid RUF059 noise.

🧹 Suggested cleanup
-    fig, ax = plt.subplots(figsize=(10, fig_height))
+    _, ax = plt.subplots(figsize=(10, fig_height))

-    fig, ax = plt.subplots(figsize=(12, 8), dpi=150)
+    _, ax = plt.subplots(figsize=(12, 8), dpi=150)

-    fig, ax = plt.subplots(figsize=(10, 6), dpi=150)
+    _, ax = plt.subplots(figsize=(10, 6), dpi=150)

Also applies to: 168-169, 214-215


358-367: Consider bounding URL cache growth.

URL_CACHE is unbounded and could grow over long uptimes. Consider a max size or TTL to cap memory.

Comment on lines +92 to +134
Before contributing, review the [Roadmap](https://github.com/users/PranavU-Coder/projects/9) first to understand current priorities and exisitng issues that are being worked on.

For raising issues, they must follow these criterias to be considered:

if you want any feature to be implemented please state **explicitly why exactly it is important** under the [issues](https://github.com/PranavU-Coder/bitsatards_bot/issues) tab by creating an issue under the same mentioned, similarly in bug-related issues please state where exactly is the bug occurring and how to reproduce from the developer's side so it can be worked upon to fix it.
Feature Requests: You must explicitly justify why the feature is necessary to be implemented.

Bug Reports: You must provide a clear "Steps to Reproduce" section (check figure below). If the developer cannot replicate the bug based on your description, the issue will be marked as invalid.

Out of Scope (These will not be considered): Do **NOT** raise issues regarding the website's UI or functionality. This codebase is a **MONO-REPO** and is hence independent of the web project; such issues will be closed without review.

![issues](https://github.com/user-attachments/assets/66934dc7-b137-4469-a8f5-6272a7681f38)

**PLEASE NOTE**:

(1) any issue with website's ui or functionality shouldn't be raised and as such any issues raised on that request will **NOT** be reviewed and closed as this codebase is independent from the one in which the website is going to be made in.
This bot is currently self-hosted through my old laptop which doubles as a server rather than a third-party cloud-service provider which leads to uptime problems.

With hostel network dependencies, there is no way to resolve downtime-issues specially if I am not near the laptop which is hosting the bot.

(2) discord bot being inactive at some portions of the day, now the reason why I chose to clear this up right now is because **I AM NOT USING ANY THIRD-PARTY CLOUD SERVICE PROVIDER** for the continuous deployment and execution of the discord bot and am rather using my old laptop as a full-time server to run my programs 24/7 out of which this bot is one of them, once I return back to hostel there would be issues of running this bot particularly given how bad my hostel wifi is (since it frequently goes out) and as such if I'm not available when this occurs there is no way for me to fix it immediately.
**Possible Solution**: While orchestration for backup service providers is a goal (using good MLOps principles), it has not yet been implemented.

there is a fix to the second issue by adopting good MLOps principles and orchestrating deployment of a backup service provider when one goes out, however at the time of writing this I'm not skilled enough to navigate through this and implement it.
Please do **NOT** open issues regarding temporary bot inactivity. These are known infrastructure limitations, not code problems.

## Pipeline

The idea of execution-workflow will be greatly discussed in the second-version/iteration of this bot to ensure other open-source maintainers who might not be traditionally from an ML or a DS background can still maintain this codebase actively without much external help/assistance.
The project follows a structured execution workflow designed for maintainability. Even without a massive background in Data Science, contributors can help updating the bot's by interacting with scripts:

### Data-Acquisition && Machine-Learning

**data_pipeline.py**: This scrapes official BITS-admission sites containing cutoff scores for particular years and converts raw data into standardized CSV files which can be then put in for model-training.

**helper_notebook.ipynb**: This contains code for the "model"—currently focused on min-max scaling applications so ... it is more of a statistical-validator than a traditional machine-learning model to process cutoff trends.

**predictions.py**: This bridges between data-science and ML aspect of this repository with the production-aspect. It takes the constraints (user inputs) and applies the model logic to generate results. Note: This script is currently volatile and subject to change as the model logic matures.

### Production

**main.py**: Contains actual code for the bitsatards-discord-bot. It handles events, commands, and integrates with the prediction and database services.

analytics.py: Handles data visualization for the bot's features.

### Databases

/services and /database concerns with the time-tracking feature which is a misc. feature of this bot using postgres as its database and SQLAlchemy as ORM. There is quite a bit of work to do here, but that will be halted till 3.0.0 release.


⚠️ Potential issue | 🟡 Minor

Fix a few typos/wording issues in contributor guidance.

Minor spelling/wording issues reduce polish in a user-facing doc.

📝 Suggested edits
-Before contributing, review the [Roadmap](https://github.com/users/PranavU-Coder/projects/9) first to understand current priorities and exisitng issues that are being worked on.
+Before contributing, review the [Roadmap](https://github.com/users/PranavU-Coder/projects/9) first to understand current priorities and existing issues that are being worked on.

-For raising issues, they must follow these criterias to be considered:
+For raising issues, they must follow these criteria to be considered:

-/services and /database concerns with the time-tracking feature which is a misc. feature of this bot using postgres as its database and SQLAlchemy as ORM. There is quite a bit of work to do here, but that will be halted till 3.0.0 release.
+/services and /database concerns with the time-tracking feature which is a misc. feature of this bot using postgres as its database and SQLAlchemy as ORM. There is quite a bit of work to do here, but that will be halted until 3.0.0 release.
🧰 Tools
🪛 LanguageTool

[grammar] ~92-~92: Ensure spelling is correct
Context: ...st to understand current priorities and exisitng issues that are being worked on. For r...

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)


[style] ~134-~134: Consider using the more formal “until”.
Context: ...ork to do here, but that will be halted till 3.0.0 release. init_db.py helps in...

(TILL)

🤖 Prompt for AI Agents
In `@CONTRIBUTING.md` around lines 92 - 134, Fix typos and awkward wording in
CONTRIBUTING.md: correct misspellings like "exisitng" → "existing" and
"criterias" → "criteria", standardize backtick/code formatting for filenames
(data_pipeline.py, helper_notebook.ipynb, predictions.py, main.py, analytics.py,
/services, /database), reword unclear phrases such as "bot's" to "bot" or "bot's
code" and "There is quite a bit of work to do here, but that will be halted till
3.0.0 release" to a clearer sentence about postponing work until v3.0.0, and fix
punctuation/grammar in the sections describing Roadmap, issue guidelines, and
the Pipeline so the document reads cleanly and consistently.

Comment on lines +19 to 23
```diff
 div = soup.find("div", id="2025-2026")

 data = []
-tables = div.find_all('table')
+tables = div.find_all("table")
```



⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Find the data_pipeline.py file
find . -name "data_pipeline.py" -type f

Repository: PranavU-Coder/bitsatards_bot

Length of output: 90


🏁 Script executed:

cat -n ./data_pipeline.py | head -40

Repository: PranavU-Coder/bitsatards_bot

Length of output: 1491


🏁 Script executed:

# Check the full file to see if there's any error handling
wc -l ./data_pipeline.py

Repository: PranavU-Coder/bitsatards_bot

Length of output: 93


🏁 Script executed:

cat -n ./data_pipeline.py

Repository: PranavU-Coder/bitsatards_bot

Length of output: 2263


Add a guard to handle missing cutoff container before calling find_all. If the yearly id changes, div becomes None and calling find_all() on it crashes with an AttributeError. The code needs explicit error handling, especially since the comment at line 18 notes that this value must be updated annually.

🔧 Suggested guard
 div = soup.find("div", id="2025-2026")
+if div is None:
+    raise RuntimeError("cutoff container not found; update URL/id for the target year")
 tables = div.find_all("table")
🤖 Prompt for AI Agents
In `@data_pipeline.py` around lines 19 - 23, The code assumes div =
soup.find("div", id="2025-2026") is present before calling
div.find_all("table"), which will raise AttributeError if the yearly id changes;
update the code around the div variable to guard for a missing cutoff container
(check if div is None), and handle it explicitly (e.g., log a clear
error/warning, raise a descriptive exception, or return an empty data list)
before calling div.find_all("table") so downstream code (the tables processing)
is not executed on None.

There are plans to experiment by finding causation and co-relations with cutoffs of other entrance examinations such as: JEE however I don't think I will be able to complete it in time of announcement of this project.

```diff
-The dataset is as follows for to work with: https://www.kaggle.com/datasets/pranavunni/bitsat-cutoff-dataset-2017-2025 for anyone willing to have their own spin-off to this.
+The dataset is as follows for to work with: [dataset](!https://www.kaggle.com/datasets/pranavunni/bitsat-cutoff-dataset-2017-2025) for anyone willing to have their own spin-off to this.
```


⚠️ Potential issue | 🟡 Minor

Fix the broken dataset link markup.
The ! inside the URL breaks the link.

🔗 Suggested fix
-The dataset is as follows for to work with: [dataset](!https://www.kaggle.com/datasets/pranavunni/bitsat-cutoff-dataset-2017-2025) for anyone willing to have their own spin-off to this.
+The dataset is as follows for to work with: [dataset](https://www.kaggle.com/datasets/pranavunni/bitsat-cutoff-dataset-2017-2025) for anyone willing to have their own spin-off to this.
🤖 Prompt for AI Agents
In `@README.md` at line 68, The README contains a broken Markdown link "
[dataset](!https://...)" — remove the stray "!" and replace with proper Markdown
link syntax so it reads
[dataset](https://www.kaggle.com/datasets/pranavunni/bitsat-cutoff-dataset-2017-2025);
update the line in README.md where the dataset link appears to use that
corrected form.

> ## Contributing
> This project is developed hoping to be a community effort. If you'd like to improve the model or add features, please check our [CONTRIBUTING.md](./CONTRIBUTING.md) for general guidelines.
>
> ---


⚠️ Potential issue | 🟡 Minor

Align horizontal rule style with markdownlint (MD035).

🧹 Suggested style alignment
-> ---
+> ------------------------------------------------------------------------------
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)

[warning] 79-79: Horizontal rule style
Expected: ------------------------------------------------------------------------------; Actual: ---

(MD035, hr-style)

🤖 Prompt for AI Agents
In `@README.md` at line 79, Replace the solitary hyphen-style horizontal rule
'---' with the project's chosen markdownlint-compliant style (e.g., '***' or
'___') to satisfy MD035 and ensure consistency; update the horizontal rule token
'---' to the selected alternative everywhere in the README (or normalize all
horizontal rules to the same character) so the file conforms to the markdownlint
rule.

Comment on lines +92 to +94
<a href="https://github.com/PranavU-Coder/bitsatards_bot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=PranavU-Coder/bitsatards_bot" />
</a>


⚠️ Potential issue | 🟡 Minor

Add alt text to contributor image (MD045).

♿ Suggested accessibility fix
-  <img src="https://contrib.rocks/image?repo=PranavU-Coder/bitsatards_bot" />
+  <img alt="Contributors" src="https://contrib.rocks/image?repo=PranavU-Coder/bitsatards_bot" />
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)

[warning] 93-93: Images should have alternate text (alt text)

(MD045, no-alt-text)

🤖 Prompt for AI Agents
In `@README.md` around lines 92 - 94, The contributor image tag (<img
src="https://contrib.rocks/image?repo=PranavU-Coder/bitsatards_bot">) in
README.md is missing alt text; add a descriptive alt attribute to the img
element (for example: a short phrase like "Contributors to bitsatards_bot on
GitHub") so the image is accessible and satisfies MD045.

Comment on lines +16 to 28
@patch("services.exam_tracker.date")
def test_set_exam_date_today(self, mock_date_class, tracker, mock_repo):
today = date(2026, 4, 15)
mock_date_class.today.return_value = today
result = tracker.set_exam_date(123, "testuser", 456, today)
assert "which is btw today, best of luck soldier!" in result

@patch("services.exam_tracker.date")
def test_past(self, mock_date_class, tracker, mock_repo):
mock_date_class.today.return_value = date(2026, 4, 15)
past_date = date(2026, 1, 1)
result = tracker.set_exam_date(123, "testuser", 456, past_date)
assert "has already been passed" in result

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cd tests && head -30 services_test.py | cat -n

Repository: PranavU-Coder/bitsatards_bot

Length of output: 1613


🏁 Script executed:

# Also search for mock_repo usage throughout the file
rg "mock_repo" tests/services_test.py -n

Repository: PranavU-Coder/bitsatards_bot

Length of output: 2219


🏁 Script executed:

# Check if there are any Ruff configuration files
fd "ruff\.toml|\.ruff\.toml|pyproject\.toml" --type f

Repository: PranavU-Coder/bitsatards_bot

Length of output: 96


🏁 Script executed:

# Check the Ruff configuration to verify ARG002 is enabled
cat ruff.toml

Repository: PranavU-Coder/bitsatards_bot

Length of output: 978


Unused mock_repo args will trigger Ruff ARG002.

mock_repo isn't used in these tests; prefix with _ or remove to keep ruff check clean.

🧹 Suggested fix
-    def test_set_exam_date_today(self, mock_date_class, tracker, mock_repo):
+    def test_set_exam_date_today(self, mock_date_class, tracker, _mock_repo):

-    def test_past(self, mock_date_class, tracker, mock_repo):
+    def test_past(self, mock_date_class, tracker, _mock_repo):
📝 Committable suggestion


Suggested change
@patch("services.exam_tracker.date")
def test_set_exam_date_today(self, mock_date_class, tracker, mock_repo):
today = date(2026, 4, 15)
mock_date_class.today.return_value = today
result = tracker.set_exam_date(123, "testuser", 456, today)
assert "which is btw today, best of luck soldier!" in result
@patch("services.exam_tracker.date")
def test_past(self, mock_date_class, tracker, mock_repo):
mock_date_class.today.return_value = date(2026, 4, 15)
past_date = date(2026, 1, 1)
result = tracker.set_exam_date(123, "testuser", 456, past_date)
assert "has already been passed" in result
`@patch`("services.exam_tracker.date")
def test_set_exam_date_today(self, mock_date_class, tracker, _mock_repo):
today = date(2026, 4, 15)
mock_date_class.today.return_value = today
result = tracker.set_exam_date(123, "testuser", 456, today)
assert "which is btw today, best of luck soldier!" in result
`@patch`("services.exam_tracker.date")
def test_past(self, mock_date_class, tracker, _mock_repo):
mock_date_class.today.return_value = date(2026, 4, 15)
past_date = date(2026, 1, 1)
result = tracker.set_exam_date(123, "testuser", 456, past_date)
assert "has already been passed" in result
🧰 Tools
🪛 Ruff (0.14.14)

[warning] 17-17: Unused method argument: mock_repo

(ARG002)


[warning] 24-24: Unused method argument: mock_repo

(ARG002)

🤖 Prompt for AI Agents
In `@tests/services_test.py` around lines 16 - 28, The tests
test_set_exam_date_today and test_past declare an unused fixture parameter
mock_repo which triggers Ruff ARG002; remove the unused parameter or rename it
to _mock_repo in the function signatures (e.g., def
test_set_exam_date_today(self, mock_date_class, tracker, _mock_repo):) so the
unused mock_repo is ignored—update both test_set_exam_date_today and test_past
accordingly to eliminate the lint warning while keeping the
`@patch`("services.exam_tracker.date") mock_date_class and existing calls to
tracker.set_exam_date intact.
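
The underscore convention the review suggests can be sketched end to end. The `ExamTracker` below is an illustrative stand-in for `services.exam_tracker` (its method names and messages mirror the snippet above, but the implementation is assumed, not the project's), and it patches a small `_today` helper via `patch.object` instead of the module-level `date` so the sketch stays self-contained:

```python
from datetime import date
from unittest.mock import patch

# Hypothetical stand-in for services.exam_tracker.ExamTracker;
# only the branches exercised by the tests above are modeled.
class ExamTracker:
    @staticmethod
    def _today():
        return date.today()

    def set_exam_date(self, user_id, username, channel_id, exam_date):
        today = self._today()
        if exam_date == today:
            return "which is btw today, best of luck soldier!"
        if exam_date < today:
            return f"{exam_date} has already been passed"
        return f"exam set for {exam_date}"

# Ruff's ARG002 fires on parameters a test accepts only to trigger a
# fixture and never reads; prefixing with "_" matches Ruff's default
# dummy-variable pattern, so the argument is treated as intentionally unused.
def test_past(_mock_repo=None):
    tracker = ExamTracker()
    with patch.object(ExamTracker, "_today", return_value=date(2026, 4, 15)):
        result = tracker.set_exam_date(123, "testuser", 456, date(2026, 1, 1))
    assert "has already been passed" in result
    return result
```

Patching a seam like `_today` (rather than the `date` class itself) also sidesteps the usual pitfall where `date(...)` calls inside the patched scope resolve to the mock.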

@PranavU-Coder
Owner Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Feb 5, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@PranavU-Coder PranavU-Coder merged commit 8d4fa61 into master Feb 5, 2026
2 checks passed
@github-project-automation github-project-automation bot moved this from Backlog to Done in bitsatards_bot_roadmap Feb 5, 2026
@PranavU-Coder PranavU-Coder deleted the 2.5-update branch February 5, 2026 15:23

Labels

enhancement New feature or request
