Conversation


@xbezdick xbezdick commented Dec 8, 2025

When supplying a draft model, the old code attempted to bind-mount the model with src and destination resolving to the same location. There was also no support for multi-file models, so using a 440B main model with a 30B draft model was not possible.

Lastly, mmproj and chat_template would translate to the same location as the main model, so we ignore them. For now draft models won't work with multimodal models, but we can still do speculative decoding by disabling vision with --no-mmproj.

Summary by Sourcery

Support correct identification and mounting of draft models, including multi-file models, while reusing the existing model identifier parsing across transports.

Bug Fixes:

  • Fix draft model OCI mounts to bind individual model files instead of incorrectly mounting the same path as both source and destination.
  • Skip multimodal-specific files (e.g., mmproj and chat_template) when mounting draft models to avoid conflicts with the main model.

Enhancements:

  • Allow extract_model_identifiers to accept an explicit model parameter so it can be reused for draft and alternate models across base, Hugging Face, Ollama, and URL transports.
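The refactoring described above can be sketched roughly as follows. This is an illustrative simplification, not the actual code in ramalama/transports/base.py; the parsing rules and class layout shown here are assumptions based on the summary.

```python
from typing import Optional, Tuple


class BaseTransport:
    """Minimal sketch of the refactored base transport."""

    def __init__(self, model: str):
        self.model = model

    def extract_model_identifiers(self, model: Optional[str] = None) -> Tuple[str, str, str]:
        """Split a model reference into (name, tag, organization).

        Defaults to the instance's primary model so existing callers
        keep working; draft/alternate-model callers pass a string.
        """
        model = self.model if model is None else model
        name, tag, organization = model, "latest", ""
        if ":" in name:
            name, tag = name.rsplit(":", 1)
        if "/" in name:
            organization, name = name.rsplit("/", 1)
        return name, tag, organization
```

Subclasses such as the Hugging Face, Ollama, and URL transports would pass their normalized string through to this base implementation, so the same code path serves both the primary and the draft model.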


sourcery-ai bot commented Dec 8, 2025

Reviewer's Guide

Refactors model identifier extraction to accept an optional model argument and fixes draft model OCI mount behavior by mounting individual blob files instead of a single bind mount, while skipping incompatible multimodal-specific files for draft models.

File-Level Changes

Change Details Files
Allow transports to extract model identifiers for arbitrary model strings, not just the instance model.
  • Updated BaseTransport.extract_model_identifiers to accept an optional model parameter defaulting to self.model.
  • Propagated the new optional model parameter through HuggingFace, Ollama, and URL transport implementations, delegating to the base implementation with the passed-through model.
  • Kept existing organization/tag normalization logic in each transport while making it usable for both primary and draft models.
ramalama/transports/base.py
ramalama/transports/huggingface.py
ramalama/transports/ollama.py
ramalama/transports/url.py
Rework draft model mounting to bind individual model blob files into the container with correct destinations and to ignore multimodal-only artifacts.
  • Replace the previous single bind mount of a resolved draft model path with logic that reads the draft model’s ref file via its tag.
  • For each file in the draft model ref file, skip mmproj and chat_template types to avoid overlapping with the main model and unsupported multimodal use.
  • For remaining files, resolve the blob path from the model store, convert it to a container path, and add a read-only bind mount into MNT_DIR/<file.name> using the engine API.
  • Add explicit NoRefFileFound error when the draft model ref file is missing to fail fast and clearly.
ramalama/transports/base.py
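The per-file mounting loop described above could look roughly like this. The helper names (store.blob_path, engine.add, the file attributes) are assumptions for illustration, not the real ramalama API.

```python
MNT_DIR = "/mnt/models"  # assumed container-side mount root


def add_draft_model_mounts(engine, store, draft_files):
    """Bind-mount each draft-model blob file individually, read-only."""
    for f in draft_files:
        # mmproj/chat_template would land on the same container path as
        # the main model's copies, so they are skipped for draft models.
        if f.type in ("mmproj", "chat_template"):
            continue
        src = store.blob_path(f.hash)        # resolve host-side blob
        dest = f"{MNT_DIR}/{f.name}"         # container-side destination
        engine.add(["--mount", f"type=bind,src={src},destination={dest},ro=true"])
```

Mounting each blob separately is what makes multi-file draft models work: a single bind mount can only map one path, while a large GGUF model may be split across several files.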


@gemini-code-assist
Contributor

Summary of Changes

Hello @xbezdick, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the handling of draft models within the system by refining how they are mounted in OCI containers. The core purpose is to enable robust support for draft models that consist of multiple files and to ensure that the container mount options are configured accurately. It also introduces a mechanism to intelligently ignore specific file types, such as 'mmproj' and 'chat_template', when processing draft models, which is crucial for preventing conflicts and facilitating advanced features like speculative decoding.

Highlights

  • Multi-file Draft Model Support: The system now correctly handles draft models composed of multiple files by iterating through each relevant file and creating individual bind mounts, addressing previous limitations with single-file assumptions.
  • Correct OCI Mount Options: The logic for setting up OCI mount options for draft models has been fixed, resolving an issue where the source and destination paths for bind mounts were incorrectly set to the same location.
  • Exclusion of Specific Draft Model Components: Files identified as 'mmproj' or 'chat_template' within a draft model are now explicitly skipped during the mounting process, preventing potential conflicts with the main model and enabling specific use cases like disabling vision for speculative decoding.
  • Flexible Model Identifier Extraction: The extract_model_identifiers method across various transport classes has been enhanced to accept an optional model argument, allowing for more flexible extraction of model identifiers beyond the instance's primary model.

Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes - here's some feedback:

  • The draft model files are all mounted directly under MNT_DIR/{file.name}, which will collide with the main model mounts when filenames overlap (e.g., tokenizer/config/weights for 440B + 30B); consider mounting the draft model into its own subdirectory (e.g., MNT_DIR/draft/{file.name}) and updating consumers accordingly.
  • Now that extract_model_identifiers takes an optional model argument in the base class, you may want to add an explicit type hint and default consistently across all overrides (and consider a docstring update) to make the new calling pattern clearer and avoid accidental misuse.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The draft model files are all mounted directly under `MNT_DIR/{file.name}`, which will collide with the main model mounts when filenames overlap (e.g., tokenizer/config/weights for 440B + 30B); consider mounting the draft model into its own subdirectory (e.g., `MNT_DIR/draft/{file.name}`) and updating consumers accordingly.
- Now that `extract_model_identifiers` takes an optional `model` argument in the base class, you may want to add an explicit type hint and default consistently across all overrides (and consider a docstring update) to make the new calling pattern clearer and avoid accidental misuse.


Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly refactors extract_model_identifiers to be reusable for draft models and fixes the mounting logic for draft models, especially for multi-file models. The changes are logical and address the described issues. However, I've found a critical syntax error in ramalama/transports/base.py due to incorrect indentation that will prevent the code from running. I've also provided a suggestion to improve code quality by using an enum for type checking. After addressing the critical issue, this PR should be good to merge.

@rhatdan
Member

rhatdan commented Dec 9, 2025

Thanks @xbezdick
Please sign your commits.

@olliewalsh @ieaves @engelmi PTAL

Collaborator

@ieaves ieaves left a comment


This looks really good. One of the AI reviews identified

The draft model files are all mounted directly under MNT_DIR/{file.name}, which will collide with the main model mounts when filenames overlap

This looks like a real issue and namespacing the mount directory to something like {MNT_DIR}/drafts/{file.name} seems like a reasonable approach.

I think you'll need to modify RamalamaModelContext.draft_model_path to reflect the namespaced mount path and it would be amazing to add a test for this scenario as well.
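The namespacing fix the reviewers suggest amounts to a small change in how the destination path is built. The "draft" subdirectory name is the reviewers' proposal, not something already in the codebase, and any consumer such as RamalamaModelContext.draft_model_path would need to use the same prefix.

```python
import os

MNT_DIR = "/mnt/models"  # assumed container-side mount root


def draft_mount_destination(file_name: str) -> str:
    """Place draft-model files under their own subdirectory so they can
    never shadow a main-model file with the same name (e.g. both models
    shipping a file called model-00001-of-00009.gguf)."""
    return os.path.join(MNT_DIR, "draft", file_name)
```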

@xbezdick
Author

@rhatdan will sign off when I refactor
@ieaves had a quick look and decided that there is a bit more to understand in the codebase, so it will take me a few days till I get to it.

ATM I'm testing the draft models and I'm not really happy with the results so far.

@xbezdick xbezdick force-pushed the main branch 3 times, most recently from 3d14201 to 55d3869 Compare December 16, 2025 15:00
When supplying a draft model, the old code attempted to bind-mount the
model with both src and destination resolving to the same location.
There was also no support for multi-file models, so using a 440B main
model with a 30B draft model was not possible.

To prevent file collisions, for example for mmproj and chat_template,
we use a namespaced directory. For now draft models won't work with
multimodal models, but we can still do speculative decoding by
disabling vision with --no-mmproj.

Signed-off-by: Lukas Bezdicka <lbezdick@redhat.com>