
fix(inference): add support for xpu devices in TorchInferencer#3339

Open
rajeshgangireddy wants to merge 4 commits intoopen-edge-platform:mainfrom
rajeshgangireddy:fixes/torch_inference_xpu

Conversation


@rajeshgangireddy rajeshgangireddy commented Feb 12, 2026

📝 Description

Although TorchInferencer is legacy, many users still rely on it, so this PR adds xpu device support to it.
Tested with cpu, cuda, and xpu.
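As a rough illustration of what accepting the new device name involves (a minimal sketch, not the PR's actual code; `SUPPORTED_DEVICES` and `validate_device` are hypothetical names):

```python
# Hypothetical sketch of device-name validation with "xpu" added.
# Names below are illustrative, not anomalib's actual identifiers.
SUPPORTED_DEVICES = {"auto", "cpu", "cuda", "xpu"}


def validate_device(device: str) -> str:
    """Accept plain names ("xpu") as well as indexed forms ("xpu:0")."""
    base = device.split(":", 1)[0].lower()
    if base not in SUPPORTED_DEVICES:
        msg = f"Unknown device {device!r}; expected one of {sorted(SUPPORTED_DEVICES)}"
        raise ValueError(msg)
    return device
```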

Select what type of change your PR is:

  • 🚀 New feature (non-breaking change which adds functionality)
  • 🐞 Bug fix (non-breaking change which fixes an issue)
  • 🔄 Refactor (non-breaking change which refactors the code base)
  • ⚡ Performance improvements
  • 🎨 Style changes (code style/formatting)
  • 🧪 Tests (adding/modifying tests)
  • 📚 Documentation update
  • 📦 Build system changes
  • 🚧 CI/CD configuration
  • 🔧 Chore (general maintenance)
  • 🔒 Security update
  • 💥 Breaking change (fix or feature that would cause existing functionality to not work as expected)

✅ Checklist

Before you submit your pull request, please make sure you have completed the following steps:

  • 📚 I have made the necessary updates to the documentation (if applicable).
  • 🧪 I have written tests that support my changes and prove that my fix is effective or my feature works (if applicable).
  • 🏷️ My PR title follows conventional commit format.

For more information about code review checklists, see the Code Review Checklist.

Signed-off-by: rajeshgangireddy <rajesh.gangireddy@intel.com>
Copilot AI review requested due to automatic review settings February 12, 2026 12:51

Copilot AI left a comment


Pull request overview

Adds xpu as a supported device option for the legacy TorchInferencer, including CLI exposure and auto-device selection logic.

Changes:

  • Added xpu to the inference CLI --device choices.
  • Updated TorchInferencer docs and device validation to accept xpu.
  • Enhanced auto device selection to prefer CUDA, then XPU (if available), else CPU.
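The auto-selection order described in the last bullet can be sketched as follows (a minimal stand-in; in the real inferencer the two flags would come from PyTorch's runtime availability checks such as `torch.cuda.is_available()` and the equivalent XPU check):

```python
def select_auto_device(cuda_available: bool, xpu_available: bool) -> str:
    """Preference order per this PR: CUDA first, then XPU, else CPU.

    The boolean flags are stand-ins for PyTorch's availability checks.
    """
    if cuda_available:
        return "cuda"
    if xpu_available:
        return "xpu"
    return "cpu"
```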

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

File Description
tools/inference/torch_inference.py Exposes xpu as a valid CLI device choice.
src/anomalib/deploy/inferencers/torch_inferencer.py Adds xpu to supported devices and implements auto-selection logic for it.


…erencer

Signed-off-by: rajeshgangireddy <rajesh.gangireddy@intel.com>
Copilot AI review requested due to automatic review settings February 12, 2026 13:01

Copilot AI left a comment


Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.



…e script to remove 'gpu' alias

Signed-off-by: rajeshgangireddy <rajesh.gangireddy@intel.com>
@rajeshgangireddy changed the title from "fix(inference): add support for 'xpu' device option in TorchInferencer" to "fix(inference): add support for xpu devices in TorchInferencer" on Feb 12, 2026
@rajeshgangireddy rajeshgangireddy marked this pull request as ready for review February 12, 2026 16:01