torch._check_is_size is not being recognized by CoreML partitioner #9213

Open
@Ray-Luo

Description

🐛 Describe the bug

As titled: torch._check_is_size(pos_x) is not recognized by the CoreML partitioner, so export fails when a model uses it to hint a data-dependent value. The workaround is to assert the range directly with torch._check(pos_x >= 0).

Versions

PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 15.3.1 (arm64)
GCC version: Could not collect
Clang version: 11.1.0
CMake version: version 3.31.4
Libc version: N/A

Python version: 3.10.0 (default, Mar 3 2022, 03:54:28) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-15.3.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Apple M1 Max

Versions of relevant libraries:
[pip3] executorch==0.5.0a0+6df7779
[pip3] executorchcoreml==0.0.1
[pip3] numpy==2.0.0
[pip3] torch==2.6.0
[pip3] torchao==0.8.0+gitebc43034
[pip3] torchaudio==2.6.0
[pip3] torchsr==1.0.4
[pip3] torchvision==0.21.0
[conda] executorch 0.5.0a0+6df7779 pypi_0 pypi
[conda] executorchcoreml 0.0.1 pypi_0 pypi
[conda] numpy 2.0.0 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchao 0.8.0+gitebc43034 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi

cc @kimishpatel @YifanShenSZ @cymbalrush @metascroy

Metadata

Labels

enhancement: Not as big of a feature, but technically not a bug. Should be easy to fix
module: coreml: Issues related to Apple's Core ML delegation and code under backends/apple/coreml/
triaged: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
