Closed
🐛 Describe the bug
ExecuTorch recently bumped its numpy requirement to numpy == 2.0 in pytorch/executorch@a7b5297.
This puts torchchat in a finicky spot, since the current requirements pin numpy below 2.0 because GGUF support requires < 2.0 (see blame for previous attempts):
torchchat/install/requirements.txt, lines 19 to 20 in fb65b8b
While not actively an issue today, this becomes a hard blocker as soon as an ExecuTorch pin bump is required.
Task: Bump the requirements version pin to numpy >= 2.0 (currently numpy > 1.17).
Considerations:
- 1/23: This may "just work" without additional effort; llama.cpp appears to have fixed this in December ([gguf-py] gguf_reader: numpy 2 newbyteorder fix, ggml-org/llama.cpp#9772).
- Is updating the GGUF support within torchchat's control (i.e. propagating dependencies)? If not, who/what needs coercing?
- Should GGUF support be iceboxed instead?
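For context on the llama.cpp fix referenced above: NumPy 2.0 removed `ndarray.newbyteorder()`, which `gguf_reader` used when loading GGUF files. A minimal sketch of the breakage and the standard migration (illustrative only, not torchchat or gguf-py code):

```python
import numpy as np

# GGUF data is stored little-endian; readers may need to byte-swap on load.
arr = np.array([1, 2, 3], dtype="<u4")  # explicit little-endian uint32

# Pre-NumPy-2.0 code did:  swapped = arr.newbyteorder()
# That method was removed in 2.0. The replacement keeps the same bytes but
# reinterprets them under the opposite byte order via a dtype view:
swapped = arr.view(arr.dtype.newbyteorder())

# Same underlying bytes (01 00 00 00), now read big-endian: 0x01000000
print(int(swapped[0]))  # 16777216
```

Since `arr.view(arr.dtype.newbyteorder())` also works on NumPy 1.x, a gguf dependency carrying this fix should be compatible with both sides of the `numpy >= 2.0` pin bump.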
Versions
N/A