Issues: pytorch/torchchat



Issues list

Llama3.2 vision model AOTI integration  [labels: Compile / AOTI, enhancement, triaged]
#1497 opened Feb 21, 2025 by larryliu0820

What's the future plan for torchchat serving  [labels: enhancement, triaged]
#1491 opened Feb 8, 2025 by jenniew

What is the future plan of model expansion?  [labels: enhancement, Question, triaged]
#1380 opened Nov 15, 2024 by jenniew

Llama 3.2 11B Currently Only Supports Single Image  [labels: enhancement, Known Gaps, Llama 3.2- Multimodal, triaged]
#1281 opened Oct 8, 2024 by Jack-Khuu

Failures when using PyTorch local build vs. binaries  [labels: bug, enhancement]
#1134 opened Sep 11, 2024 by angelayi

should have the ability to output debug artifacts when exporting to a .pte file  [labels: enhancement, ExecuTorch]
#1101 opened Sep 3, 2024 by byjlw

Slimming down torchchat: Replace replace_attention_with_custom_sdpa_attention() with ET's implementation  [labels: enhancement, ExecuTorch, good first issue, triaged]
#1058 opened Aug 23, 2024 by Jack-Khuu

RFC: Make quantization a first class feature  [labels: enhancement]
#1032 opened Aug 15, 2024 by byjlw

[Feature request] Langchain Support - Chat Model  [labels: enhancement]
#1009 opened Aug 4, 2024 by raymon-io

Leverage the HF cache for models  [labels: actionable, enhancement]
#992 opened Aug 1, 2024 by byjlw

Improve the scope of Model Evaluation to AOTI and ET  [labels: enhancement, Evaluation/Benchmarking, Known Gaps, triaged]
#938 opened Jul 22, 2024 by Jack-Khuu

[FEATURE REQUEST] connect browser to native (Python-free) execution environment  [labels: enhancement, triaged]
#513 opened Apr 27, 2024 by mikekgfb

[Feature request] Make GGUF load lazy  [labels: enhancement]
#263 opened Apr 18, 2024 by metascroy

[Feature request] Support more GGUF tensor formats  [labels: enhancement]
#211 opened Apr 16, 2024 by metascroy

[Feature request] Support CPU+GPU mixed execution  [labels: enhancement]
#1 opened Mar 28, 2024 by malfet