Issues: mistralai/mistral-inference
[BUG: ImportError: cannot import name 'Transformer' from 'mistral_inference.model' (/usr/local/lib/python3.10/dist-packages/mistral_inference/model.py)
bug · #206 · opened Jul 26, 2024 by rabeeqasem

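For #206 (and the related #202 further down), the usual cause is a mismatch between the installed mistral-inference release and the import path used: the current README imports the class from mistral_inference.transformer, while older code and tutorials import it from mistral_inference.model. Below is a minimal sketch of the currently documented usage, assuming a recent release and a local model folder that already contains the weights and a tokenizer.model.v3 file (the path is a placeholder):

```python
# Import paths as documented in the current mistral-inference README
# (older releases used `from mistral_inference.model import Transformer`).
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

# "/path/to/model" is a placeholder for a local folder holding the downloaded
# weights and tokenizer (the tokenizer file name can differ between models).
tokenizer = MistralTokenizer.from_file("/path/to/model/tokenizer.model.v3")
model = Transformer.from_folder("/path/to/model")

request = ChatCompletionRequest(messages=[UserMessage(content="Hello, how are you?")])
tokens = tokenizer.encode_chat_completion(request).tokens

out_tokens, _ = generate(
    [tokens],
    model,
    max_tokens=64,
    temperature=0.0,
    eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id,
)
print(tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0]))
```
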
[BUG: Could not find consolidated.00.pth or consolidated.safetensors in the Mistral model path, but mistralai/Mistral-Large-Instruct-2407 certainly does not contain them
bug · #205 · opened Jul 26, 2024 by ShadowTeamCN

[BUG: AssertionError: Mamba is not installed. Please install it using pip install mamba-ssm.
bug · #192 · opened Jul 17, 2024 by matbee-eth

Has any thought been given to using LoRA to increase the number of experts (100x) with minimal memory?
#95 · opened Dec 22, 2023 by sixChar

[BUG: ModuleNotFoundError: No module named 'mistral_inference.transformer'
bug · #202 · opened Jul 23, 2024 by yafangwang9

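#202 is the inverse of #206 above: the newer import path used against an older installed package. Where a script has to tolerate either layout, a small fallback import is one way to cope; this is an illustrative sketch, not an API of the library:

```python
# Illustrative fallback: pick whichever module layout the installed
# mistral-inference release actually provides.
from importlib.metadata import version

print("mistral_inference version:", version("mistral_inference"))

try:
    # Newer releases (per the current README) expose the class here.
    from mistral_inference.transformer import Transformer
except ModuleNotFoundError:
    # Older releases shipped it in mistral_inference.model instead.
    from mistral_inference.model import Transformer
```
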
[BUG] Transformer.from_folder() does not load the model on multiple GPUs
bug · #197 · opened Jul 19, 2024 by Cerrix

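For #197: a plain call to Transformer.from_folder() loads the model onto a single device; the repository's own demo shards a model across GPUs with pipeline parallelism, driven by torchrun and a num_pipeline_ranks argument. A rough sketch of that pattern follows; the argument names mirror the demo script in recent releases and should be treated as assumptions rather than a guaranteed API:

```python
# Sketch of pipeline-parallel loading across two GPUs, launched with:
#   torchrun --nproc-per-node 2 load_sharded.py
# num_pipeline_ranks follows the pattern used in the repo's demo script.
from pathlib import Path

import torch
from mistral_inference.transformer import Transformer


def load_sharded(model_path: str, num_pipeline_ranks: int = 2) -> Transformer:
    # Each torchrun worker owns one pipeline stage on its own GPU.
    if num_pipeline_ranks > 1:
        torch.distributed.init_process_group(backend="nccl")
        torch.cuda.set_device(torch.distributed.get_rank())
    return Transformer.from_folder(
        Path(model_path),
        max_batch_size=1,
        num_pipeline_ranks=num_pipeline_ranks,
    )


if __name__ == "__main__":
    model = load_sharded("/path/to/model")  # placeholder path
```
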
Missing model card / data sheet with info on pretraining and RLHF datasets
#9 · opened Sep 28, 2023 by mdingemanse

[BUG: pip install mistral_inference: ModuleNotFoundError: No module named 'torch'
bug · #228 · opened Oct 4, 2024 by chrisstankevitz

[Mistral 7B mistral-7b-instruct-v0.1.Q8_0.gguf] Wrong text "quoted" while presented as real
#131 · opened Mar 3, 2024 by SINAPSA-IC