Issues: meta-llama/llama
Rotary position embedding causes different outputs under different tensor-parallel settings
model-usage
issues related to how models are used/loaded
#203 opened Mar 16, 2023 by marscrazy
How to load the multi-GPU version without torchrun
documentation
Improvements or additions to documentation
model-usage
issues related to how models are used/loaded
#84 opened Mar 3, 2023 by ruian0
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9)
compatibility
issues arising from specific hardware or system configs
documentation
Improvements or additions to documentation
#93 opened Mar 3, 2023 by hopto-dot
Unable to run example.py
compatibility
issues arising from specific hardware or system configs
#98 opened Mar 4, 2023 by jessebikman
Has anyone run it with a single RTX 3070 Ti 8 GB?
model-usage
issues related to how models are used/loaded
#145 opened Mar 7, 2023 by felipehime
Sentence/word embeddings from LLaMA
new-feature
New feature or request
#152 opened Mar 8, 2023 by kmukeshreddy
Two processes started, but the model loaded only on one device?
model-usage
issues related to how models are used/loaded
#172 opened Mar 10, 2023 by minlik
Running 65B on 2 PCs with 4 GPUs each, distributed inference failed
model-usage
issues related to how models are used/loaded
#173 opened Mar 10, 2023 by sophieyl820
Distributing LLaMA on multiple machines within the same network
community-discussion
#176 opened Mar 10, 2023 by fabawi
Plain PyTorch LLaMA implementation (no fairscale, use as many GPUs as you want)
feedback-blogpost
If the issue or fix has potential for broader announcement and blog post.
model-usage
issues related to how models are used/loaded
#179 opened Mar 11, 2023 by galatolofederico
How to recreate Winogrande results
documentation
Improvements or additions to documentation
#188 opened Mar 13, 2023 by DanielWe2
Cannot reproduce the paper's results with the 65B ckpt?
model-usage
issues related to how models are used/loaded
#193 opened Mar 13, 2023 by Ageliss
Able to load the 13B model on 2x 3090 24 GB, but not run inference
compatibility
issues arising from specific hardware or system configs
documentation
Improvements or additions to documentation
#61 opened Mar 2, 2023 by carlos-gemmell
Stuck when I run inference
needs-more-information
Issue is not fully clear to be acted upon
#194 opened Mar 13, 2023 by BeachWang
Error when running example.py
model-usage
issues related to how models are used/loaded
#205 opened Mar 16, 2023 by coderabbit214
It would be immensely useful to have an example that can be run in a notebook
documentation
Improvements or additions to documentation
#206 opened Mar 16, 2023 by vigna
New AI copyright laws say weights aren't covered?
miscellaneous
does not fit an existing category, useful to determine whether we need further categorization
question
General questions about using Llama2
#207 opened Mar 17, 2023 by elephantpanda
Will the evaluation code be released?
documentation
Improvements or additions to documentation
research-paper
Issues and questions relating to the published architecture or methodology
#211 opened Mar 17, 2023 by lshowway
Multi-GPU models give bizarre results on example.py
model-usage
issues related to how models are used/loaded
#212 opened Mar 18, 2023 by tbenst
BBH stats?
documentation
Improvements or additions to documentation
research-paper
Issues and questions relating to the published architecture or methodology
#214 opened Mar 18, 2023 by i-am-neo
My link for downloading the model files and tokenizer expired. How can I request it again?
model-access
issues and questions related to accessing the model, application form
#215 opened Mar 18, 2023 by raghav20
Unable to reproduce the HumanEval performance; results are very poor
performance
Runtime / memory / accuracy performance issues
research-paper
Issues and questions relating to the published architecture or methodology
#223 opened Mar 21, 2023 by sairitwik27
Guidance on releasing the fine-tuned LLaMA model weights
legal
potential legal/licensing inquiries
#226 opened Mar 22, 2023 by binmakeswell
Multi-GPU error
model-usage
issues related to how models are used/loaded
#228 opened Mar 22, 2023 by yuxuan2015