Insights: PaddlePaddle/PaddleNLP
Overview
- 0 Merged pull requests
- 1 Open pull request
- 4 Closed issues
- 0 New issues
There hasn’t been any commit activity on PaddlePaddle/PaddleNLP in the last week.
1 Pull request opened by 1 person
- [LLM] Add pipeline and flashmask for Qwen2Moe and Deepseek (#9827, opened Feb 1, 2025)
4 Issues closed by 1 person
- [Docs]: The inference demo loads model parameters twice, which makes no sense (#9482, closed Feb 5, 2025)
- [Question]: Error when running llama_npu_sft_N1C8.sh on Ascend 910B (#9469, closed Feb 4, 2025)
- [Question]: Error when using Taskflow for NER (#9413, closed Feb 3, 2025)
- [Question]: Questions about FlashMask (#9459, closed Feb 3, 2025)
26 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
- Support XPU for auto-paralllel LLaMa (#9796, commented on Feb 5, 2025 • 2 new comments)
- [FlashCheckpoint] support EMA (#9815, commented on Feb 5, 2025 • 0 new comments)
- Update paddlenlp version to 3.0.0b3 (#9572, commented on Feb 4, 2025 • 0 new comments)
- [DON'T NEED REVIEW] Mthreads llama 13 b 64 pp8 (#9557, commented on Feb 4, 2025 • 0 new comments)
- [XPU] fp16/bf16 for multinomial op (#9550, commented on Feb 3, 2025 • 0 new comments)
- My mora (#9535, commented on Feb 3, 2025 • 0 new comments)
- [CacheKV] Add abstract `Cache` class. (#9401, commented on Feb 5, 2025 • 0 new comments)
- [NPU] Add chatglmv3-6b (#9213, commented on Feb 3, 2025 • 0 new comments)
- [DON'T NEED REVIEW] Mthreads llama 13 b 128 pp16 (#9193, commented on Feb 4, 2025 • 0 new comments)
- [PPFleetX] add some config for hetero train (#8763, commented on Feb 4, 2025 • 0 new comments)
- [DO NOT Merge] Test dynamic auto parallel 3d sp acc (#7683, commented on Feb 5, 2025 • 0 new comments)
- [AutoParallel] Test 3d SP acc (#7677, commented on Feb 5, 2025 • 0 new comments)
- [WIP] Test for sequence parallel (#7657, commented on Feb 5, 2025 • 0 new comments)
- MP2-PP2 hack shared layer to non-sharded layer to Step Alignment (#7614, commented on Feb 5, 2025 • 0 new comments)
- add run_hybrid_parallel.sh (#7549, commented on Feb 5, 2025 • 0 new comments)
- fix bug when use_flas_attention is 0 (#7421, commented on Feb 5, 2025 • 0 new comments)
- Llama prim jit (#7345, commented on Feb 5, 2025 • 0 new comments)
- Llama run (#7342, commented on Feb 5, 2025 • 0 new comments)
- [LLM] Support llama precache input. (#6928, commented on Feb 5, 2025 • 0 new comments)
- [LLM] Support pre_caches input of llama (#6900, commented on Feb 5, 2025 • 0 new comments)
- [LLM] support bloom fine grained dybatch v1. (#6878, commented on Feb 4, 2025 • 0 new comments)
- Refactor training loop (#6098, commented on Feb 5, 2025 • 0 new comments)
- Add question generation example (#2944, commented on Feb 5, 2025 • 0 new comments)
- Add byt5 Model (#1742, commented on Feb 5, 2025 • 0 new comments)
- [Question]: ernie-3.0 model hits "data_loader cannot be None" when using paddleslim model compression as documented (#9497, commented on Feb 3, 2025 • 0 new comments)
- [Docs]: https://paddlenlp.readthedocs.io/zh/latest/model_zoo/taskflow.html (#9544, commented on Feb 1, 2025 • 0 new comments)