Add FasterTokenizer on PPMiniLM #1542
Merged: LiuChiachi merged 19 commits into PaddlePaddle:develop from LiuChiachi:add-faster-ppminilm on Jan 11, 2022.
Changes from 14 commits (19 commits in total):
2ca439e
update ppminilm, support export faster tokenizer
LiuChiachi 51af868
supports pruning
LiuChiachi 67a9b82
solve conflicts
LiuChiachi 60b8842
update quantization code
LiuChiachi 565df2d
fix quant generator for text pair dataset
LiuChiachi 7407caa
Merge branch 'develop' into add-faster-ppminilm
LiuChiachi fc9097a
pad_to_max_seq_len defaults to False
LiuChiachi 630451f
Merge branch 'add-faster-ppminilm' of https://github.com/LiuChiachi/P…
LiuChiachi 398557a
update ppminilm readme data
LiuChiachi 8dfbf58
update readme
LiuChiachi 78d5454
update readme
LiuChiachi 99455df
remove save_inference_model_with_tokenizer arg
LiuChiachi aed086a
Merge branch 'develop' of https://github.com/PaddlePaddle/PaddleNLP i…
LiuChiachi 1a4329d
update modeling
LiuChiachi baa4c1f
update export
LiuChiachi c95e5d8
update modeling
LiuChiachi f5d0f9d
Merge branch 'develop' of https://github.com/PaddlePaddle/PaddleNLP i…
LiuChiachi f82f770
Merge branch 'add-faster-ppminilm' of https://github.com/LiuChiachi/P…
LiuChiachi f037168
Merge branch 'develop' into add-faster-ppminilm
LiuChiachi
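One of the commits above changes `pad_to_max_seq_len` to default to False. As a rough illustration of what that flag usually means for a tokenizer (this is a minimal sketch with a hypothetical `encode` helper, not the PaddleNLP implementation): with the new default, encoded sequences keep their natural length, and padding up to `max_seq_len` only happens when the flag is set.

```python
# Hypothetical sketch of pad_to_max_seq_len semantics (NOT the PaddleNLP API).
# When False (the new default), token id lists keep their natural length;
# when True, they are right-padded with a pad id up to max_seq_len.

def encode(ids, max_seq_len=8, pad_to_max_seq_len=False, pad_id=0):
    ids = ids[:max_seq_len]  # truncate overlong input
    if pad_to_max_seq_len:
        ids = ids + [pad_id] * (max_seq_len - len(ids))
    return ids

print(encode([101, 7, 9, 102]))                           # [101, 7, 9, 102]
print(encode([101, 7, 9, 102], pad_to_max_seq_len=True))  # [101, 7, 9, 102, 0, 0, 0, 0]
```

Defaulting to no padding keeps batches smaller when sequence lengths vary; padding is then applied explicitly (or by a batching collator) only where fixed shapes are required.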
examples/model_compression/pp-minilm/finetuning/export_model.py: 78 changes (0 additions, 78 deletions). This file was deleted.
Review comment: Why are two command-line arguments, save_inference_model and save_inference_model_with_tokenizer, needed?
Reply: save_inference_model_with_tokenizer has been removed; the existing use_faster_tokenizer argument now serves that purpose.
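The exchange above is about collapsing two export flags into one. A minimal sketch of that kind of consolidation, using plain argparse (the flag name use_faster_tokenizer comes from the PR; the parser itself is hypothetical and not PaddleNLP's actual export script):

```python
# Hedged sketch of the flag consolidation discussed above (NOT PaddleNLP code).
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Export an inference model.")
    # A single --use_faster_tokenizer flag now controls whether the exported
    # inference model bundles the (faster) tokenizer, replacing the separate
    # --save_inference_model_with_tokenizer switch that was removed.
    parser.add_argument("--use_faster_tokenizer", action="store_true",
                        help="Bundle the faster tokenizer into the exported model.")
    return parser

args = build_parser().parse_args(["--use_faster_tokenizer"])
print(args.use_faster_tokenizer)  # True
```

Keeping a single boolean avoids the ambiguity the reviewer pointed out: there is exactly one way to request a tokenizer-bundled export, so the two flags can never contradict each other.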