
Commit

update docs
JunnYu committed Oct 19, 2021
1 parent d9565fc commit 79169c2
Showing 2 changed files with 28 additions and 8 deletions.
2 changes: 1 addition & 1 deletion community/junnyu/ckiplab-bert-base-chinese-ner/README.md
@@ -12,7 +12,7 @@
import paddle
import paddle.nn.functional as F
from paddlenlp.transformers import BertForTokenClassification, BertTokenizer
path = "ckiplab-bert-base-chinese-ner"
path = "junnyu/ckiplab-bert-base-chinese-ner"
model = BertForTokenClassification.from_pretrained(path)
model.eval()
tokenizer = BertTokenizer.from_pretrained(path)
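For context, the snippet below sketches how the renamed community weight might be used end to end for token classification. It is a minimal sketch, not part of the commit: the example sentence is hypothetical, and it assumes the standard PaddleNLP tokenizer and model call signatures shown in the README.

import paddle
import paddle.nn.functional as F
from paddlenlp.transformers import BertForTokenClassification, BertTokenizer

path = "junnyu/ckiplab-bert-base-chinese-ner"
model = BertForTokenClassification.from_pretrained(path)
model.eval()
tokenizer = BertTokenizer.from_pretrained(path)

# Tokenize a sample sentence (hypothetical example) and wrap the ids into a batch of size 1.
text = "傅達仁今將執行安樂死"
inputs = tokenizer(text)
input_ids = paddle.to_tensor([inputs["input_ids"]])
token_type_ids = paddle.to_tensor([inputs["token_type_ids"]])

# Run the model without tracking gradients and pick the most likely NER tag id per token.
with paddle.no_grad():
    logits = model(input_ids, token_type_ids=token_type_ids)
probs = F.softmax(logits, axis=-1)
pred_label_ids = paddle.argmax(probs, axis=-1)
print(pred_label_ids.numpy())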
34 changes: 27 additions & 7 deletions docs/model_zoo/transformers.rst
@@ -9,8 +9,8 @@ PaddleNLP provides users with commonly used ``BERT``, ``ERNIE``, ``ALBERT``, ``RoBER
Summary of Transformer Pretrained Models
-----------------------------------------

The table below summarizes the pretrained models currently supported by PaddleNLP and their corresponding pretrained weights. We currently provide **21** network architectures and **91** sets of pretrained weights for users,
including **45** pretrained weights for Chinese language models.
The table below summarizes the pretrained models currently supported by PaddleNLP and their corresponding pretrained weights. We currently provide **21** network architectures and **96** sets of pretrained weights for users,
including **48** pretrained weights for Chinese language models.

+--------------------+-----------------------------------------+--------------+-----------------------------------------+
| Model | Pretrained Weight | Language | Details of the model |
@@ -124,6 +124,31 @@ Summary of Transformer Pretrained Models
| | | | and Traditional text using |
| | | | Whole-Word-Masking with extended data. |
| +-----------------------------------------+--------------+-----------------------------------------+
| |``junnyu/ckiplab-bert-base-chinese-ner`` | Chinese | 12-layer, 768-hidden, |
| | | | 12-heads, 102M parameters. |
| | | | Finetuned on NER task. |
| +-----------------------------------------+--------------+-----------------------------------------+
| |``junnyu/ckiplab-bert-base-chinese-pos`` | Chinese | 12-layer, 768-hidden, |
| | | | 12-heads, 102M parameters. |
| | | | Finetuned on POS task. |
| +-----------------------------------------+--------------+-----------------------------------------+
| |``junnyu/ckiplab-bert-base-chinese-ws`` | Chinese | 12-layer, 768-hidden, |
| | | | 12-heads, 102M parameters. |
| | | | Finetuned on WS task. |
| +-----------------------------------------+--------------+-----------------------------------------+
| |``junnyu/nlptown-bert-base-`` | Multilingual | 12-layer, 768-hidden, |
| |``multilingual-uncased-sentiment`` | | 12-heads, 167M parameters. |
| | | | Finetuned for sentiment analysis on |
| | | | product reviews in six languages: |
| | | | English, Dutch, German, French, |
| | | | Spanish and Italian. |
| +-----------------------------------------+--------------+-----------------------------------------+
| |``junnyu/tbs17-MathBERT`` | English | 12-layer, 768-hidden, |
| | | | 12-heads, 110M parameters. |
| | | | Trained on pre-k to graduate math |
| | | | language (English) using a masked |
| | | | language modeling (MLM) objective. |
| +-----------------------------------------+--------------+-----------------------------------------+
| |``macbert-base-chinese`` | Chinese | 12-layer, 768-hidden, |
| | | | 12-heads, 102M parameters. |
| | | | Trained with novel MLM as correction |
@@ -133,11 +158,6 @@
| | | | 16-heads, 326M parameters. |
| | | | Trained with novel MLM as correction |
| | | | pre-training task. |
| +-----------------------------------------+--------------+-----------------------------------------+
| |``simbert-base-chinese`` | Chinese | 12-layer, 768-hidden, |
| | | | 12-heads, 108M parameters. |
| | | | Trained on 22 million pairs of similar |
| | | | sentences crawled from Baidu Know. |
+--------------------+-----------------------------------------+--------------+-----------------------------------------+
|BigBird_ |``bigbird-base-uncased`` | English | 12-layer, 768-hidden, |
| | | | 12-heads, _M parameters. |
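Any weight name in the table can, in principle, be passed to the matching ``from_pretrained`` call. The sketch below loads one of the newly listed community weights; the choice of ``junnyu/tbs17-MathBERT`` and of ``BertModel``/``BertTokenizer`` as the loading classes, as well as the example sentence, are assumptions for illustration rather than part of this commit.

import paddle
from paddlenlp.transformers import BertModel, BertTokenizer

# Hypothetical example: load one of the community weights listed above by name.
name = "junnyu/tbs17-MathBERT"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertModel.from_pretrained(name)
model.eval()

inputs = tokenizer("The derivative of x^2 is 2x.")
input_ids = paddle.to_tensor([inputs["input_ids"]])

# BertModel returns the per-token sequence output and the pooled [CLS] output.
with paddle.no_grad():
    sequence_output, pooled_output = model(input_ids)
print(sequence_output.shape)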
