Commit 4df6b59

Update deepset/roberta-base-squad2 model card (#8522)
* Update README.md
* Update README.md
1 parent 0c9bae0 commit 4df6b59

File tree

1 file changed: +16 −7 lines
  • model_cards/deepset/roberta-base-squad2


model_cards/deepset/roberta-base-squad2/README.md

Lines changed: 16 additions & 7 deletions
@@ -5,7 +5,7 @@ datasets:
 
 # roberta-base for QA
 
-NOTE: This model has been superseded by deepset/roberta-base-squad2-v2. For an explanation of why, see [this github issue](https://github.com/deepset-ai/FARM/issues/552) from the FARM repository.
+NOTE: This is version 2 of the model. See [this github issue](https://github.com/deepset-ai/FARM/issues/552) from the FARM repository for an explanation of why we updated. If you'd like to use version 1, specify `revision="v1.0"` when loading the model in Transformers 3.5.
 
 ## Overview
 **Language model:** roberta-base
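The revision pinning mentioned in the added NOTE can be sketched with the standard Transformers `from_pretrained` API (a minimal sketch, not part of this commit; `revision` is the stock Hub parameter, and the helper name `load_v1` is our own):

```python
# Sketch: load version 1 of the model by pinning the "v1.0" git revision
# of the Hub repo, as the updated model card suggests.
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

MODEL_NAME = "deepset/roberta-base-squad2"

def load_v1():
    # revision= selects a tagged git revision of the model repository on the Hub;
    # omitting it loads the default (here, version 2) weights.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, revision="v1.0")
    model = AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME, revision="v1.0")
    return tokenizer, model
```

Calling `load_v1()` downloads the v1.0 weights; without the `revision` argument the same call returns the current (v2) model.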
@@ -19,10 +19,10 @@ NOTE: This model has been superseded by deepset/roberta-base-squad2-v2. For an e
 ## Hyperparameters
 
 ```
-batch_size = 50
-n_epochs = 3
+batch_size = 96
+n_epochs = 2
 base_LM_model = "roberta-base"
-max_seq_len = 384
+max_seq_len = 386
 learning_rate = 3e-5
 lr_schedule = LinearWarmup
 warmup_proportion = 0.2
@@ -32,9 +32,18 @@ max_query_length=64
 
 ## Performance
 Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
+
 ```
-"exact": 78.49743114629833,
-"f1": 81.73092721240889
+"exact": 79.97136359807968
+"f1": 83.00449234495325
+
+"total": 11873
+"HasAns_exact": 78.03643724696356
+"HasAns_f1": 84.11139298441825
+"HasAns_total": 5928
+"NoAns_exact": 81.90075693860386
+"NoAns_f1": 81.90075693860386
+"NoAns_total": 5945
 ```
 
 ## Usage
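As a quick sanity check on the added metrics, the overall "exact" score is the example-weighted average of the HasAns and NoAns exact scores (a small stdlib sketch; all values are copied from the diff above):

```python
# Sanity check: the overall "exact" score equals the average of the
# HasAns/NoAns exact scores, weighted by their example counts from the diff.
has_ans_exact, has_ans_total = 78.03643724696356, 5928
no_ans_exact, no_ans_total = 81.90075693860386, 5945

total = has_ans_total + no_ans_total  # 11873, matching "total" in the diff
overall_exact = (has_ans_exact * has_ans_total + no_ans_exact * no_ans_total) / total
print(round(overall_exact, 5))  # → 79.97136, matching "exact" in the diff
```

The same weighted-average relationship holds for the f1 scores, since the SQuAD 2.0 eval script computes the overall metrics over all 11873 dev examples.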
@@ -85,7 +94,7 @@ For doing QA at scale (i.e. many docs instead of single paragraph), you can load
 ```python
 reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
 # or
-reader = TransformersReader(model="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2")
+reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2")
 ```
 
 
