
Commit 953530f
update readme with usage caveats and calls for research
This write-up was loosely inspired in part by Mitchell et al.’s work on [Model Cards for Model Reporting](https://arxiv.org/abs/1810.03993). Adding such model usage sections could be good practice in general for open source research projects with potentially broad applications.
1 parent ed0dedc

1 file changed (+18 −2 lines)
README.md

````diff
@@ -6,6 +6,22 @@ For now, we have only released a smaller (117M parameter) version of GPT-2.
 
 See more details in our [blog post](https://blog.openai.com/better-language-models/).
 
+## Usage
+
+This repository is meant to be a starting point for researchers and engineers to experiment with GPT-2-117M. While GPT-2-117M is less proficient than GPT-2-1.5B, it is useful for a wide range of research and applications that could also apply to larger models.
+
+### Some caveats
+
+- GPT-2-117M’s robustness and worst-case behaviors are not well understood. As with any machine-learned model, carefully evaluate GPT-2-117M for your use case, especially if used without fine-tuning or in safety-critical applications where reliability is important.
+- The dataset GPT-2-117M was trained on contains many texts with [biases](https://twitter.com/TomerUllman/status/1101485289720242177) and factual inaccuracies, and thus GPT-2-117M is likely to be biased and inaccurate as well.
+- To avoid having samples mistaken for human-written text, we recommend clearly labeling samples as synthetic before wide dissemination. Our models are often incoherent or inaccurate in subtle ways that take more than a quick read for a human to notice.
+
+### Work with us
+
+Please [let us know](mailto:languagequestions@openai.com) if you’re doing interesting research with or working on applications of GPT-2-117M! We’re especially interested in hearing from and potentially working with those who are studying:
+- Potential malicious use cases and defenses against them (e.g. the detectability of synthetic text)
+- The extent of problematic content (e.g. bias) being baked into the models and effective mitigations
+
 ## Installation
 
 Git clone this repository, and `cd` into the directory for remaining commands
````
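
The unchanged installation lines at the end of this hunk describe the clone step only in prose; concretely it amounts to something like the following sketch (the repository URL is an assumption, since the README line doesn’t spell it out):

```sh
# Assumed repository URL; substitute your fork if you work from one.
git clone https://github.com/openai/gpt-2.git
cd gpt-2
```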
````diff
@@ -53,7 +69,7 @@ and a valid install of [nvidia-docker 2.0](https://github.com/nvidia/nvidia-dock
 docker run --runtime=nvidia -it gpt-2 bash
 ```
 
-## Usage
+## Sampling scripts
 
 | WARNING: Samples are unfiltered and may contain offensive content. |
 | --- |
````
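
The new caveats above recommend clearly labeling samples as synthetic before wide dissemination. A minimal sketch of that practice applied to saved output from the sampling scripts, assuming a generated sample lives in `sample.txt` (the file names and banner wording are illustrative, not prescribed by the README):

```sh
# Prepend a clear synthetic-text disclosure to a saved sample before sharing it.
printf '[Synthetic text generated by GPT-2-117M]\n\n' | cat - sample.txt > labeled_sample.txt
```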
````diff
@@ -120,4 +136,4 @@ We are still considering release of the larger models.
 
 ## License
 
-MIT
+[MIT](./LICENSE)
````
