We provide four types of neural language models trained on a large historical dataset of books in English, published between 1760 and 1900 and comprising ~5.1 billion tokens. The language model architectures include static word type embeddings (word2vec and fastText) and contextualized models (BERT and Flair). For each architecture, we trained a model instance on the whole dataset. Additionally, we trained separate instances of the type embeddings (i.e., word2vec and fastText) on text published before 1850, and four BERT instances on different time slices.
Each .zip file on Zenodo contains the model instances for one neural network architecture (i.e., bert, flair, fasttext, and word2vec). After unzipping the four .zip files, the directory structure is as follows:
histLM_dataset
├── README.md
├── bert
│   ├── bert_1760_1850
│   │   ├── config.json
│   │   ├── pytorch_model.bin
│   │   ├── special_tokens_map.json
│   │   ├── tokenizer_config.json
│   │   ├── training_args.bin
│   │   └── vocab.txt
│   ├── bert_1760_1900
│   │   └── ...
│   ├── bert_1850_1875
│   │   └── ...
│   ├── bert_1875_1890
│   │   └── ...
│   └── bert_1890_1900
│       └── ...
│
├── flair
│   └── flair_1760_1900
│       ├── best-lm.pt
│       ├── loss.txt
│       └── training.log
│
├── fasttext
│   ├── ft_1760_1850
│   │   ├── fasttext_words.model
│   │   ├── fasttext_words.model.trainables.syn1neg.npy
│   │   ├── fasttext_words.model.trainables.vectors_ngrams_lockf.npy
│   │   ├── fasttext_words.model.trainables.vectors_vocab_lockf.npy
│   │   ├── fasttext_words.model.wv.vectors.npy
│   │   ├── fasttext_words.model.wv.vectors_ngrams.npy
│   │   └── fasttext_words.model.wv.vectors_vocab.npy
│   └── ft_1760_1900
│       └── ...
│
└── word2vec
    ├── w2v_1760_1850
    │   ├── w2v_words.model
    │   ├── w2v_words.model.trainables.syn1neg.npy
    │   └── w2v_words.model.wv.vectors.npy
    └── w2v_1760_1900
        └── ...
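The word2vec and fastText instances can be loaded with `gensim` (version 3.8.3 is pinned in `requirements.txt`, see the Installation section below). The following is a minimal sketch, assuming the .zip files have been unzipped into `histLM_dataset` in the current working directory; the query words are only illustrative:

```python
# Load the historical type embeddings with gensim 3.8.3. Only the .model
# file is passed to load(); gensim picks up the accompanying .npy files
# stored next to it automatically.
from gensim.models import FastText, Word2Vec

w2v = Word2Vec.load("histLM_dataset/word2vec/w2v_1760_1900/w2v_words.model")
ft = FastText.load("histLM_dataset/fasttext/ft_1760_1900/fasttext_words.model")

# Nearest neighbours in the historical vector space.
print(w2v.wv.most_similar("machine", topn=5))

# fastText can also embed out-of-vocabulary spellings (e.g., OCR variants)
# via character n-grams.
print(ft.wv.most_similar("machyne", topn=5))
```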
In addition to Zenodo, the BERT models can also be downloaded from the Hugging Face Hub (https://huggingface.co/Livingwithmachines):
- bert_1760_1900 : https://huggingface.co/Livingwithmachines/bert_1760_1900
- bert_1760_1850: https://huggingface.co/Livingwithmachines/bert_1760_1850
- bert_1850_1875: https://huggingface.co/Livingwithmachines/bert_1850_1875
- bert_1875_1890: https://huggingface.co/Livingwithmachines/bert_1875_1890
- bert_1890_1900: https://huggingface.co/Livingwithmachines/bert_1890_1900
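As a minimal sketch, a model can be loaded directly from the Hub by its identifier using the `transformers` library (version 4.10.0 is pinned in `requirements.txt`); the masked sentence is only illustrative:

```python
# Load one of the historical BERT models from the Hugging Face Hub and
# query it with the fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Livingwithmachines/bert_1760_1900")

# [MASK] is BERT's mask token; the sentence is only illustrative.
for prediction in fill_mask("The [MASK] was powered by steam."):
    print(prediction["token_str"], round(prediction["score"], 4))
```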
After downloading the language models from Zenodo (refer to the Download section):

- Go to the `histLM` directory: `cd /path/to/histLM`
- Create a directory called `histLM_dataset`: `mkdir histLM_dataset`
- Move the unzipped directories to `histLM/histLM_dataset`. The directory structure should be:
histLM
├── LICENSE
├── README.md
├── histLM_dataset
│   ├── README.md
│   ├── bert
│   │   ├── bert_1760_1900
│   │   ├── bert_1760_1850
│   │   ├── bert_1850_1875
│   │   ├── bert_1875_1890
│   │   └── bert_1890_1900
│   ├── fasttext
│   │   ├── ft_1760_1850
│   │   └── ft_1760_1900
│   ├── flair
│   │   └── flair_1760_1900
│   └── word2vec
│       ├── w2v_1760_1850
│       └── w2v_1760_1900
├── notebooks
│   ├── BERT_model.ipynb
│   ├── Flair_model.ipynb
│   ├── fastText_model.ipynb
│   └── word2vec_model.ipynb
├── requirements.txt
└── tests
    └── test_import.py
- Finally, open one of the Jupyter notebooks stored in the `notebooks` directory:

  $ cd notebooks
  $ jupyter notebook
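For orientation, the sketch below shows the kind of model loading the notebooks walk through (the exact notebook contents may differ); the paths assume the code is run from the repository root:

```python
# Load the locally unzipped BERT and Flair models.
from flair.embeddings import FlairEmbeddings
from transformers import AutoModelForMaskedLM, AutoTokenizer

# BERT: point transformers at the unzipped model directory.
bert_path = "histLM_dataset/bert/bert_1760_1900"
tokenizer = AutoTokenizer.from_pretrained(bert_path)
model = AutoModelForMaskedLM.from_pretrained(bert_path)

# Flair: the contextual string embeddings are stored as best-lm.pt.
flair_lm = FlairEmbeddings("histLM_dataset/flair/flair_1760_1900/best-lm.pt")
```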
So far, the language models presented in this repository have been used in the following projects:
- When Time Makes Sense: A Historically-Aware Approach to Targeted Sense Disambiguation (Findings of ACL: ACL-IJCNLP 2021): repository and paper.
- Living Machines: A Study of Atypical Animacy (COLING 2020): repository and paper.
- Assessing the Impact of OCR Quality on Downstream NLP Tasks (ARTIDIGH 2020): repository and paper.
- The Living Machine: A Computational Approach to the Nineteenth-Century Language of Technology (Technology and Culture, 2023): repository and paper.
We strongly recommend installation via Anaconda:
- After installing Anaconda, create a new environment for `histLM` called `py38_histLM`: `conda create -n py38_histLM python=3.8`
- Activate the environment: `conda activate py38_histLM`
- Clone the `histLM` source code: `git clone https://github.com/Living-with-machines/histLM.git`
- Install dependencies: `pip install -r requirements.txt`

  Alternatively:

  pip install torch==1.9.0
  pip install transformers==4.10.0
  pip install flair==0.9
  pip install gensim==3.8.3
  pip install notebook==6.4.3
  pip install jupyter-client==7.0.2
  pip install jupyter-core==4.7.1
  pip install ipywidgets==7.6.4
- To allow the newly created `py38_histLM` environment to show up in the notebooks:

  python -m ipykernel install --user --name py38_histLM --display-name "Python (py38_histLM)"
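To verify the setup, a quick check along the following lines (similar in spirit to `tests/test_import.py`, whose exact contents are not shown here) can be run inside the environment:

```python
# Confirm that the pinned dependencies import correctly in py38_histLM.
import flair
import gensim
import torch
import transformers

print("torch:", torch.__version__)                # expected: 1.9.0
print("transformers:", transformers.__version__)  # expected: 4.10.0
print("flair:", flair.__version__)                # expected: 0.9
print("gensim:", gensim.__version__)              # expected: 3.8.3
```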
To cite histLM or any of the language models:
Hosseini, K., Beelen, K., Colavizza, G., & Coll Ardanuy, M. (2021). Neural Language Models for Nineteenth-Century English. Journal of Open Humanities Data, 7: 22, pp. 1–6. DOI: https://doi.org/10.5334/johd.48
Code and notebooks are released under the MIT License.

Models are released under the open license CC BY 4.0, available at https://creativecommons.org/licenses/by/4.0/legalcode.