Training the Parser
This page describes how to train the AM parser. It assumes that you have
- converted the necessary corpora to AM CoNLL and moved them to the right directory, and
- prepared the dev and test data. (This will become obsolete.)

Internal note: If you are working on the Saarland servers, you can get the preprocessed train and dev sets from these locations.

You also need a copy of am-tools.jar in the main directory; you can download it here.
- Pick your graphbank.
- To make sure that the parser finds all your files, check the contents of configs/data_paths.libsonnet, configs/eval_commands.libsonnet, configs/test_evaluators.libsonnet, and configs/validation_evaluators.libsonnet.
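For orientation, a data-paths file typically maps corpus names to file locations. The entry below is a purely hypothetical sketch; the actual keys and paths are defined by the repository's own configs/data_paths.libsonnet.

```jsonnet
// Hypothetical sketch only -- the real structure, keys, and paths
// are those in configs/data_paths.libsonnet.
{
    "AMR-2017": {
        "train": "data/AMR/2017/train/train.amconll",  // hypothetical path
        "dev": "data/AMR/2017/dev/dev.amconll"         // hypothetical path
    }
}
```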
- Pick a config file. You can find config files for all the formalisms in jsonnets/single/bert/. Adapt the config file to your needs (for instance, should the parser evaluate on the test set after training or not?).
- Train the model:

python -u train.py <config-file> -s <where to save the model> -f --file-friendly-logging -o '{"trainer" : {"cuda_device" : <your cuda device>}}' &> <where to log output>
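The string passed to -o must be valid JSON. Before launching a long run, you can sanity-check it with Python's built-in json.tool, which pretty-prints valid input and exits with an error on malformed input (the cuda_device value 0 below is just an example GPU id):

```shell
# Sanity-check the overrides string passed to -o: it must be valid JSON.
# json.tool pretty-prints valid input and exits non-zero on malformed input.
echo '{"trainer": {"cuda_device": 0}}' | python -m json.tool
```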
If you want to use comet, also add these options (before the &>):

--comet <your API key> --project <name of the project in comet>
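Putting it together, a complete invocation might look like the following. The config name, model directory, GPU id, project name, and log path are all hypothetical placeholders; substitute your own values.

```shell
# Hypothetical end-to-end example -- every path, the GPU id, and the
# comet project name are placeholders, not files shipped with the repo.
python -u train.py jsonnets/single/bert/AMR-2017.jsonnet \
    -s models/amr-bert \
    -f --file-friendly-logging \
    -o '{"trainer": {"cuda_device": 0}}' \
    --comet 'YOUR_API_KEY' --project am-parser-experiments \
    &> logs/amr-bert.log
```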
Internal note: If you are training the model on the Saarland servers, use the nvidia-smi command to get an overview of the available GPUs, and choose a free one as <your cuda device>.
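For example, the following standard nvidia-smi query lists each GPU with its current memory usage, so you can pick an index with free memory (this requires a machine with an NVIDIA driver installed):

```shell
# List GPUs with their current memory usage; the "index" column is the
# value to use as <your cuda device>. Requires the NVIDIA driver.
nvidia-smi --query-gpu=index,name,memory.used,memory.total --format=csv
```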