Training the Parser

This page describes how to train the AM parser. It assumes that you already have preprocessed train and dev sets for your graphbank.

Internal note: If you are working on the Saarland servers you can get the preprocessed train and dev sets from these locations.

Further, you need a copy of am-tools.jar in the main directory. You can download it here.

Actual Training

  • Pick your graphbank.

  • In order to make sure that the parser finds all your files, check the contents of configs/data_paths.libsonnet, configs/eval_commands.libsonnet, configs/test_evaluators.libsonnet and configs/validation_evaluators.libsonnet.

  • Pick a config file. You can find config files for all the formalisms in jsonnets/single/bert/. Adapt the config file to your needs (for instance, whether it should evaluate on the test set after training).

  • Train the model:

python -u train.py <config-file> -s <where to save the model>  -f --file-friendly-logging  -o ' {"trainer" : {"cuda_device" :  <your cuda device>  } }' &> <where to log output>
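
For illustration, a fully filled-in invocation might look like the following; the config file name, model directory, GPU index and log path here are just example placeholders, not fixed names from this repository:

python -u train.py jsonnets/single/bert/DM.jsonnet -s models/my-dm-model -f --file-friendly-logging -o ' {"trainer" : {"cuda_device" : 0 } }' &> logs/my-dm-model.log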

If you want to use comet, also add these options (before the &>):

--comet <'your API key here'> --project <name of project in comet>
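
For example, the full command with comet enabled might look roughly like this (the API key, project name and other filled-in values are placeholders; note the comet options come before the &>):

python -u train.py jsonnets/single/bert/DM.jsonnet -s models/my-dm-model -f --file-friendly-logging -o ' {"trainer" : {"cuda_device" : 0 } }' --comet 'your API key here' --project my-am-parser-project &> logs/my-dm-model.log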

Internal note: If you are training the model on the Saarland servers, use the nvidia-smi command to get an overview of the available GPUs. Choose one of them as <your cuda device>.
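
For example (the device index 1 below is purely illustrative; pick whichever GPU nvidia-smi reports as idle):

nvidia-smi                     # shows memory usage and running processes per GPU
# if, say, GPU 1 is free, use 1 as <your cuda device> in the -o override above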

Output of the training process
