jgroschwitz edited this page Jan 28, 2022 · 18 revisions

Welcome to the am-parser wiki!

This documentation is a work in progress. If anything is unclear, please open an issue and we will get back to you as quickly as possible.

A characteristic feature of the AM parser is that it learns to parse sentences into AM dependency trees, which then evaluate to graphs in the AM algebra. We have defined our own file format, AM-CoNLL, to store AM dependency trees; it is a mild extension of the well-known CoNLL format for storing dependency trees.
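As a rough illustration of the format family: a CoNLL-style file stores one token per line with tab-separated columns (token id, word form, head index, edge label, and so on), and AM-CoNLL extends this with additional columns for the graph constants and types of the AM algebra. The sketch below only shows how a generic CoNLL-style line could be read; the column layout and names are illustrative assumptions, not the actual AM-CoNLL specification.

```python
# Minimal sketch: parsing one generic CoNLL-style dependency line.
# The column layout here (id, form, head, edge label) is illustrative only;
# the real AM-CoNLL format has further columns for graph fragments and types.

def parse_conll_line(line):
    """Split one tab-separated token line into a small record."""
    fields = line.rstrip("\n").split("\t")
    return {
        "id": int(fields[0]),    # 1-based token position in the sentence
        "form": fields[1],       # the word itself
        "head": int(fields[2]),  # index of the head token, 0 = artificial root
        "deprel": fields[3],     # dependency edge label
    }

line = "2\tsleeps\t0\troot"
print(parse_conll_line(line))
```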

Internal note: There is UdS-specific documentation on how to set up the environment and where to find the files.

The AM parser pipeline

The AM parser pipeline consists of three steps:

  1. Preprocessing and decomposition: converting the graphbank to AM dependency trees
  2. Training the neural network to predict AM dependency trees
  3. Using the trained model to parse and evaluate the test data
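The data flow between these stages can be sketched as three function stubs. All of the names below are hypothetical and do not exist in the am-parser codebase; the actual entry points are the scripts documented elsewhere in this wiki.

```python
# Hypothetical sketch of the pipeline stages; none of these functions
# exist in am-parser itself, they only illustrate the data flow.

def decompose(graphbank):
    """Stage 1: turn each sentence-graph pair into an AM dependency tree."""
    return [("am-tree", graph) for graph in graphbank]

def train(am_trees):
    """Stage 2: fit a neural model that predicts AM dependency trees."""
    return {"model": "trained", "n_examples": len(am_trees)}

def parse_and_evaluate(model, test_sentences):
    """Stage 3: parse test data into AM dependency trees, which
    then evaluate to graphs in the AM algebra."""
    return [("graph", sentence) for sentence in test_sentences]

model = train(decompose(["g1", "g2"]))
print(parse_and_evaluate(model, ["a test sentence"]))
```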

Applying the AM parser to new graphbanks and new graph formalisms

If you want to use the AM parser for a graphbank based on one of the graph formalisms supported here, you can use the above pipeline.

If, however, you want to use the AM parser on a new graph formalism, we recommend our unsupervised training regime, which learns AM dependency trees from sentence-graph pairs. This makes it easy to use our compositional, interpretable parser on novel graphbanks! You can find a guide here.

Other ways of running the AM parser

You can use our pretrained models to directly parse sentences into graphs (this is also explained in the main README).

Alternatively, you can let the am-parser compute only the scores and use another parser, such as the A* parser, to produce trees in the AM-CoNLL format. In this case, you will need to evaluate the resulting AM-CoNLL files yourself.
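To make the scores-only mode concrete: the parser's output in this mode is per-edge scores, and an external decoder turns them into a tree. As a toy stand-in for a real decoder like the A* parser, the sketch below simply picks the highest-scoring head for each token; unlike a proper decoder, this greedy choice does not guarantee a well-formed tree. The score matrix and its layout are invented for illustration.

```python
# Toy decoder sketch: choose the best-scoring head for each dependent.
# A real decoder (e.g. the A* parser, or a maximum-spanning-tree algorithm)
# enforces well-formedness constraints that this greedy choice ignores.

def greedy_heads(scores):
    """scores[h][d] is the score of an edge from head h to dependent d.
    Index 0 is the artificial root. Returns a head index per dependent 1..n."""
    n = len(scores) - 1
    heads = {}
    for d in range(1, n + 1):
        # exclude self-loops: a token cannot be its own head
        candidates = [(scores[h][d], h) for h in range(n + 1) if h != d]
        heads[d] = max(candidates)[1]
    return heads

# 3 tokens plus the root; rows are heads, columns are dependents
scores = [
    [0.0, 0.9, 0.1, 0.2],   # root
    [0.0, 0.0, 0.8, 0.3],   # token 1
    [0.0, 0.2, 0.0, 0.7],   # token 2
    [0.0, 0.1, 0.3, 0.0],   # token 3
]
print(greedy_heads(scores))   # token 1 attaches to root, 2 to 1, 3 to 2
```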

Utilities

Third Party Packages

An overview of the third party packages we used can be found here.