September 2021: Research Talk

Alexander Koller (Universität des Saarlandes): Compositional semantic parsing

Date: 16 September 2021, 4 pm -- Language: English

Abstract:
There are many technical approaches to mapping natural-language sentences to symbolic meaning representations. The currently dominant approach uses neural sequence-to-sequence (seq2seq) models, which map the sentence to a string version of the meaning representation. Seq2seq models work well for many NLP tasks, including tagging and parsing, and deliver excellent accuracy on semantic parsing as well.
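To make the seq2seq framing concrete, here is a tiny sketch (the sentence, the meaning representation, and the variable names are invented for illustration, not taken from the talk): the model is trained on pairs of a sentence and a linearized meaning representation, and it learns to emit the target as a plain token string.

    # Toy illustration of the seq2seq framing of semantic parsing (invented data).
    # The meaning representation is treated purely as a string of output tokens.
    training_pair = (
        "every child sleeps",                         # input: natural-language sentence
        "forall x . ( child ( x ) -> sleep ( x ) )",  # output: linearized meaning representation
    )

    sentence, target = training_pair
    source_tokens = sentence.split()   # what the encoder reads
    target_tokens = target.split()     # what the decoder is trained to produce, token by token
    print(source_tokens)
    print(target_tokens)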

However, it has recently been found that seq2seq models struggle with "compositional generalization": they have a hard time generalizing from the training examples to unseen test sentences that recombine familiar elements in new structures. I will show some new results that pinpoint this difficulty more precisely, and discuss what this means for how best to evaluate
semantic parsers.
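As a toy illustration of what such a generalization split can look like (invented examples, not the benchmarks discussed in the talk): every word and construction in the test item also occurs during training, but their particular combination does not.

    # Toy compositional-generalization split (invented data, for illustration only).
    train = [
        ("the cat sleeps",            "sleep(cat)"),
        ("the dog barks",             "bark(dog)"),
        ("the cat that barks sleeps", "AND(sleep(cat), bark(cat))"),
    ]
    # The test sentence recombines familiar pieces (the word "dog", the relative
    # clause construction) in a way never seen together during training; seq2seq
    # parsers often fail on exactly such recombinations.
    test = [
        ("the dog that sleeps barks", "AND(bark(dog), sleep(dog))"),
    ]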

I will then present our own research on compositional semantic parsing, which combines neural models with the Principle of Compositionality from theoretical semantics. Our semantic parser uses a neural supertagger to predict word meanings and a neural dependency parser to predict the compositional structure, and then evaluates this
dependency structure in a graph algebra to obtain the meaning representation. We achieve state-of-the-art parsing accuracy across a number of graphbanks, at a speed of up to 10k tokens/second. A demo is available at http://amparser.coli.uni-saarland.de:8080/
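As a rough sketch of how the pieces of such a pipeline fit together (hypothetical names and a heavily simplified stand-in for the graph algebra; the actual parser operates on typed graph fragments and the AM algebra):

    # Schematic sketch of the two-stage pipeline (hypothetical names, heavily simplified).

    # 1) A neural supertagger assigns each word a small graph fragment,
    #    possibly with open argument slots.
    supertags = {
        "writer": {"root": "writer", "edges": []},
        "sleeps": {"root": "sleep", "edges": [("ARG0", None)]},  # open subject slot
    }

    # 2) A neural dependency parser predicts which word fills which slot of which head.
    dependencies = [("sleeps", "ARG0", "writer")]  # (head, slot label, dependent)

    # 3) Evaluating that dependency structure combines the fragments into one graph,
    #    here by plugging the dependent's root node into the head's open slot.
    def evaluate(supertags, dependencies):
        graph = {word: {"root": frag["root"], "edges": list(frag["edges"])}
                 for word, frag in supertags.items()}
        for head, slot, dep in dependencies:
            graph[head]["edges"] = [
                (label, graph[dep]["root"] if label == slot and filler is None else filler)
                for label, filler in graph[head]["edges"]
            ]
        return graph

    print(evaluate(supertags, dependencies))
    # -> {'writer': {'root': 'writer', 'edges': []},
    #     'sleeps': {'root': 'sleep', 'edges': [('ARG0', 'writer')]}}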

Alexander Koller is a Professor of Computational Linguistics at the Department of Language Science and Technology at Saarland University. He received his PhD from Saarland University in 2004, was then a postdoc at Columbia University and the University of Edinburgh, and later a professor at the University of Potsdam. His research interests span a broad range of topics in computational linguistics, including semantics, parsing, and dialogue. He is currently particularly interested in bringing together principled linguistic modeling with the accuracy and robustness of neural methods.