Unit 4: Structured prediction

Published February 10, 2025

Structured prediction is an umbrella term for tasks whose outputs are structured objects rather than individual values. This unit covers two such tasks: sequence labelling, where an input sequence is mapped to an output sequence of the same length, with one label per input element; and dependency parsing, where a sentence is mapped to a representation of its syntactic structure in the form of a dependency tree. The lectures introduce several technical approaches and concrete algorithms for these tasks.
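To make the two kinds of structured output concrete, here is a small sketch (the sentence, tags, and head positions are illustrative examples, not taken from the lectures) showing how both are commonly represented in code:

```python
# Illustrative example: two structured outputs for one input sentence.
sentence = ["the", "dog", "chased", "a", "cat"]

# Sequence labelling: one output label per input token (here, POS tags).
tags = ["DET", "NOUN", "VERB", "DET", "NOUN"]
assert len(tags) == len(sentence)

# Dependency parsing: one head position per token, with 0 denoting the
# artificial root. Positions are 1-based into the sentence, so "chased"
# (position 3) is the root, "dog" and "cat" attach to "chased", and each
# determiner attaches to its noun.
heads = [2, 3, 0, 5, 3]
assert len(heads) == len(sentence)
```

In both cases the output is a full structure over the whole sentence rather than a single value, which is what distinguishes these tasks from ordinary classification.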

| Section | Title | Video | Slides | Quiz |
|---------|-------|-------|--------|------|
| 4.1 | Introduction to sequence labelling | video | slides | quiz |
| 4.2 | Approaches to sequence labelling | video | slides | quiz |
| 4.3 | The Viterbi algorithm | video | slides | quiz |
| 4.4 | Introduction to dependency parsing | video | slides | quiz |
| 4.5 | The arc-standard algorithm | video | slides | quiz |
| 4.6 | Neural architectures for dependency parsing | video | slides | quiz |

Lab

In this lab, you will implement a simplified version of the dependency parser presented by Glavaš and Vulić (2021). This parser consists of a transformer encoder followed by a bi-affine layer that computes arc scores for all pairs of words. These scores are then used as logits in a classifier that predicts the position of the head of each word. As the encoder, you will use the uncased DistilBERT base model from the Transformers library.
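The core computation can be sketched as follows. This is a minimal NumPy illustration of a bi-affine arc scorer, not the lab's actual code: the names, dimensions, and random vectors are all assumptions, and the real lab uses DistilBERT representations and learned parameters.

```python
import numpy as np

# Hedged sketch: a bi-affine layer scores every (head, dependent) pair.
rng = np.random.default_rng(0)
n, d = 6, 8                      # 6 word vectors (incl. a root), dimension 8

H = rng.normal(size=(n, d))      # encoder output, used as head representations
D = rng.normal(size=(n, d))      # encoder output, used as dependent representations
W = rng.normal(size=(d, d))      # bi-affine weight matrix
b = rng.normal(size=(d,))        # bias vector for the head side

# scores[i, j] = H[i] @ W @ D[j] + b @ H[i]: a logit for how plausible
# word i is as the head of word j.
scores = H @ W @ D.T + (H @ b)[:, None]   # shape (n, n)

# Head prediction: for each word j, pick the highest-scoring head i.
predicted_heads = scores.argmax(axis=0)   # shape (n,)
```

The key point is that a single matrix product yields scores for all head–dependent pairs at once, which is then treated as a batch of classification problems, one per word.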

Link to the lab

Advanced lab

In the advanced lab for this unit, you will extend the parser from the basic lab to support labelled parsing. This means that your parser should predict not only that there is a syntactic relation between two words, but also the type of this relation – for example, subject or object. You will validate your implementation by comparing the performance of your parser to the results reported by Glavaš and Vulić (2021).
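One common way to add label prediction is a second scoring function over each (head, dependent) pair. The sketch below is a hedged NumPy illustration under assumed names and a toy label set, not the lab's required design: given a head position for each word, a per-label bilinear form scores every relation type.

```python
import numpy as np

# Hedged sketch of labelled parsing (names, dimensions, and the label set
# are illustrative): classify the relation type for each (head, dependent)
# pair with one bilinear weight matrix per label.
rng = np.random.default_rng(1)
n, d = 6, 8
labels = ["nsubj", "obj", "det", "root"]   # toy label set, not the full inventory

X = rng.normal(size=(n, d))                # encoder output, one vector per word
heads = rng.integers(0, n, size=n)         # head position for each word

# label_scores[k, j] = X[heads[j]] @ U[k] @ X[j]: a logit for label k
# on the arc from heads[j] to word j.
U = rng.normal(size=(len(labels), d, d))
label_scores = np.einsum("jd,kde,je->kj", X[heads], U, X)   # (len(labels), n)

predicted_labels = [labels[k] for k in label_scores.argmax(axis=0)]
```

A design choice worth noting: scoring labels separately from arcs keeps the label classifier small, since it only has to decide among relation types for pairs the arc scorer has already linked.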

Link to the advanced lab