This lecture covers the Syntax-aware Graph-to-Graph Transformer architecture, which improves how graph relations are input into the self-attention mechanism of the Transformer model. This enables the model to condition effectively on syntactic dependency graphs when predicting both dependency-based and span-based semantic role labelling graphs.
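To make the idea concrete, below is a minimal sketch of self-attention conditioned on pairwise graph-relation embeddings, written in PyTorch. The class name, the label-embedding scheme, and the exact way relation embeddings enter the attention scores and values are illustrative assumptions in the general spirit of graph-to-graph Transformers, not the published model's exact formulation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphConditionedSelfAttention(nn.Module):
    """Single-head self-attention whose scores and values are conditioned on
    pairwise graph-relation labels (illustrative sketch, not the exact model)."""

    def __init__(self, d_model: int, num_relations: int):
        super().__init__()
        self.d_model = d_model
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # One embedding per dependency-relation label (including "no relation").
        self.rel_key = nn.Embedding(num_relations, d_model)
        self.rel_val = nn.Embedding(num_relations, d_model)

    def forward(self, x: torch.Tensor, relations: torch.Tensor) -> torch.Tensor:
        # x:         (batch, seq, d_model) token representations
        # relations: (batch, seq, seq) integer relation label between every
        #            token pair, taken from the input syntactic dependency graph
        q = self.q_proj(x)                      # (B, S, D)
        k = self.k_proj(x)                      # (B, S, D)
        v = self.v_proj(x)                      # (B, S, D)
        r_k = self.rel_key(relations)           # (B, S, S, D)
        r_v = self.rel_val(relations)           # (B, S, S, D)

        # Content-content scores plus a content-relation term, so attention
        # between tokens i and j can read their syntactic relation.
        scores = torch.matmul(q, k.transpose(-2, -1))
        scores = scores + torch.einsum("bid,bijd->bij", q, r_k)
        scores = scores / math.sqrt(self.d_model)
        attn = F.softmax(scores, dim=-1)        # (B, S, S)

        # Values are likewise augmented with relation embeddings.
        out = torch.matmul(attn, v)
        out = out + torch.einsum("bij,bijd->bid", attn, r_v)
        return out


# Tiny usage example with random inputs.
if __name__ == "__main__":
    layer = GraphConditionedSelfAttention(d_model=64, num_relations=40)
    tokens = torch.randn(2, 10, 64)
    rels = torch.randint(0, 40, (2, 10, 10))
    print(layer(tokens, rels).shape)  # torch.Size([2, 10, 64])
```

In this sketch the syntax graph is supplied as a dense matrix of relation labels; conditioning happens inside attention rather than through a separate graph encoder, which is the key property the lecture highlights.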