Seminar in Computational Linguistics
- Date: –14.30
- Place: https://uu-se.zoom.us/j/63619411634
- Speaker: Artur Kulmizev
- Contact person: Gongbo Tang
Transformer Guts: The Search for Syntax
Transformer-based language models are ubiquitous in contemporary NLP. Though the "comprehension" skills of such models are often overstated, their outputs are typically fluent and grammatical (if not semantically coherent). Moreover, the features learned by such models, when used as input to existing pipelines, often contribute to dramatic improvements in performance across the vast majority of NLP tasks. Indeed, models like BERT and GPT have been a central focus of the computational linguistics community in recent years, with researchers preoccupied with taking such models apart and investigating which components may "endow" them with syntax, and to what extent. Unfortunately, as is the case in much of NLP, many such studies employ English as their sole test bed, leaving their conclusions tenuous at best. In this talk, I will offer a brief introduction to Transformer-based language models and discuss prominent work in the NLP interpretability literature. I will then detail two recent multilingual interpretability studies, concerning probing and self-attention decoding, respectively. I will close with a general discussion of the interpretability toolkit within NLP and whether or not language models, in their current state, are capable of yielding novel linguistic insights.