Unit 5: Current research
In this unit, you will see several examples of current research in natural language processing, with a focus on large language models. The unit features both lecture-style reviews of recent developments and videos from research presentations.
| Section | Title | Video | Slides | Quiz |
|---|---|---|---|---|
| 5.1 | How might LLMs store facts? | video | none | quiz |
| 5.2 | Efficient fine-tuning | video | paper | quiz |
| 5.3 | Retrieval-augmented generation (until 17:50) | video | none | quiz |
| 5.4 | Multilinguality and modular transformers | video | paper | quiz |
| 5.5 | Data contamination | video | paper | quiz |
| 5.6 | LLMs as stochastic parrots (until 15:00) | video | none | quiz |
Lab
In this lab, you will implement LoRA, one of the best-known methods for parameter-efficient fine-tuning of large language models. Along the way, you will gain experience with Hugging Face Transformers, a state-of-the-art library for training and deploying language models, as well as with several related libraries. The core idea behind LoRA is sketched below.
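To give you a feel for what the lab involves, here is a minimal sketch of the LoRA idea: the pretrained weight matrix is frozen, and only a low-rank update (two small factor matrices, scaled by a constant) is trained. This is an illustrative assumption-laden example, not the lab's required interface; the class name, hyperparameters, and layer sizes below are placeholders.

```python
# Minimal LoRA sketch (illustrative only): freeze the pretrained weight W
# and train a low-rank update B @ A, scaled by alpha / r.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, base_linear: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base_linear
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        in_features = base_linear.in_features
        out_features = base_linear.out_features
        # A is initialized with small random values, B with zeros, so the
        # wrapped layer initially behaves exactly like the frozen base layer.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # W x + (alpha / r) * B A x
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


# Example: wrap a single projection layer and check that only the
# low-rank factors are trainable.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
trainable = [name for name, p in layer.named_parameters() if p.requires_grad]
print(trainable)  # ['lora_A', 'lora_B']
```

In the lab itself, you will work with Hugging Face Transformers models rather than a single linear layer, but the principle is the same: only a small number of added parameters are updated during fine-tuning.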