Current research
In this unit, you will see several examples of current research in natural language processing, with a focus on large language models. The unit features both lecture-style reviews of recent developments (emergent abilities of LLMs, alignment via reinforcement learning) and videos of research presentations, including work produced in the Natural Language Processing Group.
Video lectures
Reading
- Ouyang et al. (2022): Training Language Models to Follow Instructions with Human Feedback
- Hu et al. (2021): LoRA: Low-Rank Adaptation of Large Language Models
- Norlund et al. (2023): On the Generalization Ability of Retrieval-Enhanced Transformers
- Pfeiffer et al. (2022): Lifting the Curse of Multilinguality by Pre-training Modular Transformers
- Kunz et al. (2022): Human Ratings Do Not Reflect Downstream Utility