In this tutorial, we explore the critical role of Retrieval-Augmented Language Models (RALMs) in enhancing large language models’ knowledge retrieval capabilities. Starting with an overview of their importance, we’ll trace the evolution of core retrieval techniques from 2014 to the present, covering milestones including word2vec, S-BERT, ICT, DPR, REALM, ColBERT, GAR, HyDE, and LLM Embeddings. Participants will gain a foundational understanding of how each method advanced retrieval and acquire practical insights into applying these techniques to improve model efficiency and precision in real-world applications.
Yao-Chung Fan
Sin-Syuan Wu
Yen-Hsiang Wang
Che-Wei Chang
We encourage participants to go through the following readings before attending the tutorial: