#24 Significantly advancing LLMs with RAG (Google's Gemini 2.0, Deep Research, NotebookLM)
Dev and Doc - Latest News
It's 2025: Dev and Doc cover the latest news, including Google's Deep Research and NotebookLM, DeepMind's Promptbreeder, and Anthropic's new RAG approach. We also go through what retrieval-augmented generation (RAG) is and how this technique is advancing LLM performance.
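For listeners who want the gist of RAG before pressing play, here is a minimal illustrative sketch (not from the episode): retrieve the documents most similar to the query, then stuff them into the prompt as context. It uses a toy bag-of-words similarity for self-containment; real systems use dense embeddings, a vector store, and the reranking tricks discussed in the episode.

```python
# Minimal RAG sketch: retrieve relevant docs, prepend them to the LLM prompt.
# Toy bag-of-words "embeddings" keep this dependency-free; production systems
# use dense embeddings (and often cross-encoder reranking) instead.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': term-frequency vector over lowercase word tokens."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str], k: int = 2) -> str:
    """Assemble the augmented prompt that would be sent to an LLM."""
    context = "\n".join(retrieve(query, docs, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "RAG grounds LLM answers in retrieved documents.",
    "Promptbreeder evolves prompts with an LLM.",
    "Cross-encoders rerank retrieved passages.",
]
print(build_prompt("What is RAG?", docs, k=1))
```

The episode's later segments (semantic chunking, cross-encoders, agentic RAG, contextual retrieval) are all refinements of the retrieve-then-prompt loop above.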
Hey! If you are enjoying our conversations, reach out and share your thoughts and journey with us. Don't forget to subscribe whilst you're here :)
Meet the Team
Doc - Dr. Joshua Au Yeung - LinkedIn
Dev - Zeljko Kraljevic - Twitter
Where to Follow Us
LinkedIn Newsletter
YouTube
Spotify
Apple Podcasts
Substack
Contact Us
For enquiries -
[email protected]
Credits
Editor - Dragan Kraljević - Instagram
Brand Design and Art Direction - Ana Grigorovici - Behance
Episode Timeline
00:00 Highlights
00:53 News - NotebookLM, OpenAI 12 days of Christmas
07:44 Change in the meta - post-training
11:34 Optimizing prompts with DeepMind Promptbreeder
13:20 Is OpenAI losing its lead against Google?
16:45 Deep research vs Perplexity
24:18 AMIE and oncology
26:00 Deep research results
30:20 RAG intro
33:14 Second pass RAG
36:20 RAG didn't take off
38:40 Wikichat
39:16 How do we improve on RAG?
41:11 Semantic/topic chunking, cross-encoders, agentic RAG
51:15 Google's Problem Decomposition
53:32 Anthropic's Contextual Retrieval Processing
56:07 Summary and wrap up
References
Cross Encoders
Wikichat
Google's Problem Decomposition
Anthropic's Contextual Retrieval
Google AMIE in Oncology
DeepMind's Promptbreeder