The Gradient: Perspectives on AI

The Gradient
Interviews with various people who research, build, or use AI, including academics, engineers, artists, entrepreneurs, and more. thegradientpub.substack.com More

Available Episodes

5 of 76
  • Riley Goodside: The Art and Craft of Prompt Engineering
    In episode 75 of The Gradient Podcast, Daniel Bashir speaks to Riley Goodside. Riley is a Staff Prompt Engineer at Scale AI. Riley began posting GPT-3 prompt examples and screenshot demonstrations in 2022. He previously worked as a data scientist at OkCupid, Grindr, and CopyAI.
    Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (01:37) Riley’s journey to becoming the first Staff Prompt Engineer
    * (02:00) Data science background in the online dating industry
    * (02:15) Sabbatical + catching up on LLM progress
    * (04:00) AI Dungeon and first taste of GPT-3
    * (05:10) Developing on Codex, ideas about integrating Codex with Jupyter Notebooks, start of posting on Twitter
    * (08:30) “LLM ethnography”
    * (09:12) The history of prompt engineering: in-context learning, Reinforcement Learning from Human Feedback (RLHF)
    * (10:20) Models used to be harder to talk to
    * (10:45) The three eras
    * (10:45) 1 - Pre-trained LM era: simple next-word predictors
    * (12:54) 2 - Instruction tuning
    * (16:13) 3 - RLHF and overcoming instruction tuning’s limitations
    * (19:24) Prompting as subtractive sculpting, prompting and AI safety
    * (21:17) Riley on RLHF and safety
    * (24:55) Riley’s most interesting experiments and observations
    * (25:50) Mode collapse in RLHF models
    * (29:24) Prompting models with very long instructions
    * (33:13) Explorations with regular expressions, chain-of-thought prompting styles
    * (36:32) Theories of in-context learning and prompting, why certain prompts work well
    * (42:20) Riley’s advice for writing better prompts
    * (49:02) Debates over prompt engineering as a career, relevance of prompt engineers
    * (58:55) Outro
    Links:
    * Riley’s Twitter and LinkedIn
    * Talk: LLM Prompt Engineering and RLHF: History and Techniques
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    01/06/2023
    59:42
  • Talia Ringer: Formal Verification and Deep Learning
    In episode 74 of The Gradient Podcast, Daniel Bashir speaks to Professor Talia Ringer.
    Professor Ringer is an Assistant Professor with the Programming Languages, Formal Methods, and Software Engineering group at the University of Illinois at Urbana-Champaign. Their research leverages proof engineering to allow programmers to more easily build formally verified software systems.
    Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Daniel’s long annoying intro
    * (02:15) Origin story
    * (04:30) Why / when formal verification is important
    * (06:40) Concerns about ChatGPT/AutoGPT et al. failures, systems for accountability
    * (08:20) Difficulties in making formal verification accessible
    * (11:45) Tactics and interactive theorem provers, interface issues
    * (13:25) How Prof. Ringer’s research first crossed paths with ML
    * (16:00) Concrete problems in proof automation
    * (16:15) How ML can help people verifying software systems
    * (20:05) Using LLMs for understanding / reasoning about code
    * (23:05) Going from tests / formal properties to code
    * (31:30) Is deep learning the right paradigm for dealing with relations for theorem proving?
    * (36:50) Architectural innovations, neuro-symbolic systems
    * (40:00) Hazy definitions in ML
    * (41:50) Baldur: Proof Generation & Repair with LLMs
    * (45:55) In-context learning’s effectiveness for LLM-based theorem proving
    * (47:12) LLMs without fine-tuning for proofs
    * (48:45) Something ~ surprising ~ about Baldur results (maybe clickbait or maybe not)
    * (49:32) Asking models to construct proofs with restrictions, translating proofs to formal proofs
    * (52:07) Methods of proofs and relative difficulties
    * (57:45) Verifying / providing formal guarantees on ML systems
    * (1:01:15) Verifying input-output behavior and basic considerations, nature of guarantees
    * (1:05:20) Certified/verified systems vs. certifying/verifying systems: getting LLMs to spit out proofs along with code
    * (1:07:15) Interpretability and how much model internals matter, RLHF, mechanistic interpretability
    * (1:13:50) Levels of verification for deploying ML systems, HCI problems
    * (1:17:30) People (Talia) actually use Bard
    * (1:20:00) Dual use and “correct behavior”
    * (1:24:30) Good uses of jailbreaking
    * (1:26:30) Talia’s views on evil AI / AI safety concerns
    * (1:32:00) Issues with talking about “intelligence,” assumptions about what “general intelligence” means
    * (1:34:20) Difficulty in having grounded conversations about capabilities, transparency
    * (1:39:20) Great quotation to steal for your next thinkpiece + intelligence as socially defined
    * (1:42:45) Exciting research directions
    * (1:44:48) Outro
    Links:
    * Talia’s Twitter and homepage
    * Research
    * Concrete Problems in Proof Automation
    * Baldur: Whole-Proof Generation and Repair with LLMs
    * Research ideas
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    25/05/2023
    1:45:35
  • Brigham Hyde: AI for Clinical Decision-Making
    In episode 73 of The Gradient Podcast, Daniel Bashir speaks to Brigham Hyde.
    Brigham is Co-Founder and CEO of Atropos Health. Prior to Atropos, he served as President of Data and Analytics at Eversana, a life sciences commercialization service provider. He led the investment in Concert AI in the oncology real-world data space at Symphony AI. Brigham has also held research faculty positions at Tufts University and the MIT Media Lab.
    Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (01:55) Brigham’s background
    * (06:00) Current challenges in healthcare
    * (12:33) Interpretability and delivering positive patient outcomes
    * (17:10) How Atropos surfaces relevant data for patient interventions, on personalized observational research studies
    * (22:10) Quality and quantity of data for patient interventions
    * (27:25) Challenges and opportunities for generative AI in healthcare
    * (35:17) Database augmentation for generative models
    * (36:25) Future work for Atropos
    * (39:15) Future directions for AI + healthcare
    * (40:56) Outro
    Links:
    * Atropos Health homepage
    * Brigham’s Twitter and LinkedIn
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    18/05/2023
    41:43
  • Scott Aaronson: Against AI Doomerism
    In episode 72 of The Gradient Podcast, Daniel Bashir speaks to Professor Scott Aaronson. Scott is the Schlumberger Centennial Chair of Computer Science at the University of Texas at Austin and director of its Quantum Information Center. His research interests focus on the capabilities and limits of quantum computers and computational complexity theory more broadly. He has recently been on leave to work at OpenAI, where he is researching theoretical foundations of AI safety.
    Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (01:45) Scott’s background
    * (02:50) Starting grad school in AI, transitioning to quantum computing and the AI / quantum computing intersection
    * (05:30) Where quantum computers can give us exponential speedups, simulation overhead, Grover’s algorithm
    * (10:50) Overselling of quantum computing applied to AI, Scott’s analysis of quantum machine learning
    * (18:45) ML problems that involve quantum mechanics and Scott’s work
    * (21:50) Scott’s recent work at OpenAI
    * (22:30) Why Scott was skeptical of AI alignment work early on
    * (26:30) Unexpected improvements in modern AI and Scott’s belief update
    * (32:30) Preliminary analysis of DALL-E 2 (Marcus & Davis)
    * (34:15) Watermarking GPT outputs
    * (41:00) Motivations for watermarking and language model detection
    * (45:00) Ways around watermarking
    * (46:40) Other aspects of Scott’s experience with OpenAI, theoretical problems
    * (49:10) Thoughts on definitions for humanistic concepts in AI
    * (58:45) Scott’s “reform AI alignment” stance and Eliezer Yudkowsky’s recent comments (+ Daniel pronounces Eliezer wrong), orthogonality thesis, cases for stopping scaling
    * (1:08:45) Outro
    Links:
    * Scott’s blog
    * AI-related work
    * Quantum Machine Learning Algorithms: Read the Fine Print
    * A very preliminary analysis of DALL-E 2 (w/ Marcus and Davis)
    * New AI classifier for indicating AI-written text and Watermarking GPT Outputs
    * Writing
    * Should GPT exist?
    * AI Safety Lecture
    * Why I’m not terrified of AI
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    11/05/2023
    1:09:32
  • Ted Underwood: Machine Learning and the Literary Imagination
    In episode 71 of The Gradient Podcast, Daniel Bashir speaks to Ted Underwood.
    Ted is a professor in the School of Information Sciences with an appointment in the Department of English at the University of Illinois at Urbana-Champaign. Trained in English literary history, he turned his research focus to applying machine learning to large digital collections. His work explores literary patterns that become visible across long timelines when we consider many works at once; often, his work involves correcting and enriching digital collections to make them more amenable to interesting literary research.
    Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (01:42) Ted’s background / origin story
    * (04:35) Context in interpreting statistics, “you need a model,” the need for data about human responses to literature and how that manifested in Ted’s work
    * (07:25) The recognition that we can model literary prestige/genre because of ML
    * (08:30) Distant reading and the import of statistics over large digital libraries
    * (12:00) Literary prestige
    * (12:45) How predictable is fiction? Scales of predictability in texts
    * (13:55) Degrees of autocorrelation in biography and fiction and the structure of narrative, how LMs might offer more sophisticated analysis
    * (15:15) Braided suspense / suspense at different scales of a story
    * (17:05) The Literary Uses of High-Dimensional Space: how “big data” came to impact the humanities, skepticism from humanists and responses, what you can do with word count
    * (20:50) Why we could use more time to digest statistical ML: how acceleration in AI advances might impact pedagogy
    * (22:30) The value in explicit models
    * (23:30) Poetic “revolutions” and literary prestige
    * (25:53) Distant vs. close reading in poetry: follow-up work for “The Longue Durée”
    * (28:20) Sophistication of NLP and approaching the human experience
    * (29:20) What about poetry renders it prestigious?
    * (32:20) Individualism/liberalism and evolution of poetic taste
    * (33:20) Why there is resistance to quantitative approaches to literature
    * (34:00) Fiction in other languages
    * (37:33) The Life Cycles of Genres
    * (38:00) The concept of “genre”
    * (41:00) Inflationary/deflationary views on natural kinds and genre
    * (44:20) Genre as a social and not a linguistic phenomenon
    * (46:10) Will causal models impact the humanities?
    * (48:30) (Ir)reducibility of cultural influences on authors
    * (50:00) Machine Learning and Human Perspective
    * (50:20) Fluent and perspectival categories: Miriam Posner on “the radical, unrealized potential of digital humanities”
    * (52:52) How ML’s vices can become virtues for humanists
    * (56:05) Can We Map Culture? and The Historical Significance of Textual Distances
    * (56:50) Are cultures and other social phenomena related to one another in a way we can “map”?
    * (59:00) Is cultural distance Euclidean?
    * (59:45) The KL divergence’s use for humanists
    * (1:03:32) We don’t already understand the broad outlines of literary history
    * (1:06:55) Science Fiction Hasn’t Prepared us to Imagine Machine Learning
    * (1:08:45) The latent space of language and what intelligence could mean
    * (1:09:30) LLMs as models of culture
    * (1:10:00) What it is to be a human in “the age of AI” and Ezra Klein’s framing
    * (1:12:45) Mapping the Latent Spaces of Culture
    * (1:13:10) Ted on Stochastic Parrots
    * (1:15:55) The risk of AI enabling hermetically sealed cultures
    * (1:17:55) “Postcards from an unmapped latent space,” more on AI systems’ limitations as virtues
    * (1:20:40) Obligatory GPT-4 section
    * (1:21:00) Using GPT-4 to estimate passage of time in fiction
    * (1:23:39) Is deep learning more interpretable than statistical NLP?
    * (1:25:17) The “self-reports” of language models: should we trust them?
    * (1:26:50) University dependence on tech giants, open-source models
    * (1:31:55) Reclaiming Ground for the Humanities
    * (1:32:25) What scientists, alone, can contribute to the humanities
    * (1:34:45) On the future of the humanities
    * (1:35:55) How computing can enable humanists as humanists
    * (1:37:05) Human self-understanding as a collaborative project
    * (1:39:30) Is anything ineffable? On what AI systems can “grasp”
    * (1:43:12) Outro
    Links:
    * Ted’s blog and Twitter
    * Research
    * The literary uses of high-dimensional space
    * The Longue Durée of literary prestige
    * The Historical Significance of Textual Distances
    * Machine Learning and Human Perspective
    * The life cycles of genres
    * Can We Map Culture?
    * Cohort Succession Explains Most Change in Literary Culture
    * Other Writing
    * Reclaiming Ground for the Humanities
    * We don’t already understand the broad outlines of literary history
    * Science fiction hasn’t prepared us to imagine machine learning
    * How predictable is fiction?
    * Mapping the latent spaces of culture
    * Using GPT-4 to measure the passage of time in fiction
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    04/05/2023
    1:43:59


