
The Gradient: Perspectives on AI

Daniel Bashir
Deeply researched, technical interviews with experts thinking about AI and technology. thegradientpub.substack.com

Available Episodes

Showing 5 of 145 episodes
  • Some Changes at The Gradient
    Hi everyone! If you’re a new subscriber or listener, welcome. If you’re not new, you’ve probably noticed that things have slowed down from us a bit recently. Hugh Zhang, Andrey Kurenkov, and I sat down to recap some of The Gradient’s history, where we are now, and how things will look going forward.
    To summarize and give some context: The Gradient has been around for about 6 years now. We began as an online magazine, and started producing our own newsletter and podcast about 4 years ago. With a team of volunteers — we take in a bit of money through Substack that we use for subscriptions to tools we need, and try to pay ourselves a bit — we’ve been able to keep this going for quite some time. Our team has less bandwidth than we’d like right now (and I’ll admit that at least some of us are running on fumes…), so we’ll be making a few changes:
    * Magazine: We’re going to be scaling down our editing work on the magazine. While we won’t be accepting pitches for unwritten drafts for now, if you have a full piece that you’d like to pitch to us, we’ll consider posting it. If you’ve reached out about writing and haven’t heard from us, we’re really sorry. We’ve tried a few different arrangements to manage the pipeline of articles we have, but it’s been difficult to make it work. We still want this to be a place to promote good work and writing from the ML community, so we intend to continue using this Substack for that purpose. If we have more editing bandwidth on our team in the future, we want to continue doing that work.
    * Newsletter: We’ll aim to continue the newsletter as before, but with a “Best from the Community” section highlighting posts. We’ll have a way for you to send articles you want to be featured, but for now you can reach us at our [email protected].
    * Podcast: I’ll be continuing this (at a slower pace), but will eventually transition it away from The Gradient given its expanded range. If you’re interested in following, it might be worth subscribing on another player like Apple Podcasts or Spotify, or using the RSS feed.
    * Sigmoid Social: We’ll keep this alive as long as there’s financial support for it.
    If you like what we do and/or want to help us out in any way, do reach out to [email protected]. We love hearing from you.
    Timestamps:
    * (0:00) Intro
    * (01:55) How The Gradient began
    * (03:23) Changes and announcements
    * (10:10) More Gradient history! On our involvement, favorite articles, and some plugs
    Some of our favorite articles! There are so many, so this is very much a non-exhaustive list:
    * NLP’s ImageNet moment has arrived
    * The State of Machine Learning Frameworks in 2019
    * Why transformative artificial intelligence is really, really hard to achieve
    * An Introduction to AI Story Generation
    * The Artificiality of Alignment (I didn’t mention this one in the episode, but it should be here)
    Places you can find us!
    Hugh:
    * Twitter
    * Personal site
    * Papers/things mentioned:
      * A Careful Examination of LLM Performance on Grade School Arithmetic (GSM1k)
      * Planning in Natural Language Improves LLM Search for Code Generation
      * Humanity’s Last Exam
    Andrey:
    * Twitter
    * Personal site
    * Last Week in AI Podcast
    Daniel:
    * Twitter
    * Substack blog
    * Personal site (under construction)
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    --------  
    34:25
  • Jacob Andreas: Language, Grounding, and World Models
    Episode 140
    I spoke with Professor Jacob Andreas about:
    * Language and the world
    * World models
    * How he’s developed as a scientist
    Enjoy!
    Jacob is an associate professor at MIT in the Department of Electrical Engineering and Computer Science as well as the Computer Science and Artificial Intelligence Laboratory. His research aims to understand the computational foundations of language learning, and to build intelligent systems that can learn from human guidance. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill scholar), and his B.S. from Columbia. He has received a Sloan fellowship, an NSF CAREER award, MIT’s Junior Bose and Kolokotrones teaching awards, and paper awards at ACL, ICML, and NAACL.
    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, and guest suggestions.
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (00:40) Jacob’s relationship with grounding fundamentalism
    * (05:21) Jacob’s reaction to LLMs
    * (11:24) Grounding language — is there a philosophical problem?
    * (15:54) Grounding and language modeling
    * (24:00) Analogies between humans and LMs
    * (30:46) Grounding language with points and paths in continuous spaces
    * (32:00) Neo-Davidsonian formal semantics
    * (36:27) Evolving assumptions about structure prediction
    * (40:14) Segmentation and event structure
    * (42:33) How much do word embeddings encode about syntax?
    * (43:10) Jacob’s process for studying scientific questions
    * (45:38) Experiments and hypotheses
    * (53:01) Calibrating assumptions as a researcher
    * (54:08) Flexibility in research
    * (56:09) Measuring Compositionality in Representation Learning
    * (56:50) Developing an independent research agenda and developing a lab culture
    * (1:03:25) Language Models as Agent Models
    * (1:04:30) Background
    * (1:08:33) Toy experiments and interpretability research
    * (1:13:30) Developing effective toy experiments
    * (1:15:25) Language Models, World Models, and Human Model-Building
    * (1:15:56) OthelloGPT’s bag of heuristics and multiple “world models”
    * (1:21:32) What is a world model?
    * (1:23:45) The Big Question — from meaning to world models
    * (1:28:21) From “meaning” to precise questions about LMs
    * (1:32:01) Mechanistic interpretability and reading tea leaves
    * (1:35:38) Language and the world
    * (1:38:07) Towards better language models
    * (1:43:45) Model editing
    * (1:45:50) On academia’s role in NLP research
    * (1:49:13) On good science
    * (1:52:36) Outro
    Links:
    * Jacob’s homepage and Twitter
    * Language Models, World Models, and Human Model-Building
    * Papers:
      * Semantic Parsing as Machine Translation (2013)
      * Grounding language with points and paths in continuous spaces (2014)
      * How much do word embeddings encode about syntax? (2014)
      * Translating neuralese (2017)
      * Analogs of linguistic structure in deep representations (2017)
      * Learning with latent language (2018)
      * Learning from Language (2018)
      * Measuring Compositionality in Representation Learning (2019)
      * Experience grounds language (2020)
      * Language Models as Agent Models (2022)
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    --------  
    1:52:43
  • Evan Ratliff: Our Future with Voice Agents
    Episode 139
    I spoke with Evan Ratliff about:
    * Shell Game, Evan’s new podcast, where he creates an AI voice clone of himself and sets it loose.
    * The end of the Longform Podcast and his thoughts on the state of journalism.
    Enjoy!
    Evan is an award-winning investigative journalist, bestselling author, podcast host, and entrepreneur. He’s the author of The Mastermind: A True Story of Murder, Empire, and a New Kind of Crime Lord; the writer and host of the hit podcasts Shell Game and Persona: The French Deception; and the cofounder of The Atavist Magazine, Pop-Up Magazine, and the Longform Podcast. As a writer, he’s a two-time National Magazine Award finalist. As an editor and producer, he’s a two-time Emmy nominee and National Magazine Award winner.
    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, and guest suggestions.
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (01:05) Evan’s ambitious and risky projects
    * (04:45) Wearing different personas as a journalist
    * (08:31) Boundaries and acceptability in using voice agents
    * (11:42) Impacts on other people
    * (13:12) “The kids these days” — how will new technologies impact younger people?
    * (17:12) Evan’s approach to children’s technology use
    * (20:05) Techno-solutionism and improvements in medicine, childcare
    * (24:15) Evan’s perspective on simulations of people
    * (27:05) On motivations for building tech startups
    * (30:42) Evan’s outlook for Shell Game’s impact and motivations for his work
    * (36:05) How Evan decided to write for a career
    * (40:02) How voice agents might impact our conversations
    * (43:52) Evan’s experience with Longform and podcasting
    * (47:15) Perspectives on doing good interviews
    * (52:11) Mimicking and inspiration, developing style
    * (57:15) Writers and their motivations, the state of longform journalism
    * (1:06:15) The internet and writing
    * (1:09:41) On the ending of Longform
    * (1:19:48) Outro
    Links:
    * Evan’s homepage and Twitter
    * Shell Game, Evan’s new podcast
    * Longform Podcast
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    --------  
    1:19:59
  • Meredith Ringel Morris: Generative AI's HCI Moment
    Episode 138
    I spoke with Meredith Morris about:
    * The intersection of AI and HCI and why we need more cross-pollination between AI and adjacent fields
    * Disability studies and AI
    * Generative ghosts and technological determinism
    * Developing a useful definition of AGI
    I didn’t get to record an intro for this episode since I’ve been sick. Enjoy!
    Meredith is Director for Human-AI Interaction Research for Google DeepMind and an Affiliate Professor in The Paul G. Allen School of Computer Science & Engineering and in The Information School at the University of Washington, where she participates in the dub research consortium. Her work spans the areas of human-computer interaction (HCI), human-centered AI, human-AI interaction, computer-supported cooperative work (CSCW), social computing, and accessibility. She has been recognized as an ACM Fellow and ACM SIGCHI Academy member for her contributions to HCI.
    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, and guest suggestions.
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Meredith’s influences and earlier work
    * (03:00) Distinctions between AI and HCI
    * (05:56) Maturity of fields and cross-disciplinary work
    * (09:03) Technology and ends
    * (10:37) Unique aspects of Meredith’s research direction
    * (12:55) Forms of knowledge production in interdisciplinary work
    * (14:08) Disability, Bias, and AI
    * (18:32) LaMPost and using LMs for writing
    * (20:12) Accessibility approaches for dyslexia
    * (22:15) Awareness of AI and perceptions of autonomy
    * (24:43) The software model of personhood
    * (28:07) Notions of intelligence, normative visions and disability studies
    * (32:41) Disability categories and learning systems
    * (37:24) Bringing more perspectives into CS research and re-defining what counts as CS research
    * (39:36) Training interdisciplinary researchers, blurring boundaries in academia and industry
    * (43:25) Generative Agents and public imagination
    * (45:13) The state of ML conferences, the need for more cross-pollination
    * (46:42) Prestige in conferences, the move towards more cross-disciplinary work
    * (48:52) Joon Park Appreciation
    * (49:51) Training interdisciplinary researchers
    * (53:20) Generative Ghosts and technological determinism
    * (57:06) Examples of generative ghosts and clones, relationships to agentic systems
    * (1:00:39) Reasons for wanting generative ghosts
    * (1:02:25) Questions of consent for generative clones and ghosts
    * (1:05:01) Labor involved in maintaining generative ghosts, psychological tolls
    * (1:06:25) Potential religious and spiritual significance of generative systems
    * (1:10:19) Anthropomorphization
    * (1:12:14) User experience and cognitive biases
    * (1:15:24) Levels of AGI
    * (1:16:13) Defining AGI
    * (1:23:20) World models and AGI
    * (1:26:16) Metacognitive abilities in AGI
    * (1:30:06) Towards Bidirectional Human-AI Alignment
    * (1:30:55) Pluralistic value alignment
    * (1:32:43) Meredith’s perspective on deploying AI systems
    * (1:36:09) Meredith’s advice for younger interdisciplinary researchers
    Links:
    * Meredith’s homepage, Twitter, and Google Scholar
    * Papers:
      * Mediating Group Dynamics through Tabletop Interface Design
      * SearchTogether: An Interface for Collaborative Web Search
      * AI and Accessibility: A Discussion of Ethical Considerations
      * Disability, Bias, and AI
      * LaMPost: Design and Evaluation of an AI-assisted Email Writing Prototype for Adults with Dyslexia
      * Generative Ghosts
      * Levels of AGI
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    --------  
    1:37:45
  • Davidad Dalrymple: Towards Provably Safe AI
    Episode 137
    I spoke with Davidad Dalrymple about:
    * His perspectives on AI risk
    * ARIA (the UK’s Advanced Research and Invention Agency) and its Safeguarded AI Programme
    Enjoy, and let me know what you think!
    Davidad is a Programme Director at ARIA. He was most recently a Research Fellow in technical AI safety at Oxford. He co-invented the top-40 cryptocurrency Filecoin, led an international neuroscience collaboration, and was a senior software engineer at Twitter and multiple startups.
    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, and guest suggestions.
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (00:36) Calibration and optimism about breakthroughs
    * (03:35) Calibration and AGI timelines, effects of AGI on humanity
    * (07:10) Davidad’s thoughts on the Orthogonality Thesis
    * (10:30) Understanding how our current direction relates to AGI and breakthroughs
    * (13:33) What Davidad thinks is needed for AGI
    * (17:00) Extracting knowledge
    * (19:01) Cyber-physical systems and modeling frameworks
    * (20:00) Continuities between Davidad’s earlier work and ARIA
    * (22:56) Path dependence in technology, race dynamics
    * (26:40) More on Davidad’s perspective on what might go wrong with AGI
    * (28:57) Vulnerable world, interconnectedness of computers and control
    * (34:52) Formal verification and world modeling, Open Agency Architecture
    * (35:25) The Semantic Sufficiency Hypothesis
    * (39:31) Challenges for modeling
    * (43:44) The Deontic Sufficiency Hypothesis and mathematical formalization
    * (49:25) Oversimplification and quantitative knowledge
    * (53:42) Collective deliberation in expressing values for AI
    * (55:56) ARIA’s Safeguarded AI Programme
    * (59:40) Anthropic’s ASL levels
    * (1:03:12) Guaranteed Safe AI
    * (1:03:38) AI risk and (in)accurate world models
    * (1:09:59) Levels of safety specifications for world models and verifiers — steps to achieve high safety
    * (1:12:00) Davidad’s portfolio research approach and funding at ARIA
    * (1:15:46) Earlier concerns about ARIA — Davidad’s perspective
    * (1:19:26) Where to find more information on ARIA and the Safeguarded AI Programme
    * (1:20:44) Outro
    Links:
    * Davidad’s Twitter
    * ARIA homepage
    * Safeguarded AI Programme
    * Papers:
      * Guaranteed Safe AI
      * Davidad’s Open Agency Architecture for Safe Transformative AI
      * Dioptics: a Common Generalization of Open Games and Gradient-Based Learners (2019)
      * Asynchronous Logic Automata (2008)
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    --------  
    1:20:50

v6.29.0 | © 2007-2024 radio.de GmbH
Generated: 12/2/2024 - 10:57:05 PM