
Brain Inspired

Paul Middlebrooks

Available Episodes

Showing 5 of 99 episodes
  • BI 212 John Beggs: Why Brains Seek the Edge of Chaos
    Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released. To explore more neuroscience news and perspectives, visit thetransmitter.org.
    You may have heard of the critical brain hypothesis. It goes something like this: brain activity operates near a dynamical regime called criticality, poised at the sweet spot between too much order and too much chaos. This is a good thing because systems at criticality are optimized for computing: they maximize information transfer, they maximize the dynamic range over which they operate, and they have a handful of other desirable properties. John Beggs has been studying criticality in brains for over 20 years now. His 2003 paper with Dietmar Plenz is one of the first, if not the first, to show networks of neurons operating near criticality, and it gets cited in almost every criticality paper I read. John runs the Beggs Lab at Indiana University Bloomington, and a few years ago he literally wrote the book on criticality, called The Cortex and the Critical Point: Understanding the Power of Emergence, which I highly recommend as an excellent introduction to the topic, and he continues to work on criticality these days. On this episode we discuss what criticality is, why and how brains might strive for it, the past and present of how to measure it and why there isn't a consensus on how to measure it, and what it means that criticality appears in so many natural systems outside of brains, yet we want to say it's a special property of brains. These days John spends plenty of effort defending the criticality hypothesis from critics, so we discuss that, and much more. (A toy branching-process sketch illustrating the critical point appears below, after the episode list.)
    Beggs Lab. Book: The Cortex and the Critical Point: Understanding the Power of Emergence. Related papers: Addressing skepticism of the critical brain hypothesis. Papers John mentioned: Tetzlaff et al 2010: Self-organized criticality in developing neuronal networks. Haldeman and Beggs 2005: Critical Branching Captures Activity in Living Neural Networks and Maximizes the Number of Metastable States. Bertschinger et al 2004: At the edge of chaos: Real-time computations and self-organized criticality in recurrent neural networks. Legenstein and Maass 2007: Edge of chaos and prediction of computational performance for neural circuit models. Kinouchi and Copelli 2006: Optimal dynamical range of excitable networks at criticality. Chialvo 2010: Emergent complex neural dynamics. Mora and Bialek 2011: Are Biological Systems Poised at Criticality? Read the transcript.
    0:00 - Intro 4:28 - What is criticality? 10:19 - Why is criticality special in brains? 15:34 - Measuring criticality 24:28 - Dynamic range and criticality 28:28 - Criticisms of criticality 31:43 - Current state of critical brain hypothesis 33:34 - Causality and criticality 36:39 - Criticality as a homeostatic set point 38:49 - Is criticality necessary for life? 50:15 - Shooting for criticality far from thermodynamic equilibrium 52:45 - Quasi- and near-criticality 55:03 - Cortex vs. whole brain 58:50 - Structural criticality through development 1:01:09 - Criticality in AI 1:03:56 - Most pressing criticisms of criticality 1:10:08 - Gradients of criticality 1:22:30 - Homeostasis vs. criticality 1:29:57 - Minds and criticality
    --------  
    1:33:34
  • BI 211 COGITATE: Testing Theories of Consciousness
    Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released. To explore more neuroscience news and perspectives, visit thetransmitter.org.
    Rony Hirschhorn, Alex Lepauvre, and Oscar Ferrante are three of the many scientists who comprise the COGITATE group. COGITATE is an adversarial collaboration project to test theories of consciousness in humans, in this case testing the integrated information theory of consciousness and the global neuronal workspace theory of consciousness. I said it's an adversarial collaboration, so what does that mean? It's adversarial in that two theories of consciousness are being pitted against each other. It's a collaboration in that the proponents of the two theories had to agree on what experiments could be performed that could possibly falsify the claims of either theory. The group has just published the results of the first round of experiments in a paper titled Adversarial testing of global neuronal workspace and integrated information theories of consciousness, and this is what Rony, Alex, and Oscar discuss with me today. The short summary is that they used a simple task and measured brain activity with three different methods: EEG, MEG, and fMRI, and made predictions about where in the brain correlates of consciousness should be, how that activity should be maintained over time, and what kind of functional connectivity patterns should be present between brain regions. The take-home is a mixed bag, with neither theory being fully falsified, but with a ton of data and results for the world to ponder and build on, to hopefully continue to refine and develop theoretical accounts of how brains and consciousness are related. So we discuss the project itself, many of the challenges they faced, their experiences and reflections on working on it and on coming together as a team, and the nature of working on an adversarial collaboration when so much is at stake for the proponents of each theory and, as you heard last episode with Dean Buonomano, when one of the theories, IIT, is itself surrounded by a bit of controversy regarding whether it should even be considered a scientific theory.
    COGITATE. Oscar Ferrante: @ferrante_oscar. Rony Hirschhorn: @RonyHirsch. Alex Lepauvre: @LepauvreAlex. Paper: Adversarial testing of global neuronal workspace and integrated information theories of consciousness. BI 210 Dean Buonomano: Consciousness, Time, and Organotypic Dynamics. Read the transcript.
    0:00 - Intro 4:00 - COGITATE 17:42 - How the experiments were developed 32:37 - How data was collected and analyzed 41:24 - Prediction 1: Where is consciousness? 47:51 - The experimental task 1:00:14 - Prediction 2: Duration of consciousness-related activity 1:18:37 - Prediction 3: Inter-areal communication 1:28:28 - Big picture of the results 1:44:25 - Moving forward
    --------  
    1:59:40
  • BI 210 Dean Buonomano: Consciousness, Time, and Organotypic Dynamics
    Support the show to get full episodes, full archive, and join the Discord community.
    Dean Buonomano runs the Buonomano lab at UCLA. Dean was a guest on Brain Inspired way back on episode 18, where we talked about his book Your Brain is a Time Machine: The Neuroscience and Physics of Time, which details much of his thought and research about how centrally important time is for virtually everything we do, different conceptions of time in philosophy, and how brains might tell time. That was almost 7 years ago, and his work on time and dynamics in computational neuroscience continues. One thing we discuss today, later in the episode, is his recent work using organotypic brain slices to test the idea that cortical circuits implement timing as a computational primitive: it's something they do by their very nature. Organotypic brain slices sit between what I think of as traditional brain slices and full-on organoids. Brain slices are extracted from an organism and maintained in a brain-like fluid while you perform experiments on them. Organoids start with a small number of cells that you culture, letting them divide, grow, and specialize until you have a mass of cells that has grown into an organ of some sort, which you then perform experiments on. Organotypic brain slices are extracted from an organism, like brain slices, but then also cultured for some time to let them settle back into some sort of near-homeostatic state, as close as you can get to what they're like in the intact brain, and then you perform experiments on them. Dean and his colleagues use optogenetics to train their brain slices to predict the timing of stimuli, and they find that the populations of neurons do indeed learn to predict the timing of the stimuli, and that they exhibit replay of those sequences similar to the replay seen in brain areas like the hippocampus. But we begin our conversation talking about Dean's recent piece in The Transmitter, which I'll point to in the show notes, called The brain holds no exclusive rights on how to create intelligence. There he argues that modern AI is likely to continue its recent successes despite the ongoing divergence between AI and neuroscience. This is in contrast to what many folks in NeuroAI believe. We then talk about his recent chapter with physicist Carlo Rovelli, titled Bridging the neuroscience and physics of time, in which Dean and Carlo examine where neuroscience and physics disagree and where they agree about the nature of time. Finally, we discuss Dean's thoughts on the integrated information theory of consciousness, or IIT. IIT has seen a little controversy lately. Over 100 scientists, a large part of that group calling themselves IIT-Concerned, have expressed concern that IIT is actually unscientific. This has caused backlash and anti-backlash, and all sorts of fun expression from many interested people. Dean explains his own views about why he thinks IIT is not in the purview of science, namely that it doesn't play well with the existing ontology of physics. What I just said doesn't do justice to his arguments, which he articulates much better.
    Buonomano lab. Twitter: @DeanBuono. Related papers: The brain holds no exclusive rights on how to create intelligence. What makes a theory of consciousness unscientific? Ex vivo cortical circuits learn to predict and spontaneously replay temporal patterns. Bridging the neuroscience and physics of time. BI 204 David Robbe: Your Brain Doesn’t Measure Time. Read the transcript.
    0:00 - Intro 8:49 - AI doesn't need biology 17:52 - Time in physics and in neuroscience 34:04 - Integrated information theory 1:01:34 - Global neuronal workspace theory 1:07:46 - Organotypic slices and predictive processing 1:26:07 - Do brains actually measure time? (David Robbe)
    --------  
    1:50:33
  • BI 209 Aran Nayebi: The NeuroAI Turing Test
    Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released. To explore more neuroscience news and perspectives, visit thetransmitter.org.
    Aran Nayebi is an Assistant Professor at Carnegie Mellon University in the Machine Learning Department. He was there in the early days of using convolutional neural networks to explain how our brains perform object recognition, and since then he's had a whirlwind trajectory through different AI architectures and algorithms and how they relate to biological architectures and algorithms, so we touch on some of what he has studied in that regard. He also recently started his own lab at CMU, and he plans to integrate much of what he has learned to eventually develop autonomous agents that perform the tasks we want them to perform, in ways at least similar to how our brains perform them. So we discuss his ongoing plans to reverse-engineer our intelligence to build useful cognitive architectures of that sort. We also discuss Aran's suggestion that, at least in the NeuroAI world, the Turing test needs to be updated to include some measure of the similarity of the internal representations used to achieve the various tasks the models perform. By internal representations, as we discuss, he means the population-level activity in the neural networks, not the mental representations philosophy of mind often refers to, or other philosophical notions of the term representation. (A sketch of one way to quantify that kind of population-level similarity appears below, after the episode list.)
    Aran's Website. Twitter: @ayan_nayebi. Related papers: Brain-model evaluations need the NeuroAI Turing Test. Barriers and pathways to human-AI alignment: a game-theoretic approach.
    0:00 - Intro 5:24 - Background 20:46 - Building embodied agents 33:00 - Adaptability 49:25 - Marr's levels 54:12 - Sensorimotor loop and intrinsic goals 1:00:05 - NeuroAI Turing Test 1:18:18 - Representations 1:28:18 - How to know what to measure 1:32:56 - AI safety
    --------  
    1:43:59
  • BI 208 Gabriele Scheler: From Verbal Thought to Neuron Computation
    Support the show to get full episodes, full archive, and join the Discord community.
    Gabriele Scheler co-founded the Carl Correns Foundation for Mathematical Biology. Carl Correns, one of the early pioneers of genetics, was her great-grandfather. Gabriele is a computational neuroscientist whose goal is to build models of cellular computation, and much of her focus is on neurons. We discuss her theoretical work building a new kind of single neuron model. She, like Dmitri Chklovskii a few episodes ago, believes we've been stuck with essentially the same family of neuron models for a long time, despite minor variations on those models. The model Gabriele is working on, for example, respects not only the computations going on externally, via spiking, which has been the only game in town forever, but also the computations going on within the cell itself. Gabriele is in line with previous guests like Randy Gallistel, David Glanzman, and Hessam Akhlaghpour, who argue that we need to pay attention to how neurons compute various things internally and how that affects our cognition. Gabriele also believes the new neuron model she's developing will improve AI, essentially by drastically simplifying the models by providing them with smarter neurons. We also discuss the importance of neuromodulation, her interest in understanding how we think via our internal verbal monologue, her lifelong interest in language in general, what she thinks about LLMs, why she decided to start her own foundation to fund her science, and what that experience has been like so far. Gabriele has been working on these topics for many years, and as you'll hear in a moment, she was there when computational neuroscience was just starting to pop up in a few places, when it was a nascent field, unlike its current ubiquity in neuroscience.
    Gabriele's website. Carl Correns Foundation for Mathematical Biology. Neuro-AI spinoff. Related papers: Sketch of a novel approach to a neural model. Localist neural plasticity identified by mutual information. Related episodes: BI 199 Hessam Akhlaghpour: Natural Universal Computation. BI 172 David Glanzman: Memory All The Way Down. BI 126 Randy Gallistel: Where Is the Engram?
    0:00 - Intro 4:41 - Gabriele's early interests in verbal thinking 14:14 - What is thinking? 24:04 - Starting one's own foundation 58:18 - Building a new single neuron model 1:19:25 - The right level of abstraction 1:25:00 - How a new neuron would change AI
    --------  
    1:35:08
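
Two of the episodes above lean on ideas that are easy to see in a few lines of code. First, the critical point John Beggs describes (BI 212): criticality is often illustrated with a toy branching process, the kind of model behind the neuronal-avalanche work mentioned in those show notes. The Python sketch below is an illustration only, not code from the Beggs Lab: each active unit triggers a Poisson-distributed number of units at the next time step, with mean equal to the branching ratio sigma. Subcritical activity (sigma < 1) dies out quickly, supercritical activity (sigma > 1) runs away, and at the critical point (sigma = 1) avalanche sizes become heavy-tailed.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_sizes(sigma, n_avalanches=2000, max_steps=10_000, size_cap=1_000_000):
    """Simulate avalanches in a simple branching process.

    Each avalanche starts with one active unit; every active unit then
    triggers Poisson(sigma) units at the next time step, so the expected
    number of descendants per unit is the branching ratio sigma.
    Returns the total number of activations in each avalanche.
    """
    sizes = []
    for _ in range(n_avalanches):
        active, total = 1, 1
        for _ in range(max_steps):
            active = rng.poisson(sigma * active)
            total += active
            if active == 0 or total > size_cap:  # avalanche died out, or ran away (supercritical)
                break
        sizes.append(total)
    return np.array(sizes)

for sigma in (0.8, 1.0, 1.2):  # subcritical, critical, supercritical
    s = avalanche_sizes(sigma)
    print(f"sigma={sigma}: median size={np.median(s):.0f}, mean size={s.mean():.1f}, max size={s.max()}")

# Near sigma = 1 the size distribution is heavy-tailed (roughly a power law with
# exponent -3/2), the signature looked for in neuronal avalanche data; away from
# sigma = 1 avalanches are either uniformly small or hit the runaway cap.
```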
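
Second, the NeuroAI Turing Test (BI 209): the proposal is to score models not only on behavior but on how similar their internal population activity is to recorded neural activity. Several similarity metrics are used in that literature; the sketch below uses linear centered kernel alignment (CKA) purely as an illustrative example with made-up data, and is not the specific metric or code from Aran's paper.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment (CKA) between two response matrices,
    each shaped (stimuli, units). Returns a value in [0, 1]: 1 means the two
    populations share the same representational geometry (up to rotation and
    isotropic scaling); values near 0 mean they are unrelated."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro"))

rng = np.random.default_rng(1)
brain = rng.normal(size=(1000, 50))            # hypothetical data: 1000 stimuli x 50 recorded neurons
rotation, _ = np.linalg.qr(rng.normal(size=(50, 50)))
model_similar = brain @ rotation               # same population code, expressed in a rotated basis
model_unrelated = rng.normal(size=(1000, 64))  # a model whose internal activity is unrelated

print(linear_cka(brain, model_similar))    # ~1.0: passes this representational check
print(linear_cka(brain, model_unrelated))  # much lower: fails it
```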


About Brain Inspired

Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.
