Untangled

Charley Johnson

18 episodes

  • Untangled

    The Universe Called. It Says Your Theory of Change Is Cute.

    09/11/2025 | 48 mins.

    If you’ve sensed a shift in Untangled of late, you’re not wrong. I’m writing a lot more about ‘complex systems.’ To name a few:

    * What even is a ‘complex system,’ and how do you know if you’re in one?
    * How to act interdependently and do the next right thing in a complex system.
    * Why if/then theories of change that assume causality are bonkers — and how to map backward from the future.
    * How do you act amidst uncertainty — if you truly don’t know how your system will respond to your intervention, what do you do?
    * How should we think about goals in an uncertain world?
    * Here’s a fun diagnostic tool I developed to help you assess how your organization thinks, acts, and learns under complexity.

    I am obsessed with complex systems because the world is uncertain and unpredictable — and yet all of our strategies pretend otherwise. We crave certainty, so we build plans that presume causality, control, and predictability. We know in our gut that the systems we’re trying to change won’t sit still for our long-term plans, yet our instinct to cling to control amid uncertainty is too strong to resist. And honestly, in 2025, this shouldn’t be a hard sell. Politics, climate change, and AI are laughing at your five-year strategy decks.

    Complexity thinking helps us see this clearly — that systems are dynamic, nonlinear, and adaptive — but it, too, has blind spots. First, it lacks a theory of technology. The closest we get is Brian Arthur’s brilliant book, The Nature of Technology: What It Is and How It Evolves, which explains how technologies co-evolve with economic systems. (Give it a read, or check out my write-up in Technically Social.) But Arthur was focused on markets, not on social systems — not on how technology is entangled with people and power. That’s where my course comes in. I’m trying to offer frameworks and practices for creating change across difference, amid uncertainty, in tech-mediated environments — approaches that honor both complexity and the mutual shaping of people, power, and technology. (And yes, Cohort 5 of Systems Change for Tech & Society Leaders starts November 19.)

    Second, complexity is hard to talk about simply and make practical (that’s why my Playbook turned into a 200-page monstrosity!). Every time I use the words “complex” or “system,” I can feel the distance between me and whoever I’m talking to widen. I’ve been searching for thinkers who bridge that gap — who write about systems with both clarity and depth — and recently came across the brilliant work of Aarn Wennekers, who writes the great newsletter Super Cool & Hyper Critical. (Subscribe if you haven’t yet!)

    After reading his essay, Systems Thinking Isn’t Enough Anymore, I reached out and invited him onto the podcast. I’m thrilled to share that conversation — one that digs into the mindsets and muscles leaders need to navigate uncertainty and constant change, the need to collapse old distinctions between strategy and operations, and what it really means to act when the ground beneath us keeps shifting.

  • Untangled

    "Autonomy or Empire"- Rethinking What AI Is For

    02/11/2025 | 1h 4 mins.

    This week, I spoke with Harry Law, Editorial Lead at the Cosmos Institute and a researcher at the University of Cambridge, about AI and autonomy. Harry wrote a terrific essay on how generative AI might serve human autonomy rather than the empires Big Tech is intent on building. In our conversation, we explore:

    * What the Cosmos Institute is — and how it’s challenging the binary, deterministic thinking that dominates tech.
    * The difference between “democratic” and “authoritarian” technologies — and why it depends less on the tools themselves than on the political, cultural, and economic systems they’re embedded in.
    * The gap between agency (Silicon Valley’s favorite word) and autonomy, and why that difference matters.
    * How generative AI can collapse curiosity — closing the reflective space between question and answer — and what it might mean to design it instead for wonder, inquiry, and self-understanding.
    * Why removing friction and optimizing for efficiency often strips away learning, growth, and self-actualization.
    * The need for more “philosophy builders” — technologists designing systems that expand our capacity to think, choose, and act for ourselves.
    * Harry’s provocative idea of personalized AIs grounded in our own values and second-order preferences — a radically different vision from today’s “personalization” built for engagement.

    The conversation around generative AI has gone stale. Everyone is interpreting it through their own frames of meaning — their own logics, values, incentives, and worldviews — yet we still talk about “AI” as if it’s a single, coherent, inevitable thing. It’s not. My conversation with Harry is an attempt to move beyond the binary — to imagine alternative pathways for technology that place human autonomy, curiosity, and moral imagination at the center.

    If you’re fed up with imagining alternative futures and want to do the hard, strategic work of changing the system you’re in, and set it — and you! — on a fundamentally new path, sign up for Cohort 5 of my course, Systems Change for Tech & Society Leaders. It kicks off in three weeks, and there are still a few spots available.
    https://www.charley-johnson.com/sociotechnicalsystemschange

    Before you go: 3 ways I can help

    * Systems Change for Tech & Society Leaders - Everything you need to cut through the tech hype and implement strategies that catalyze true systems change.
    * Need 1:1 help aligning technology with your vision of the future? Apply for advising & executive coaching here.
    * Organizational Support: Your organizational playbook for navigating uncertainty and making sense of AI — what’s real, what’s noise, and how it should (or shouldn’t) shape your system.

    P.S. If you have a question about this post (or anything related to tech & systems change), reply to this email and let me know!

  • Untangled

    'Be Curious, Not Judgmental' or What AI Critics Get Wrong!

    12/10/2025 | 36 mins.

    Today, I’m sharing the 15-minute diagnostic framework I use to assess an organization’s capacity to navigate uncertainty and complexity. Fill out this short survey to get access. The diagnostic is just one of the 30+ tools included in the Playbook that will help you put the frameworks from my course immediately into practice. This one helps participants see how their current assumptions, decision structures, and learning practices align (or clash) with the realities of complex systems — and identify immediate interventions they can try to build adaptive capacity across their teams and organizations. Fun, huh? Cohorts 4 & 5 are open, but enrollment is limited. Sign up today!

    Okay, let’s get to my conversation with Lee Vinsel, Assistant Professor of Science, Technology, and Society at Virginia Tech and the creator of the great newsletter and podcast People & Things.

    I try (and fail often!) to live by the line from an incredible Ted Lasso scene: “Be curious, not judgmental.” I was reminded of that phrase while reading Lee Vinsel’s essay Against Narcissistic-Sociopathic Technology Studies, or Why Do People USE Technologies. Lee encourages scholars and critics of generative AI — and tech more broadly — to go beyond their own value judgments and actually study how and why people use technologies. He points to a perceived tension we don’t have to resolve: that “you can hold any ethical principle you want and still do the interpretive work of trying to understand other people who are not yourself.”

    I feel that tension! There are so many reasons to be critical of the inherently anti-democratic, scale-at-all-costs approach to generative AI. You know, the one that anthropomorphizes fancy math and strips us of what it means to be human — all while carrying forward historical biases, stealing from creators, and contributing to climate change and water scarcity? (Deep breath.) But Lee’s point is that we can hold these truths and still choose curiosity.

    Choosing curiosity over judgment is also strategic. Often, judgment centers the technology, inflating its power and reducing our own agency. This gestures at another of Lee’s ideas, “criti-hype”: critique that is “parasitic upon and even inflates hype.” As Vinsel writes, these critics “invert boosters’ messages — they retain the picture of extraordinary change but focus instead on negative problems and risks.” Judgment and critique focus our attention on the technology itself and center it as the driver of big problems, not the social and cultural systems it is entangled with. What we need instead is research and analysis that focuses on how and why people use generative AI, and the systems it often hides.

    In our conversation, Lee and I talk about:

    * How, in a world where tech discourse is all hype and increasingly political, curiosity can feel like ceding ground to ‘the other side.’
    * Where narcissistic/sociopathic tech studies comes from — and what it would look like to center curiosity in how we talk about and research generative AI.
    * How centering the technology itself overplays its role in social problems and obscures the systems that actually need to change.
    * The limits of critique, and what would shift if experts and scholars centered description and translation instead of judgment.
    * Whether we’re in a bubble — and what might happen next.

    This conversation is a wonky one, but its implications are quite practical. If we don’t understand how and why organizations use generative AI, we can’t anticipate how work will change — or see that much of the adoption is actually performative. If we don’t understand how and why students use it, we’ll miss shifts in identity formation and learning. If we don’t understand how and why people choose it for companionship, we’ll miss big shifts in the nature of relationships. I could go on — but the point is this: in a rush to critique generative AI, we often forget to notice how people are using it in the present — the small, weird, human ways people are already making it part of their lives. To see around the corner, we have to get over ourselves. We have to replace assumption with observation, and judgment with curiosity.

    Before you go: 3 ways I can help

    * Systems Change for Tech & Society Leaders - Everything you need to cut through the tech hype and implement strategies that catalyze true systems change.
    * Need 1:1 help aligning technology with your vision of the future? Apply for advising & executive coaching here.
    * Organizational Support: Your organizational playbook for navigating uncertainty and making sense of AI — what’s real, what’s noise, and how it should (or shouldn’t) shape your system.

    P.S. If you have a question about this post (or anything related to tech & systems change), reply to this email and let me know!

  • Untangled

    "Empire of AI" w/Karen Hao

    29/6/2025 | 48 mins.

    Today, I’m sharing my conversation with Karen Hao, an award-winning reporter covering artificial intelligence and author of the NYT bestseller Empire of AI. We discuss:

    * The scale-at-all-costs approach to AI that Big Tech is pursuing — the misguided assumptions and beliefs it rests upon, and the harms it causes.
    * How the companies pursuing this approach represent a modern-day empire, and the role narrative power plays in sustaining it.
    * Boomers, doomers, and the religion of AGI.
    * Alternative visions of AI that center consent, community ownership, and context — and don’t come at the expense of people’s livelihoods, public health, and the environment.
    * How to reclaim our agency in an age of AI.

    👉 Tech hype hides power. Reclaim it in my live course Systems Change for Tech & Society Leaders.

    Links:
    - Check out the podcast Computer Says Maybe

  • Untangled

    There’s no such thing as ‘fully autonomous’ agents

    06/4/2025 | 8 mins.

    I’m Charley Johnson, and this is Untangled, a newsletter and podcast about our sociotechnical world, and how to change it. Today, I’m bringing you the audio version of my latest essay, “There’s no such thing as ‘fully autonomous’ agents.” Before getting into it, two quick things:

    1. I have a two-part essay out in Tech Policy Press with Michelle Shevin that offers a roadmap for how philanthropy can use the current “AI Moment” to build more just futures.
    2. There is still room available in my upcoming course. In it, I weave together frameworks — from science and technology studies, complex adaptive systems, futures thinking, etc. — to offer you strategies and practical approaches to address the twin questions confronting all mission-driven leaders, strategists, and change-makers right now: what is your ‘AI strategy,’ and how will you change the system you’re in?

    Now, on to the show!


About Untangled

Untangled is a podcast about technology, people, and power. untangled.substack.com
