'Be Curious, Not Judgmental,' or What AI Critics Get Wrong!
Today, I’m sharing the 15-minute diagnostic framework I use to assess an organization’s capacity to navigate uncertainty and complexity. Fill out this short survey to get access. The diagnostic is just one of the 30+ tools included in the Playbook that will help you put the frameworks from my course immediately into practice. This one helps participants see how their current assumptions, decision structures, and learning practices align (or clash) with the realities of complex systems, and identify immediate interventions they can try to build adaptive capacity across their teams and organizations. Fun, huh? Cohorts 4 & 5 are open, but enrollment is limited. Sign up today!

Okay, let’s get to my conversation with Lee Vinsel, Assistant Professor of Science, Technology, and Society at Virginia Tech and the creator of the great newsletter and podcast People & Things.

I try (and often fail!) to live by the line from an incredible Ted Lasso scene: “Be curious, not judgmental.” I was reminded of that phrase while reading Lee Vinsel’s essay Against Narcissistic-Sociopathic Technology Studies, or Why Do People USE Technologies. Lee encourages scholars and critics of generative AI, and of tech more broadly, to go beyond their own value judgments and actually study how and why people use technologies. He points to a perceived tension we don’t have to resolve: “you can hold any ethical principle you want and still do the interpretive work of trying to understand other people who are not yourself.”

I feel that tension! There are so many reasons to be critical of the inherently anti-democratic, scale-at-all-costs approach to generative AI. You know, the one that anthropomorphizes fancy math and strips us of what it means to be human, all while carrying forward historical biases, stealing from creators, and contributing to climate change and water scarcity? (Deep breath.) But Lee’s point is that we can hold these truths and still choose curiosity.
Choosing curiosity over judgment is also strategic. Judgment often centers the technology, inflating its power and reducing our own agency. This gestures at another of Lee’s ideas, “criti-hype”: critique that is “parasitic upon and even inflates hype.” As Vinsel writes, these critics “invert boosters’ messages — they retain the picture of extraordinary change but focus instead on negative problems and risks.” Judgment and critique focus our attention on the technology itself, treating it as the driver of big problems rather than the social and cultural systems it is entangled with. What we need instead is research and analysis that focuses on how and why people use generative AI, and on the systems it often hides.

In our conversation, Lee and I talk about:

* How, in a world where tech discourse is all hype and increasingly political, curiosity can feel like ceding ground to ‘the other side.’
* Where narcissistic/sociopathic tech studies comes from, and what it would look like to center curiosity in how we talk about and research generative AI.
* How centering the technology itself overplays its role in social problems and obscures the systems that actually need to change.
* The limits of critique, and what would shift if experts and scholars centered description and translation instead of judgment.
* Whether we’re in a bubble, and what might happen next.

This conversation is a wonky one, but its implications are quite practical. If we don’t understand how and why organizations use generative AI, we can’t anticipate how work will change, or see that much of the adoption is actually performative. If we don’t understand how and why students use it, we’ll miss shifts in identity formation and learning. If we don’t understand how and why people choose it for companionship, we’ll miss big shifts in the nature of relationships.
I could go on, but the point is this: in the rush to critique generative AI, we often forget to notice how people are using it in the present, in the small, weird, human ways they are already making it part of their lives. To see around the corner, we have to get over ourselves. We have to replace assumption with observation, and judgment with curiosity.

Before you go: 3 ways I can help

* Systems Change for Tech & Society Leaders: Everything you need to cut through the tech hype and implement strategies that catalyze true systems change.
* Need 1:1 help aligning technology with your vision of the future? Apply for advising & executive coaching here.
* Organizational Support: Your organizational playbook for navigating uncertainty and making sense of AI: what’s real, what’s noise, and how it should (or shouldn’t) shape your system.

P.S. If you have a question about this post (or anything related to tech & systems change), reply to this email and let me know!

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit untangled.substack.com/subscribe