
Rewriting History with AI
18/12/2025 | 51 mins.
What happens when students turn to LLMs to learn about history? My guest, Nuno Moniz, Associate Research Professor at the University of Notre Dame, argues this can ultimately lead to mass confusion, which in turn can lead to tragic conflicts. There are at least three sources of that confusion: AI hallucinations, the spread of misinformation, and biased interpretations of history gaining the upper hand. Exactly how bad this can get and what we’re supposed to do about it isn’t obvious, but Nuno has some suggestions.
Advertising Inquiries: https://redcircle.com/brands

AI is Not a Normal Technology
11/12/2025 | 46 mins.
When thinking about AI replacing people, we usually look to the extremes: utopia and dystopia. My guest today, Fin Moorhouse, a research fellow at the nonprofit research organization Forethought, thinks that neither of these extremes is the most likely outcome. In fact, he thinks that one reason AI defies prediction is that it’s not a normal technology. What’s not normal about it? It’s not merely in the business of multiplying productivity, he says, but of replacing the standard bottleneck to greater productivity: humans.

We Are All Responsible for AI, Part 2
04/12/2025 | 58 mins.
In the last episode, Brian Wong argued that there’s a “gap” between the harms caused by developing and using AI, on the one hand, and identifying who is responsible for those harms, on the other. At the end of that discussion, Brian claimed that we’re all responsible for those harms. But how could that be? Aren’t some people more responsible than others? And if we are responsible, what does that mean we’re supposed to do differently? In part 2, Brian explains how he thinks about what responsibility is and what implications it has for our social responsibilities.

We Are All Responsible for AI, Part 1
20/11/2025 | 1h 4 mins.
We’re all connected to how AI is developed and used across the world. And that connection, my guest Brian Wong, Assistant Professor of Philosophy at the University of Hong Kong, argues, is what makes us all, to varying degrees, responsible for the harmful impacts of AI. This conversation has two parts. This is the first, where we focus on the kinds of geopolitical risks and harms he’s concerned about, why he takes issue with “the alignment problem,” and how AI operates in a way that produces what he calls “accountability gaps and deficits” - ways in which it looks like no one is accountable for the harms, and in which people are not compensated by anyone after they’re harmed. There’s a lot here - buckle up!

Orchestrating Ethics
13/11/2025 | 44 mins.
One company builds the LLM. Another company uses that model for its own purposes. How do we know that the ethical standards of the first match those of the second? How does the second company know it’s using a technology consistent with its own ethical standards? This is a conversation I had with David Danks, Professor of Philosophy and Data Science at UCSD, almost three years ago. But the conversation is just as pressing now as it was then. In fact, given the widespread adoption of AI built by a handful of companies, it’s even more important now that we get this right.



Ethical Machines