
Ethical Machines

Reid Blackman

Available Episodes

5 of 64
  • The Military is the Safest Place to Test AI
    How can one of the most high-risk industries also be the safest place to test AI? That’s what I discuss today with former Navy Commander Zac Staples, currently Founder and CEO of Fathom, an industrial cybersecurity company focused on the maritime industry. He walks me through how the military performs its due diligence on new technologies, explains that there are lots of “watchers” of new technologies as they’re tested and used, and that all of this happens against a backdrop of a culture of self-critique. We also talk about the increasing complexity of AI, which makes it harder to test, and we zoom out to larger political issues, including China’s use of military AI.
    45:39
  • Should We Make Digital Copies of People?
    Deepfakes to deceive people? No good. How about a digital duplicate of a lost loved one so you can keep talking to them? What’s the impact of having a child talk to the digital duplicate of their dead father? Should you leave instructions in your will about what can be done with your digital identity? Could you lose control of your digital duplicate? These questions are ethically fascinating and crucial in themselves. They also raise other longer-standing philosophical issues: can you be harmed after you die? Can your rights be violated? What if a Holocaust denier uses a digital duplicate of a survivor to say the Holocaust never happened? I used to think deepfakes were most of the conversation. Now I know better, thanks to this great conversation with Atay Kozlovski, Visiting Research Fellow at Delft University of Technology.
    46:05
  • How Society Bears AI’s Costs
    AI is leading the economic charge. In fact, without the massive investments in AI, our economy would look a lot worse right now. But what are the social and political costs that we incur? My guest, Karen Yeung, a professor at Birmingham Law School and School of Computer Science, argues that investments in AI are consolidating power while disempowering the rest of society. Our individual autonomy and our collective cohesion are simultaneously eroding. We need to push back, but how? And on what grounds? To what extent is the problem our socio-economic system, our culture, or government (in)action? These questions and more in a particularly fun episode (for me, anyway).
    40:13
  • How Should We Teach Ethics to Computer Science Majors?
    The engineering and data science students of today are tomorrow’s tech innovators. If we want them to develop ethically sound technology, they had better have a good grip on what ethics is all about. But how should we teach them? The same way we teach ethics in philosophy? Or is something different needed, given the kinds of organizational forces they’ll find themselves subject to once they’re working? Steven Kelts, a lecturer in Princeton’s School of Public and International Affairs and in the Department of Computer Science, researches this subject and teaches those very students himself. We explore what his research and his experience show us about how we can best train our computer scientists to take the welfare of society into their minds and their work.
    55:35
  • In Defense of Killer Robots
    Giving AI systems autonomy in a military context seems like a bad idea. Of course AI shouldn’t “decide” which targets should be killed and/or blown up. Except… maybe it’s not so obvious after all. That’s what my guest, Michael Horowitz, formerly of the DOD and now a professor at the University of Pennsylvania, argues. Agree with him or not, he makes a compelling case that we need to take seriously. In fact, you may even conclude with him that using autonomous AI in a military context can be morally superior to having a human pull the trigger.
    50:47


About Ethical Machines

I have to roll my eyes at the constant clickbait headlines on technology and ethics. If we want to get anything done, we need to go deeper. That’s where I come in. I’m Reid Blackman, a former philosophy professor turned AI ethics advisor to government and business. If you’re looking for a podcast that has no tolerance for the superficial, try out Ethical Machines.
