
Computer Says Maybe

Alix Dunn
Technology is changing fast. And it's changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up.

Available Episodes

5 of 39
  • The Taiwan Bottleneck w/ Brian Chen
    Do you ever wonder how semiconductors (AKA chips) get made? Or why most of them are made in Taiwan? Or what this means for geopolitics? Luckily, this is a podcast for nerds like you. Alix was joined this week by Brian Chen from Data & Society, who systematically explains the process of advanced chip manufacture, how it’s thoroughly entangled in US economic policy, and how Taiwan’s place as the main artery for chips is the product of deep colonial infrastructures.

    Brian J. Chen is the policy director of Data & Society, leading the organization’s work to shape tech policy. With a background in movement lawyering and legislative and regulatory advocacy, he has worked extensively on issues of economic justice, political economy, and tech governance. Previously, Brian led campaigns to strengthen the labor and employment rights of digital platform workers and other workers in precarious industries. Before that, he led programs to promote democratic accountability in policing, including community oversight over the adoption and use of police technologies.

    **Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
    --------  
    37:10
  • AI Safety’s Spiral of Urgency w/ Shazeda Ahmed
    Are you tired of hearing the phrase ‘AI Safety’ and rolling your eyes? Do you also sometimes think… okay, but what is technically wrong with advocating for ‘safer’ AI systems? Do you also wish we could have more nuanced conversations about China and AI?

    In this episode Shazeda Ahmed goes deep on the field of AI Safety, explaining that it is a community propped up by its own spiral of reproduced urgency, and that so much of it is rooted in American anti-China sentiment. Read: the fear that the big scary authoritarian country will build AGI before the US does, and destroy us all.

    Further reading & resources:
      • Emotional Entanglement — Article 19
      • Bodily Harms by Xiaowei Wang and Shazeda Ahmed for Access Now
      • Field-building and the epistemic culture of AI safety — First Monday
      • Made in China journal
      • Pause AI

    **Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

    Shazeda Ahmed is a Chancellor’s Postdoctoral Fellow at the University of California, Los Angeles. Shazeda completed her Ph.D. at UC Berkeley’s School of Information in 2022, and was previously a postdoctoral research fellow at Princeton University’s Center for Information Technology Policy. She has been a research fellow at Upturn, the Mercator Institute for China Studies, the University of Toronto's Citizen Lab, Stanford University’s Human-Centered Artificial Intelligence (HAI) Institute, and NYU's AI Now Institute. Shazeda’s research investigates relationships between the state, the firm, and society in the US-China geopolitical rivalry over AI, with implications for information technology policy and human rights. Her work draws from science and technology studies, ranging from her dissertation on the state-firm co-production of China’s social credit system to her research on the epistemic culture of the emerging field of AI safety.
    --------  
    55:55
  • Live Show: Paris Post-Mortem
    Kapow! We just did our first ever LIVE SHOW. We barely had time to let the mics cool down before a bunch of you requested to have the recording on our pod feed, so here we are.

    ICYMI: this is a recording from the live show that we did in Paris, right after the AI Action Summit. Alix sat down to have a candid conversation about the summit, and to pontificate on what people might have meant when they kept saying ‘public interest AI’ over and over. She was joined by four of the best women in AI politics:
      • Astha Kapoor, Co-Founder of the Aapti Institute
      • Amba Kak, Executive Director of the AI Now Institute
      • Abeba Birhane, Founder & Principal Investigator of the Artificial Intelligence Accountability Lab (AIAL)
      • Nabiha Syed, Executive Director of Mozilla

    If audio is not enough for you, go ahead and watch the show on YouTube.

    **Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

    Astha Kapoor is the Co-founder of Aapti Institute, a Bangalore-based research firm that works at the intersection of technology and society. She has 15 years of public policy and strategy consulting experience, with a focus on the use of technology for welfare. Astha works on participative governance of data and digital public infrastructure. She’s a member of the World Economic Forum Global Future Council on data equity (2023-24) and a visiting fellow at the Ostrom Workshop (Indiana University). She was also a member of the Think20 taskforce on digital public infrastructure during India’s and Brazil's G20 presidencies, and is currently on the board of the Global Partnership for Sustainable Data.

    Amba Kak has spent the last fifteen years designing and advocating for technology policy in the public interest, across government, industry, and civil society roles, and in many parts of the world. Amba brings this experience to her current role co-directing AI Now, a New York-based research institute where she leads on advancing diagnosis and actionable policy to tackle concerns with artificial intelligence and concentrated power. She has served as Senior Advisor on AI to the Federal Trade Commission and was recognized as one of TIME’s 100 Most Influential People in AI in 2024.

    Dr. Abeba Birhane founded and leads the TCD AI Accountability Lab (AIAL). Dr. Birhane is currently a Research Fellow at the School of Computer Science and Statistics at Trinity College Dublin. Her research focuses on AI accountability, with a particular focus on audits of AI models and training datasets, work for which she was featured in Wired UK and on the TIME100 Most Influential People in AI list in 2023. Dr. Birhane also served on the United Nations Secretary-General’s AI Advisory Body and currently serves on the AI Advisory Council in Ireland.

    Nabiha Syed is the Executive Director of the Mozilla Foundation, the global nonprofit that does everything from championing trustworthy AI to advocating for a more open, equitable internet. Prior to joining Mozilla, she was CEO of The Markup, an award-winning journalism non-profit that challenges technology to serve the public good. Before launching The Markup in 2020, Nabiha spent a decade as an acclaimed media lawyer focused on the intersection of frontier technology and newsgathering, including advising on publication issues with the Snowden revelations and the Steele Dossier, access litigation around police disciplinary records, and privacy and free speech issues globally. In 2023, Nabiha was awarded the NAACP/Archewell Digital Civil Rights Award for her work.
    --------  
    46:39
  • Defying Datafication w/ Dr Abeba Birhane (PLUS: Paris AI Action Summit)
    The Paris AI Action Summit is just around the corner! If you’re not going to be there, and you wish you were — we got you. We are streaming next week’s podcast LIVE from Paris on YouTube — register here. 🎙️

    On Tuesday, February 11th, at 6:30pm Paris time / 12:30pm EST, we’ll be recording our first-ever LIVE podcast episode. After two days at the French AI Action Summit, Alix will sit down with four of the best women in AI politics to break down the power and politics of the Summit. It’s our Paris Post-Mortem — and we’re live-streaming the whole conversation. We’ll hear from:
      • Astha Kapoor, Co-Founder of the Aapti Institute
      • Amba Kak, Executive Director of the AI Now Institute
      • Abeba Birhane, Founder & Principal Investigator of the Artificial Intelligence Accountability Lab (AIAL)
      • Nabiha Syed, Executive Director of Mozilla

    This is our first-ever live-streamed podcast, and we’d love a great community turnout. Join the stream on Tuesday and share it with anyone else who wants a hot-off-the-press review of what happens in Paris.

    And today’s episode is abundant with treats to prime you for the summit: Alix checks in with Martin Tisné, the special envoy to the Public Interest AI track, to ask him how he feels about the upcoming summit and what he hopes it will achieve. We also hear from Michelle Thorne of the Green Web Foundation about a joint statement on the environmental impacts of AI that she’s hoping can focus the energy of the summit towards planetary limits and the decarbonisation of AI. Learn about why and how she put this together, and how she’s hoping to start reasonable conversations about how AI is a complete and utter energy vampire.

    Then we have Dr. Abeba Birhane — who will also be at our live show next week — to share her experiences launching the AI Accountability Lab at Trinity College Dublin. Abeba’s work pushes to actually research AI systems before we make claims about them. In a world of industry marketing spin, Abeba is a voice of reason. As a cognitive scientist who studies people, she also cautions against the impossible and tantalising idea that we can somehow datafy human complexity.

    Further reading & resources:
      • **AI auditing: The Broken Bus on the Road to AI Accountability** by Abeba Birhane, Ryan Steed, Victor Ojewale, Briana Vecchione, Inioluwa Deborah Raji
      • AI Accountability Lab
      • Press release outlining the Lab’s launch last year — Trinity College
      • The Artificial Intelligence Action Summit
      • Within Bounds: Limiting AI’s Environmental Impact — led by Michelle Thorne from the Green Web Foundation
      • Our YouTube channel

    Dr. Abeba Birhane founded and leads the TCD AI Accountability Lab (AIAL). Dr. Birhane is currently a Research Fellow at the School of Computer Science and Statistics at Trinity College Dublin. Her research focuses on AI accountability, with a particular focus on audits of AI models and training datasets, work for which she was featured in Wired UK and on the TIME100 Most Influential People in AI list in 2023. Dr. Birhane also served on the United Nations Secretary-General’s AI Advisory Body and currently serves on the AI Advisory Council in Ireland.

    Martin Tisné is Thematic Envoy to the AI Action Summit, in charge of all deliverables related to Public Interest AI. He also leads the AI Collaborative, an initiative of The Omidyar Group created to help regulate artificial intelligence based on democratic values and principles and to ensure the public has a voice in that regulation. He founded the Open Government Partnership (OGP) alongside the Obama White House and helped OGP grow into a 70+ country initiative. He also initiated the International Open Data Charter, the G7 Open Data Charter, and the G20’s commitment to open data principles.

    Michelle Thorne (@thornet) is working towards a fossil-free internet as Director of Strategy at the Green Web Foundation. She’s a co-initiator of the Green Screen Coalition for digital rights and climate justice and a visiting professor at Northumbria University. Michelle publishes Branch, an online magazine written by and for people who dream about a sustainable internet, which received the Ars Electronica Award for Digital Humanities in 2021.
    --------  
    1:03:46
  • DEI Season Finale: Part Two
    This week Alix continues her conversation with Hanna McCloskey and Rubie Clarke from Fearless Futures, and we take a whistle-stop tour of the past 5 years. We start in 2020, with the disingenuous but huge embrace of DEI work by tech companies, and end in 2025, when those same companies are part of massive movements actively campaigning against it.

    The pair share what it was like running a DEI consultancy in the months and years following the murder of George Floyd — when DEI was suddenly on the agenda for a lot of organisations. The performative and ineffective methods that DEI is famous for (endless canape receptions!) have also given the inevitable backlash easy pickings for mockery and vilification. The news is happening so fast, but these DEI episodes can hopefully help listeners better understand the backlash, not just to DEI, but to any attempts to correct systemic inequity in society.

    Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

    Further reading & resources:
      • Fearless Futures
      • DEI Disrupted: The Blueprint for DEI Worth Doing
      • Combahee River Collective

    Rubie Eílis Clarke (she/her) is Senior Director of Consultancy at Fearless Futures. Rubie is of Jewish and Irish heritage and is based in her home town of London. As Senior Director of Consultancy, Rubie supports ambitious organisations to diagnose inequity in their ecosystems and to design, implement and evaluate innovative anti-oppression solutions. Her expertise lies in critical social theory and research, policy analysis and organisational change strategy. She holds a B.A. in Sociology and Anthropology from Goldsmiths, University of London, and an M.A. in Global Political Economy from the University of Sussex, with a focus on social and economic policy, race critical theory, decoloniality and intersectional feminism. Rubie is also an expert facilitator who is skilled at leaning into nuance, complexity and discomfort with curiosity and compassion. She is passionate about facilitating collaborative learning journeys that build deep understanding of the root causes of oppression and unlock innovative and meaningful ways to disrupt and divest in service, ultimately, of collective liberation.

    Hanna Naima McCloskey (she/her) is Founder and CEO of Fearless Futures. Hanna is Algerian British. Before founding Fearless Futures she worked for the UN, NGOs and the Royal Bank of Scotland, across communications, research and finance roles, and has lived, studied and worked in Israel-Palestine, Italy, the USA, Sudan, Syria and the UK. She has a BA in English from the University of Cambridge and an MA in International Relations from the Johns Hopkins School of Advanced International Studies, with a specialism in Conflict Management. Hanna is passionate, compassionate and challenging as an educator, and combines this with rigour and creativity in consultancy. She brings nuanced and complex ideas in incisive and engaging ways to all she supports, always with a commitment to equitable transformation. Hanna is also a qualified ABM bodyfeeding peer supporter, committed to enabling all parents to meet their body feeding goals.
    --------  
    44:51


About Computer Says Maybe

Technology is changing fast. And it's changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up. Step outside the hype and explore the possibilities, problems, and politics of technology. We publish weekly.

v7.8.0 | © 2007-2025 radio.de GmbH
Generated: 2/21/2025 - 10:06:23 PM