AISN #62: Big Tech Launches $100 Million Pro-AI Super PAC
Also: Meta's Chatbot Policies Prompt Backlash Amid AI Reorganization; China Reverses Course on Nvidia H20 Purchases.

In this edition: Big Tech launches a $100 million pro-AI super PAC; Meta's chatbot policies prompt congressional scrutiny amid the company's AI reorganization; China reverses course on buying Nvidia H20 chips after comments by Secretary of Commerce Howard Lutnick.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Big Tech Launches $100 Million Pro-AI Super PAC

Silicon Valley executives and investors are pouring more than $100 million into a new political network to push back against AI regulations, signaling that the industry intends to be a major player in next year's U.S. midterms. The network, called Leading the Future, is backed by a16z and Greg Brockman, modeled on the crypto-focused super PAC Fairshake, and aims to influence AI [...]

Outline:
(00:46) Big Tech Launches $100 Million Pro-AI Super PAC
(02:27) Meta's Chatbot Policies Prompt Backlash Amid AI Reorganization
(04:45) China Reverses Course on Nvidia H20 Purchases
(07:21) In Other News
---
First published:
August 27th, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-62-big-tech
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
AISN #61: OpenAI Releases GPT-5
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

In this edition: OpenAI releases GPT-5.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

OpenAI Releases GPT-5

Ever since GPT-4's release in March 2023 marked a step-change improvement over GPT-3, people have used ‘GPT-5’ as a stand-in to speculate about the next generation of AI capabilities. On Thursday, OpenAI released GPT-5. While state-of-the-art in most respects, GPT-5 is not a step-change improvement over competing systems, or even recent OpenAI models—but we shouldn’t have expected it to be.

GPT-5 is state of the art in most respects. GPT-5 isn’t a single model like GPTs 1 through 4. It is a system of two models: a base model that answers questions quickly and is better at tasks like creative writing (an improved [...]

Outline:
(00:19) OpenAI Releases GPT-5
(06:20) In Other News
---
First published:
August 12th, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-61-openai-releases
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
AISN #60: The AI Action Plan
Also: ChatGPT Agent and IMO Gold.

In this edition: The Trump Administration publishes its AI Action Plan; OpenAI releases ChatGPT Agent and announces that an experimental model achieved gold medal-level performance on the 2025 International Mathematical Olympiad.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

The AI Action Plan

On July 23rd, the White House released its AI Action Plan. The document is the outcome of a January executive order that required the President's Science Advisor, ‘AI and Crypto Czar’, and National Security Advisor (currently Michael Kratsios, David Sacks, and Marco Rubio) to submit a plan to “sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” President Trump also delivered an hour-long speech on the plan and signed three executive orders beginning to implement some of its policies.

Trump displaying an executive order at the [...]

Outline:
(00:34) The AI Action Plan
(07:36) ChatGPT Agent and IMO Gold
(12:48) In Other News
---
First published:
July 31st, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-60-the-ai-action
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
AISN #59: EU Publishes General-Purpose AI Code of Practice
Plus: Meta Superintelligence Labs.

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

In this edition: The EU published a General-Purpose AI Code of Practice for AI providers, and Meta is spending billions revamping its superintelligence development efforts.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

EU Publishes General-Purpose AI Code of Practice

In June 2024, the EU adopted the AI Act, which remains the world's most significant law regulating AI systems. The Act bans some uses of AI like social scoring and predictive policing and limits other “high risk” uses such as generating credit scores or evaluating educational outcomes. It also regulates general-purpose AI (GPAI) systems, imposing transparency requirements, copyright protection policies, and safety and security standards for models that pose systemic risk (defined as those trained [...]

Outline:
(00:31) EU Publishes General-Purpose AI Code of Practice
(04:50) Meta Superintelligence Labs
(06:17) In Other News
---
First published:
July 15th, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-59-eu-publishes
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
AISN #58: Senate Removes State AI Regulation Moratorium
Plus: Judges Split on Whether Training AI on Copyrighted Material Is Fair Use.

In this edition: The Senate removes a provision from Republicans' “Big Beautiful Bill” aimed at restricting states from regulating AI; two federal judges split on whether training AI on copyrighted books is fair use.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Senate Removes State AI Regulation Moratorium

The Senate removed a provision from Republicans' “Big Beautiful Bill” aimed at restricting states from regulating AI. The moratorium would have prohibited states from receiving federal broadband expansion funds if they regulated AI. However, it faced procedural and political challenges in the Senate, and was ultimately removed by a vote of 99-1. Here's what happened.

A watered-down moratorium cleared the Byrd Rule. In an attempt to bypass the Byrd Rule, which prohibits policy provisions in budget bills, the Senate Commerce Committee revised the [...]

Outline:
(00:35) Senate Removes State AI Regulation Moratorium
(03:04) Judges Split on Whether Training AI on Copyrighted Material Is Fair Use
(07:19) In Other News
---
First published:
July 3rd, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-58-senate-removes
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
This podcast also contains narrations of some of our publications.
ABOUT US
The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards.
Learn more at https://safe.ai