Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
This podcast also contains narrations of some of our publications.
Plus, Chinese researchers used Llama to create a military tool for the PLA, a Google AI system discovered a zero-day cybersecurity vulnerability, and Complex Systems. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

The Trump Circle on AI Safety

The incoming Trump administration is likely to significantly alter the US government's approach to AI safety. For example, Trump is likely to immediately repeal Biden's Executive Order on AI. However, some members of Trump's circle appear to take AI safety seriously. The most prominent AI safety advocate close to Trump is Elon Musk, who earlier this year supported SB 1047. He is not alone, however. Below, we've gathered some promising perspectives from other members of Trump's circle and incoming administration.

Trump and Musk at UFC 309. Photo Source.

Robert F. Kennedy Jr., Trump's pick for Secretary of Health and Human Services, said in [...]

---

Outline:
(00:24) The Trump Circle on AI Safety
(02:41) Chinese Researchers Used Llama to Create a Military Tool for the PLA
(04:14) A Google AI System Discovered a Zero-Day Cybersecurity Vulnerability
(05:27) Complex Systems
(08:54) Links

The original text contained 1 image which was described by AI.

---
First published:
November 19th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-44-the-trump
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
--------
11:22
AISN #43: White House Issues First National Security Memo on AI
Plus, AI and Job Displacement, and AI Takes Over the Nobels. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

White House Issues First National Security Memo on AI

On October 24, 2024, the White House issued the first National Security Memorandum (NSM) on Artificial Intelligence, accompanied by a Framework to Advance AI Governance and Risk Management in National Security. The NSM identifies AI leadership as a national security priority. The memorandum states that competitors have employed economic and technological espionage to steal U.S. AI technology. To maintain a U.S. advantage in AI, the memorandum directs the National Economic Council to assess the U.S.'s competitive position in:

- Semiconductor design and manufacturing
- Availability of computational resources
- Access to workers highly skilled in AI
- Capital availability for AI development

The Intelligence Community must make gathering intelligence on competitors' operations against the [...]

---

Outline:
(00:18) White House Issues First National Security Memo on AI
(03:22) AI and Job Displacement
(09:13) AI Takes Over the Nobels

The original text contained 2 images which were described by AI.

---
First published:
October 28th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-43-white-house
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
--------
14:55
AISN #42: Newsom Vetoes SB 1047
Plus, OpenAI's o1, and AI Governance Summary. Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Newsom Vetoes SB 1047

On Sunday, Governor Newsom vetoed California's Senate Bill 1047 (SB 1047), the most ambitious legislation to date aimed at regulating frontier AI models. The bill, introduced by Senator Scott Wiener and covered in a previous newsletter, would have required AI developers to test frontier models for hazardous capabilities and take steps to mitigate catastrophic risks. (CAIS Action Fund was a co-sponsor of SB 1047.) Newsom states that SB 1047 is not comprehensive enough. In his letter to the California Senate, the governor argued that “SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves [...]

---

Outline:
(00:18) Newsom Vetoes SB 1047
(01:55) OpenAI's o1
(06:44) AI Governance
(10:32) Links

The original text contained 3 images which were described by AI.

---
First published:
October 1st, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-42-newsom-vetoes
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
--------
13:11
AISN #41: The Next Generation of Compute Scale
Plus, Ranking Models by Susceptibility to Jailbreaking, and Machine Ethics. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

The Next Generation of Compute Scale

AI development is on the cusp of a dramatic expansion in compute scale. Recent developments across multiple fronts, from chip manufacturing to power infrastructure, point to a future where AI models may dwarf today's largest systems. In this story, we examine key developments and their implications for the future of AI compute.

xAI and Tesla are building massive AI clusters. Elon Musk's xAI has brought its Memphis supercluster, “Colossus,” online. According to Musk, the cluster has 100k Nvidia H100s, making it the largest supercomputer in the world. Moreover, xAI plans to add 50k H200s in the next few months. For comparison, Meta's Llama 3 was trained on 16k H100s. Meanwhile, Tesla's “Gigafactory Texas” is expanding to house an AI supercluster. Tesla's Gigafactory supercomputer [...]

---

Outline:
(00:18) The Next Generation of Compute Scale
(04:36) Ranking Models by Susceptibility to Jailbreaking
(06:07) Machine Ethics

The original text contained 1 image which was described by AI.

---
First published:
September 11th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-41-the-next
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
--------
11:59
AISN #40: California AI Legislation
Plus, NVIDIA Delays Chip Production, and Do AI Safety Benchmarks Actually Measure Safety? Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

SB 1047, the Most-Discussed California AI Legislation

California's Senate Bill 1047 has sparked discussion over AI regulation. While state bills often fly under the radar, SB 1047 has garnered attention due to California's unique position in the tech landscape. If passed, SB 1047 would apply to all companies doing business in the state, potentially setting a precedent for AI governance more broadly. This newsletter examines the current state of the bill, which has been amended several times in response to stakeholder feedback. We'll cover recent debates surrounding the bill, support from AI experts, opposition from the tech industry, and public opinion based on polling. The bill mandates safety protocols, testing procedures, and reporting requirements for covered AI models. The bill was [...]

---

Outline:
(00:18) SB 1047, the Most-Discussed California AI Legislation
(04:38) NVIDIA Delays Chip Production
(06:49) Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?
(10:22) Links

The original text contained 1 image which was described by AI.

---
First published:
August 21st, 2024
Source:
https://newsletter.safe.ai/p/aisn-40-california-ai-legislation
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
ABOUT US
The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards.
Learn more at https://safe.ai