Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
This podcast also contains narrations of some of our publications.
AISN #45: Center for AI Safety 2024 Year in Review
As 2024 draws to a close, we want to thank you for your continued support for AI safety and review what we’ve been able to accomplish. In this special-edition newsletter, we highlight some of our most important projects from the year. The mission of the Center for AI Safety is to reduce societal-scale risks from AI. We focus on three pillars of work: research, field-building, and advocacy.

Research

CAIS conducts both technical and conceptual research on AI safety. Here are some highlights from our research in 2024:

Circuit Breakers. We published breakthrough research showing how circuit breakers can prevent AI models from behaving dangerously by interrupting crime-enabling outputs. In a jailbreaking competition with a prize pool of tens of thousands of dollars, it took twenty thousand attempts to jailbreak a model trained with circuit breakers. The paper was accepted to NeurIPS 2024.

The WMDP Benchmark. We developed the Weapons [...]

---
Outline:
(00:34) Research
(04:25) Advocacy
(06:44) Field-Building
(10:38) Looking Ahead
---
First published:
December 19th, 2024
Source:
https://newsletter.safe.ai/p/aisn-45-center-for-ai-safety-2024
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.
--------
11:31
AISN #44: The Trump Circle on AI Safety
Plus, Chinese researchers used Llama to create a military tool for the PLA, a Google AI system discovered a zero-day cybersecurity vulnerability, and Complex Systems. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

The Trump Circle on AI Safety

The incoming Trump administration is likely to significantly alter the US government's approach to AI safety. For example, Trump is likely to immediately repeal Biden's Executive Order on AI. However, some of Trump's circle appear to take AI safety seriously. The most prominent AI safety advocate close to Trump is Elon Musk, who earlier this year supported SB 1047. However, he is not alone. Below, we’ve gathered some promising perspectives from other members of Trump's circle and incoming administration.

[Image: Trump and Musk at UFC 309.]

Robert F. Kennedy Jr., Trump's pick for Secretary of Health and Human Services, said in [...]

---
Outline:
(00:24) The Trump Circle on AI Safety
(02:41) Chinese Researchers Used Llama to Create a Military Tool for the PLA
(04:14) A Google AI System Discovered a Zero-Day Cybersecurity Vulnerability
(05:27) Complex Systems
(08:54) Links
---
First published:
November 19th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-44-the-trump
---
--------
11:22
AISN #43: White House Issues First National Security Memo on AI
Plus, AI and Job Displacement, and AI Takes Over the Nobels. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

White House Issues First National Security Memo on AI

On October 24, 2024, the White House issued the first National Security Memorandum (NSM) on Artificial Intelligence, accompanied by a Framework to Advance AI Governance and Risk Management in National Security. The NSM identifies AI leadership as a national security priority. The memorandum states that competitors have employed economic and technological espionage to steal U.S. AI technology. To maintain a U.S. advantage in AI, the memorandum directs the National Economic Council to assess the U.S.'s competitive position in:

- Semiconductor design and manufacturing
- Availability of computational resources
- Access to workers highly skilled in AI
- Capital availability for AI development

The Intelligence Community must make gathering intelligence on competitors' operations against the [...]

---
Outline:
(00:18) White House Issues First National Security Memo on AI
(03:22) AI and Job Displacement
(09:13) AI Takes Over the Nobels
---
First published:
October 28th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-43-white-house
---
--------
14:55
AISN #42: Newsom Vetoes SB 1047
Plus, OpenAI's o1, and AI Governance Summary. Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Newsom Vetoes SB 1047

On Sunday, Governor Newsom vetoed California's Senate Bill 1047 (SB 1047), the most ambitious legislation to date aimed at regulating frontier AI models. The bill, introduced by Senator Scott Wiener and covered in a previous newsletter, would have required AI developers to test frontier models for hazardous capabilities and take steps to mitigate catastrophic risks. (CAIS Action Fund was a co-sponsor of SB 1047.) Newsom stated that SB 1047 is not comprehensive enough. In his letter to the California Senate, the governor argued that “SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves [...]

---
Outline:
(00:18) Newsom Vetoes SB 1047
(01:55) OpenAI's o1
(06:44) AI Governance
(10:32) Links
---
First published:
October 1st, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-42-newsom-vetoes
---
--------
13:11
AISN #41: The Next Generation of Compute Scale
Plus, Ranking Models by Susceptibility to Jailbreaking, and Machine Ethics. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

The Next Generation of Compute Scale

AI development is on the cusp of a dramatic expansion in compute scale. Recent developments across multiple fronts—from chip manufacturing to power infrastructure—point to a future where AI models may dwarf today's largest systems. In this story, we examine key developments and their implications for the future of AI compute.

xAI and Tesla are building massive AI clusters. Elon Musk's xAI has brought its Memphis supercluster—“Colossus”—online. According to Musk, the cluster has 100k Nvidia H100s, making it the largest supercomputer in the world. Moreover, xAI plans to add 50k H200s in the next few months. For comparison, Meta's Llama 3 was trained on 16k H100s. Meanwhile, Tesla's “Gigafactory Texas” is expanding to house an AI supercluster. Tesla's Gigafactory supercomputer [...]

---
Outline:
(00:18) The Next Generation of Compute Scale
(04:36) Ranking Models by Susceptibility to Jailbreaking
(06:07) Machine Ethics
---
First published:
September 11th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-41-the-next
---
This podcast also contains narrations of some of our publications.
ABOUT US
The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards.
Learn more at https://safe.ai