AI Safety Newsletter

Center for AI Safety
Latest episode

75 episodes

  • AI Safety Newsletter

    AISN #67: Trump’s preemption order, H200s go to China, and new frontier AI from OpenAI and DeepSeek

    17.12.2025 | 12 min.

    Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition we discuss President Trump's executive order targeting state AI laws, Nvidia's approval to sell China high-end accelerators, and new frontier models from OpenAI and DeepSeek. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

    Executive Order Blocks State AI Laws

    U.S. President Donald Trump issued an executive order aimed at halting state efforts to regulate AI. The order, which differs from a version leaked last month, leverages federal funding and enforcement to evaluate, challenge, and limit state laws. The order caps off a year in which several ambitious state AI proposals were either watered down or vetoed outright.

    A push for regulatory uniformity. The order aims to reduce regulatory friction for companies by eliminating the variety of state-level regimes and limiting the power of states to affect commerce beyond their own borders. It calls for replacing them with a single, unspecified federal framework. [...]

    Outline:
    (00:34) Executive Order Blocks State AI Laws
    (03:53) US Permits Nvidia to Sell H200s to China
    (06:11) ChatGPT-5.2 and DeepSeek-v3.2 Arrive
    (08:52) In Other News
    (08:55) Industry
    (09:42) Civil Society
    (10:26) Government
    (11:35) Discussion about this post
    (11:39) Ready for more?

    First published: December 17th, 2025
    Source: https://newsletter.safe.ai/p/ai-safety-newsletter-67-trumps-preemption

    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.

  • AI Safety Newsletter

    AISN #66: Evaluating Frontier Models, New Gemini and Claude, Preemption is Back

    02.12.2025 | 12 min.

    Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition we discuss the new AI Dashboard, recent frontier models from Google and Anthropic, and a revived push to preempt state AI regulations. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

    CAIS Releases the AI Dashboard for Frontier Performance

    CAIS launched its AI Dashboard, which evaluates frontier AI systems on capability and safety benchmarks. The dashboard also tracks the industry's overall progression toward broader milestones such as AGI, automation of remote labor, and full self-driving.

    How the dashboard works. The AI Dashboard features three leaderboards (one for text, one for vision, and one for risks) where frontier models are ranked by their average score across a battery of benchmarks. Because CAIS evaluates models directly across a wide range of tasks, the dashboard provides apples-to-apples comparisons of how different frontier models perform on the same set of evaluations and safety-relevant behaviors. Ranking frontier models for [...]

    Outline:
    (00:33) CAIS Releases the AI Dashboard for Frontier Performance
    (04:05) Politicians Revive Push for Moratorium on State AI Laws
    (06:39) Gemini 3 Pro and Claude Opus 4.5 Arrive
    (09:17) In Other News
    (09:20) Government
    (10:15) Industry
    (11:03) Civil Society
    (12:00) Discussion about this post

    First published: December 2nd, 2025
    Source: https://newsletter.safe.ai/p/ai-safety-newsletter-66-aisn-66-evaluating

    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.
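    As a rough illustration of the averaging-based ranking described above, the sketch below sorts models by their mean score across benchmarks. This is a minimal sketch, not CAIS's actual implementation; the model names and scores are hypothetical.

        # Minimal sketch of ranking models by mean benchmark score,
        # in the spirit of the leaderboard averaging described above.
        # Model names and scores are hypothetical, not real dashboard data.
        benchmark_scores = {
            "model-a": {"text": 0.82, "vision": 0.75, "risk": 0.68},
            "model-b": {"text": 0.79, "vision": 0.81, "risk": 0.73},
        }

        def mean_score(scores: dict) -> float:
            return sum(scores.values()) / len(scores)

        # Rank models by average score, highest first.
        leaderboard = sorted(
            benchmark_scores, key=lambda m: mean_score(benchmark_scores[m]), reverse=True
        )
        for rank, model in enumerate(leaderboard, start=1):
            print(f"{rank}. {model}: {mean_score(benchmark_scores[model]):.3f}")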

  • AI Safety Newsletter

    AISN #65: Measuring Automation and Superintelligence Moratorium Letter

    29.10.2025 | 6 min.

    Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition: a new benchmark measures AI automation; 50,000 people, including top AI scientists, sign an open letter calling for a superintelligence moratorium. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

    CAIS and Scale AI release Remote Labor Index

    The Center for AI Safety (CAIS) and Scale AI have released the Remote Labor Index (RLI), which tests whether AIs can automate a wide array of real computer work projects. RLI is intended to inform policy, AI research, and businesses about the effects of automation as AI continues to advance.

    RLI is the first benchmark of its kind. Previous AI benchmarks measure AIs on their intelligence and their abilities on isolated and specialized tasks, such as basic web browsing or coding. While these benchmarks measure useful capabilities, they don't measure how AIs can affect the economy. RLI is the first benchmark to collect computer-based work projects from the real economy, containing work from many different professions, such as architecture, product design, video game development, and design.

    [Image: Examples of RLI Projects]

    Current [...]

    Outline:
    (00:29) CAIS and Scale AI release Remote Labor Index
    (02:04) Bipartisan Coalition for Superintelligence Moratorium
    (04:18) In Other News
    (05:56) Discussion about this post

    First published: October 29th, 2025
    Source: https://newsletter.safe.ai/p/ai-safety-newsletter-65-measuring

    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.

  • AI Safety Newsletter

    AISN #63: New AGI Definition and Senate Bill Would Establish Liability for AI Harms

    16.10.2025 | 10 min.

    In this edition: a new bill in the Senate would hold AI companies liable for harms their products create; China tightens its export controls on rare earth metals; a definition of AGI. As a reminder, we're hiring a writer for the newsletter. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

    Senate Bill Would Establish Liability for AI Harms

    Sens. Dick Durbin (D-Ill.) and Josh Hawley (R-Mo.) introduced the AI LEAD Act, which would establish a federal cause of action allowing people harmed by AI systems to sue AI companies.

    Corporations are usually liable for harms their products create. When a company sells a product in the United States that harms someone, that person can generally sue that company for damages under the doctrine of product liability. Those suits force companies to internalize the harms their products create, and incentivize them to make their products safer. [...]

    Outline:
    (00:35) Senate Bill Would Establish Liability for AI Harms
    (02:48) China Tightens Export Controls on Rare Earth Metals
    (05:28) A Definition of AGI
    (08:31) In Other News
    (10:19) Discussion about this post

    First published: October 16th, 2025
    Source: https://newsletter.safe.ai/p/ai-safety-newsletter-63-new-agi-definition

    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.


About AI Safety Newsletter

Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. This podcast also contains narrations of some of our publications.

ABOUT US

The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards. Learn more at https://safe.ai
Podcast website
