
AI Safety Newsletter

Center for AI Safety
Latest episode

Available episodes

5 of 71
  • AISN #65: Measuring Automation and Superintelligence Moratorium Letter
    Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition: A new benchmark measures AI automation; 50,000 people, including top AI scientists, sign an open letter calling for a superintelligence moratorium. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    CAIS and Scale AI release Remote Labor Index: The Center for AI Safety (CAIS) and Scale AI have released the Remote Labor Index (RLI), which tests whether AIs can automate a wide array of real computer work projects. RLI is intended to inform policy, AI research, and businesses about the effects of automation as AI continues to advance. RLI is the first benchmark of its kind. Previous AI benchmarks measure AIs on their intelligence and their abilities on isolated and specialized tasks, such as basic web browsing or coding. While these benchmarks measure useful capabilities, they don't measure how AIs can affect the economy. RLI is the first benchmark to collect computer-based work projects from the real economy, containing work from many different professions, such as architecture, product design, video game development, and design. Examples of RLI Projects: Current [...]
    Outline: (00:29) CAIS and Scale AI release Remote Labor Index; (02:04) Bipartisan Coalition for Superintelligence Moratorium; (04:18) In Other News; (05:56) Discussion about this post.
    First published: October 29th, 2025. Source: https://newsletter.safe.ai/p/ai-safety-newsletter-65-measuring
    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.
    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
    --------  
    6:29
  • AISN #64: New AGI Definition and Senate Bill Would Establish Liability for AI Harms
    In this edition: A new bill in the Senate would hold AI companies liable for harms their products create; China tightens its export controls on rare earth metals; a definition of AGI. As a reminder, we're hiring a writer for the newsletter. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    Senate Bill Would Establish Liability for AI Harms: Sens. Dick Durbin (D-Ill.) and Josh Hawley (R-Mo.) introduced the AI LEAD Act, which would establish a federal cause of action allowing people harmed by AI systems to sue AI companies. Corporations are usually liable for harms their products create. When a company sells a product in the United States that harms someone, that person can generally sue that company for damages under the doctrine of product liability. Those suits force companies to internalize the harms their products create, and incentivize them to make their products safer. [...]
    Outline: (00:35) Senate Bill Would Establish Liability for AI Harms; (02:48) China Tightens Export Controls on Rare Earth Metals; (05:28) A Definition of AGI; (08:31) In Other News; (10:19) Discussion about this post.
    First published: October 16th, 2025. Source: https://newsletter.safe.ai/p/ai-safety-newsletter-63-new-agi-definition
    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.
    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
    --------  
    10:52
  • AISN #63: California’s SB-53 Passes the Legislature
    In this edition: California's legislature sent SB-53, the 'Transparency in Frontier Artificial Intelligence Act', to Governor Newsom's desk. If signed into law, California would become the first US state to regulate catastrophic risk. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    A note from Corin: I'm leaving the AI Safety Newsletter soon to start law school, but if you'd like to hear more from me, I'm planning to continue to write about AI in a new personal newsletter, Conditionals. On a related note, we're also hiring a writer for the newsletter.
    California's SB-53 Passes the Legislature: SB-53 is the Legislature's weaker sequel to last year's vetoed SB-1047. After Governor Gavin Newsom vetoed SB-1047 last year, he convened the Joint California Policy Working Group on AI Frontier Models. The group's June report recommended transparency, incident reporting, and whistleblower protections as near-term priorities for governing AI systems. SB-53 (the [...]
    Outline: (00:49) California's SB-53 Passes the Legislature; (06:33) In Other News; (08:37) Discussion about this post.
    First published: September 24th, 2025. Source: https://newsletter.safe.ai/p/ai-safety-newsletter-63-californias
    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.
    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
    --------  
    9:11
  • AISN #62: Big Tech Launches $100 Million pro-AI Super PAC
    Also: Meta's Chatbot Policies Prompt Backlash Amid AI Reorganization; China Reverses Course on Nvidia H20 Purchases. In this edition: Big tech launches a $100 million pro-AI super PAC; Meta's chatbot policies prompt congressional scrutiny amid the company's AI reorganization; China reverses course on buying Nvidia H20 chips after comments by Secretary of Commerce Howard Lutnick. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    Big Tech Launches $100 Million Pro-AI Super PAC: Silicon Valley executives and investors are putting more than $100 million into a new political network to push back against AI regulations, signaling that the industry intends to be a major player in next year's U.S. midterms. The network, called Leading the Future, is backed by a16z and Greg Brockman, is modeled on the crypto-focused super PAC Fairshake, and aims to influence AI [...]
    Outline: (00:46) Big Tech Launches $100 Million Pro-AI Super PAC; (02:27) Meta's Chatbot Policies Prompt Backlash Amid AI Reorganization; (04:45) China Reverses Course on Nvidia H20 Purchases; (07:21) In Other News.
    First published: August 27th, 2025. Source: https://newsletter.safe.ai/p/ai-safety-newsletter-62-big-tech
    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.
    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
    --------  
    10:16
  • AISN #61: OpenAI Releases GPT-5
    Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this edition: OpenAI releases GPT-5. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    OpenAI Releases GPT-5: Ever since GPT-4's release in March 2023 marked a step-change improvement over GPT-3, people have used 'GPT-5' as a stand-in to speculate about the next generation of AI capabilities. On Thursday, OpenAI released GPT-5. While state-of-the-art in most respects, GPT-5 is not a step-change improvement over competing systems, or even recent OpenAI models, but we shouldn't have expected it to be. GPT-5 is state of the art in most respects. GPT-5 isn't a single model like GPTs 1 through 4. It is a system of two models: a base model that answers questions quickly and is better at tasks like creative writing (an improved [...]
    Outline: (00:19) OpenAI Releases GPT-5; (06:20) In Other News.
    First published: August 12th, 2025. Source: https://newsletter.safe.ai/p/ai-safety-newsletter-61-openai-releases
    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.
    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
    --------  
    9:13

More Society & Culture podcasts

About AI Safety Newsletter

Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. This podcast also contains narrations of some of our publications.

ABOUT US

The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards. Learn more at https://safe.ai
Podcast website

