OpenAI says it may ‘adjust’ its safety requirements if a rival lab releases ‘high-risk’ AI

TechCrunch Startup News - A podcast by TechCrunch

In an update to its Preparedness Framework — the internal framework OpenAI uses to decide whether AI models are safe and what safeguards, if any, are needed during development and release — OpenAI said it may "adjust" its requirements if a rival AI lab releases a "high-risk" system without comparable safeguards.
