Future of Life Institute Podcast
A podcast by the Future of Life Institute

230 Episodes
Vincent Boulanin on the Dangers of AI in Nuclear Weapons Systems
Published: December 1, 2022
Robin Hanson on Predicting the Future of Artificial Intelligence
Published: November 24, 2022
Robin Hanson on Grabby Aliens and When Humanity Will Meet Them
Published: November 17, 2022
Ajeya Cotra on Thinking Clearly in a Rapidly Changing World
Published: November 10, 2022
Ajeya Cotra on how Artificial Intelligence Could Cause Catastrophe
Published: November 3, 2022
Ajeya Cotra on Forecasting Transformative Artificial Intelligence
Published: October 27, 2022
Alan Robock on Nuclear Winter, Famine, and Geoengineering
Published: October 20, 2022
Brian Toon on Nuclear Winter, Asteroids, Volcanoes, and the Future of Humanity
Published: October 13, 2022
Philip Reiner on Nuclear Command, Control, and Communications
Published: October 6, 2022
Daniela and Dario Amodei on Anthropic
Published: March 4, 2022
Anthony Aguirre and Anna Yelizarova on FLI's Worldbuilding Contest
Published: February 9, 2022
David Chalmers on Reality+: Virtual Worlds and the Problems of Philosophy
Published: January 26, 2022
Rohin Shah on the State of AGI Safety Research in 2021
Published: November 2, 2021
Future of Life Institute's $25M Grants Program for Existential Risk Reduction
Published: October 18, 2021
Filippa Lentzos on Global Catastrophic Biological Risks
Published: October 1, 2021
Susan Solomon and Stephen Andersen on Saving the Ozone Layer
Published: September 16, 2021
James Manyika on Global Economic and Technological Trends
Published: September 7, 2021
Michael Klare on the Pentagon's view of Climate Change and the Risks of State Collapse
Published: July 30, 2021
Avi Loeb on UFOs and if they're Alien in Origin
Published: July 9, 2021
Avi Loeb on 'Oumuamua, Aliens, Space Archeology, Great Filters, and Superstructures
Published: July 9, 2021
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on AI governance, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.