59 Episodes

  1. 35 - Peter Hase on LLM Beliefs and Easy-to-Hard Generalization (Published: 24-8-2024)
  2. 34 - AI Evaluations with Beth Barnes (Published: 28-7-2024)
  3. 33 - RLHF Problems with Scott Emmons (Published: 12-6-2024)
  4. 32 - Understanding Agency with Jan Kulveit (Published: 30-5-2024)
  5. 31 - Singular Learning Theory with Daniel Murfet (Published: 7-5-2024)
  6. 30 - AI Security with Jeffrey Ladish (Published: 30-4-2024)
  7. 29 - Science of Deep Learning with Vikrant Varma (Published: 25-4-2024)
  8. 28 - Suing Labs for AI Risk with Gabriel Weil (Published: 17-4-2024)
  9. 27 - AI Control with Buck Shlegeris and Ryan Greenblatt (Published: 11-4-2024)
  10. 26 - AI Governance with Elizabeth Seger (Published: 26-11-2023)
  11. 25 - Cooperative AI with Caspar Oesterheld (Published: 3-10-2023)
  12. 24 - Superalignment with Jan Leike (Published: 27-7-2023)
  13. 23 - Mechanistic Anomaly Detection with Mark Xu (Published: 27-7-2023)
  14. Survey, store closing, Patreon (Published: 28-6-2023)
  15. 22 - Shard Theory with Quintin Pope (Published: 15-6-2023)
  16. 21 - Interpretability for Engineers with Stephen Casper (Published: 2-5-2023)
  17. 20 - 'Reform' AI Alignment with Scott Aaronson (Published: 12-4-2023)
  18. Store, Patreon, Video (Published: 7-2-2023)
  19. 19 - Mechanistic Interpretability with Neel Nanda (Published: 4-2-2023)
  20. New podcast - The Filan Cabinet (Published: 13-10-2022)

AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it's been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read transcripts at axrp.net.
