AIAP: Inverse Reinforcement Learning and Inferring Human Preferences with Dylan Hadfield-Menell

Future of Life Institute Podcast - A podcast by Future of Life Institute

Inverse Reinforcement Learning and Inferring Human Preferences is the first podcast in the new AI Alignment series, hosted by Lucas Perry. This series will explore the AI alignment problem across a wide variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope you will join the conversations by following or subscribing to us on YouTube, SoundCloud, or your preferred podcast site/application.

In this podcast, Lucas spoke with Dylan Hadfield-Menell, a fifth-year Ph.D. student at UC Berkeley. Dylan's research focuses on the value alignment problem in artificial intelligence. He is ultimately concerned with designing algorithms that can learn about and pursue the intended goals of their users, designers, and society in general. His recent work primarily focuses on algorithms for human-robot interaction with unknown preferences and on reliability engineering for learning systems.

Topics discussed in this episode include:
- Inverse reinforcement learning
- Goodhart's Law and its relation to value alignment
- Corrigibility and obedience in AI systems
- IRL and the evolution of human values
- Ethics and moral psychology in AI alignment
- Human preference aggregation
- The future of IRL
