055 - What Can Carol Smith’s Ethical AI Work at the DoD Teach Us About Designing Human-Machine Experiences?

Experiencing Data w/ Brian T. O’Neill (UX for AI Data Products, SAAS Analytics, Data Product Management) - A podcast by Brian T. O’Neill from Designing for Analytics - Tuesdays


It’s not just science fiction: as AI becomes more complex and prevalent, so do the ethical implications of this new technology. But don’t just take it from me; take it from Carol Smith, a leading voice in the field of UX and AI. Carol is a senior research scientist in human-machine interaction at Carnegie Mellon University’s Emerging Tech Center, a division of the school’s Software Engineering Institute. Formerly a senior researcher for Uber’s self-driving vehicle experience, Carol, who also works as an adjunct professor at the university’s Human-Computer Interaction Institute, does research on ethical AI in her work with the US Department of Defense. Throughout her 20 years in the UX field, Carol has studied how focusing on ethics can improve user experience with AI.

On today’s episode, Carol and I talked about exactly that: the intersection of user experience and artificial intelligence, what Carol’s work with the DoD has taught her, and why design matters when using machine learning and automation. Better yet, Carol gives us some specific, actionable guidance and her four principles for designing ethical AI systems.

In total, we covered:

- “Human-machine teaming”: what Carol learned while researching how passengers would interact with autonomous cars at Uber (2:17)
- Why Carol focuses on the ethical implications of the user experience research she is doing (4:20)
- Why designing for AI is both a new endeavor and an extension of existing human-centered design principles (6:24)
- How knowing a user’s information needs can drive immense value in AI products (9:14)
- Carol explains how teams can improve their AI product by considering ethics (11:45)
- “Thinking through the worst-case scenarios”: why ethics matters in AI development (14:35) and methods to include ethics early in the process (17:11)
- The intersection between soldiers and artificial intelligence (19:34)
- Making AI flexible to human oddities and complexities (25:11)
- How exactly diverse teams help us design better AI solutions (29:00)
- Carol’s four principles of designing ethical AI systems and “abusability testing” (32:01)

Quotes from Today’s Episode

“The craft of design, particularly for #analytics and #AI solutions, is figuring out who this customer is (your user) and exactly what amount of evidence do they need, and at what time do they need it, and the format they need it in.” – Brian

“From a user experience, or human-centered design aspect, just trying to learn as much as you can about the individuals who are going to use the system is really helpful … And then beyond that, as you start to think about ethics, there are a lot of activities you can do, just speculation activities that you can do on the couch, so to speak, and think through: what is the worst thing that could happen with the system?” – Carol

“[For AI, I recommend] ‘abusability testing,’ or ‘black mirror episode testing,’ where you’re really thinking through the absolute worst-case scenario, because it really helps you to think about the people who could be the most impacted. And particularly people who are marginalized in society, we really want to be careful that we’re not adding to the already bad situations that they’re already facing.” – Carol, on ways to think about the ethical implications of an AI system

“I think people need to be more open to doing slightly slower work […] the move fast and break things time is over. It just, it doesn’t work. Too many people do get hurt, and it’s not a good way to make things. We can make them better, slightly slower.” – Carol

“The four principles of designing ethical AI systems are: accountable to humans, cognizant of speculative risks and benefits, respectful and secure, and honest and usable. And so with these four aspects, we can start to really query the systems and think about different types of protections that we want to provide.” – Carol

“Keep asking tough questions. Have these tough conversations. This is really hard work. It’s very uncomfortable work fo…
