Abstract: Explainable AI (XAI) is a necessary component of interpretable ML systems. Without explanations, end-users are less likely to trust and adopt ML-based technologies. Without a means of understanding model decision-making, business stakeholders have a difficult time assessing and launching new ML-based products. And without insight into why an ML application is behaving in a certain way, application developers have a harder time troubleshooting issues. However, the challenge of designing XAI is that the audiences for explanations come from varied backgrounds, have different levels of experience with statistics and mathematical reasoning, and are subject to cognitive biases. They will also be relying on ML and Explainable AI in a variety of contexts for a variety of different tasks. In this talk, I'll go over the "basics" of Explainable AI: what it is and when you need it. Then I'll discuss some of the human factors to consider when designing XAI for all types of end-users.

Author Bio: Meg is currently a UX Researcher for Google Cloud AI and Industry Solutions, where she focuses her research on Explainable AI and Model Understanding. She has had a varied career working for start-ups and large corporations alike, across fields such as EdTech, weather forecasting, and commercial robotics. She has published articles on topics such as information visualization, educational-technology design, human-robot interaction (HRI), and voice user interface (VUI) design. Meg is also a proud alumna of Virginia Tech, where she received her Ph.D. in Human-Computer Interaction (HCI).
November 30 – December 1, 2022
10:30 PM – 12:30 AM UTC
10:30 PM | Welcome!
10:35 PM | Meg presents Explainable AI + Q&A
11:30 PM | Break
11:35 PM | Virtual Networking Hangout