The Role of Trust in the UX of AI Systems
Human-Centered AI (HCAI)
Who Should Attend
AI algorithms are something of a black box; at times, even the data scientists don't understand the meaning of the output. This is further complicated by issues with accuracy, completeness, and bias in data. In areas that present great opportunities for AI, such as healthcare and financial services, trust in data accuracy is critical for decision support. The stakes of a mistake in a movie recommendation are much lower than in a loan approval or a drug-interaction notification.
An important part of the usability of AI solutions is helping users determine when they should trust AI output and when they should question it. What role does confirmation bias play in these interactions?
In human interactions, we have established cues that tell us when to trust and when to question information. As designers, how can we craft experiences that use AI and offer the same kinds of cues, so that the appropriate level of trust is engendered in the output?
Tickets available here: https://www.eventbrite.com/e/the-role-of-trust-in-the-ux-of-ai-systems-tickets-128765076811