Explainability in Data Analysis

Explanations are an integral part of human behavior: people provide explanations to justify choices and actions, and seek explanations to understand the world around them. The need for explanations extends to technology, as crucial activities and important societal functions increasingly rely on automation. Yet today’s data is vast and often unreliable, and the systems that process it are increasingly complex. As a result, data and the algorithms that operate on it are often poorly understood, potentially leading to spurious analyses and insights. Many users shy away from powerful analysis tools whose processes are too complex for a human to comprehend, opting instead for less sophisticated yet interpretable alternatives. The goal of our research is to promote users’ trust in data and systems by developing data analysis toolsets in which explainability and interpretability are explicit goals and priorities of system function.

Publications