Research

At the cutting edge of fairness, accountability, and transparency

CICS faculty are engaged in research at the cutting edge of fairness, accountability, and transparency (FAT). From computer vision to public policy to data diversity and beyond, our faculty are making strides in improving the outcomes of algorithmic systems.

Research Projects

Computer Vision

Erik Learned-Miller
Erik Learned-Miller has been involved in face recognition research since 2005 and has published more than 20 articles on the topic. He and his collaborators created Labeled Faces in the Wild, one of the most widely used databases and benchmarks for face recognition. Recently, he has been working to develop standards and principles for the use of face recognition technology in research, government, and business. His current interests include establishing standards for transparency, intended use, and fairness in the deployment of face recognition algorithms.
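One concrete step toward such transparency is reporting accuracy separately for each demographic subgroup rather than as a single aggregate number. Below is a minimal sketch of that idea; the `verify` function and the subgroup-annotated image pairs are hypothetical stand-ins, not part of any real benchmark API.

```python
from collections import defaultdict

def disaggregated_accuracy(pairs, verify):
    """pairs: (img_a, img_b, same_person, subgroup) tuples;
    verify: hypothetical matcher returning True if the faces match."""
    correct, total = defaultdict(int), defaultdict(int)
    for img_a, img_b, same_person, subgroup in pairs:
        total[subgroup] += 1
        correct[subgroup] += verify(img_a, img_b) == same_person
    return {group: correct[group] / total[group] for group in total}
```

A deployment report could then publish the per-subgroup accuracies and the worst-case gap between them alongside the usual aggregate figure.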
Read More

Natural Language Processing

Do language technologies equitably serve all groups of people? The way we speak and write varies across demographics and social communities, but natural language processing models can be quite brittle to this variation. If an NLP system, such as machine translation or opinion analysis, works well for some groups of people but not others, that impedes information access and the ability of authors' voices to be heard, since media communication is now filtered through search and newsfeed relevance algorithms.

We are pursuing an interdisciplinary project to analyze language models' disparities across social communities, in particular African-American Vernacular English (AAVE), a major dialect with marked differences from mainstream English. While it is used widely in oral and social media communication, it has very little presence in the well-edited texts that comprise traditional NLP corpora. We have constructed a corpus of informal AAVE from publicly available social media posts, found that a variety of NLP systems perform worse on this text, and developed more equitable models for analysis tasks such as language identification and parsing. By bringing together sociolinguistics and computer science, this work seeks to support social-scientific analysis goals, as well as to use social-science insights to inform the construction of more effective and fairer language technologies.
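The disparity measurements described above can be made concrete with a simple per-dialect evaluation. A minimal sketch follows, assuming a hypothetical `langid` classifier and two lists of English-language posts; these names are placeholders, not the project's actual corpus or models.

```python
def english_recall(posts, langid):
    """Fraction of English-language posts the identifier labels as English."""
    return sum(langid(post) == "en" for post in posts) / len(posts)

def dialect_gap(mainstream_posts, aave_posts, langid):
    """Positive gap means the tool under-serves AAVE text."""
    return english_recall(mainstream_posts, langid) - english_recall(aave_posts, langid)
```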

Read More

Safe and Fair Machine Learning

Yuriy Brun, Philip Thomas, Shlomo Zilberstein

In this project we study how the user of a machine learning (ML) algorithm can place constraints on the algorithm's behavior. We contend that standard ML algorithms are not user-friendly, in that they can require ML and data science expertise to apply responsibly to real-world applications. We present a new type of ML algorithm that shifts many of the challenges of ensuring safe use from the user of the algorithm to the researcher who designs it. The resulting algorithms provide a simple interface for specifying what constitutes undesirable behavior, and they offer high-probability guarantees that they will not produce that behavior.
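The general shape of such an interface can be sketched as follows. This is a minimal illustration of the idea rather than the project's actual framework: the user supplies a predicate flagging undesirable behavior, and a model is returned only if a held-out safety test passes, here via a standard Hoeffding confidence bound on i.i.d. 0/1 outcomes.

```python
import math

def hoeffding_upper_bound(flags, delta):
    """Upper confidence bound on the mean of values in [0, 1],
    valid with probability at least 1 - delta."""
    n = len(flags)
    return sum(flags) / n + math.sqrt(math.log(1 / delta) / (2 * n))

def safe_fit(train_data, safety_data, fit, is_undesirable, threshold, delta=0.05):
    model = fit(train_data)                       # candidate selection
    flags = [float(is_undesirable(model, x)) for x in safety_data]
    if hoeffding_upper_bound(flags, delta) <= threshold:
        return model      # behavior constraint holds w.p. >= 1 - delta
    return None           # refuse to return a model rather than risk harm
```

Returning nothing when the guarantee cannot be certified is the key design choice: the algorithm's designer, not its user, bears the burden of the safety check.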

Read More

Engineering Fair Systems

Yuriy Brun, Alexandra Meliou

Many diverse factors can cause software bias, including poor design, implementation bugs, unintended component interactions, and the use of unsafe algorithms or biased data. Our work focuses on using the engineering process to improve software fairness. For example, tools can help domain experts specify fairness properties and detect inconsistencies among those requirements; they can automatically generate test suites that measure bias in black-box systems, even when the system's source code and training data are unavailable; they can help developers and data scientists debug the causes of bias in both source code and data; and they can formally verify fairness properties of the implementation. Our work in engineering fair systems combines research in software engineering with machine learning, vision, natural language processing, and theoretical computer science to create tools that help build fairer systems.
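As an illustration of black-box test generation, one simple check is causal: hold every feature fixed, vary only a protected attribute, and count how often the decision flips. The sketch below uses placeholder names (`classifier` and `gen_input` are assumptions, not a specific tool's API).

```python
def causal_discrimination_rate(classifier, gen_input, protected_key, values, trials=1000):
    """classifier: dict -> decision; gen_input: () -> random input dict."""
    flips = 0
    for _ in range(trials):
        x = gen_input()
        outcomes = {classifier({**x, protected_key: v}) for v in values}
        flips += len(outcomes) > 1    # the decision depended on the attribute
    return flips / trials
```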

Read More

Privacy and Fairness

Data collected about individuals is regularly used to make decisions that impact those same individuals. For example, statistical agencies (e.g., the U.S. Census Bureau) commonly publish statistics about groups of individuals that are then used as input to a number of critical civic decision-making procedures, including the allocation of both funding and political representation. In these settings there is a tension between the need to perform accurate allocation, in which individuals and groups receive what they deserve, and the need to protect individuals from undue disclosure of their personal information. As formal privacy methods are adopted by statistical agencies and corporations, new questions are arising about the tradeoffs between privacy protection and fairness. We are investigating these tradeoffs and devising new metrics and algorithms to support a favorable balance between these two social goods.
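A toy example makes the tension concrete. The sketch below allocates a budget proportionally to counts protected with the standard Laplace mechanism; the function names and figures are illustrative only. Because the noise is relatively larger for small groups, their allocations can shift the most.

```python
import math
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale) by inverse transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_allocation(counts, budget, epsilon):
    """Allocate budget proportionally to differentially private counts."""
    noisy = {g: max(0.0, c + laplace_noise(1.0 / epsilon)) for g, c in counts.items()}
    total = sum(noisy.values())
    return {g: budget * n / total for g, n in noisy.items()}

# e.g., private_allocation({"district_a": 12000, "district_b": 90}, 1_000_000, 0.1)
```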

Read More

Data Diversity

Alexandra Meliou, Gerome Miklau

The big data revolution and advances in machine learning have transformed decision making, advertising, medicine, and even election campaigns. Yet data is an imperfect medium, often tainted by skews and biases. Learning systems and analysis software learn and amplify these biases, and as a result discrimination shows up in many data-driven applications, such as advertisements, hotel bookings, image search, and vendor services. Since data skew is often a cause of algorithmic bias, the ability to retrieve balanced, diverse datasets can mitigate the underlying problem. Diversification also has usability benefits, as it allows us to produce representative samples of a dataset that are small enough for human consumption. Our research focuses on developing efficient, scalable methods for producing appropriately diverse subsets of a given dataset, aiming both to alleviate biases in the underlying data and to support user-facing data exploration systems.
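One common approach to retrieving such subsets is greedy max-min diversification: repeatedly pick the item farthest from everything already chosen. A minimal sketch follows, with a placeholder distance function; the methods developed in this line of work are more sophisticated and scalable than this baseline.

```python
def diverse_subset(items, k, dist):
    """Greedy farthest-point selection of k spread-out items."""
    chosen = [items[0]]
    remaining = set(range(1, len(items)))
    while len(chosen) < k and remaining:
        best = max(remaining, key=lambda i: min(dist(items[i], c) for c in chosen))
        chosen.append(items[best])
        remaining.remove(best)
    return chosen
```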

Read More

Explainability in Data Analysis

Alexandra Meliou, David Jensen

Explanations are an integral part of human behavior: people provide explanations to justify choices and actions, and they seek explanations to understand the world around them. The need for explanations extends to technology, as crucial activities and important societal functions increasingly rely on automation. Yet today's data is vast and often unreliable, and the systems that process it are increasingly complex. As a result, data and the algorithms that process it are often poorly understood, potentially leading to spurious analyses and insights. Many users even shy away from powerful analysis tools whose processes are too complex for a human to comprehend and digest, opting instead for less sophisticated yet interpretable alternatives. The goal of our research is to promote users' trust in data and systems through the development of data analysis toolsets in which explainability and interpretability are explicit goals and priorities.
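As a small illustration of intervention-style explanation (a simplified sketch, not a particular system's interface), one can score each input record by how much removing it changes an aggregate of interest, then surface the highest-impact records to the analyst.

```python
def rank_by_influence(rows, aggregate):
    """rows: list of records; aggregate: list -> number."""
    baseline = aggregate(rows)
    scored = [(abs(baseline - aggregate(rows[:i] + rows[i + 1:])), i)
              for i in range(len(rows))]
    scored.sort(reverse=True)
    return [(rows[i], influence) for influence, i in scored]
```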

Read More