Fair Machine Learning Post-Affirmative Action

The U.S. Supreme Court, in a 6-3 decision on June 29, effectively ended the use of race in college admissions. Indeed, national polls found that a plurality of Americans – 42%, according to a poll conducted by the University of Massachusetts – agree that the policy should be discontinued, while 33% support its continued use …

My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning 

Full Abstract: Machine learning technology has become ubiquitous, but, unfortunately, often exhibits bias. As a consequence, disparate stakeholders need to interact with and make informed decisions about using machine learning models in everyday systems. Visualization technology can support stakeholders in understanding and evaluating trade-offs between, for example, accuracy and fairness of models. This paper aims …
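
To make that trade-off concrete, here is a minimal sketch (not the paper's code; the model names and data are synthetic stand-ins) of the two quantities such a visualization might plot for each candidate model: accuracy and a simple demographic parity difference.

```python
# Minimal sketch: per-model (accuracy, unfairness) points that a fairness
# visualization could present to stakeholders. All data here is synthetic.
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    return np.mean(y_true == y_pred)

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    rate_a = np.mean(y_pred[group == 0])
    rate_b = np.mean(y_pred[group == 1])
    return abs(rate_a - rate_b)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)   # synthetic labels
group = rng.integers(0, 2, size=1000)    # synthetic group membership

# Each candidate model reduces to one (accuracy, unfairness) point,
# which a scatter plot could then lay out for comparison.
for name in ["model_a", "model_b", "model_c"]:    # hypothetical models
    y_pred = rng.integers(0, 2, size=1000)        # stand-in predictions
    print(name,
          accuracy(y_true, y_pred),
          demographic_parity_difference(y_pred, group))
```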

Towards an AI Accountability Policy

It’s a new day for AI. AI systems can assist us in writing a new web script, decide whether we should worry about that weird spot in an X-ray, and find friends on social media. AI systems help determine recidivism risk. Music-generating AI can render novel songs by Drake and the Weeknd — that …

Computing Must Pay Attention to Outcomes to Achieve Equity

A push for increased attention to cultural competency [12] in the training of computing professionals is gathering momentum across ACM. In fact, the CS202X: ACM/IEEE-CS/AAAI Computer Science Curricula Taskforce [4] has a knowledge area subcommittee devoted to SEP, or Society, Ethics, and Professionalism [5]. This subcommittee is …

How to train models that do not propagate discrimination?

Powerful machine learning models can automate decisions in critical areas of human lives, such as criminal pre-trial detention and hiring. These models are often trained on large datasets of historical decisions. However, past discriminatory human behavior may have tainted these decisions and datasets with discrimination. Therefore, it is imperative to ask: how can we ensure …
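
One concrete instance of the kind of mitigation this question points to is reweighing, a standard pre-processing technique in the spirit of Kamiran and Calders; the sketch below illustrates that technique under synthetic data, and is not necessarily the approach the post develops.

```python
# Minimal sketch of reweighing: each (group, label) combination is weighted
# so that group membership and outcome look statistically independent in
# training, counteracting correlation inherited from historical decisions.
import numpy as np

def reweighing_weights(group, y):
    """Weight w for samples with group g, label c: P(g) * P(c) / P(g, c)."""
    weights = np.empty(len(y))
    for g in np.unique(group):
        for c in np.unique(y):
            mask = (group == g) & (y == c)
            p_joint = mask.mean()
            if p_joint > 0:
                weights[mask] = (group == g).mean() * (y == c).mean() / p_joint
    return weights

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
# Synthetic historical labels that are (unfairly) correlated with group.
y = (rng.random(1000) < np.where(group == 1, 0.7, 0.3)).astype(int)

w = reweighing_weights(group, y)
# These weights can be passed to most learners, e.g. sklearn's
# LogisticRegression.fit(X, y, sample_weight=w).
```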

Paper: Fairness Guarantees under Demographic Shift

Full Abstract: Recent studies have found that using machine learning for social applications can lead to injustice in the form of racist, sexist, and otherwise unfair and discriminatory outcomes. To address this challenge, recent machine learning algorithms have been designed to limit the likelihood that such unfair behavior occurs. However, these approaches typically assume the data used …
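
As a rough illustration of the setting (my sketch, not the paper's algorithm), the code below estimates how a fixed model would behave under an anticipated change in group proportions by importance weighting held-out samples; the group frequencies, metric, and data are assumptions made for the example.

```python
# Minimal sketch: evaluate a model under demographic shift by reweighting
# each held-out sample by q(g) / p(g), the ratio of the anticipated to the
# observed frequency of its demographic group. Everything here is synthetic.
import numpy as np

def importance_weights(group, p_observed, p_anticipated):
    """w_i = q(g_i) / p(g_i) for each sample's group g_i."""
    p = np.asarray(p_observed)
    q = np.asarray(p_anticipated)
    return q[group] / p[group]

def shifted_accuracy(y_true, y_pred, group, p_observed, p_anticipated):
    """Estimate accuracy as it would be under the anticipated group mix."""
    w = importance_weights(group, p_observed, p_anticipated)
    return np.average(y_true == y_pred, weights=w)

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=2000)    # observed mix: roughly 50/50
y_true = rng.integers(0, 2, size=2000)
y_pred = rng.integers(0, 2, size=2000)   # stand-in for real predictions

# Ask how the model would fare if group 1 grew to 80% of the population.
print(shifted_accuracy(y_true, y_pred, group, [0.5, 0.5], [0.2, 0.8]))
```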
