Towards an AI Accountability Policy

It’s a new day for AI. AI systems can assist us in writing a new web script, decide whether we should worry about that weird spot in an X-ray, and find friends on social media. AI systems help determine recidivism risk. Music-generating AI can render novel songs by Drake and the Weeknd, that …


AutoMTL: A Programming Framework for Automating Efficient Multi-Task Learning

Full Abstract: Multi-task learning (MTL) jointly learns a set of tasks by sharing parameters among tasks. It is a promising approach for reducing storage costs while improving task accuracy for many computer vision tasks. The effective adoption of MTL faces two main challenges. The first challenge is to determine what parameters to share across tasks …
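The parameter-sharing idea behind MTL can be made concrete with a toy sketch. This is not AutoMTL's actual mechanism, just a minimal illustration of hard parameter sharing: two hypothetical tasks ("seg" and "depth") reuse one backbone layer and keep only small task-specific heads, which is where the storage savings come from. All dimensions are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
D_IN, D_HIDDEN, D_OUT = 64, 128, 10

# Hard parameter sharing: one backbone matrix, plus one head per task.
shared_backbone = rng.standard_normal((D_IN, D_HIDDEN))
task_heads = {t: rng.standard_normal((D_HIDDEN, D_OUT)) for t in ("seg", "depth")}

def forward(x, task):
    """Run input through the shared backbone, then the task-specific head."""
    h = np.maximum(x @ shared_backbone, 0.0)  # shared ReLU layer
    return h @ task_heads[task]

# Storage comparison: sharing avoids duplicating the backbone per task.
shared_params = shared_backbone.size + sum(h.size for h in task_heads.values())
separate_params = len(task_heads) * (shared_backbone.size + D_HIDDEN * D_OUT)
print(shared_params, separate_params)  # the shared model is much smaller
```

The second challenge the abstract alludes to is implicit here: deciding *which* layers to share (here, the whole backbone) is exactly the design decision a framework like AutoMTL aims to automate.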


Computing Must Pay Attention to Outcomes to Achieve Equity

A push is gathering momentum across ACM for increased attention to cultural competency [12] in the training of computing professionals. In fact, the CS202X: ACM/IEEE-CS/AAAI Computer Science Curricula Taskforce [4] has a knowledge area subcommittee devoted to SEP, or Society, Ethics, and Professionalism [5]. This subcommittee is …


How to train models that do not propagate discrimination?

Powerful machine learning models can automate decisions in critical areas of human lives, such as criminal pre-trial detention and hiring. These models are often trained on large datasets of historical decisions. However, past discriminatory human behavior may have tainted these decisions and datasets with discrimination. Therefore, it is imperative to ask how we can ensure …


Paper: Fairness Guarantees under Demographic Shift

Full Abstract: Recent studies have found that using machine learning for social applications can lead to injustice in the form of racist, sexist, and otherwise unfair and discriminatory outcomes. To address this challenge, recent machine learning algorithms have been designed to limit the likelihood that such unfair behavior occurs. However, these approaches typically assume the data used …
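Why a fixed-distribution assumption matters can be seen with a two-line mixture calculation (my own illustration, not the paper's analysis): even if a model's per-group behavior is frozen, a demographic shift in group proportions changes its population-level behavior, so guarantees certified on the training distribution need not carry over. The rates below are invented for illustration.

```python
# Per-group favorable-prediction rates, held fixed (illustrative values).
rate_a, rate_b = 0.6, 0.4

def overall_rate(frac_a):
    """Population-level favorable rate as a mixture over group proportions."""
    return frac_a * rate_a + (1 - frac_a) * rate_b

before = overall_rate(0.5)  # balanced population: 0.5
after = overall_rate(0.9)   # demographic shift toward group A: 0.58
print(before, after)
```

Guaranteeing fairness *under* such shifts, rather than for one fixed mixture, is the gap the paper targets.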
