Full Abstract: Black-box machine learning models are used in critical decision-making domains, prompting calls for greater algorithmic transparency. The drawback is that model explanations can leak information about the data used to generate them, undermining data privacy. To address this issue, we propose differentially private algorithms to construct feature-based model explanations. … Continue reading "Model Explanations with Differential Privacy"
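As a concrete illustration of the general idea (a minimal sketch, not the paper's algorithm), feature-based explanation scores can be privatized with the Laplace mechanism: noise scaled to the scores' sensitivity is added before release. The scores, sensitivity, and epsilon below are all hypothetical:

```python
import numpy as np

def dp_feature_importance(scores, epsilon, sensitivity, seed=0):
    """Release feature-importance scores under epsilon-DP via the Laplace mechanism."""
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon          # noise scale grows as epsilon shrinks
    return scores + rng.laplace(0.0, scale, size=scores.shape)

# Hypothetical importance scores for four features of some trained model.
scores = np.array([0.40, 0.25, 0.20, 0.15])
private_scores = dp_feature_importance(scores, epsilon=1.0, sensitivity=0.05)
```

Smaller epsilon (stronger privacy) widens the noise, so the released ranking of features becomes less reliable; that privacy–utility trade-off is exactly what the paper studies.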
CommunityClick-Virtual: Supporting Inclusive Participation during Online Public Engagement Events Public engagement is paramount for participatory democracy. For decades, traditional methods such as town halls, public forums, and workshops have remained the modus operandi for public engagement. The goal of these engagements is to ensure inclusivity of public participation so that decision-makers can engage, exchange thoughts, … Continue reading "CommunityClick-Virtual: Supporting Inclusive Participation during Online Public Engagement Events"
Powerful machine learning models can automate decisions in critical areas of human life, such as criminal pre-trial detention and hiring. These models are often trained on large datasets of historical decisions. However, past discriminatory human behavior may have tainted these datasets with discrimination. Therefore, it is imperative to ask how we can ensure … Continue reading "How to train models that do not propagate discrimination?"
Posted:
April 06, 2022
Under:
Core ML
Fairness
Machine Learning
By Stephen Giguere (UT Austin, UMass Alumnus), Blossom Metevier (UMass), Yuriy Brun (UMass), Bruno Castro da Silva (UMass), Philip Thomas (UMass), Scott Niekum (UT Austin)
Full Abstract: Recent studies found that using machine learning for social applications can lead to injustice in the form of racist, sexist, and otherwise unfair and discriminatory outcomes. To address this challenge, machine learning algorithms have been designed to limit the likelihood that such unfair behavior occurs. However, these approaches typically assume the data used … Continue reading "Paper: Fairness Guarantees under Demographic Shift"
Full Abstract: The goal of this paper is to develop a practical and general-purpose approach to construct confidence intervals for differentially private parametric estimation. We find that the parametric bootstrap is a simple and effective solution. It cleanly reasons about variability of both the data sample and the randomized privacy mechanism and applies “out of … Continue reading "Paper: Parametric Bootstrap for Differentially Private Confidence Intervals"
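A minimal sketch of the parametric-bootstrap idea, assuming a Laplace-mechanism private mean and a Gaussian data model; every name and parameter here is illustrative, not the paper's implementation. The key point is that each bootstrap replicate resimulates both sources of variability: the data sample and the randomized privacy mechanism.

```python
import numpy as np

def private_mean(x, epsilon, lo=0.0, hi=1.0, rng=None):
    """Epsilon-DP mean: clip to [lo, hi] so the sensitivity is (hi - lo)/n."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.clip(x, lo, hi)
    sensitivity = (hi - lo) / len(x)
    return x.mean() + rng.laplace(0.0, sensitivity / epsilon)

def parametric_bootstrap_ci(theta_hat, n, epsilon, sigma=0.25,
                            B=2000, alpha=0.05, seed=0):
    """Basic-bootstrap CI that resimulates both data and privacy noise."""
    rng = np.random.default_rng(seed)
    reps = np.empty(B)
    for b in range(B):
        x_star = rng.normal(theta_hat, sigma, size=n)      # data variability
        reps[b] = private_mean(x_star, epsilon, rng=rng)   # + privacy noise
    lo_q, hi_q = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return 2 * theta_hat - hi_q, 2 * theta_hat - lo_q

data = np.random.default_rng(1).normal(0.5, 0.25, size=500)
theta = private_mean(data, epsilon=1.0, rng=np.random.default_rng(2))
ci_lo, ci_hi = parametric_bootstrap_ci(theta, n=500, epsilon=1.0)
```

Because the privacy mechanism is rerun inside every bootstrap replicate, the resulting interval automatically widens as epsilon shrinks, which is the "out of the box" behavior the abstract highlights.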
Full Abstract: Variational inference for state space models (SSMs) is known to be hard in general. Recent works focus on deriving variational objectives for SSMs from unbiased sequential Monte Carlo estimators. We reveal that the marginal particle filter is obtained from sequential Monte Carlo by applying Rao-Blackwellization operations, which sacrifices the trajectory information for reduced … Continue reading "Paper: Variational Marginal Particle Filters"
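To make the Rao-Blackwellization concrete, here is a minimal sketch (not the paper's variational algorithm) of a marginal particle filter on a 1-D linear-Gaussian state space model. With the bootstrap mixture proposal, marginalizing the ancestor index out of the importance weight leaves only the observation likelihood, which is exactly the "trajectory information sacrificed for reduced variance" trade-off the abstract describes. All model parameters are hypothetical:

```python
import numpy as np

def marginal_particle_filter(ys, N=500, a=0.9, q=0.1, r=0.5, seed=0):
    """Marginal PF for x_t = a*x_{t-1} + N(0, q), y_t = x_t + N(0, r).

    Ancestors are drawn from the weighted mixture and then discarded:
    the weight no longer depends on which trajectory a particle came from.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, N)
    w = np.full(N, 1.0 / N)
    loglik = 0.0
    for y in ys:
        anc = rng.choice(N, size=N, p=w)                 # sample ancestor indices
        x = a * x[anc] + rng.normal(0.0, np.sqrt(q), N)  # propagate particles
        # Rao-Blackwellized weight: observation likelihood only.
        logw = -0.5 * ((y - x) ** 2 / r + np.log(2 * np.pi * r))
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())   # unbiased estimate of log p(y_t | y_{1:t-1})
        w /= w.sum()
    return loglik

# Simulate a short observation sequence from the same model.
rng = np.random.default_rng(1)
states, x = [], 0.0
for _ in range(30):
    x = 0.9 * x + rng.normal(0.0, np.sqrt(0.1))
    states.append(x)
ys = np.array(states) + rng.normal(0.0, np.sqrt(0.5), size=30)
ll = marginal_particle_filter(ys)
```

The accumulated `loglik` is the unbiased marginal-likelihood estimator that, in the paper's setting, would be plugged into a variational objective.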