My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning
Full Abstract: Machine learning technology has become ubiquitous but, unfortunately, often exhibits bias. As a consequence, disparate stakeholders need to interact with and make informed decisions about using machine learning models in everyday systems. Visualization technology can support stakeholders in understanding and evaluating trade-offs between, for example, the accuracy and fairness of models. This paper aims …
AutoMTL: A Programming Framework for Automating Efficient Multi-Task Learning
Full Abstract: Multi-task learning (MTL) jointly learns a set of tasks by sharing parameters among tasks. It is a promising approach for reducing storage costs while improving task accuracy for many computer vision tasks. The effective adoption of MTL faces two main challenges. The first challenge is to determine which parameters to share across tasks …
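The storage-cost argument in the excerpt can be made concrete with a quick parameter count. The layer sizes below are purely illustrative assumptions (AutoMTL itself learns which layers to share); this sketch only shows why hard sharing of a backbone reduces storage compared to one model per task:

```python
# Hypothetical conv-layer parameter counts; AutoMTL searches over which
# layers to share, but even naive hard sharing illustrates the savings.
backbone_layers = [3 * 3 * 64 * 64] * 4   # four 3x3 conv layers, 64 -> 64 channels
task_head = 64 * 10                        # one small linear head per task
n_tasks = 3

# Separate models: every task pays for its own backbone and head.
separate = n_tasks * (sum(backbone_layers) + task_head)

# Hard parameter sharing: one backbone, plus one head per task.
shared = sum(backbone_layers) + n_tasks * task_head

print(separate, shared)  # shared is roughly 1/n_tasks of separate
```

The trade-off the abstract alludes to is that sharing everything can hurt per-task accuracy, which is why deciding *what* to share is the hard part.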
Equi-explanation Maps: Concise and Informative Global Summary Explanations
Full Abstract: We attempt to summarize the model logic of a black-box classification model in order to generate concise and informative global explanations. We propose equi-explanation maps, a new explanation data structure that presents the region of interest as a union of equi-explanation subspaces along with their explanation vectors. We then propose E-Map, a method to …
Model Explanations with Differential Privacy
Full Abstract: Black-box machine learning models are used in critical decision-making domains, giving rise to several calls for more algorithmic transparency. The drawback is that model explanations can leak information about the data used to generate them, thus undermining data privacy. To address this issue, we propose differentially private algorithms to construct feature-based model explanations. …
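The excerpt does not reproduce the paper's constructions, but the standard building block for results like this is the Laplace mechanism: perturb each coordinate of a feature-importance vector with noise scaled to its sensitivity. A minimal sketch, with the function names and the sensitivity value as illustrative assumptions rather than the paper's algorithm:

```python
import math
import random

def sample_laplace(scale):
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_explanation(importances, sensitivity, epsilon):
    """Laplace mechanism: add noise with scale sensitivity/epsilon per feature.

    `sensitivity` bounds how much any single training record can change each
    importance score; bounding it for a given explainer is the hard part and
    is simply assumed here.
    """
    scale = sensitivity / epsilon
    return [w + sample_laplace(scale) for w in importances]

random.seed(0)
noisy = private_explanation([0.8, -0.3, 0.1], sensitivity=0.05, epsilon=1.0)
print(noisy)  # perturbed importance scores; smaller epsilon means more noise
```

The privacy/utility tension the abstract describes shows up directly: shrinking epsilon (stronger privacy) inflates the noise scale and degrades the explanation.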
Paper: Fairness Guarantees under Demographic Shift
Full Abstract: Recent studies found that using machine learning for social applications can lead to injustice in the form of racist, sexist, and otherwise unfair and discriminatory outcomes. To address this challenge, recent machine learning algorithms have been designed to limit the likelihood that such unfair behavior occurs. However, these approaches typically assume the data used …
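For readers unfamiliar with the metrics such guarantees are stated over, the sketch below computes a demographic parity gap, the difference in positive-prediction rates between groups. This is only an example fairness metric with toy data, not the paper's algorithm; the paper's point is that a guarantee certified on today's group proportions may no longer hold once those proportions shift:

```python
def group_positive_rates(preds, groups):
    """Per-group rate of positive predictions (demographic parity components)."""
    totals, positives = {}, {}
    for p, g in zip(preds, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + p
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(preds, groups):
    rates = group_positive_rates(preds, groups)
    return max(rates.values()) - min(rates.values())

# Toy predictions: group "a" receives positives 3/4 of the time, group "b" 1/4.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.5
```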
Paper: Parametric Bootstrap for Differentially Private Confidence Intervals
Full Abstract: The goal of this paper is to develop a practical and general-purpose approach to construct confidence intervals for differentially private parametric estimation. We find that the parametric bootstrap is a simple and effective solution. It cleanly reasons about variability of both the data sample and the randomized privacy mechanism and applies “out of …
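The recipe the excerpt describes can be sketched end-to-end for the simplest case, a differentially private mean of bounded data: fit a parametric model around the privatized estimate, then repeatedly simulate both fresh data and fresh privacy noise, and take percentiles of the simulated estimates. This is a simplified illustration of the parametric-bootstrap idea, not the paper's exact estimator; in particular, the noise scale `sigma` is assumed known here, whereas in practice it too would be estimated privately:

```python
import math
import random

def dp_mean(xs, epsilon, lo=0.0, hi=1.0):
    """Laplace-mechanism mean of data clamped to [lo, hi]."""
    xs = [min(max(x, lo), hi) for x in xs]
    sensitivity = (hi - lo) / len(xs)
    u = random.random() - 0.5
    noise = -(sensitivity / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return sum(xs) / len(xs) + noise

def bootstrap_ci(est, n, epsilon, sigma, reps=2000, alpha=0.05):
    """Parametric bootstrap: re-simulate data AND privacy noise each replicate."""
    sims = []
    for _ in range(reps):
        fake = [random.gauss(est, sigma) for _ in range(n)]
        sims.append(dp_mean(fake, epsilon))
    sims.sort()
    return sims[int(reps * alpha / 2)], sims[int(reps * (1 - alpha / 2))]

random.seed(0)
data = [random.gauss(0.5, 0.1) for _ in range(500)]
est = dp_mean(data, epsilon=1.0)
lo, hi = bootstrap_ci(est, n=500, epsilon=1.0, sigma=0.1)
print(round(lo, 3), round(hi, 3))
```

Because each replicate re-runs the privacy mechanism, the interval accounts for both sampling variability and mechanism noise, which is the "cleanly reasons about variability of both" point in the abstract.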
Paper: Variational Marginal Particle Filters
Full Abstract: Variational inference for state space models (SSMs) is known to be hard in general. Recent works focus on deriving variational objectives for SSMs from unbiased sequential Monte Carlo estimators. We reveal that the marginal particle filter is obtained from sequential Monte Carlo by applying Rao-Blackwellization operations, which sacrifices the trajectory information for reduced …
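For context, the "unbiased sequential Monte Carlo estimators" mentioned above are typically likelihood estimates produced by a particle filter. Below is a plain bootstrap particle filter for a toy 1-D linear-Gaussian SSM, shown only to make the estimator concrete; it is not the marginal particle filter the paper analyzes, and all model parameters are illustrative:

```python
import math
import random

def gauss_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def bootstrap_pf_loglik(ys, n_particles=200, a=0.9, q=1.0, r=0.5):
    """Estimate log p(y_{1:T}) for x_t = a*x_{t-1} + N(0, q), y_t = x_t + N(0, r)."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    loglik = 0.0
    for y in ys:
        # Propagate through the transition, then weight by the observation density.
        particles = [a * x + random.gauss(0.0, math.sqrt(q)) for x in particles]
        weights = [math.exp(gauss_logpdf(y, x, math.sqrt(r))) for x in particles]
        loglik += math.log(sum(weights) / n_particles)
        # Multinomial resampling over whole trajectories; the marginal particle
        # filter instead marginalizes (Rao-Blackwellizes) over the ancestry.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return loglik

random.seed(0)
ys = [0.3, -0.1, 0.5, 0.2]
ll = bootstrap_pf_loglik(ys)
print(round(ll, 2))  # stochastic estimate of the log marginal likelihood
```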
Paper: Coresets for Classification – Simplified and Strengthened
We show how to sample a small subset of points from a larger dataset, such that if we solve logistic regression, hinge loss regression (i.e., soft-margin SVM), or a number of other problems used to train linear classifiers on the sampled dataset, then we obtain a near-optimal solution for the full dataset. This …
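The core mechanism can be illustrated with generic importance sampling: draw points with probability proportional to some score and reweight each by 1/(m·p_i), so the weighted subset loss is an unbiased estimate of the full loss at any fixed parameter vector. The norm-based score below is a crude stand-in, not the paper's sensitivity bounds:

```python
import math
import random

def logistic_loss(theta, X, y, weights=None):
    """(Weighted) sum of logistic losses log(1 + exp(-y * <theta, x>))."""
    if weights is None:
        weights = [1.0] * len(X)
    total = 0.0
    for w, x, yi in zip(weights, X, y):
        margin = yi * sum(t * xi for t, xi in zip(theta, x))
        total += w * math.log1p(math.exp(-margin))
    return total

def sample_coreset(X, y, m, scores):
    """Sample m points with probability proportional to score; weight by 1/(m p_i)."""
    z = sum(scores)
    probs = [s / z for s in scores]
    idx = random.choices(range(len(X)), weights=probs, k=m)
    weights = [1.0 / (m * probs[i]) for i in idx]
    return [X[i] for i in idx], [y[i] for i in idx], weights

random.seed(0)
n = 400
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(n)]
y = [1 if x[0] + x[1] > 0 else -1 for x in X]
scores = [1.0 + math.hypot(*x) for x in X]   # crude stand-in for sensitivities
Xc, yc, wc = sample_coreset(X, y, m=50, scores=scores)

theta = [0.5, -0.25]
full = logistic_loss(theta, X, y)
core = logistic_loss(theta, Xc, yc, wc)
print(round(full, 2), round(core, 2))  # weighted coreset loss tracks the full loss
```

The paper's contribution is choosing the sampling scores so that this approximation holds uniformly over all parameter vectors, not just one fixed `theta` as in this toy check.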
Paper: MAP Propagation Algorithm: Faster Learning with a Team of Reinforcement Learning Agents
Most deep learning algorithms rely on error backpropagation, which is generally regarded as biologically implausible. An alternative way of training an artificial neural network is to treat each unit in the network as a reinforcement learning agent. As such, all units can be trained by REINFORCE. However, this learning method suffers from high variance and …
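In the single-unit case, "trained by REINFORCE" reduces to the classic score-function update sketched below: a Bernoulli unit nudges its parameter in the direction that made rewarded actions more likely. The setup and names are illustrative, and the high variance of exactly this kind of estimator (worse with many units) is what motivates the paper's MAP propagation:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
theta, lr = 0.0, 0.5
for _ in range(500):
    p = sigmoid(theta)                    # unit's probability of firing
    a = 1 if random.random() < p else 0   # stochastic action (the "agent")
    reward = 1.0 if a == 1 else 0.0       # toy environment prefers firing
    # REINFORCE: gradient of log Bernoulli(p) w.r.t. theta is (a - p),
    # scaled by the received reward (no baseline, hence high variance).
    theta += lr * reward * (a - p)

print(round(sigmoid(theta), 3))  # learned firing probability, near 1.0
```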
Paper: Turing Completeness of Bounded-Precision Recurrent Neural Networks
Previous works have proved that recurrent neural networks (RNNs) are Turing-complete. In the proofs, the RNNs allow for neurons with unbounded precision, which is neither practical in implementation nor biologically plausible. To remove this assumption, we propose a dynamically growing memory module made of neurons of fixed precision. We prove that a 54-neuron bounded-precision RNN …