AutoMTL: A Programming Framework for Automating Efficient Multi-Task Learning

Multi-task learning (MTL) jointly learns a set of tasks by sharing parameters among tasks. It is a promising approach for reducing storage costs while improving task accuracy for many computer vision tasks. The effective adoption of MTL faces two main challenges. The first challenge is to determine what parameters to share across tasks …
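To see why sharing parameters reduces storage, here is a back-of-the-envelope sketch of hard parameter sharing (one shared backbone plus a small head per task) versus one full network per task. The layer sizes are made-up illustration values; AutoMTL itself searches over what to share rather than fixing it by hand like this.

```python
# Hard parameter sharing: one shared backbone plus a small per-task head,
# versus one full network per task. Sizes are hypothetical illustration
# values, not from the paper.
d_in, d_hidden, d_out, n_tasks = 512, 256, 10, 3

backbone = d_in * d_hidden            # weights in the shared trunk
head = d_hidden * d_out               # weights in one task-specific head

shared_total = backbone + n_tasks * head
separate_total = n_tasks * (backbone + head)
print(shared_total, separate_total)   # 138752 vs 400896 weights
```

Even in this tiny example the shared model stores roughly a third of the weights, which is the storage saving the abstract refers to.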


Paper: Coresets for Classification – Simplified and Strengthened

We show how to sample a small subset of points from a larger dataset, such that if we solve logistic regression, hinge loss regression (i.e., soft margin SVM), or a number of other problems used to train linear classifiers on the sampled dataset, then we obtain a near optimal solution for the full dataset. This …
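A toy version of the claim, with uniform sampling standing in for the paper's more careful sampling distribution: fit logistic regression by gradient descent on the full dataset and on a sampled subset, then compare both solutions on the full-data loss. Everything here (data sizes, step counts) is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data with labels in {-1, +1}.
n, d = 5000, 5
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = np.sign(X @ w_star + 0.5 * rng.normal(size=n))

def logistic_loss(w, X, y):
    # Mean logistic loss: log(1 + exp(-y * <w, x>)).
    return np.logaddexp(0.0, -y * (X @ w)).mean()

def fit(X, y, steps=500, lr=0.5):
    # Plain gradient descent on the mean logistic loss.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        z = -y * (X @ w)
        w -= lr * (-(y / (1 + np.exp(-z))) @ X) / len(y)
    return w

w_full = fit(X, y)                  # solve on all n points

m = 1000                            # solve on a uniform sample instead
idx = rng.choice(n, size=m, replace=False)
w_core = fit(X[idx], y[idx])

full_loss_at_full = logistic_loss(w_full, X, y)
full_loss_at_core = logistic_loss(w_core, X, y)
print(full_loss_at_full, full_loss_at_core)  # nearly identical full-data losses
```

A real coreset replaces the uniform draw with importance-weighted sampling so that the guarantee holds with far fewer points; uniform sampling is just the simplest way to see the "solve on a subset, evaluate on the whole" pattern.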


Paper: MAP Propagation Algorithm: Faster Learning with a Team of Reinforcement Learning Agents

Most deep learning algorithms rely on error backpropagation, which is generally regarded as biologically implausible. An alternative way of training an artificial neural network is through treating each unit in the network as a reinforcement learning agent. As such, all units can be trained by REINFORCE. However, this learning method suffers from high variance and …
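A minimal sketch of the per-unit view: one stochastic Bernoulli unit trained with the REINFORCE update (the paper applies this to every hidden unit of a network; one unit is enough to show the update rule). The task and learning rate are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# A single Bernoulli unit as an RL agent. Reward is 1 when the unit fires
# and 0 otherwise, so the unit should learn to fire with probability near 1.
theta = 0.0          # logit of the unit's firing probability
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-theta))
    a = float(rng.random() < p)      # sample the unit's binary action
    r = a                            # reward signal
    grad = r * (a - p)               # REINFORCE: r * d/dtheta log pi(a)
    theta += lr * grad               # no baseline -- hence the high variance
p_final = 1.0 / (1.0 + np.exp(-theta))
print(p_final)
```

Note the update uses only a scalar reward and the unit's own log-probability gradient, with no backpropagated error signal; the price, as the abstract says, is the variance of this sampled gradient.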


Paper: Turing Completeness of Bounded-Precision Recurrent Neural Networks

Previous works have proved that recurrent neural networks (RNNs) are Turing-complete. In the proofs, the RNNs allow for neurons with unbounded precision, which is neither practical in implementation nor biologically plausible. To remove this assumption, we propose a dynamically growing memory module made of neurons of fixed precision. We prove that a 54-neuron bounded-precision RNN …
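To see why the unbounded-precision assumption matters, here is the classical trick those earlier proofs rely on: storing an unbounded stack in one neuron's activation via a base-4 encoding (digit 1 for bit 0, digit 3 for bit 1). With a fixed-precision float the encoding works only up to a modest depth, which is exactly the gap the paper's growing memory module is designed to close. This is a sketch of the prior construction, not of the paper's module.

```python
def push(x, b):
    # Prepend bit b as the most significant base-4 digit of x.
    return x / 4 + (2 * b + 1) / 4

def pop(x):
    digit = int(4 * x)               # read the most significant digit
    return (digit - 1) // 2, 4 * x - digit

def round_trip(bits):
    # Push all bits, then pop them all back (LIFO order).
    x = 0.0
    for b in bits:
        x = push(x, b)
    out = []
    for _ in bits:
        b, x = pop(x)
        out.append(b)
    return out

shallow = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
deep = [1, 0] * 20
print(round_trip(shallow) == shallow[::-1])   # True: 10 digits fit in a float
print(round_trip(deep) == deep[::-1])         # False: 40 digits need ~80 bits
```

Float64 carries 53 significand bits, so a 40-deep stack (needing 80 bits in this encoding) cannot survive the round trip; an unbounded-precision neuron has no such limit.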


Paper: Cooperative Stochastic Bandits with Asynchronous Agents and Constrained Feedback

This paper studies a cooperative multi-armed bandit problem with M agents cooperating together to solve the same instance of a K-armed stochastic bandit problem. The agents are heterogeneous in their limited access to a local subset of arms and in their decision-making rounds. The goal is to find the globally optimal arm, and agents are able …
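A simplified toy of the setting, ignoring the paper's asynchrony and feedback constraints: three agents, each restricted to a local subset of a 5-armed Bernoulli bandit, run UCB1 on their own subsets while pooling observation counts. The arm means and subsets below are made up.

```python
import numpy as np

rng = np.random.default_rng(3)

# 5-armed Bernoulli bandit; arm 4 is globally optimal.
means = np.array([0.1, 0.2, 0.3, 0.4, 0.9])
# Each agent only has access to a local subset of arms.
agent_arms = [np.array([0, 1, 4]), np.array([1, 2, 3]), np.array([2, 3, 4])]

counts = np.zeros(5)   # pooled pull counts shared by all agents
sums = np.zeros(5)     # pooled reward sums

T = 2000
for t in range(1, T + 1):
    for arms in agent_arms:
        unseen = arms[counts[arms] == 0]
        if len(unseen) > 0:
            a = unseen[0]            # pull each local arm once first
        else:
            ucb = sums[arms] / counts[arms] + np.sqrt(2 * np.log(t) / counts[arms])
            a = arms[np.argmax(ucb)]
        r = float(rng.random() < means[a])   # Bernoulli reward
        counts[a] += 1
        sums[a] += r

print(int(np.argmax(counts)))   # pooled pulls concentrate on arm 4
```

Note that the global best arm need only be in *some* agent's subset; agents that cannot pull it still help by ruling out the arms they do share. The paper's actual algorithm handles asynchronous rounds and constrained feedback on top of this basic cooperation.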


Paper: Pareto-Optimal Learning-Augmented Algorithms for Online Conversion Problems

In this work, we leverage machine-learned predictions to design competitive algorithms for online conversion problems with the goal of improving the competitive ratio when predictions are accurate (i.e., consistency), while also guaranteeing a worst-case competitive ratio regardless of the prediction quality (i.e., robustness). We unify the algorithmic design of both integral and fractional conversion problems, …
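The simplest integral special case, 1-max search, shows the consistency/robustness tension concretely. Below, the classical reserve price sqrt(m*M) gives the worst-case guarantee, while trusting a predicted maximum price does much better when the prediction is right; the prediction value and price sequence are hypothetical.

```python
import math

# 1-max search: prices in [m, M] arrive online; we must sell one unit,
# committing irrevocably (forced sale at the last price if we never commit).
m_low, M_high = 1.0, 100.0

def one_max_search(prices, reserve):
    # Accept the first price that meets the reserve, else take the last.
    for p in prices:
        if p >= reserve:
            return p
    return prices[-1]

# Robust reserve sqrt(m*M): sqrt(M/m)-competitive against any sequence.
robust_reserve = math.sqrt(m_low * M_high)       # 10.0

# Consistent reserve: trust a (hypothetical) predicted maximum price.
pred_max = 80.0
prices = [5.0, 30.0, 80.0, 2.0]                  # here the prediction is accurate

print(one_max_search(prices, robust_reserve))    # 30.0 -- safe but suboptimal
print(one_max_search(prices, pred_max))          # 80.0 -- matches OPT
```

A bad prediction would push the trusting strategy all the way down to the forced last-price sale, which is why the paper designs algorithms that interpolate between the two and characterizes the Pareto-optimal consistency-robustness trade-off.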


Paper: Relaxed Marginal Consistency for Differentially Private Query Answering

Differentially private algorithms for answering database queries often involve reconstruction of a discrete distribution from noisy measurements. PRIVATE-PGM is a recent exact inference based technique that scales well for sparse measurements and provides consistent and accurate answers. However, it fails to run in high dimensions with dense measurements. This work overcomes the scalability limitation of …
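A miniature of the consistency problem the abstract refers to: two noisy marginals of the same table imply different totals, and a least-squares projection restores agreement. This shows only the idea of consistent post-processing; PRIVATE-PGM does full graphical-model inference rather than this two-constraint repair.

```python
import numpy as np

rng = np.random.default_rng(5)

# True counts over two binary attributes (a 2x2 contingency table).
table = np.array([[40.0, 10.0], [20.0, 30.0]])

eps = 1.0  # privacy budget spent on each measurement

def laplace_noisy(v):
    # Laplace mechanism for counting queries (sensitivity 1).
    return v + rng.laplace(scale=1.0 / eps, size=v.shape)

row_marg = laplace_noisy(table.sum(axis=1))  # noisy marginal of attribute A
col_marg = laplace_noisy(table.sum(axis=0))  # noisy marginal of attribute B

# The two noisy marginals imply different totals, i.e. inconsistent answers.
print(row_marg.sum(), col_marg.sum())

# Minimal consistency repair: least-squares projection onto the constraint
# that both marginals share the same total.
avg_total = (row_marg.sum() + col_marg.sum()) / 2
row_c = row_marg + (avg_total - row_marg.sum()) / len(row_marg)
col_c = col_marg + (avg_total - col_marg.sum()) / len(col_marg)
print(row_c.sum(), col_c.sum())              # now equal
```

With many attributes there are exponentially many such overlap constraints, which is where exact inference becomes the bottleneck that this work relaxes.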


Paper: Amortized Variational Inference for Simple Hierarchical Models

It is difficult to use subsampling with variational inference in hierarchical models since the number of local latent variables scales with the dataset. Thus, inference in hierarchical models remains a challenge at large scale. It is helpful to use a variational family with structure matching the posterior, but optimization is still slow due to the …
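A toy illustration of why amortization helps, using a conjugate normal-normal model so the target is exact: a naive structured family keeps one (mean, scale) pair per group, so its parameter count grows with the data, while a single shared affine map of each group's average reproduces every local posterior mean. The model and sizes are illustrative, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hierarchical normal-normal model: theta_i ~ N(0, 1), y_ij ~ N(theta_i, 1).
n_groups, n_per = 1000, 5
theta = rng.normal(size=n_groups)
y = theta[:, None] + rng.normal(size=(n_groups, n_per))

# By conjugacy the exact local posterior mean is an affine function
# of the group average: (n / (1 + n)) * ybar.
post_prec = 1.0 + n_per                  # prior precision + n_per
post_mean = y.sum(axis=1) / post_prec    # exact local posterior means

# Amortization: two shared parameters (a, b) replace 2 * n_groups local
# variational parameters, and here they recover every mean exactly.
a, b = n_per / post_prec, 0.0
print(np.allclose(post_mean, a * y.mean(axis=1) + b))
```

In non-conjugate models the shared map is a learned network rather than a known affine function, but the payoff is the same: the number of parameters to optimize no longer scales with the number of groups, so subsampling becomes practical.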


Paper: Universal Off-Policy Evaluation

When faced with sequential decision-making problems, it is often useful to be able to predict what would happen if decisions were made using a new policy. Those predictions must often be based on data collected under some previously used decision-making rule. Many previous methods enable such off-policy (or counterfactual) estimation of the expected value of …
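The basic off-policy estimator those previous methods build on is importance sampling, shown here in a one-step (bandit) setting with made-up policies and rewards. The paper generalizes beyond the expected value to other functionals of the return, but the reweighting idea is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-step (bandit) setting with 3 actions.
behavior = np.array([0.5, 0.3, 0.2])     # policy that collected the data
target = np.array([0.1, 0.2, 0.7])       # new policy we want to evaluate
arm_means = np.array([1.0, 0.0, 2.0])    # E[reward | action]

# Log data under the behavior policy.
n = 100_000
actions = rng.choice(3, size=n, p=behavior)
rewards = arm_means[actions] + 0.1 * rng.normal(size=n)

# Ordinary importance sampling: reweight each sample by pi_e / pi_b.
weights = target[actions] / behavior[actions]
is_estimate = np.mean(weights * rewards)

true_value = float(target @ arm_means)   # 1.5
print(is_estimate, true_value)
```

The reweighted average converges to the target policy's value even though no data was ever collected under it, which is exactly the counterfactual question the abstract poses.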


Paper: MCMC Variational Inference via Uncorrected Hamiltonian Annealing

Annealed Importance Sampling (AIS) with Hamiltonian MCMC can be used to get tight lower bounds on a distribution’s (log) normalization constant. Its main drawback is that it uses non-differentiable transition kernels, which makes tuning its many parameters hard. …
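For reference, here is plain AIS on a 1-D target with a known normalizer, using a Metropolis-Hastings kernel in place of the Hamiltonian dynamics the paper considers (it is exactly this accept/reject step that makes the kernels non-differentiable). All distributions and settings are toy choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Unnormalized target: 3 * exp(-(x-1)^2 / 2), so Z = 3 * sqrt(2*pi).
def log_p(x):
    return np.log(3.0) - 0.5 * (x - 1.0) ** 2

# Normalized base distribution: standard normal.
def log_q(x):
    return -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

def log_f(x, beta):
    # Geometric annealing path between base and target.
    return (1 - beta) * log_q(x) + beta * log_p(x)

n_chains, n_steps = 4000, 100
betas = np.linspace(0.0, 1.0, n_steps + 1)

x = rng.normal(size=n_chains)   # samples from the base distribution
log_w = np.zeros(n_chains)      # accumulated log importance weights

for b_prev, b in zip(betas[:-1], betas[1:]):
    # Weight update: ratio of successive annealed densities.
    log_w += log_f(x, b) - log_f(x, b_prev)
    # One Metropolis-Hastings step targeting the current annealed density.
    prop = x + rng.normal(scale=1.0, size=n_chains)
    accept = np.log(rng.uniform(size=n_chains)) < log_f(prop, b) - log_f(x, b)
    x = np.where(accept, prop, x)

log_Z_est = np.log(np.mean(np.exp(log_w)))
log_Z_true = np.log(3.0) + 0.5 * np.log(2 * np.pi)
print(log_Z_est, log_Z_true)
```

By Jensen's inequality the average of log_w lower-bounds log Z, which is the bound AIS provides; the paper's contribution is making the whole chain differentiable so the step sizes and path can be tuned by gradient descent.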
