Paper: Amortized Variational Inference for Simple Hierarchical Models

It is difficult to use subsampling with variational inference in hierarchical models since the number of local latent variables scales with the dataset. Thus, inference in hierarchical models remains a challenge at large scale. It is helpful to use a variational family with structure matching the posterior, but optimization is still slow due to the … Continue reading "Paper: Amortized Variational Inference for Simple Hierarchical Models"
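
To make the excerpt concrete, here is a minimal sketch of the amortization idea it refers to (the toy data, the per-group summary statistic, and the linear "encoder" are assumptions for illustration, not the paper's construction): rather than storing separate variational parameters for every group, a shared map produces each group's local parameters on the fly, so memory no longer grows with the number of groups.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy hierarchical data: N groups, each with M observations of dimension D.
    N, M, D = 1000, 5, 2
    x = rng.normal(size=(N, M, D))

    # Plain mean-field VI would store one (mu_i, log_sigma_i) per group:
    # 2 * N * D numbers, growing with the dataset.

    # Amortized alternative (illustrative): a single linear "encoder" with a
    # fixed parameter count maps each group's summary statistic to its local
    # variational parameters for q(z_i) = Normal(mu_i, sigma_i).
    W_mu, b_mu = 0.1 * rng.normal(size=(D, D)), np.zeros(D)
    W_ls, b_ls = 0.1 * rng.normal(size=(D, D)), np.zeros(D)

    def local_variational_params(x_i):
        """Map one group's data (M, D) to its local Gaussian parameters."""
        s = x_i.mean(axis=0)                        # per-group summary statistic
        return s @ W_mu + b_mu, s @ W_ls + b_ls     # mu_i, log_sigma_i

    # With subsampling, only the minibatch's local parameters are ever
    # materialized, so memory stays constant in N.
    batch = rng.choice(N, size=32, replace=False)
    params = [local_variational_params(x[i]) for i in batch]
    print(len(params), "local parameter pairs computed for a minibatch of 32")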

Paper: MCMC Variational Inference via Uncorrected Hamiltonian Annealing

Annealed Importance Sampling (AIS) with Hamiltonian MCMC can be used to get tight lower bounds on a distribution’s (log) normalization constant. Its main drawback is that it uses non-differentiable transition kernels, which makes tuning its many parameters hard. … Continue reading "Paper: MCMC Variational Inference via Uncorrected Hamiltonian Annealing"
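
The bound the excerpt mentions comes from Jensen's inequality: the log of each AIS weight is a stochastic lower bound on log Z. Below is a minimal sketch with an assumed 1-D Gaussian target and simple random-walk Metropolis transitions in place of Hamiltonian ones; it is not the paper's differentiable construction.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed toy problem: start distribution q0 = N(0, 4) (normalized) and
    # unnormalized target f(z) = exp(-z^2 / 2), so log Z = log sqrt(2*pi).
    log_q0 = lambda z: -0.5 * z**2 / 4.0 - 0.5 * np.log(2 * np.pi * 4.0)
    log_f = lambda z: -0.5 * z**2

    betas = np.linspace(0.0, 1.0, 50)                    # annealing schedule
    log_gamma = lambda z, b: (1 - b) * log_q0(z) + b * log_f(z)

    def ais_log_weights(n_chains=2000):
        z = rng.normal(0.0, 2.0, size=n_chains)          # z ~ q0
        log_w = np.zeros(n_chains)
        for b_prev, b in zip(betas[:-1], betas[1:]):
            log_w += log_gamma(z, b) - log_gamma(z, b_prev)
            # One random-walk Metropolis step targeting gamma_b. (The paper's
            # setting uses Hamiltonian kernels; Metropolis keeps this short.)
            prop = z + rng.normal(0.0, 0.5, size=n_chains)
            accept = np.log(rng.uniform(size=n_chains)) < log_gamma(prop, b) - log_gamma(z, b)
            z = np.where(accept, prop, z)
        return log_w

    log_w = ais_log_weights()
    print("E[log w] estimate (stochastic lower bound):", log_w.mean())
    print("true log Z:                                ", 0.5 * np.log(2 * np.pi))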

Paper: Universal Off-Policy Evaluation

When faced with sequential decision-making problems, it is often useful to be able to predict what would happen if decisions were made using a new policy. Those predictions must often be based on data collected under some previously used decision-making rule. Many previous methods enable such off-policy (or counterfactual) estimation of the expected value of … Continue reading "Paper: Universal Off-Policy Evaluation"
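
For context, the standard importance-sampling estimator that such off-policy methods build on looks like the sketch below (a toy bandit setup with assumed policies and rewards; the paper itself targets more than the expected value):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy bandit-style setup (assumed): pi_b collected the data, and we want
    # the value the new policy pi_e would have achieved.
    pi_b = np.array([0.5, 0.3, 0.2])
    pi_e = np.array([0.1, 0.2, 0.7])
    mean_reward = np.array([1.0, 0.0, 2.0])

    n = 10_000
    actions = rng.choice(3, size=n, p=pi_b)
    rewards = mean_reward[actions] + rng.normal(0.0, 0.1, size=n)

    # Importance-sampling (counterfactual) estimate of E_{pi_e}[reward] using
    # only data gathered under pi_b.
    weights = pi_e[actions] / pi_b[actions]
    is_estimate = float(np.mean(weights * rewards))

    print("IS estimate of the new policy's value:", is_estimate)
    print("true value under pi_e:                ", float(pi_e @ mean_reward))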

Paper: Structural Credit Assignment in Neural Networks using Reinforcement Learning

In this work, we revisit REINFORCE and investigate if we can leverage other reinforcement learning approaches to improve learning. We formalize training a neural network as a finite-horizon reinforcement learning problem and discuss how this … Continue reading "Paper: Structural Credit Assignment in Neural Networks using Reinforcement Learning"
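
As a reminder of the estimator being revisited, here is a minimal REINFORCE (score-function) update for a single stochastic Bernoulli unit; the toy reward and learning rate are assumptions, not the paper's experimental setup.

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    # Toy "network": a single stochastic Bernoulli unit with logit theta.
    # Reward is 1 when the unit fires and 0 otherwise, so the optimum is
    # a firing probability of 1.
    theta, lr = 0.0, 0.5

    for step in range(200):
        p = sigmoid(theta)
        a = rng.binomial(1, p, size=64)           # sample unit activations
        r = a.astype(float)                       # reward signal
        grad_log_p = a - p                        # d log Bernoulli(a; p) / d theta
        theta += lr * np.mean(r * grad_log_p)     # REINFORCE update

    print("final firing probability:", sigmoid(theta))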

Paper: Faster Kernel Matrix Algebra via Density Estimation

Consider an n x n Gaussian kernel matrix corresponding to n input points in d dimensions. We show that one can compute a relative error approximation to the sum of entries in this matrix in just O(dn^{2/3}) time. This is significantly sublinear in the number of entries in the matrix – which is n^2. Our … Continue reading "Paper: Faster Kernel Matrix Algebra via Density Estimation"
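
The quantity in question is sum_{i,j} exp(-||x_i - x_j||^2 / (2 sigma^2)), which costs Theta(d n^2) to compute exactly. The sketch below shows only that brute-force sum and a simple row-sampling Monte Carlo baseline for comparison; it is not the paper's density-estimation-based algorithm.

    import numpy as np

    rng = np.random.default_rng(0)

    n, d, sigma = 2000, 5, 1.0
    x = rng.normal(size=(n, d))

    def gaussian_row_sum(i):
        """Sum over j of exp(-||x_i - x_j||^2 / (2 sigma^2)); costs O(n d)."""
        sq = np.sum((x - x[i]) ** 2, axis=1)
        return np.sum(np.exp(-sq / (2 * sigma**2)))

    # Brute force: Theta(d n^2) work in total.
    exact = sum(gaussian_row_sum(i) for i in range(n))

    # Unbiased estimate from m uniformly sampled rows, O(m n d) work.
    # (The paper's KDE-based algorithm is cleverer; this is only a baseline.)
    m = 50
    sampled = rng.choice(n, size=m, replace=False)
    estimate = n / m * sum(gaussian_row_sum(i) for i in sampled)

    print("exact kernel matrix sum:", exact)
    print("row-sampling estimate:  ", estimate)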

Paper: DeepWalking Backwards: From Node Embeddings Back to Graphs

We investigate whether node embeddings, which are vector representations of graph nodes, can be inverted to approximately recover the graph used to generate them. We present algorithms that invert embeddings from the popular DeepWalk method. In experiments on real-world networks, we find that significant information about the original graph, such as specific edges, is often … Continue reading "Paper: DeepWalking Backwards: From Node Embeddings Back to Graphs"
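
A toy version of the inversion task reads like the sketch below: score every node pair by embedding similarity and predict the highest-scoring pairs as edges. The graph, the SVD-based stand-in embeddings (DeepWalk is commonly viewed as implicit matrix factorization), and the top-k decision rule are all assumptions for illustration, not the paper's algorithms.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy graph (assumed): two dense 20-node communities plus sparse
    # cross-links, stored as a symmetric adjacency matrix A.
    n = 40
    A = (rng.uniform(size=(n, n)) < 0.05).astype(float)
    A[:20, :20] = rng.uniform(size=(20, 20)) < 0.5
    A[20:, 20:] = rng.uniform(size=(20, 20)) < 0.5
    A = np.triu(A, 1)
    A = A + A.T

    # Stand-in embeddings: rank-8 SVD factors of A (a simple proxy that keeps
    # the sketch self-contained).
    U, s, _ = np.linalg.svd(A)
    emb = U[:, :8] * np.sqrt(s[:8])

    # "Inversion": score every node pair by embedding similarity and predict
    # the top-k pairs as edges, where k is the true edge count.
    scores = emb @ emb.T
    iu = np.triu_indices(n, 1)
    k = int(A[iu].sum())
    top = np.argsort(scores[iu])[-k:]
    pred = np.zeros_like(A)
    pred[iu[0][top], iu[1][top]] = 1.0

    print("fraction of true edges recovered:", (pred * A)[iu].sum() / k)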

Paper: How and Why to Use Experimental Data to Evaluate Methods for Observational Causal Inference

Methods that infer causal dependence from observational data are central to many areas of science, including medicine, economics, and the social sciences. We describe and analyze observational … Continue reading "Paper: How and Why to Use Experimental Data to Evaluate Methods for Observational Causal Inference"
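
The basic evaluation recipe is to treat a randomized experiment's estimate as the benchmark and measure how close observational estimators get to it. The simulation below is an assumed toy data-generating process (a single confounder, a naive difference-in-means, and a coarse stratification adjustment), not one of the designs analyzed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50_000

    # Assumed data-generating process: confounder u raises both the chance of
    # treatment and the outcome; the true average treatment effect is 1.0.
    u = rng.normal(size=n)
    t_obs = rng.binomial(1, 1.0 / (1.0 + np.exp(-2.0 * u)))   # confounded treatment
    t_rct = rng.binomial(1, 0.5, size=n)                      # randomized treatment
    outcome = lambda t: 1.0 * t + 2.0 * u + rng.normal(size=n)

    # Experimental benchmark: randomization makes the plain difference unbiased.
    y_rct = outcome(t_rct)
    ate_rct = y_rct[t_rct == 1].mean() - y_rct[t_rct == 0].mean()

    # Observational estimators, evaluated against that benchmark.
    y_obs = outcome(t_obs)
    ate_naive = y_obs[t_obs == 1].mean() - y_obs[t_obs == 0].mean()

    # Coarse adjustment: stratify on deciles of the confounder.
    strata = np.digitize(u, np.quantile(u, np.linspace(0.1, 0.9, 9)))
    ate_adj = np.mean([
        y_obs[(t_obs == 1) & (strata == s)].mean()
        - y_obs[(t_obs == 0) & (strata == s)].mean()
        for s in range(10)
    ])

    print(f"RCT benchmark ATE:            {ate_rct:.2f}")
    print(f"naive observational contrast: {ate_naive:.2f}  (biased by confounding)")
    print(f"decile-stratified estimate:   {ate_adj:.2f}")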

Paper: Towards Practical Mean Bounds for Small Samples

Historically, to bound the mean for small sample sizes, practitioners have had to choose between using methods with unrealistic assumptions about the unknown distribution (e.g., Gaussianity) and methods like Hoeffding’s inequality that use weaker assumptions but produce much looser (wider) intervals. In 1969, Anderson proposed a mean confidence interval strictly better than or equal to … Continue reading "Paper: Towards Practical Mean Bounds for Small Samples"
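
For reference, the Hoeffding-style interval the excerpt contrasts with can be computed as below (this is the textbook two-sided bound for a variable supported on a known interval [a, b], not Anderson's interval or the paper's method; the Beta-distributed sample is an assumption for the demo):

    import math
    import numpy as np

    def hoeffding_interval(samples, a, b, delta=0.05):
        """Two-sided 1 - delta confidence interval for the mean of a variable
        known to lie in [a, b], from Hoeffding's inequality:
            P(|sample_mean - mean| >= t) <= 2 * exp(-2 * n * t^2 / (b - a)^2).
        """
        n = len(samples)
        half_width = (b - a) * math.sqrt(math.log(2.0 / delta) / (2.0 * n))
        m = float(np.mean(samples))
        return m - half_width, m + half_width

    # Small-sample example: 20 draws of a [0, 1]-bounded variable.
    rng = np.random.default_rng(0)
    x = rng.beta(2.0, 5.0, size=20)
    print(hoeffding_interval(x, 0.0, 1.0))   # wide, reflecting the weak assumptions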

Paper: Posterior Value Functions: Hindsight Baselines for Policy Gradient Methods

Hindsight allows reinforcement learning agents to leverage new observations to make inferences about earlier states and transitions. In this paper, we exploit the idea of hindsight and introduce posterior value functions. Posterior value functions are computed by inferring the posterior distribution over hidden components of the state in previous timesteps and can be used to … Continue reading "Paper: Posterior Value Functions: Hindsight Baselines for Policy Gradient Methods"
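
For orientation, here is the ordinary value-function baseline that posterior value functions generalize: subtracting a state's expected return from the observed return leaves the policy gradient unbiased while shrinking its variance. The one-state toy problem and the known baseline value are assumptions for illustration, not the paper's setup.

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    # One-state, two-action toy problem (assumed): rewards carry a large common
    # offset that a baseline can subtract out without changing the gradient.
    theta = 0.3
    p = sigmoid(theta)
    a = rng.binomial(1, p, size=100_000).astype(float)
    r = 10.0 + a + rng.normal(0.0, 1.0, size=a.shape)
    score = a - p                        # d log pi(a) / d theta

    grad_plain = r * score               # REINFORCE, no baseline
    baseline = 10.0 + p                  # the state's expected return (known here)
    grad_base = (r - baseline) * score   # baselined estimator, same expectation

    print("mean gradient, no baseline:   ", grad_plain.mean())
    print("mean gradient, with baseline: ", grad_base.mean())
    print("variance, no baseline:        ", grad_plain.var())
    print("variance, with baseline:      ", grad_base.var())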

Paper: On the Difficulty of Unbiased Alpha Divergence Minimization

Variational inference approximates a target distribution with a simpler one. While traditional inference minimizes the “exclusive” KL-divergence, several algorithms have recently been proposed to minimize other divergences. Experimentally, however, these algorithms often seem to fail to converge. In this paper we analyze the variance of the estimators underlying these algorithms. Our results … Continue reading "Paper: On the Difficulty of Unbiased Alpha Divergence Minimization"
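
A quick way to see the failure mode: alpha (Rényi) divergences involve E_q[(p/q)^alpha], and the naive Monte Carlo estimator of that expectation has a relative variance that blows up with dimension. The Gaussian p and q, the choice alpha = 2, and the scale 1.3 below are assumptions for illustration, not the paper's analysis.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, scale = 2.0, 1.3

    def log_ratio(z):
        """log p(z) - log q(z) for p = N(0, I) and q = N(0, scale^2 I)."""
        d = z.shape[1]
        return (-0.5 * np.sum(z**2, axis=1)
                + 0.5 * np.sum(z**2, axis=1) / scale**2
                + d * np.log(scale))

    for d in (1, 10, 50):
        z = rng.normal(0.0, scale, size=(100_000, d))   # samples from q
        w_alpha = np.exp(alpha * log_ratio(z))          # (p/q)^alpha
        est = np.log(np.mean(w_alpha)) / (alpha - 1)    # Renyi-alpha divergence estimate
        rel_std = w_alpha.std() / w_alpha.mean()
        print(f"d={d:3d}  divergence estimate={est:7.3f}  "
              f"relative std of (p/q)^alpha={rel_std:10.1f}")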
