Theory Seminar

Welcome to the official page for the UMass-Amherst Computer Science Theory Seminar for Fall 2023. The seminar is 4-5 pm on Tuesdays in Room 140 of the Computer Science Building at UMass Amherst, and is free and open to the public. The faculty host is Dave Barrington. If you are interested in giving a talk, please email Professor Barrington or Rik Sengupta. Note that in addition to being a public lecture series, this is also a one-credit graduate seminar (CompSci 891M) that can be taken repeatedly for credit.

All Tuesday talks are in CS Room 140, unless otherwise stated.

 


Fall 2023 Schedule of Speakers

NOTE: In order to ensure you get weekly updates for all the talks, please make sure you are part of the seminars@cs.umass.edu mailing list. If you wish to give a talk, or would like to nominate someone to give one, please email us to let us know!


Tuesday, September 5 @ 4 pm

Organizational Meeting


Tuesday, September 12 @ 4 pm

Leveraging Reviews: Learning to Price with Buyer and Seller Uncertainty
Ellen Vitercik (Stanford)

Abstract

On online marketplaces, customers can access hundreds of reviews for a single product. Buyers often use reviews from other customers that share their attributes—such as height for clothing or skin type for skincare products—to estimate their values, which they may not know a priori. Customers with few relevant reviews may hesitate to buy a product except at a low price, so for the seller, there is a tension between setting high prices and ensuring that there are enough reviews that buyers can confidently estimate their values. In this talk, we formulate this pricing problem through the lens of online learning and provide a no-regret learning algorithm.
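As a toy illustration of the online-learning framing (and not the algorithm from the talk), here is a minimal exponential-weights sketch over a hypothetical finite price grid; the full-feedback assumption and the fixed buyer valuation are illustrative simplifications:

```python
import math
import random

def hedge_pricing(prices, demand, rounds, eta, rng):
    """Exponential-weights (Hedge) over a finite price grid: a generic
    no-regret baseline, not the paper's algorithm. demand(p, t) is 1 if
    the round-t buyer purchases at posted price p, else 0; full feedback
    on every candidate price is assumed for simplicity."""
    w = [1.0] * len(prices)
    revenue = 0.0
    for t in range(rounds):
        i = rng.choices(range(len(prices)), weights=w)[0]
        revenue += prices[i] * demand(prices[i], t)
        for j, p in enumerate(prices):
            w[j] *= math.exp(eta * p * demand(p, t))  # reward-weighted update
    return revenue

# Toy market: every buyer values the item at 0.7, so 0.6 sells and 0.9 does not.
rng = random.Random(0)
rev = hedge_pricing([0.3, 0.6, 0.9], lambda p, t: 1 if p <= 0.7 else 0,
                    rounds=2000, eta=0.1, rng=rng)
```

The weights concentrate on the price 0.6, so average revenue per round approaches the best fixed price in hindsight, which is the no-regret property in miniature.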

Bio

Ellen Vitercik is an Assistant Professor at Stanford University with a joint appointment between the Management Science & Engineering department and the Computer Science department. Her research revolves around machine learning theory, discrete optimization, and the interface between economics and computation. Before joining Stanford, she spent a year as a Miller Fellow at UC Berkeley after receiving a PhD in Computer Science from Carnegie Mellon University. Her thesis won the SIGecom Doctoral Dissertation Award and the CMU School of Computer Science Distinguished Dissertation Award.


Tuesday, September 19 @ 4 pm

Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra
Rajarshi Bhattacharjee (UMass Amherst)

Abstract

Given an n x n matrix A with all entries equal to one, a well-known result states that we can construct a matrix S with just O(n/epsilon^2) non-zero entries such that ||A - S||_2 <= epsilon*n, where || . ||_2 is the spectral norm and epsilon is in (0,1). Such an S is said to spectrally sparsify the all-ones matrix. In particular, S can be chosen to be the scaled adjacency matrix of a Ramanujan expander graph to satisfy this guarantee. We show that, beyond giving a sparse approximation to the all-ones matrix, sampling and scaling the entries of A according to S yields a universal sparsifier for any PSD matrix A with bounded entries. Further, if S is the scaled Ramanujan graph adjacency matrix with Õ(n/epsilon^4) entries, it also yields universal sparsifiers for non-PSD bounded-entry matrices. We prove that the number of entries sampled for the above universal sparsifiers, for both PSD and non-PSD matrices, is tight up to logarithmic factors.

Since our sparsifiers can be constructed deterministically without reading all of A, our result for PSD matrices derandomizes and improves upon established results for randomized matrix sparsification, which only give an approximation to any fixed A with high probability. We further show that any randomized algorithm must read at least \Omega(n/epsilon^2) entries to spectrally approximate general A to error epsilon*n, thus proving that these existing randomized algorithms are optimal up to logarithmic factors. We also leverage our deterministic sparsification results to give the first deterministic algorithms for several problems, including singular value and singular vector approximation and positive semidefiniteness testing, that run in faster than matrix multiplication time. This partially addresses a significant gap between randomized and deterministic algorithms for fast linear algebraic computation.
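For intuition, here is a small numpy sketch of the randomized baseline the talk improves upon: uniform entry sampling with rescaling for a bounded-entry PSD matrix. The test matrix, the oversampling factor, and the sample count are illustrative choices, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# A bounded-entry PSD test matrix: the Gram matrix of unit vectors.
X = rng.standard_normal((n, 8))
X /= np.linalg.norm(X, axis=1, keepdims=True)
A = X @ X.T                      # PSD, entries in [-1, 1]

eps = 0.25
m = 10 * int(n / eps**2)         # oversampled relative to the O(n/eps^2) bound
# Sample m entry positions uniformly; rescale counts so that E[S] = A.
S = np.zeros((n, n))
rows = rng.integers(0, n, size=m)
cols = rng.integers(0, n, size=m)
np.add.at(S, (rows, cols), 1.0)
S = (S + S.T) / 2 * (n * n / m) * A   # symmetrize, rescale, apply entries of A

err = np.linalg.norm(A - S, 2)   # spectral-norm error; target bound is eps * n
print(err)
```

With high probability the spectral error lands well under eps*n here; the point of the talk is that a fixed deterministic pattern (a Ramanujan graph) achieves this guarantee for every bounded-entry PSD matrix at once.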


Tuesday, September 26 @ 4 pm

Recorded video: link

Approximation Algorithms for Continuous Clustering and Facility Location Problems
Ankita Sarkar (Dartmouth)

Abstract

I will talk about clustering and facility location in metric spaces — for example, the k-median and k-means problems in a metric space. I will describe approximation algorithms for the continuous case of such problems, where the cluster centers can be chosen from anywhere in the potentially infinite-sized ambient metric space. The motivation behind this work is to compare the approximability of the continuous case with that of the discrete case — in the latter, the cluster centers must themselves be input points.

It is known, via the triangle inequality, that if we have an alpha-approximation for the discrete case, then we also have a beta*alpha-approximation for the continuous case, where beta depends on the specific problem; this beta is 2 for k-median and 4 for k-means. This work asks whether the beta gap is “tight”, i.e., can we do better for the continuous case than simply reducing to the discrete case and suffering a beta-factor blowup?

I will present positive results for k-means and a few related problems; that is, I will describe algorithms for the continuous case of these problems that beat beta times the best known approximation ratio for the discrete case. For k-median, the motivating question remains open. The positive results are via a new linear program, and the use of the round-or-cut method with this linear program. This is joint work with Deeparnab Chakrabarty and Maryam Negahbani, which appeared in ESA 2022.
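The beta factors quoted above come from a short triangle-inequality argument; here is a sketch for k-median, with the k-means variant noted at the end:

```latex
% Why \beta = 2 for k-median: replace each continuous center c by its
% nearest input point q(c). For any point x served by c,
%   d(x, q(c)) \le d(x, c) + d(c, q(c)) \le d(x, c) + d(c, x) = 2\, d(x, c),
% since q(c) is no farther from c than x is. Summing over all points,
\[
  \mathrm{OPT}_{\mathrm{disc}} \;\le\; 2\,\mathrm{OPT}_{\mathrm{cont}},
\]
% so an \alpha-approximation for the discrete case costs at most
% \alpha \cdot \mathrm{OPT}_{\mathrm{disc}} \le 2\alpha \cdot \mathrm{OPT}_{\mathrm{cont}}.
% For k-means, squared distances and (a+b)^2 \le 2a^2 + 2b^2 give \beta = 4.
```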

Bio

Ankita Sarkar is a Ph.D. student in Theoretical Computer Science at Dartmouth College, advised by Prof. Deeparnab Chakrabarty. Her current research focuses on designing approximation algorithms for clustering and covering problems. Her broader interests include online algorithms and graph algorithms. Before Dartmouth, Ankita studied mathematics and computer science at Chennai Mathematical Institute, India.


Tuesday, October 3 @ 4 pm

Efficient Error Bounds for Private Linear Query Release
Brett Mullins (UMass Amherst)

Abstract

In this talk, I discuss the problem of bounding error for linear query release under differential privacy. The usual approach to answering a large collection of queries under differential privacy is to select a smaller set of queries, answer the selected queries with noise to satisfy privacy, and derive answers to the large collection. Upper bounds on the error of derived answers when the selected queries contain as much information as the large collection are well studied. We introduce a method to estimate an upper bound on error in the general case. In particular, we focus on the setting where such bounds can be computed efficiently.
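The select-noise-derive pipeline described above can be sketched concretely; the prefix-sum workload, identity strategy, and simple per-query bound below are illustrative choices, not the method from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1.0
x = np.array([10.0, 20.0, 30.0, 40.0])   # toy private histogram

# "Large collection": all prefix-sum queries. Selected strategy: identity.
n = len(x)
W = np.tril(np.ones((n, n)))             # workload matrix
A = np.eye(n)                            # selected queries
C = W @ np.linalg.pinv(A)                # reconstruction satisfying W = C @ A

# Answer selected queries with Laplace noise (L1 sensitivity of A is 1),
# then derive answers to the whole workload from the noisy answers.
noisy = A @ x + rng.laplace(scale=1.0 / eps, size=n)
derived = C @ noisy

# A simple per-query error bound: E|c^T e| <= ||c||_1 / eps for row c of C.
bounds = np.abs(C).sum(axis=1) / eps
```

Bounding the error of `derived` when the selected queries do not fully span the workload is exactly the harder, general case the talk addresses.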


Mean-Field Games for Generative Modeling
Ben Burns (UMass Amherst)

Abstract

Recent work has demonstrated the versatility of mean-field games (MFGs) as a mathematical framework for explaining, enhancing, and designing generative models. Mean-field games are useful for explaining the connections between various types of flow and diffusion-based generative models. In addition, the constraints of the mean-field game can be employed as an alternative objective function to assist the generative model in learning fine features of the target distribution.

In this talk, I will give a brief overview of score-based generative modeling. Then, I will discuss how the constraints of mean-field games, specifically Hamilton-Jacobi-Bellman equations, can be incorporated into the standard objective functions for score-based generative models to improve sample quality.


Tuesday, October 10

No meeting, Monday schedule


Tuesday, October 17 @ 4 pm

Zoom talk: link

Fair Allocation of a Conflict Graph
Arpita Biswas (Harvard)

Abstract

The problem of fair allocation of indivisible items becomes more complicated when certain item pairs conflict with each other, making it infeasible to allocate both items of such a pair to the same agent. This setting is relevant in scenarios such as course allocation, where students (the agents) express preferences for courses (the items), and courses may have conflicting schedules, which are represented by an interval conflict graph. Additionally, courses have finite seat capacities, and students may have constraints on the number of courses they can enroll in. The goal is to obtain a fair and feasible allocation of items among the agents while ensuring that each allocated bundle constitutes an independent set within the conflict graph. While the problem is NP-hard under a general conflict graph, we devise efficient solutions when items are represented as intervals, that is, when the conflict graph is an interval graph. In this talk, I’ll discuss various fairness notions, such as maximin fairness and almost envy-freeness, that are pertinent to this setting, and present solutions using a number of interesting techniques tailored to different assumptions on the agents’ preferences over a bundle of items: uniform additive, binary additive, identical additive, and non-identical (general) additive preferences.
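A small sketch of the feasibility constraint: a bundle is feasible exactly when its intervals form an independent set in the interval conflict graph, i.e., are pairwise disjoint. The course times below are made up for illustration:

```python
def feasible_bundle(bundle):
    """Check that a bundle of items is an independent set in the interval
    conflict graph, i.e. no two allocated intervals overlap. After sorting
    by start time, checking adjacent pairs suffices: O(k log k) per bundle."""
    ivs = sorted(bundle)                 # (start, end) pairs
    return all(a_end <= b_start
               for (_, a_end), (b_start, _) in zip(ivs, ivs[1:]))

# Toy course schedule in hours: back-to-back courses are compatible,
# overlapping ones conflict.
print(feasible_bundle([(9.0, 10.0), (10.0, 11.0)]))  # True
print(feasible_bundle([(9.0, 10.0), (9.5, 10.5)]))   # False
```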

Bio

Arpita Biswas is currently a Research Associate at the Harvard T.H. Chan School of Public Health and an incoming tenure-track assistant professor in the Department of Computer Science, Rutgers University. Prior to this, she was a CRCS Postdoctoral Fellow at the Harvard John A. Paulson School of Engineering and Applied Sciences. She earned her Ph.D. from the Department of Computer Science and Automation (CSA), Indian Institute of Science (IISc). She has been awarded the Best Thesis Prize by the Indian National Academy of Engineering (INAE) in 2021, the Best Thesis Award by the Department of CSA, IISc (2020-2021), a Google Ph.D. Fellowship (2016-2020), and a GHCI scholarship (2018). She has been recognized as a Rising Star at STOC 2021 and in the Trustworthy ML Seminar Series for her contributions to algorithms for fair decision-making. Her broad areas of interest include Algorithmic Game Theory, Optimization, and Machine Learning. She has worked on a wide range of problems, including fair resource allocation, health intervention planning, multi-agent learning, and robust sequential decision-making.


Tuesday, October 24 @ 4 pm

Provable Guarantees on AI/ML for Metrical Task Systems and Classification
Nico Christianson (Caltech)

Abstract

Modern AI/ML algorithms hold immense promise for improving performance in decision-making under uncertainty, where traditional algorithms designed for the worst case may be overly conservative. However, lack of worst-case guarantees hinders the deployment of AI/ML to real-world problem domains where safety and reliability are crucial, such as energy systems. In this talk, I will describe recent work on obtaining provable guarantees on ML performance, focusing on two settings: metrical task systems augmented with “black box” ML advice, and controlling false negative rate in structured classification problems. We will discuss algorithm performance, lower bounds, and applications to economic dispatch and contingency analysis in energy systems.

Bio

Nico Christianson is a fourth-year PhD student in Computing and Mathematical Sciences at Caltech advised by Adam Wierman and Steven Low. His research interests span online algorithms, learning, and optimization, with an emphasis on developing learning-augmented algorithms for problems in energy and sustainability. Nico is an NSF Graduate Research Fellow, and previously received his AB in Applied Mathematics from Harvard in 2020.


Tuesday, October 31 @ 4 pm

Zoom talk: link

Self-Improvement for Circuit-Analysis Problems
Ryan Williams (MIT)

Abstract

Many results in fine-grained complexity reveal intriguing consequences from solving various SAT problems even slightly faster than exhaustive search. We prove a “self-improving” (or “bootstrapping”) theorem for Circuit-SAT, #Circuit-SAT, and its fully-quantified version: solving one of these problems faster for “large” circuit sizes implies a significant speed-up for “smaller” circuit sizes. Our general arguments work for a variety of models solving circuit-analysis problems, including non-uniform circuits and randomized models of computation.

We derive striking consequences for the complexities of these problems. For example, we show that certain fine-grained improvements on the runtime exponents of polynomial-time versions of Circuit-SAT would imply *subexponential-time* algorithms for Circuit-SAT on 2^{o(n)}-size circuits, refuting the Exponential Time Hypothesis. We also show how slightly faster #Circuit-SAT algorithms on large circuits can be used to prove lower bounds against uniform circuits with symmetric gates for functions in deterministic linear time. Our result suggests an “algorithmic method” approach for uniform circuit lower bounds, which trades non-uniformity for a substantial reduction in the complexity of the hard function.

Bio

Ryan Williams is a Professor at MIT in the Department of Electrical Engineering and Computer Science. He got his PhD from CMU under Manuel Blum, and was subsequently a Research Fellow at IBM Research, an Assistant Professor at Stanford, and an Associate Professor at MIT. He works in the design and analysis of efficient algorithms and computational complexity. He is especially interested in relationships between the existence of non-trivial algorithms and proofs of complexity lower bounds. He is also interested in theoretical topics that help give scientific explanations for computational phenomena, such as the unreasonable effectiveness of SAT solvers in practice. He showed among other things that NEXP is not contained in ACC0 (CCC 2011, best paper).


Tuesday, November 7 @ 4 pm

Sylvester’s Problem and the Search for Melchior’s Ordinary Points
Jon Lenchner (IBM T.J. Watson Research Center)

Abstract

In 1893, J. J. Sylvester posed the following celebrated problem: given a set of n points in the plane, not all lying on a single line, must there be some two of the points such that the line through them passes through no additional point in the set? In 1943, Tibor Gallai answered the problem in the affirmative, and since then many proofs have been found. The positive answer to Sylvester’s problem is now known as the Sylvester-Gallai Theorem. One of the most intriguing proofs of the Sylvester-Gallai Theorem is due to Eberhard Melchior. Melchior’s proof exploits projective duality and, through a clever dual counting argument, shows that there must in fact be three connecting lines that pass through just two points, as asked for in Sylvester’s problem; such lines are today referred to as “ordinary lines.” In this talk I will show you my own extremely elementary proof of the Sylvester-Gallai Theorem and describe how it can be used to find Melchior’s three ordinary points. I will also survey the quantitative variant of the Sylvester-Gallai problem, which asks us to characterize the number of ordinary lines there must be as a function of the number of points.
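These objects are easy to experiment with; the following brute-force sketch (an illustration, not the speaker's proof) finds ordinary lines by exhaustive collinearity checks:

```python
from itertools import combinations

def ordinary_lines(points):
    """Brute-force search for 'ordinary lines': lines passing through
    exactly two of the given points (O(n^3) collinearity checks via
    exact integer cross products; fine for small point sets)."""
    def collinear(p, q, r):
        return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])
    lines = []
    for p, q in combinations(points, 2):
        if not any(collinear(p, q, r) for r in points if r != p and r != q):
            lines.append((p, q))
    return lines

# Four points, not all collinear: three on a line plus one off it.
pts = [(0, 0), (1, 0), (2, 0), (0, 1)]
print(len(ordinary_lines(pts)))  # 3
```

This instance realizes Melchior's bound exactly: three ordinary lines, each joining the off-line point to one of the collinear points.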

Bio

Jon is a researcher at IBM in exploratory mathematical sciences. His current work is in descriptive complexity (joint work with Neil Immerman, Rik Sengupta, and others), information theory and logic, and seeing whether it is possible to use AI plus theorem proving to help repair theoretical gaps that are causing foundational problems in physics. Jon’s talk is about a subject he became interested in during his Ph.D. work, a Ph.D. he completed relatively late in life, at Polytechnic Institute (now part of NYU), in 2008. It is a rich and playful subject that he has given many talks about over the years.


Monday, November 13 @ 4 pm (SPECIAL TIME)

Lower Bounds for Two-pass Streaming Algorithms for Maximum Matching
Christian Konrad (University of Bristol)

Abstract

In this talk, I will discuss a recent joint work with my PhD student Kheeran Naidu on lower bounds for the Maximum Matching problem in the semi-streaming model. In this model, an algorithm performs multiple passes over the edges of an input graph and is tasked with computing a matching that is as large as possible using space O(n polylog n), where n is the number of vertices of the input graph.

We will focus on the two-pass setting, and we show that every randomized two-pass streaming algorithm that computes a better than 8/9-approximation to Maximum Matching requires strictly more than semi-streaming space.

Prior to our work, only a conditional two-pass lower bound by Assadi [SODA’22] was known, which relates the quality of the bound to the maximum density of Ruzsa-Szemerédi graphs (RS-graphs) with matchings of linear size. In the best case, i.e., if very dense RS-graphs with linear-sized matchings exist, that lower bound rules out approximation ratios above 0.98; with current knowledge, however, only approximation factors of 1 − o(1) are ruled out.

Our lower bound makes use of the information cost trade-off of the Index problem in the two-party communication setting established by Jain et al. [JACM’09]. To the best of our knowledge, our work is the first that exploits this trade-off result in the context of lower bounds for multi-pass graph streaming algorithms.
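For context on the model, here is the classic one-pass baseline: a maximal matching maintained greedily in O(n) space is a 1/2-approximation. (The talk's lower bounds concern how much better than this two passes can do.)

```python
def one_pass_greedy_matching(n, edge_stream):
    """Classic one-pass semi-streaming baseline: greedily keep a maximal
    matching in O(n) space. Any maximal matching has at least half as
    many edges as a maximum matching, hence a 1/2-approximation."""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
    return matching

# Path 1-2-3-4 streamed middle edge first: greedy keeps only (2, 3),
# while the maximum matching {(1, 2), (3, 4)} has two edges.
print(one_pass_greedy_matching(4, [(2, 3), (1, 2), (3, 4)]))  # [(2, 3)]
```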

Bio

Christian Konrad is a Senior Lecturer (equivalent to Associate Professor in the North American system) at the University of Bristol, UK. Prior to joining Bristol, he held postdoctoral positions at the University of Warwick and the University of Reykjavik. He obtained a PhD from the University of Paris Diderot and a Master’s degree in Computer Science with Mathematics from the Technical University of Munich. His research focus is algorithms theory, in particular, data streaming algorithms, communication complexity, and distributed computation. His work is currently supported by an EPSRC New Investigator Award. He is the head of the algorithms research group at Bristol and acts as the Programme Director of the joint honours Mathematics and Computer Science degree.


Tuesday, November 14 @ 4 pm

Matrix Multiplication via Matrix Groups
Ed Almusalamy (UMass Amherst)

Abstract

In 2003, Cohn and Umans proposed a group-theoretic approach to bounding the exponent of matrix multiplication. Previous work within this approach ruled out certain families of groups as a route to obtaining ω=2, while other families of groups remain potentially viable. In this work we turn our attention to matrix groups, whose usefulness within this framework was relatively unexplored. We first show that groups of Lie type cannot prove ω=2 within the group-theoretic approach. This is based on a representation-theoretic argument that identifies the second-smallest dimension of an irreducible representation of a group as a key parameter that determines its viability in this framework. Our proof builds on Gowers’ result concerning product-free sets in quasirandom groups. We then give another barrier that rules out certain natural matrix group constructions that make use of subgroups that are far from being self-normalizing.

Our barrier results leave open several natural paths to obtain ω=2 via matrix groups. To explore these routes we propose working in the continuous setting of Lie groups, in which we develop an analogous theory. Obtaining the analogue of ω=2 in this potentially easier setting is a key challenge that represents an intermediate goal short of actually proving ω=2. We give two constructions in the continuous setting, each of which evades one of our two barriers.


Tuesday, November 21 @ 4 pm

Zoom talk: link

Handling Correlated Rounding Error via Preclustering: A 1.73-approximation for Correlation Clustering
Vincent Cohen-Addad (Google Research)

Abstract

We consider the classic Correlation Clustering problem: given a complete graph where edges are labelled either + or −, the goal is to find a partition of the vertices that minimizes the number of + edges across parts plus the number of − edges within parts. Recently, Cohen-Addad, Lee and Newman [CLN22] presented a 1.994-approximation algorithm for the problem using the Sherali-Adams hierarchy, hence breaking through the integrality gap of 2 for the classic linear program and improving upon the 2.06-approximation of Chawla, Makarychev, Schramm and Yaroslavtsev [CMSY15].

We significantly improve the state-of-the-art by providing a 1.73-approximation for the problem. Our approach introduces a preclustering of Correlation Clustering instances that allows us to essentially ignore the error arising from the correlated rounding used by [CLN22]. This additional power simplifies the previous algorithm and analysis. More importantly, it enables a new set-based rounding that complements the previous roundings. A combination of these two rounding algorithms yields the improved bound. Joint work with Euiwoong Lee, Shi Li, and Alantha Newman; to appear in FOCS ’23.
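The objective itself is simple to state in code; this toy sketch (illustrative only, unrelated to the rounding algorithms above) counts disagreements for a given partition:

```python
import itertools

def cc_cost(labels, partition):
    """Correlation Clustering disagreements: '+' edges across parts plus
    '-' edges within parts. labels[(u, v)] is '+' or '-' for each
    unordered pair; partition maps each vertex to a part id."""
    cost = 0
    for (u, v), sign in labels.items():
        same = partition[u] == partition[v]
        if sign == '+' and not same:
            cost += 1
        elif sign == '-' and same:
            cost += 1
    return cost

# Toy instance on 4 vertices: {0,1} like each other, {2,3} like each
# other, and all cross pairs are labelled '-'.
labels = {}
for u, v in itertools.combinations(range(4), 2):
    labels[(u, v)] = '+' if {u, v} in ({0, 1}, {2, 3}) else '-'
print(cc_cost(labels, {0: 0, 1: 0, 2: 1, 3: 1}))  # 0 disagreements
```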

Bio

Vincent is a research scientist at Google Research, France. Vincent graduated from École normale supérieure and held a Marie Sklodowska-Curie individual postdoc fellowship at the University of Copenhagen, before joining CNRS, France and then Google. His research has been awarded the Distinguished Dissertation Award from EATCS and the best paper award at the Symposium on Computational Geometry (SoCG) 2019.


Tuesday, November 28 @ 4 pm

Random Redistricting Maps Via Random Spanning Trees
Jamie Tucker-Foltz (Harvard)

Abstract

In the United States, a now widely accepted legal framework for judging the fairness of a political redistricting map is to compare it to an ensemble of randomly generated alternatives. But what should we mean by a “random” map, and how can we efficiently generate one? There is a family of algorithms for this task, all based on the following idea: draw a uniformly random spanning tree on the graph of indivisible geographic units (e.g., precincts or census blocks), find edge(s) that split the tree into sub-trees of roughly equal size, and then take the graph partition induced by the sub-trees to be your redistricting map. These algorithms are very efficient in practice and have been successfully used in several high-profile court cases. However, numerous graph-theoretic and combinatorial questions about these algorithms remain open, which are of both theoretical and practical interest. In this talk I will discuss recent progress on some of these fronts, including:
– An approximate relationship between the “compactness” of a map and the probability of the map being sampled
– A 1/poly(n) lower bound on the probability that a random spanning tree of a grid graph can be split into exactly equal-sized pieces
– Obstructions to MCMC algorithms on grid graphs with many small districts, which can be thought of as a polyomino tiling problem
No prior knowledge of redistricting will be assumed; I will explain the setting and algorithms from scratch.
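The tree-based sampling idea described above can be sketched in a few dozen lines. This toy version uses random-weight Kruskal on a grid graph as a stand-in for a true uniform spanning tree sampler, and resamples until a balanced cut exists, as practical samplers do; the grid size and tolerance are illustrative:

```python
import random

def random_spanning_tree(nodes, edges, rng):
    """Kruskal on a random edge order: a simple stand-in for the uniform
    spanning tree samplers (e.g. Wilson's algorithm) used in practice."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    shuffled = list(edges)
    rng.shuffle(shuffled)
    tree = []
    for u, v in shuffled:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

def balanced_split(nodes, tree, tol):
    """Find a tree edge whose removal leaves two components each within
    tol of half the vertices; return the two vertex sets, else None."""
    adj = {v: set() for v in nodes}
    for u, v in tree:
        adj[u].add(v)
        adj[v].add(u)
    n = len(nodes)
    for u, v in tree:
        seen, stack = {u}, [u]          # component of u with (u, v) cut
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in seen and {x, y} != {u, v}:
                    seen.add(y)
                    stack.append(y)
        if abs(len(seen) - n / 2) <= tol:
            return seen, set(nodes) - seen
    return None

# A 6x6 grid of "precincts", split into two districts of 18 +/- 2 units.
side = 6
nodes = [(i, j) for i in range(side) for j in range(side)]
edges = ([((i, j), (i + 1, j)) for i in range(side - 1) for j in range(side)]
         + [((i, j), (i, j + 1)) for i in range(side) for j in range(side - 1)])
split, seed = None, 0
while split is None:                    # resample until a balanced cut exists
    seed += 1
    tree = random_spanning_tree(nodes, edges, random.Random(seed))
    split = balanced_split(nodes, tree, tol=2)
```

The second bullet above concerns exactly how often the resampling loop succeeds with tol = 0, i.e., how likely a random spanning tree of a grid admits a perfectly balanced edge cut.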

Bio

Jamie is a fourth-year computer science PhD student at Harvard University advised by Ariel Procaccia. His research focuses on applying techniques from theoretical computer science to improve political and economic institutions. He works on a range of topics in fair division, social choice theory, and algorithmic game theory, with further interests in descriptive complexity, computational topology, and graph theory. He is especially interested in algorithms for fair redistricting and gerrymandering detection.


Tuesday, December 5 @ 4 pm

Polynomial-Pass Semi-Streaming Lower Bounds for Degeneracy and k-Cores
Prantar Ghosh (DIMACS)

Abstract

A (possibly multi-pass) graph streaming algorithm is called semi-streaming if it processes an n-vertex graph using O(n polylog(n)) space per pass. The following question arises naturally in the study of semi-streaming algorithms: is there any graph problem which is “not too hard”, in the sense that it can be solved efficiently with O(n polylog(n)) total communication, but for which, nonetheless, any semi-streaming algorithm needs a polynomial n^Ω(1) number of passes? Assadi, Chen, and Khanna [STOC 2019] were the first to give an affirmative answer to this question, albeit with some rather non-standard graph problems. In this talk, I shall present the first polynomial-pass lower bounds for natural “not too hard” graph problems: finding a k-core of a graph, or just the value of its degeneracy. First, I shall describe a novel communication protocol for both problems with O(n polylog(n)) total communication, thus showing that they are indeed “not too hard.” Next, I shall describe how we prove that any semi-streaming algorithm (exactly) solving these problems requires (almost) Ω(n^{1/3}) passes. This proof uses a reduction from a generalized version of the Hidden Pointer Chasing (HPC) communication problem introduced by the aforementioned paper of Assadi, Chen, and Khanna. I shall sketch this reduction and give a high-level idea of how we generalize HPC to a multilayer version called MultiHPC, whose optimal lower bound allows us to prove stronger (by a polynomial factor) semi-streaming pass lower bounds for a certain class of graph problems.

This is based on a recent joint work with Sepehr Assadi, Bruno Loff, Parth Mittal, and Sagnik Mukhopadhyay.
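For reference, degeneracy itself is easy to compute offline (the difficulty in the talk is the streaming setting); a standard min-degree peeling sketch:

```python
from collections import defaultdict

def degeneracy(adj):
    """Degeneracy via min-degree peeling: repeatedly delete a vertex of
    minimum degree; the largest degree seen at deletion time is the
    degeneracy. With bucketed degrees this runs in O(n + m). The vertices
    not yet peeled when the threshold first exceeds k form a k-core."""
    deg = {v: len(adj[v]) for v in adj}
    buckets = defaultdict(set)
    for v, d in deg.items():
        buckets[d].add(v)
    removed = set()
    k = 0
    for _ in range(len(adj)):
        d = min(b for b in buckets if buckets[b])   # lowest non-empty bucket
        v = buckets[d].pop()
        k = max(k, d)
        removed.add(v)
        for u in adj[v]:
            if u not in removed:                    # move u down one bucket
                buckets[deg[u]].discard(u)
                deg[u] -= 1
                buckets[deg[u]].add(u)
    return k

# A triangle plus a pendant vertex: degeneracy 2 (the triangle is a 2-core).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(degeneracy(adj))  # 2
```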

Bio

Prantar Ghosh is a Postdoctoral Fellow at Georgetown University. He is currently visiting DIMACS, Rutgers University, where he was a Simons Postdoctoral Leader last academic year. He completed his PhD in Computer Science at Dartmouth College in 2022, following which he held a Lecturer position there over the summer. His research interests lie broadly in graph algorithms, with a special focus on streaming algorithms for massive graphs.