# Theory Seminar

Welcome to the official page for the **UMass-Amherst Computer Science Theory Seminar** for Fall 2022. The seminar is **4-5 pm on Tuesdays** in **Room 140** of the Computer Science Building at UMass-Amherst, and is free and open to the public. The faculty host is **Cameron Musco**. If you are interested in giving a talk, please email Professor Musco or **Rik Sengupta**. Note that in addition to being a public lecture series, this is also a one-credit graduate seminar (**CompSci 891M**) that can be taken repeatedly for credit.

All Tuesday talks are in **CS Room 140**, unless otherwise stated. There are some special guest talks this semester in conjunction with the ALPS Seminar, typically on Fridays. These talks are all in **LGRC A215** unless otherwise stated.

*Fall 2022 Schedule of Speakers*

**NOTE:** In order to ensure you get weekly updates for all the talks, please make sure you are part of the seminars@cs.umass.edu mailing list. If you wish to give a talk, or would like to nominate someone to give one, please email us to let us know!

*Tuesday, September 6 @ 4 pm*

**Combinatorial Resultants and Circuit Polynomials in the Algebraic Rigidity Matroid**

Goran Malic (UMass, Smith College)

## ▶ **Abstract**

A graph with prescribed edge-lengths is called rigid if it admits no transformations that preserve the edge-lengths other than translations and rotations, and flexible otherwise. Rigid graphs have finitely many realizations as graphs embedded in the Euclidean plane (with vertices as points and edges as straight segments). The motivating problem is the so-called Localization problem, which asks to compute all realizations of a given rigid graph. Moreover, the Localization problem can be reduced to a class of graphs called minimally rigid graphs, i.e. those rigid graphs in which the deletion of any edge makes the graph flexible. This problem can be solved with Gröbner basis methods; however, such an approach becomes infeasible for graphs with as few as 6 vertices.

We instead consider a related problem, called the single unknown distance problem: given a minimally rigid graph G with prescribed edge-lengths, compute the set of possible lengths of a “non-edge” of G, i.e. the set of distances between a pair of vertices of G that aren’t connected by an edge of G. If we can compute these distances for the non-edges of G that together with the edges of G form a trilateration, we can solve the Localization problem for G in linearly many steps. The single-distance problem can also be solved with Gröbner basis methods, but again such an approach suffers from the same issues as in the Localization problem. We will present an algorithm for solving the single-distance problem that outperforms Gröbner basis methods dramatically: in one instance we reduced a computation that took 5 days with a Gröbner basis method to just 15 seconds.

This algorithm exploits the fact that rigid graphs form a matroid, called the rigidity matroid, which can be realized in three ways: as a graphic, a linear, and an algebraic matroid. The bases of this matroid are the minimally rigid graphs, and each of its circuits can be obtained precisely from some minimally rigid graph by converting some “non-edge” into an edge. The algebraic nature of the rigidity matroid puts the circuits into a 1-to-1 correspondence with unique multi-variate polynomials, called the Circuit Polynomials, whose solution sets are exactly the sets of possible distances of non-edges in minimally rigid graphs. Hence the goal is to compute circuit polynomials efficiently, which is a non-trivial task because they can have millions of monomial terms (the largest that we’ve computed so far has close to 10 million terms). We have achieved that by introducing a new operation on graphs called “the combinatorial resultant” which guides the algebraic computation in a much more efficient manner than any Gröbner basis method.

This work is joint with Ileana Streinu.

## ▶ **Bio**

Goran Malic is a postdoctoral researcher with Ileana Streinu’s LinKaGe Lab at Smith College and UMass Amherst. His current research interests are in rigidity theory, computational algebraic geometry and matroid theory, with an emphasis on algebraic matroids. He received his PhD in Pure Mathematics from the University of Manchester, UK, in 2015 under the supervision of Alexandre Borovik. Before joining LinKaGe Lab he held postdoc positions at the University of Manchester and the Zagreb School of Economics and Management.

**SPECIAL TIME:** Friday, September 9 @ 1 pm

**Competitive Caching with Machine Learned Advice**

Thodoris Lykouris (MIT)

## ▶ **Abstract**

Traditional online algorithms encapsulate decision making under uncertainty, and give ways to hedge against all possible future events, while guaranteeing a nearly optimal solution as compared to an offline optimum. On the other hand, machine learning algorithms are in the business of extrapolating patterns found in the data to predict the future, and usually come with strong guarantees on the expected generalization error.

In this talk, we will discuss a framework for augmenting online algorithms with a machine learned oracle to achieve competitive ratios that provably improve upon unconditional worst-case lower bounds when the oracle has low error. Our approach treats the oracle as a complete black box, and is not dependent on its inner workings or the exact distribution of its errors. We apply this framework to the traditional caching problem: creating an eviction strategy for a cache of size k. We demonstrate that naively following the oracle’s recommendations may lead to very poor performance, even when the average error is quite low. Instead, we show how to modify the Marker algorithm to take the oracle’s predictions into account, and prove that this combined approach achieves a competitive ratio that both (i) decreases as the oracle’s error decreases and (ii) is always capped at O(log k), which can be achieved without any oracle input.
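As a rough, hedged illustration of the flavor of such a combined approach (this is not the paper's exact algorithm), one can run a Marker-style cache that consults an untrusted next-use predictor when choosing which unmarked page to evict; the `predict_next_use` oracle interface below is illustrative:

```python
def predictive_marker(requests, k, predict_next_use):
    """Marker-style caching sketch: on a miss with a full cache, evict
    the unmarked page whose predicted next use is furthest away.
    predict_next_use(page, t) is an (untrusted) oracle."""
    cache, marked = set(), set()
    misses = 0
    for t, p in enumerate(requests):
        if p in cache:
            marked.add(p)
            continue
        misses += 1
        if len(cache) == k:
            unmarked = cache - marked
            if not unmarked:              # phase ends: unmark everything
                marked.clear()
                unmarked = set(cache)
            evict = max(unmarked, key=lambda q: predict_next_use(q, t))
            cache.remove(evict)
        cache.add(p)
        marked.add(p)
    return misses
```

With a perfect oracle this behaves like Belady's rule within each phase; with an arbitrarily bad oracle it still respects the marking discipline. The paper's actual guarantees rely on a more careful combination than this sketch.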

This talk is based on this paper, which is joint work with Sergei Vassilvitskii. A preliminary version appeared at ICML’18 and the journal version appeared in the Journal of the ACM.

## ▶ **Bio**

Thodoris Lykouris is an assistant professor at the MIT Sloan School of Management where he is affiliated with the Operations Management group and the Operations Research Center. His research focuses on data-driven sequential decision-making and spans across the areas of machine learning, dynamic optimization, and economics. Before joining MIT, Thodoris spent two years as a postdoctoral researcher at Microsoft Research NYC and completed his Ph.D. in 2019 from Cornell University where he was advised by Éva Tardos. His dissertation was selected as a finalist in the Dantzig dissertation award competition. He was also a finalist in the INFORMS Nicholson and Applied Probability Society best student paper competitions. Thodoris is the recipient of a Google Ph.D. Fellowship and a Cornell University Fellowship.

*Tuesday, September 13 @ 4 pm*

**Democracy and the Pursuit of Randomness**

Ariel Procaccia (Harvard)

## ▶ **Abstract**

Sortition is a storied paradigm of democracy built on the idea of choosing representatives through lotteries instead of elections. In recent years this idea has found renewed popularity in the form of citizens’ assemblies, which bring together randomly selected people from all walks of life to discuss key questions and deliver policy recommendations. A principled approach to sortition, however, must resolve the tension between two competing requirements: that the demographic composition of citizens’ assemblies reflect the general population and that every person be given a fair chance (literally) to participate. I will describe our work on designing, analyzing and implementing randomized participant selection algorithms that balance these two requirements. I will also discuss practical challenges in sortition based on experience with the adoption and deployment of our open-source system, Panelot.

## ▶ **Bio**

Ariel Procaccia is Gordon McKay Professor of Computer Science at Harvard University. He works on a broad and dynamic set of problems related to AI, algorithms, economics, and society. To translate his research into practice, he has helped create systems and platforms that are regularly used to solve everyday fair division problems, resettle refugees, mitigate bias in peer review, and select citizens’ assemblies. His distinctions include the Social Choice and Welfare Prize (2020), a Guggenheim Fellowship (2018), the IJCAI Computers and Thought Award (2015), and a Sloan Research Fellowship (2015).

**SPECIAL TIME:** Friday, September 16 @ 1 pm

**Learning Augmenting Algorithms via Algorithm Switching**

Antonios Antoniadis (University of Twente)

## ▶ **Abstract**

Learning augmented algorithms is an emerging area attempting to combine the effectiveness of machine learning approaches on practically relevant inputs with the performance guarantees provided by classical worst-case analysis of online algorithms. For many problems machine learning approaches can be readily applied to generate predictions about the structure of the input. An algorithm can then be designed to incorporate such predictions in its decision process and obtain improved performance as long as these predictions are sufficiently accurate. However, in some cases the predictions may be highly inaccurate and decision-making systems that are based on such predictors need to achieve a decent performance in that case too.

Several algorithmic results in the area can, at a high level, be interpreted as a meta-algorithm for “combining” a number of different algorithms at runtime. The individual algorithms usually trust the predictions to a different degree, ranging from blind trust to completely ignoring them. How these algorithms are combined generally depends on the particular problem at hand and how well each individual algorithm performed on the input encountered so far. In this talk, we illustrate this approach on Metrical Task Systems (MTS), a rich class containing several fundamental problems in online optimization as special cases, including caching, k-server, convex body chasing, and convex function chasing. We furthermore give an overview of several variations of this approach from the literature, and highlight differences between these variations.

## ▶ **Bio**

Antonios Antoniadis is an assistant professor at the Discrete Mathematics and Mathematical Programming (DMMP) group at the University of Twente in the Netherlands. He received his Ph.D. in Computer Science from the Humboldt University in Berlin, Germany in 2012. After his Ph.D. and before joining the University of Twente in April 2021 he held PostDoc positions at the University of Pittsburgh, the Max-Planck-Institute for Informatics and Bonn University and an interim professorship at the University of Cologne.

Antonios’ research interests lie in designing online and approximation algorithms with a particular focus on problems related to energy-efficiency, scheduling theory and more recently learning augmented algorithms.

*Tuesday, September 20 @ 4 pm*

**Inherent Trade-Offs in the Fair Determination of Risk Scores**

Sugun Yadla (UMass)

## ▶ **Abstract**

There has been a healthy amount of debate concerning fairness in algorithmic classification. This talk explores three key fairness criteria for machine learning models and shows that, barring highly constrained special cases, no method can satisfy all three simultaneously. The aim of this talk is to show a way in which certain notions of fairness are incompatible with each other, and therefore imply that one must weigh trade-offs between these fairness conditions on a case-by-case basis.

**SPECIAL TIME:** Friday, September 23 @ 1 pm

**Ski-Rental and Job Scheduling with Advice**

Ravi Kumar (Google)

## ▶ **Abstract**

In this talk we study algorithms with predictions using two classical problems: ski rental and non-clairvoyant job scheduling. We will cover the early work on these problems and, time permitting, more recent developments.
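For a flavor of ski rental with predictions, here is a minimal deterministic sketch in the spirit of the Purohit–Svensson–Kumar line of work (the exact parameterization below is one common choice, stated as an assumption rather than the talk's precise algorithm):

```python
import math

def ski_rental_with_prediction(num_days, b, pred, lam):
    """Ski rental with an untrusted prediction `pred` of the true number
    of ski days. Renting costs 1 per day; buying costs b. lam in (0, 1]
    controls trust: small lam follows the prediction more aggressively.
    Returns the total cost incurred."""
    # If the prediction says the season is long, buy early; else buy late.
    buy_day = math.ceil(lam * b) if pred >= b else math.ceil(b / lam)
    if num_days < buy_day:
        return num_days               # rented every day, never bought
    return (buy_day - 1) + b          # rented until buy_day, then bought
```

This strategy pays at most (1 + lam) times the offline optimum when the prediction is accurate, and at most (1 + 1/lam) times the optimum no matter how wrong the prediction is, which is the kind of consistency/robustness trade-off such results establish.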

## ▶ **Bio**

Ravi Kumar has been a research scientist at Google since 2012. Prior to this, he was at the IBM Almaden Research Center and at Yahoo! Research. His interests include algorithms for massive data, ML/privacy, and the theory of computation.

*Tuesday, September 27 @ 4 pm*

**CDCL versus Resolution**

Marc Vinyals (St Petersburg State University)

## ▶ **Abstract**

The effectiveness of the CDCL algorithm for SAT is complicated to understand, and so far one of the most successful tools for its analysis has been proof complexity. CDCL is easily seen to be limited by the resolution proof system, and furthermore it was shown to be equivalent to resolution, in the sense that CDCL can reproduce a given resolution proof with only polynomial overhead.

But the question of the power of CDCL with respect to resolution is far from closed. To begin with, the previous equivalence is subject to assumptions, only some of which are reasonable. In addition, in a setting where algorithms are expected to run in near-linear time, a polynomial overhead is too coarse a measure.

In this talk we will discuss what breaks when we try to make the assumptions more realistic, and how much of an overhead CDCL needs in order to simulate resolution.

## ▶ **Bio**

Marc Vinyals is a researcher in theoretical computer science, currently visiting Memorial University of Newfoundland. His research interests include several areas of computational complexity, and in particular proof complexity, and the theory of satisfiability solving. Previously he was a graduate student at the KTH Royal Institute of Technology, supervised by Jakob Nordström, a postdoc at the Tata Institute of Fundamental Research and the Technion, hosted by Arkadev Chattopadhyay and Yuval Filmus respectively, and a docent (assistant/associate professor) at Saint Petersburg State University.

**Zoom talk:** link

**SPECIAL TIME:** Thursday, September 29 @ 4 pm

**Online and Consistent Correlation Clustering**

Andreas Maggiori (EPFL)

## ▶ **Abstract**

In the correlation clustering problem, the input is a signed graph where the sign of each edge indicates whether the corresponding pair of points should be placed in the same cluster or not. The goal is to compute a clustering that minimizes the number of disagreements with these recommendations. Thanks to its many practical applications, correlation clustering is a fundamental unsupervised learning problem and has been extensively studied in many different settings. In this talk, I will present the problem in the classic online setting with recourse: the vertices of the graph arrive in an online manner, and the goal is to maintain an approximate clustering while minimizing the number of times each vertex changes cluster. In this setting, I will present an algorithm that achieves logarithmic recourse per vertex in the worst case, and complement this result with a tight lower bound.

This is joint work with Vincent Cohen-Addad, Silvio Lattanzi, and Nikos Parotsidis.

## ▶ **Bio**

I am a fifth (and final) year Ph.D. student at EPFL, advised by Rudiger Urbanke and Ola Svensson. My interests lie in the intersection of theoretical computer science and machine learning and, more specifically, online algorithms (matchings, clustering, etc.), learning augmented online algorithms (i.e., online algorithms with predictions), and recently active learning. During the last two summers, I was a Research Intern at Google Zurich hosted by Nikos Parotsidis and Ehsan Kazemi, and in May 2022, I co-organized ALPS 2022, a workshop on algorithms with predictions. Before moving to Switzerland, I completed my undergrad studies at the National Technical University of Athens, where I studied Computer and Electrical Engineering, and I was advised by Dimitris Fotakis.

**SPECIAL TIME:** Friday, September 30 @ 1 pm

**The Primal-Dual Method for Learning Augmented Algorithms**

Andreas Maggiori (EPFL)

## ▶ **Abstract**

The extension of classical online algorithms, when provided with ML predictions, is a new and active research area. In this area, the goal is to design learning augmented online algorithms that: (1) incorporate predictions in a black-box manner (without making assumptions regarding the quality of the predictions), (2) outperform any online algorithm when the predictions are accurate (3) maintain reasonable worst-case guarantees if the predictions are misleading. In this talk, I will show how to extend the primal-dual method for online algorithms in the learning augmented setting. Using this method, I will focus on how to design algorithms that use predictions for the ski rental problem, the online set cover problem, and (if time permits) the TCP-acknowledgment problem.

This is joint work with Ola Svensson and Etienne Bamas.

## ▶ **Bio**

I am a fifth (and final) year Ph.D. student at EPFL, advised by Rudiger Urbanke and Ola Svensson. My interests lie in the intersection of theoretical computer science and machine learning and, more specifically, online algorithms (matchings, clustering, etc.), learning augmented online algorithms (i.e., online algorithms with predictions), and recently active learning. During the last two summers, I was a Research Intern at Google Zurich hosted by Nikos Parotsidis and Ehsan Kazemi, and in May 2022, I co-organized ALPS 2022, a workshop on algorithms with predictions. Before moving to Switzerland, I completed my undergrad studies at the National Technical University of Athens, where I studied Computer and Electrical Engineering, and I was advised by Dimitris Fotakis.

*Tuesday, October 4 @ 4 pm*

**Low-Rank Approximation with 1/ε^{1/3} Matrix-Vector Products**

Ainesh Bakshi (MIT)

## ▶ **Abstract**

In this talk, we consider iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-p norm. Here, given access to a matrix A through matrix-vector products, an accuracy parameter ε, and a target rank k, the goal is to find a rank-k matrix Z with orthonormal columns such that ||A (I - ZZ^T)||_{S_p} \leq (1+ε)\min_{U^T U = I_k } ||A (I - U U^T)||_{S_p}, where ||M||_{S_p} denotes the l_p norm of the singular values of M. For the special cases of p = 2 (Frobenius norm) and p = \infty (spectral norm), Musco and Musco (NeurIPS 2015) obtained an algorithm based on Krylov methods that uses ~O(k/\sqrt{ε}) matrix-vector products, improving on the naive ~O(k/ε) dependence obtainable by the power method.

Our main result is an algorithm that uses only ~O(kp^{1/6}/ε^{1/3}) matrix-vector products, and works for *all*, not necessarily constant, p \geq 1. For p = 2 our bound improves the previous ~O(k/ε^{1/2}) bound to ~O(k/ε^{1/3}). Since the Schatten-p and Schatten-\infty norms of any matrix are the same up to a 1+ε factor when p \geq (log d)/ε, our bound recovers the result of Musco and Musco for p = \infty.
Further, we prove a matrix-vector query lower bound of Ω(1/ε^{1/3}) for *any* fixed constant p \geq 1, showing that surprisingly ~Θ(1/ε^{1/3}) is the optimal complexity for constant k.

To obtain our results, we introduce several new techniques, including optimizing over multiple Krylov subspaces simultaneously, and pinching inequalities for partitioned operators. Our lower bound for p \in [1,2] uses the Araki-Lieb-Thirring trace inequality, whereas for p>2, we appeal to a norm-compression inequality for aligned partitioned operators.
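For intuition, the single-subspace block Krylov baseline (in the spirit of Musco and Musco, sketched here for the Frobenius-norm case with simplifications of my own) accesses A only through matrix-vector products:

```python
import numpy as np

def block_krylov_lowrank(A, k, q):
    """Simplified block Krylov sketch for rank-k approximation.
    Builds q+1 Krylov blocks (A^T A)^i X from a random start X using
    only matrix-vector products with A, then extracts the best rank-k
    directions by Rayleigh-Ritz. Returns Z (d x k, orthonormal columns)
    such that A Z Z^T approximates A."""
    n, d = A.shape
    rng = np.random.default_rng(0)
    Y = rng.standard_normal((d, k))
    blocks = []
    for _ in range(q + 1):
        Y = A.T @ (A @ Y)                   # one (A^T A) step: two mat-vec products
        blocks.append(Y)
    Q, _ = np.linalg.qr(np.hstack(blocks))  # orthonormal basis of the Krylov space
    M = A @ Q                               # restrict A to the subspace
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    return Q @ Vt[:k].T                     # top-k directions, mapped back
```

Taking on the order of 1/\sqrt{ε} iterations gives the ~O(k/\sqrt{ε}) product count of the baseline; the new algorithm instead optimizes over multiple Krylov subspaces simultaneously to reach ~O(k/ε^{1/3}).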

Based on joint work with Ken Clarkson and David Woodruff.

## ▶ **Bio**

Ainesh Bakshi is a postdoctoral fellow at MIT, with a joint position between Math and CS. He recently obtained his PhD from CMU, under the advisement of Pravesh Kothari and David Woodruff. His research interests include theoretical machine learning, optimization via the sum-of-squares hierarchy, randomized numerical linear algebra and high-dimensional probability.

**SPECIAL TIME:** Friday, October 7 @ 1 pm

**Pareto-optimal learning-augmented algorithms for online search with advice**

Bo Sun (CUHK)

## ▶ **Abstract**

Recently, learning-augmented algorithms have attracted growing interest. Such algorithms leverage machine-learned (ML) advice to design competitive algorithms with the objective of improving the competitive ratio of classic online algorithms when the advice is accurate (i.e., consistency), while also providing a worst-case guarantee regardless of the advice quality (i.e., robustness).

In this talk, I will focus on the online search problem that includes 1-max search and one-way trading as special cases. I will first show that the algorithmic design of the online search problem can be unified into the design of a class of online threshold-based algorithms. Then by incorporating ML advice into the design of the threshold function, the learning-augmented algorithm can achieve the Pareto-optimal trade-off of consistency and robustness, i.e., no online algorithm can achieve a better consistency for a given robustness guarantee.
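As a concrete instance of a threshold-based algorithm, the classic prediction-free rule for 1-max search accepts the first price at or above the fixed threshold sqrt(L*U), which is sqrt(U/L)-competitive; learning-augmented variants replace this fixed threshold with a prediction-dependent threshold function. A minimal sketch (the optional `threshold` argument stands in for such a function's output):

```python
import math

def one_max_search(prices, L, U, threshold=None):
    """1-max search: prices in [L, U] arrive online and each must be
    irrevocably accepted or rejected; the goal is to maximize the
    accepted price. Accepts the first price >= threshold (default
    sqrt(L*U)); if none qualifies, the last price must be taken."""
    phi = math.sqrt(L * U) if threshold is None else threshold
    for p in prices[:-1]:
        if p >= phi:
            return p
    return prices[-1]     # the last price is forced
```

Passing a higher or lower `threshold` corresponds to trusting a prediction that the peak price will be high or low, which is exactly the knob the Pareto-optimal design tunes.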

## ▶ **Bio**

Bo Sun is a Postdoctoral Fellow in the Department of Computer Science and Engineering at The Chinese University of Hong Kong (CUHK). Prior to that, he was a Postdoctoral Fellow in the Department of Electronic and Computer Engineering at The Hong Kong University of Science and Technology (HKUST). He also obtained his Ph.D. from HKUST. His research focuses on optimization and decision-making under uncertainty to advance the efficiency, robustness, and sustainability of networked systems, such as energy-transport nexus, cloud/edge computing, and smart grids.

*Tuesday, October 11 @ 4 pm*

**An embarrassingly simple approach to zero-shot learning**

Paula Navarrette (UMass)

## ▶ **Abstract**

Original paper by: B. Romera-Paredes & Philip H. S. Torr

Original abstract: Zero-shot learning consists in learning how to recognise new concepts by just having a description of them. Many sophisticated approaches have been proposed to address the challenges this problem comprises. In this paper we describe a zero-shot learning approach that can be implemented in just one line of code, yet it is able to outperform state of the art approaches on standard datasets. The approach is based on a more general framework which models the relationships between features, attributes, and classes as a two linear layers network, where the weights of the top layer are not learned but are given by the environment. We further provide a learning bound on the generalisation error of this kind of approaches, by casting them as domain adaptation methods. In experiments carried out on three standard real datasets, we found that our approach is able to perform significantly better than the state of the art on all of them, obtaining a ratio of improvement up to 17%.
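The advertised one line of code is the closed-form solution for the bilinear weight matrix V; here is a hedged NumPy sketch (matrix shapes follow the paper's framework, but the variable names and regularizer values are illustrative):

```python
import numpy as np

def eszsl_train(X, Y, S, gamma=1.0, lam=1.0):
    """ESZSL closed form: V maps features (d) to attributes (a).
    X: d x m training features, Y: m x z (+1/-1) class indicators,
    S: a x z attribute signatures of the z seen classes."""
    d, a = X.shape[0], S.shape[0]
    return (np.linalg.inv(X @ X.T + gamma * np.eye(d))
            @ X @ Y @ S.T
            @ np.linalg.inv(S @ S.T + lam * np.eye(a)))

def eszsl_predict(V, x, S_new):
    """Score the unseen classes (columns of S_new) for feature vector x
    and return the index of the best-scoring one."""
    return int(np.argmax(x @ V @ S_new))
```

At test time only the attribute signatures S_new of the unseen classes are needed, which is what makes the approach zero-shot.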

**A fast and simple algorithm for the Modular Subset Sum**

Chaolong Tang (UMass)

## ▶ **Abstract**

I will talk about a fast and simple randomized algorithm for the Modular Subset Sum problem running in near-linear time. It is based solely on rolling hashes and an elementary data structure for prefix sums. I will go over the problem’s background, previous work, and the intuition for the algorithm, and then analyze its correctness and time complexity.

Based on the original paper by Kyriakos Axiotis, Arturs Backurs, Karl Bringmann, Ce Jin, Vasileios Nakos, Christos Tzamos, and Hongxun Wu.
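For context, the textbook O(nm) dynamic program that near-linear algorithms improve on fits in a few lines; this baseline deliberately omits the rolling-hash machinery from the talk:

```python
def modular_subset_sums(nums, m):
    """Return the set of residues mod m attainable as subset sums of
    nums. Classic O(n*m) dynamic program; the algorithm in the talk
    achieves near-linear time instead."""
    attainable = {0}                      # the empty subset
    for x in nums:
        attainable |= {(s + x) % m for s in attainable}
    return attainable
```

For example, with items {3, 5} and modulus 7, the attainable residues are {0, 3, 5, 1} (since 3 + 5 = 8 ≡ 1 mod 7).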

**SPECIAL TIME:** Friday, October 14 @ 1 pm

**Online Bin Packing with Predictions**

Shahin Kamali (York University)

## ▶ **Abstract**

In this talk, we discuss a recent trend in the design of online algorithms in which the online algorithm is enhanced with (potentially erroneous) predictions about the input sequence. The goal is to design algorithms with good “consistency” (i.e., the competitive ratio assuming no prediction error) and “robustness” (i.e., the competitive ratio under adversarial error).

The focus of this talk is on the online bin packing problem. We consider the discrete bin packing problem with predictions that concern the frequency of item sizes in the input sequence. We discuss online algorithms with efficient tradeoffs between their consistency and their robustness and whose performance degrades gently as a function of the error in prediction.

## ▶ **Bio**

Shahin Kamali is an assistant professor of Computer Science at York University, where he is a member of the Theory of Computing Group. Before joining York University, he was an assistant professor at the University of Manitoba. Previously, Shahin was a postdoctoral associate and an NSERC postdoctoral fellow at the CSAIL lab at MIT. He completed his PhD at the University of Waterloo in 2014.

Shahin has a broad interest in the design, analysis, and limitations of algorithms. He is particularly interested in online problems such as bin packing, paging, list update, and k-Server. His research also spans big-data applications of algorithms in data compression, graph partitioning, and resource allocation in the cloud.

*Tuesday, October 18 @ 4 pm*

**Adapting Truth-Table Reducibility to Zero-Knowledge with Logspace**

Jacob Gray (UMass)

## ▶ **Abstract**

The zero-knowledge proof model has been around for many years, but recently, NISZK_L, a new subclass of the non-interactive statistical variant, has gained interest due to its relation to the minimum circuit size problem, an important potential NP-intermediate problem. As strong closure properties for NISZK_L appear elusive, one of the routes we took to get better insight into this class was to ensure that an analogous class, SZK_L, maintained SZK’s closure under a boolean-formula truth-table reduction.

In this talk, I plan to give a general, intuitive explanation of what zero-knowledge proofs are, what statistical zero knowledge looks like, and an outline of the adaptation for this closure property to SZK_L. Although NISZK_L is a large motivator for this result, I will only introduce and talk about SZK_L.

This is work done with Eric Allender, Harsha Tirumala, Saachi Mutreja, and Pengxiang Wang as part of the 2022 DIMACS REU at Rutgers.

**Almost Tight Approximation Algorithms for Explainable Clustering**

Jacob Goldman (UMass)

## ▶ **Abstract**

Original Paper by Hossein Esfandiari, Vahab Mirrokni, and Shyam Narayanan.

The increasing desire for transparency in artificial intelligence motivates the study of algorithms that are both accurate and interpretable by humans. In this talk, I’ll discuss the work of a Google Research team studying k-medians and k-means clustering under a recent framework of explainable clustering first suggested by Dasgupta et al. Under this framework, the space of points to be clustered is decomposed using a decision tree where each node separates two clusters by simple comparison in one of the dimensions of the space. I’ll present their O(log k log log k)-approximation algorithm for explainable k-medians and their O(k log k)-approximation algorithm for explainable k-means, both of which are improvements over the respective best algorithms for each problem.

**PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing**

Joseph Black (UMass)

## ▶ **Abstract**

Original Paper by Connelly Barnes, Eli Shechtman, Adam Finkelstein, and Dan B Goldman.

This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches. Previous research in graphics and vision has leveraged such nearest-neighbor searches to provide a variety of high-level digital image editing tools. However, the cost of computing a field of such matches for an entire image has eluded previous efforts to provide interactive performance. Our algorithm offers substantial performance improvements over the previous state of the art (20-100x), enabling its use in interactive editing tools. The key insights driving the algorithm are that some good patch matches can be found via random sampling, and that natural coherence in the imagery allows us to propagate such matches quickly to surrounding areas. We offer theoretical analysis of the convergence properties of the algorithm, as well as empirical and practical evidence for its high quality and performance. This one simple algorithm forms the basis for a variety of tools – image retargeting, completion and reshuffling – that can be used together in the context of a high-level image editing application. Finally, we propose additional intuitive constraints on the synthesis process that offer the user a level of control unavailable in previous methods.

**SPECIAL TIME:** Friday, October 21 @ 1 pm

**Chasing Convex Bodies and Functions with Black-Box Advice**

Nico Christianson (Caltech)

## ▶ **Abstract**

Modern AI/ML algorithms have the potential to improve performance over traditional algorithms for online optimization, which are designed to give worst-case guarantees. However, while AI/ML algorithms might perform well when the training data accurately reflects the deployment environment, they lack worst-case performance guarantees, e.g., under distribution shift. This hinders their use in real-world settings where safety/performance guarantees are crucial. In this talk, I will discuss recent work designing algorithms for online optimization with “black-box” advice that bridge the gap between the good average-case performance of AI/ML and the worst-case guarantees of traditional algorithms. We focus on the problem of chasing convex bodies and functions, discussing several algorithms and fundamental limits on algorithm performance in this setting.

## ▶ **Bio**

Nico Christianson is a third-year PhD student in Computing and Mathematical Sciences at Caltech, where he is advised by Adam Wierman and Steven Low. Before Caltech, he received his AB in Applied Math from Harvard in 2020. He is broadly interested in online algorithms, optimization, and learning, with an emphasis toward designing machine learning-augmented algorithms for complex sequential decision-making problems. Nico’s work is supported by an NSF Graduate Research Fellowship.

*Tuesday, October 25 @ 4 pm*

**LEARN-Uniform Circuit Lower Bounds and Provability in Bounded Arithmetic**

Marco Carmosino (IBM)

## ▶ **Abstract**

We investigate randomized LEARN-uniformity, which captures the power of randomness and equivalence queries (EQ) in the construction of Boolean circuits for an explicit problem. This is an intermediate notion between P-uniformity and non-uniformity motivated by connections to learning, complexity, and logic. We establish the first unconditional lower bounds against LEARN-uniform circuits.

We then employ these LEARN-uniform circuit *lower bounds* to investigate the provability of non-uniform circuit *upper bounds* in formal systems that capture “efficient” reasoning: theories of bounded arithmetic. By extracting computational information from proofs via a direct translation of “proof” into “LEARN-uniformity”, we establish robust unprovability theorems that unify, simplify, and extend nearly all previous results along these lines (Krajicek-Oliveira (2017), Muller-Bydzovsky (2020), and Bydzovsky-Krajicek-Oliveira (2020)).

This is joint work with Valentine Kabanets, Antonina Kolokolova, and Igor Carboni Oliveira, appearing in FOCS 2021. The preprint is available on ECCC.

## ▶ **Bio**

Marco Carmosino is a Research Scientist in the Neuro-symbolic AI group at IBM Research. Marco obtained his PhD from the University of California, San Diego in August of 2019. He received his MS in computer science from UMass Amherst in 2013, and earned a personalized BA in “Theoretical Computer Science and Natural Language Processing” from Hampshire College in 2011. Marco studies circuit complexity, the role of randomness in structural complexity theory, applications of logic to computation, and learning theory. He has been supported by a Bay State Fellowship at UMass, a Pacific Institute of Mathematical Sciences postdoctoral fellowship at Simon Fraser University, and an NSF Computing Innovation postdoctoral fellowship at Boston University.

**SPECIAL TIME:** Friday, October 28 @ 1 pm

**Learning-Augmented Linear Quadratic Control**

Tongxin Li (Caltech, CUHK)

## ▶ **Abstract**

Recently, augmenting classic online decision-making algorithms with learning-based predictions, advice, and black-box AI tools has attracted growing interest. Integrating AI techniques into control systems provides, on the one hand, a new approach to handling uncertainties caused by renewable resources and human behaviors, but on the other hand raises practical issues such as reliability, privacy, and scalability for the AI-integrated control algorithms.

In this talk, I will present novel problems that arise in designing learning-augmented model predictive control algorithms. First, I will introduce a problem in linear quadratic control, where untrusted AI predictions of system perturbations are available. We show that it is possible to design an online algorithm with performance guarantees even with large prediction error. Second, I will present a nonlinear setting wherein pre-trained black-box RL algorithms are available. Next, I will show how these problems relate to practical applications in smart grids and demonstrate that the learning-augmented methods help improve previous classical results. Finally, I will discuss interesting potential research directions down the road.

## ▶ **Bio**

Tongxin Li is a tenure-track assistant professor in the School of Data Science (SDS), The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen). Prior to joining SDS, he received his Ph.D. degree from the California Institute of Technology (Caltech) in Computing and Mathematical Sciences in 2022. He graduated from CUHK in a dual-degree program and obtained a B.Eng. in information engineering, a B.Sc. in mathematics, and an M.Phil. in information engineering. His research interests focus on interdisciplinary topics in smart grids, machine learning, control, and optimization, with applications to power systems and sustainability. In particular, he is interested in developing trustworthy artificial intelligence and machine learning techniques that improve the sustainability, robustness, scalability, privacy, and resilience of smart grids. He has been invited to give talks at various international conferences and meetings such as the INFORMS Annual Meeting and INFORMS Applied Probability Conference and has published articles in various peer-reviewed journals and conferences. During his Ph.D., he interned as an applied scientist at AWS Security in the summers of 2020 and 2021. He has participated in various projects on power systems in collaboration with NREL, Pasadena Water and Power, and Caltech Facilities. He is a recipient of a 2021 Impact Grant from the Resnick Sustainability Institute.

*Tuesday, November 1 @ 4 pm*

**On the Necessary “Unfairness” of Competitive Online Algorithms**

Adam Lechowicz (UMass)

## ▶ **Abstract**

Online algorithms solve a subclass of problems where the input is revealed sequentially over time — they need to make decisions without knowledge of the full input sequence, particularly with regard to future inputs arriving online. To evaluate the performance of an online algorithm, we primarily use competitive analysis, which compares the online solution against a (usually better) offline optimal solution.
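As a concrete illustration of competitive analysis (a sketch using the classic ski-rental problem, not an example from the talk itself), suppose renting costs 1 per day and buying costs B. The break-even online strategy rents until day B, then buys, without knowing the season length; its competitive ratio is the worst-case ratio of its cost to the offline optimum:

```python
# Competitive analysis sketch on ski rental (illustrative, not from the talk).
# Rent costs 1 per day; buying costs B. The break-even online algorithm
# rents for B-1 days, then buys on day B, never knowing the season length.

def online_cost(season_length: int, B: int) -> int:
    """Cost of the break-even strategy on a season of given length."""
    if season_length < B:
        return season_length      # rented every day, never bought
    return (B - 1) + B            # rented B-1 days, then bought

def offline_cost(season_length: int, B: int) -> int:
    """Optimal cost with full knowledge of the season length."""
    return min(season_length, B)

# The competitive ratio is the worst case of online/offline over all inputs.
B = 10
ratio = max(online_cost(n, B) / offline_cost(n, B) for n in range(1, 100))
print(ratio)  # 1.9, i.e. 2 - 1/B: the strategy is 2-competitive
```

The worst case occurs when the season ends exactly on the day the algorithm buys, which is the kind of adversarial input competitive analysis is designed to capture.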

The broad topic of socioeconomic fairness has seen much attention in recent ML research, for good reason. For this talk, we adopt a more abstract definition of “fairness” which analyzes the distribution of actions taken by an online algorithm. This may or may not be empirically relevant, but this talk will focus on the resulting theoretical questions.

In this talk, I will show several patterns of “unfair” decision making in two online problems, knapsack and caching. I will attempt to motivate the idea that there is an interesting tradeoff between “fairness” and the performance of an online algorithm, which may hint at the existence of some lower bounds. Finally, I will close with future research directions in this area and desiderata for “provably-fair” online algorithms.

This is ongoing joint work with Mohammad Hajiesmaili, Bo Sun, and Shahin Kamali.

**An Introduction to Topological Data Analysis**

Ben Burns (UMass)

## ▶ **Abstract**

Original Paper: *Topological Methods for the Analysis of High Dimensional Data Sets and 3D Object Recognition*, Gurjeet Singh, Facundo Mémoli and Gunnar Carlsson.

From the original abstract: We present a computational method for extracting simple descriptions of high dimensional data sets in the form of simplicial complexes. Our method, called Mapper, is based on the idea of partial clustering of the data guided by a set of functions defined on the data. The proposed method is not dependent on any particular clustering algorithm, i.e. any clustering algorithm may be used with Mapper. We implement this method and present a few sample applications in which simple descriptions of the data present important information about its structure.

**SPECIAL TIME:** Friday, November 4 @ 1 pm

**How much data is sufficient to learn high-performing algorithms?**

Ellen Vitercik (Stanford)

## ▶ **Abstract**

Algorithms often have tunable parameters that have a considerable impact on their runtime and solution quality. A growing body of research has demonstrated that data-driven algorithm design can lead to significant gains in runtime and solution quality. Data-driven algorithm design uses a training set of problem instances sampled from an unknown, application-specific distribution and returns a parameter setting with strong average performance on the training set. We provide a broadly applicable theory for deriving generalization guarantees for data-driven algorithm design, which bound the difference between the algorithm’s expected performance and its average performance over the training set. The challenge is that for many combinatorial algorithms, performance is a volatile function of the parameters: slightly perturbing the parameters can cause a cascade of changes in the algorithm’s behavior. This talk is based on joint work with Nina Balcan, Dan DeBlasio, Travis Dick, Carl Kingsford, and Tuomas Sandholm.

## ▶ **Bio**

Ellen Vitercik is an Assistant Professor at Stanford University with a joint appointment between the Management Science & Engineering department and the Computer Science department. Her research revolves around machine learning theory, discrete optimization, and the interface between economics and computation. Before joining Stanford, she spent a year as a Miller Fellow at UC Berkeley after receiving a PhD in Computer Science from Carnegie Mellon University. Her thesis won the SIGecom Doctoral Dissertation Award and the CMU School of Computer Science Distinguished Dissertation Award.

*Tuesday, November 8 @ 4 pm*

No meeting!

**SPECIAL TIME:** Wednesday, November 9 @ 12:20 pm

**Algorithm Analysis Beyond the Worst-Case**

Anupam Gupta (CMU)

## ▶ **Abstract**

Analyzing the performance of algorithms in the worst case and the average case has been one of the cornerstones of computer science: these analyses give us techniques to understand how algorithms behave, in two different ways. Over the past two decades, there has been a concerted effort to understand the performance of algorithms in models that go beyond these two “extremes”. In this talk I will discuss some of the proposed models and approaches, particularly for problems related to online algorithms, where decisions must be made sequentially without knowing future portions of the input.

## ▶ **Bio**

Anupam Gupta is a professor in the Computer Science Department at Carnegie Mellon University. His research interests are broadly in the design and analysis of algorithms, primarily in optimization under uncertainty, in developing approximation algorithms for NP-hard optimization problems, and in understanding the algorithmic properties of metric spaces. He is an ACM Fellow, and a recipient of the Alfred P. Sloan Research Fellowship and the NSF Career award.

*Tuesday, November 15 @ 4 pm*

**Agreement Takes Compromise: Exploring Security and Performance Tradeoffs in Byzantine Consensus**

Erica Blum (University of Maryland)

## ▶ **Abstract**

Byzantine consensus is a fundamental problem at the intersection of cryptography and distributed systems, in which a set of devices need to reach agreement even if some fraction of devices deviate arbitrarily from the protocol. Consensus problems have frequently been studied in the context of two contrasting network models, synchrony and asynchrony. In the synchronous network model, any message sent by an honest party will be delivered within time ∆, where ∆ is a fixed parameter known to all parties. In the asynchronous network model, messages may be delayed for arbitrary lengths of time. This talk will explore recent work on security and performance tradeoffs across the two models.

## ▶ **Bio**

Erica is a 5th-year computer science PhD student at the University of Maryland, College Park, where she studies cryptography and consensus algorithms with advisor Jonathan Katz. Her research focuses on the security of consensus protocols under different trust models and network conditions.

**SPECIAL TIME:** Friday, November 18 @ 1 pm

**Online Learning with Hints**

Ashok Cutkosky (Boston University)

## ▶ **Abstract**

Online learning is the study of decision making with streaming data (e.g. repeated prediction problems). Classical results in this field have established tight upper and lower bounds for what can be achieved in the worst-case scenario, but leave open significant room for beyond-worst-case analysis. In this talk, we will describe recent results in a setting in which our learning algorithm has access to some external “hints” about the future of the stream. We will show how to leverage these hints for significantly improved performance, while remaining robust to potentially “bad” or misleading hints.

## ▶ **Bio**

Ashok Cutkosky is an assistant professor in the ECE department at Boston University. Previously, he was a research scientist at Google. He earned a PhD in computer science from Stanford University in 2018. He is interested in all aspects of machine learning and stochastic optimization theory. He has worked extensively on optimization algorithms for machine learning that adaptively tune themselves to the a priori unknown statistical properties of their input datasets, as well as on non-convex stochastic optimization.

*Tuesday, November 22*

No meeting — Friday schedule followed

*Tuesday, November 29 @ 4 pm*

**A Simple and Fast Fair Allocation Algorithm under Binary Submodular Valuations**

Vignesh Viswanathan (UMass)

## ▶ **Abstract**

The classic problem of fair allocation asks how to divide a set of indivisible goods among a set of agents who have different preferences over these goods. In this talk I will focus on fair allocation instances where agent preferences can be modeled using binary submodular functions. Under binary submodular functions, the value that each agent places on each good is binary (either they like a good or dislike it) and each good provides a decreasing marginal gain (adding a good to a large bundle provides less value than adding a good to a small bundle). I will introduce several desirable fairness notions for this problem and then present an algorithm that computes allocations that satisfy (almost) all of these fairness notions. Finally, I will analyze the time complexity of this algorithm and show that it is significantly faster than all the algorithms in prior work.
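A minimal sketch of such a valuation (an illustrative example, not code from the talk): an agent likes some goods and can use at most a fixed number of them, so a bundle is worth the number of liked goods in it, capped. Marginal gains are then always 0 or 1, and never increase as the bundle grows:

```python
# Illustrative binary submodular valuation (assumed example, not from the
# talk): the agent likes goods in `LIKED` and can use at most CAP of them,
# so a bundle is worth min(#liked goods it contains, CAP).

LIKED = {"a", "b", "c"}
CAP = 2

def value(bundle) -> int:
    return min(len(set(bundle) & LIKED), CAP)

def marginal(bundle, good) -> int:
    """Marginal gain of adding `good` to `bundle` — always 0 or 1."""
    return value(set(bundle) | {good}) - value(bundle)

print(marginal(set(), "a"))       # 1: a liked good adds value...
print(marginal({"a", "b"}, "c"))  # 0: ...until the cap is reached
print(marginal(set(), "z"))       # 0: a disliked good never adds value
```

The cap makes the marginal gain of a liked good drop from 1 to 0 as the bundle fills up, which is exactly the decreasing-marginal-gain property the abstract describes.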

This is joint work with Yair Zick.

**SPECIAL TIME:** Friday, December 2 @ 1 pm

**Learning-Augmented Mechanism Design**

Eric Balkanski (Columbia)

## ▶ **Abstract**

In this talk, we introduce an alternative model for the design and analysis of strategyproof mechanisms that is motivated by the recent surge of work in “learning-augmented algorithms”. Aiming to complement the traditional approach in computer science, which analyzes the performance of algorithms based on worst-case instances, this line of work has focused on the design and analysis of algorithms that are enhanced with machine-learned predictions regarding the optimal solution. The algorithms can use the predictions as a guide to inform their decisions, and the goal is to achieve much stronger performance guarantees when these predictions are accurate (consistency), while also maintaining near-optimal worst-case guarantees, even if these predictions are very inaccurate (robustness). So far, these results have been limited to algorithms, but in this work we argue that another fertile ground for this framework is in mechanism design.

We initiate the design and analysis of strategyproof mechanisms that are augmented with predictions regarding the private information of the participating agents. To exhibit the important benefits of this approach, we revisit the canonical problem of facility location with strategic agents in the two-dimensional Euclidean space. We study both the egalitarian and utilitarian social cost functions, and we propose new strategyproof mechanisms that leverage predictions to guarantee an optimal trade-off between consistency and robustness guarantees. This provides the designer with a menu of mechanism options to choose from, depending on her confidence regarding the prediction accuracy. Furthermore, we also prove parameterized approximation results as a function of the prediction error, showing that our mechanisms perform well even when the predictions are not fully accurate. We will conclude by discussing other exciting recent results in this new area of learning-augmented mechanism design.

Joint work with Priyank Agrawal, Vasilis Gkatzelis, Tingting Ou, and Xizhi Tan.

## ▶ **Bio**

Eric Balkanski is an Assistant Professor in the Department of Industrial Engineering and Operations Research at Columbia University. His research focuses on data-driven algorithm design, combinatorial optimization, and mechanism design. He is the recipient of an NSF Fairness in AI grant, a faculty research award from the Columbia Center of AI Technology (CAIT) in collaboration with Amazon, an NSF SMALL grant, and an ACM SIGecom Doctoral Dissertation Honorable Mention Award.

*Tuesday, December 6 @ 4 pm*

**TBD**

Yi Wei (UMass)

## ▶ **Abstract**

TBD.

**A Novel Approach to Partial Coverage in Wireless Sensor Networks via the Roman Dominating Set**

Fatemeh Ghaffari (UMass)

## ▶ **Abstract**

One major challenge in deploying wireless sensor networks (WSN) in real-world applications is minimizing the energy consumption by the sensors while maintaining the coverage of the monitored field. However, many applications do not need full coverage of the monitored area all the time, which can help us reduce the network’s energy consumption. One approach to exploit this property is to set up a sleep/wake-up schedule for each node such that no redundant nodes are active in an area of coverage simultaneously. This will allow the existence of monitoring holes in a controlled manner, which the authors call partial coverage. In this study, the partial coverage condition is imposed by constructing a Roman Dominating Set of awake nodes in the network. Roman domination is a method for colouring a graph’s vertices with three labels (0, 1, 2), such that every vertex with label 0 has an adjacent vertex with label 2, while the sum of the labels of the nodes is minimized. Based on this formulation, a simple greedy algorithm is proposed to construct such a structure, supported by three theorems. Furthermore, the performance of the authors’ proposal in different scenarios is evaluated.
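The Roman domination condition above is easy to check mechanically. The following small sketch (an assumed illustration, not the authors' code) verifies a labeling on a graph and computes its cost, the sum of labels:

```python
# Sketch (illustrative, not the authors' algorithm): check the Roman
# domination condition — every vertex labeled 0 must have a neighbor
# labeled 2 — and compute the objective, the sum of all labels.

def is_roman_dominating(adj: dict, labels: dict) -> bool:
    """adj maps each vertex to its neighbor set; labels maps vertex -> 0/1/2."""
    return all(
        any(labels[u] == 2 for u in adj[v])
        for v in adj
        if labels[v] == 0
    )

# A path on four vertices: 0 - 1 - 2 - 3.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
labels = {0: 0, 1: 2, 2: 0, 3: 1}

print(is_roman_dominating(adj, labels))  # True: vertices 0 and 2 neighbor label 2
print(sum(labels.values()))              # 3, the cost being minimized
```

In the WSN setting of the talk, label-2 nodes play the role of fully awake sensors covering the label-0 (sleeping) nodes adjacent to them, and minimizing the label sum corresponds to minimizing energy consumption.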

**SPECIAL TIME:** Thursday, December 8 @ 1 pm

**Online Algorithms with Multiple Advice**

Debmalya Panigrahi (Duke)

## ▶ **Abstract**

The bulk of the literature on online algorithms with ML advice focuses on a single predictor, which may or may not be correct. But, in reality, multiple ML models are often used to make future predictions for the same application, and their predictions/advice can differ from one another. In this case, how should an online algorithm choose among these different suggestions? This talk will focus on models, problems, and algorithms to address this question. The talk will include some recent results with Keerti Anand, Rong Ge, and Amit Kumar, and survey older results including an ICML ’19 paper with Sreenivas Gollapudi.

## ▶ **Bio**

Debmalya Panigrahi is a professor of computer science at Duke University. His research interests are broadly in the design and analysis of algorithms, particularly in graph algorithms and in the design of algorithms under uncertainty. His research has been recognized by an NSF CAREER Award and faculty research awards from Google and Yahoo. He obtained his PhD from MIT in 2012, where he was an MIT Presidential Fellow.