Approximation Algorithms
Org: Chaitanya Swamy (University of Waterloo)
[PDF]

DEEPARNAB CHAKRABARTY, Dartmouth College
Algorithms for minimum norm combinatorial optimization  [PDF]

In many optimization problems, a feasible solution induces a multi-dimensional cost vector. For example, in load balancing a schedule induces a load vector across the machines; in k-clustering, opening k facilities induces an assignment-cost vector across the clients. We consider the following minimum-norm optimization problem: given an arbitrary monotone, symmetric norm, find a feasible solution that minimizes the norm of the induced cost vector. This generalizes many fundamental NP-hard problems. We give a general framework to tackle the minimum-norm problem, and illustrate its efficacy in load balancing and, time permitting, in the clustering setting.
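To make the objective concrete: the Top-$\ell$ norms (sum of the $\ell$ largest coordinates) are the canonical monotone, symmetric norms, interpolating between the makespan ($\ell = 1$) and the total load ($\ell = n$). The sketch below, with made-up jobs and machine counts, just evaluates such a norm and brute-forces the minimum-norm schedule on a tiny load-balancing instance; it illustrates the problem statement only, not the talk's algorithmic framework.

```python
from itertools import product

def top_l_norm(costs, l):
    """Top-l norm: sum of the l largest coordinates (monotone and symmetric).
    l = 1 gives the max (makespan); l = len(costs) gives the L1 norm."""
    return sum(sorted(costs, reverse=True)[:l])

def min_norm_schedule(jobs, m, norm):
    """Brute force: assign each job to one of m machines, minimizing
    norm(load vector). Exponential time; illustration only."""
    best = None
    for assign in product(range(m), repeat=len(jobs)):
        loads = [0.0] * m
        for j, mach in zip(jobs, assign):
            loads[mach] += j
        val = norm(loads)
        if best is None or val < best[0]:
            best = (val, loads)
    return best

# illustrative instance: 4 jobs, 2 machines, makespan objective (Top-1)
jobs, m = [4, 3, 3, 2], 2
val, loads = min_norm_schedule(jobs, m, lambda v: top_l_norm(v, 1))
print(val, sorted(loads))  # best split is {4,2} vs {3,3}: makespan 6.0
```

The same driver minimizes any other monotone symmetric norm by swapping the lambda, which is exactly the flexibility the minimum-norm formulation buys.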

CHANDRA CHEKURI, University of Illinois, Urbana-Champaign
Covering Multiple Submodular Constraints and Applications  [PDF]

Given a finite ground set $N$, a weight function $w: N \rightarrow \mathbb{R}_+$, $r$ monotone submodular functions $f_1,f_2,\ldots,f_r$ over $N$ and requirements $k_1,k_2,\ldots,k_r$, we consider the problem of finding a min-weight subset $S \subseteq N$ such that $f_i(S) \ge k_i$ for $1 \le i \le r$. This generalizes the well-known Submodular Set Cover problem. A simple greedy algorithm gives an $O(\log (kr))$ approximation where $k = \sum_i k_i$. We improve the greedy approximation via mathematical programming relaxations in several interesting special cases, and point out various applications. Joint work with Tanmay Inamdar, Kent Quanrud, Kasturi Varadarajan and Zhao Zhang.
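The greedy baseline mentioned above can be seen by folding the $r$ constraints into the single monotone submodular function $f(S) = \sum_i \min(f_i(S), k_i)$ and running the classical Submodular Set Cover greedy on it. A minimal sketch, on a made-up instance with two coverage functions (all names and data below are illustrative, not from the paper):

```python
def greedy_multicover(N, w, fs, ks):
    """Greedy for min-weight S with f_i(S) >= k_i for all i.
    Reduces to Submodular Set Cover on f(S) = sum_i min(f_i(S), k_i),
    which is again monotone submodular; this yields the O(log(sum_i k_i))
    approximation mentioned in the abstract."""
    def g(S):
        return sum(min(f(S), k) for f, k in zip(fs, ks))
    target = sum(ks)
    S = set()
    while g(S) < target:
        # pick the element with best marginal coverage per unit weight
        e = max((x for x in N if x not in S),
                key=lambda x: (g(S | {x}) - g(S)) / w[x])
        if g(S | {e}) == g(S):
            return None  # requirements cannot be met
        S.add(e)
    return S

# toy instance: each element is a set; f1, f2 count covered items in two groups
sets = {'a': {1, 2}, 'b': {2, 3}, 'c': {3, 4}}
w = {'a': 1.0, 'b': 1.0, 'c': 1.0}
f1 = lambda S: len(set().union(*[sets[x] for x in S]) & {1, 2})
f2 = lambda S: len(set().union(*[sets[x] for x in S]) & {3, 4})
S = greedy_multicover(['a', 'b', 'c'], w, [f1, f2], [2, 2])
```

On this instance greedy returns {'a', 'c'}, satisfying both requirements with weight 2. The paper's improvements over this bound come from LP relaxations rather than from the greedy rule itself.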

ANUPAM GUPTA, Carnegie Mellon University
Matroid-Based TSP Rounding for Half-Integral Solutions  [PDF]

We show how to round any half-integral solution to the subtour-elimination relaxation for the TSP while losing a factor strictly less than 1.5. Such a rounding algorithm was recently given by Karlin, Klein, and Oveis Gharan, based on sampling from max-entropy distributions. Our result is instead based on sampling from the matroid intersection polytope.

This is based on joint work with Euiwoong Lee, Jason Li, and Marcin Mucha.

ALEKSANDAR NIKOLOV, University of Toronto
Maximizing Determinants under Combinatorial Constraints  [PDF]

Given a set of rank-one PSD matrices $v_1 v_1^T, \ldots, v_n v_n^T$, we consider the problem of selecting a subset $S$ of them whose sum has maximum determinant, under combinatorial constraints on $S$. This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, and network design. I will outline some of these connections, and describe how randomized rounding, convex optimization, and the theory of completely log-concave polynomials let us design approximation and estimation algorithms for this problem.

The talk is based on works with Mohit Singh, Uthaipon (Tao) Tantipongpipat, and Vivek Madan.

OLA SVENSSON, Ecole Polytechnique Fédérale de Lausanne
The Primal-Dual Method for Learning-Augmented Algorithms  [PDF]

We extend the primal-dual method for online algorithms to incorporate predictions that advise the online algorithm about the next action to take. We use this framework to obtain novel algorithms for a variety of online covering problems. We compare the cost of our algorithms to that of the true and of the predicted offline optimal solutions, and show that they outperform any online algorithm when the prediction is accurate, while maintaining good guarantees when the prediction is misleading.
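The consistency/robustness tradeoff at the heart of learning-augmented algorithms is easiest to see on ski rental, the simplest online covering problem. The threshold rule below is the classical prediction-based scheme of Purohit, Svitkina, and Kleinberg, shown only as a warm-up; it is not the primal-dual framework of this talk, which handles general online covering LPs.

```python
import math

def ski_rental_with_prediction(b, predicted_days, actual_days, lam=0.5):
    """Ski rental: renting costs 1 per day, buying costs b once.
    A prediction of the season length advises the algorithm; the
    parameter lam in (0, 1] trades consistency (near-optimal cost when
    the prediction is accurate) against robustness (bounded competitive
    ratio even when it is misleading). Returns the total cost paid."""
    if predicted_days >= b:          # prediction: long season, commit early
        buy_day = math.ceil(lam * b)
    else:                            # prediction: short season, rent longer
        buy_day = math.ceil(b / lam)
    if actual_days < buy_day:
        return actual_days           # rented every day, never bought
    return (buy_day - 1) + b         # rented until buy_day, then bought

# hypothetical instance: buying costs 10, prediction says a 100-day season
cost_good = ski_rental_with_prediction(b=10, predicted_days=100, actual_days=100)
cost_bad = ski_rental_with_prediction(b=10, predicted_days=100, actual_days=3)
```

With lam = 0.5, an accurate prediction gives cost 14 against an offline optimum of 10 (ratio 1.4, beating the best deterministic online ratio of 2), while a badly wrong prediction still only costs 3, the offline optimum for a 3-day season.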

This is joint work with Étienne Bamas and Andreas Maggiori.