Theoretical CS Reading Group


The reading group usually meets once a week, Monday 1:00-2:00pm, to discuss a particular paper or topic.

Please join our mailing list to get notifications about the weekly meetings and discussions.

You might also be interested in the Theory Seminar.

Spring 2019
Fall 2018
Spring 2018
Fall 2017
Spring 2017
Fall 2016
Spring & Summer 2016
Fall 2015
Spring & Summer 2015
Fall 2014
Spring 2014
Summer & Fall 2013

Spring 2018

Friday, 12th January 2018 — Theory Seminar - Simina Brânzei, Purdue University
Universal Growth in Production Economies

Abstract:

We study a basic model of an economy over time, in which players invest their money in different goods and use the bundles obtained for production. We show that a simple decentralized dynamic, where players update their bids proportionally to how useful the investments were, leads to growth of the economy in the long term (whenever growth is possible) but also creates unbounded inequality, i.e., very rich and very poor players emerge. We analyze several other phenomena, such as how a player's relationships with others influence its development, and the Gini index of the system. Joint work with Ruta Mehta and Noam Nisan.


Monday, 22nd January 2018 — Theory Seminar - Samson Zhou, Purdue University
On the Computational Complexity of Minimal Cumulative Cost Graph Pebbling

Abstract:

We consider the computational complexity of finding a legal black pebbling of a DAG G=(V,E) with minimum cumulative cost. A black pebbling is a sequence P_0, ..., P_t \subseteq V of sets of nodes which must satisfy the following properties: P_0 = \emptyset (we start off with no pebbles on G), sinks(G) \subseteq \bigcup_{j <= t} P_j (every sink node was pebbled at some point) and parents(P_{i+1} \setminus P_i) \subseteq P_i (we can only place a new pebble on a node v if all of v's parents had a pebble during the last round). The cumulative cost of a pebbling P_0, P_1, ..., P_t \subseteq V is cc(P) = |P_1| + ... + |P_t|. The cumulative pebbling cost is an especially important security metric for data-independent memory hard functions, an important primitive for password hashing. Thus, an efficient (approximation) algorithm would be an invaluable tool for the cryptanalysis of password hash functions, as it would provide an automated tool to establish tight bounds on the amortized space-time cost of computing the function. We show that such a tool is unlikely to exist in the most general case. In particular, we prove the following results. It is NP-hard to find a pebbling minimizing cumulative cost. The natural linear program relaxation for the problem has integrality gap \tilde{O}(n), where n is the number of nodes in G. We conjecture that the problem is hard to approximate. We show that a related problem, finding the minimum size subset S \subseteq V such that depth(G-S) <= d, is also NP-hard. In fact, under the Unique Games Conjecture there is no (2-\epsilon)-approximation algorithm. Joint work with Jeremiah Blocki.
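
The following is a minimal sketch, in Python, of the definitions above: a checker for whether a sequence of pebble sets is a legal black pebbling of a DAG, and its cumulative cost cc(P). The DAG encoding (a dict mapping each node to its set of parents) and the example graph are assumptions for illustration, not from the paper.

```python
def parents_of(dag, nodes):
    """Union of the parent sets of the given nodes (dag[v] = parents of v)."""
    return set().union(*(dag[v] for v in nodes))

def sinks(dag):
    """Nodes that are not a parent of any other node."""
    return set(dag) - set().union(*dag.values())

def is_legal_pebbling(dag, P):
    """P is the list P_1, ..., P_t of pebble sets; P_0 = empty set is implicit."""
    prev = set()
    for cur in P:
        # A pebble may be placed on v only if all of v's parents
        # carried a pebble in the previous round.
        if not parents_of(dag, cur - prev) <= prev:
            return False
        prev = cur
    # Every sink must be pebbled at some point.
    return sinks(dag) <= set().union(*P)

def cumulative_cost(P):
    """cc(P) = |P_1| + ... + |P_t|."""
    return sum(len(cur) for cur in P)

# Tiny example: the path a -> b -> c, whose only sink is c.
dag = {"a": set(), "b": {"a"}, "c": {"b"}}
P = [{"a"}, {"a", "b"}, {"b", "c"}]
assert is_legal_pebbling(dag, P)
assert cumulative_cost(P) == 5
```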


Thursday, 25th January 2018 — Theory Seminar - Daniel Lokshtanov, University of Bergen
Coping with NP-hardness

Abstract:

The vast majority of optimization problems arising in practical applications are NP-hard. Thus it is difficult to overstate the importance of understanding how to handle NP-hard problems algorithmically. When a problem is NP-hard, this means that (unless P=NP) there cannot exist an algorithm that solves all instances of the problem optimally using time polynomial in the size of the instance. However, this does not mean that algorithms stand powerless in the face of NP-hardness. Indeed, an NP-hard problem may still admit algorithms of the following kinds.
- An algorithm that solves all instances of the problem optimally in time O(1.001^n), or even O(2^sqrt(n)), where n is the size of the input instance. Such algorithms are studied in the field of exact exponential time algorithms, and may still be useful even for moderate size worst-case input instances.
- An algorithm that solves all instances of the problem within a factor 1.01 of optimality using time polynomial in the size of the instance. In many cases, being just 1% worse than the optimal solution is acceptable. Such algorithms are studied in the field of approximation algorithms.
- An algorithm that solves structurally simple instances of the problem optimally using time polynomial in the size of the instance. This is very useful if we know that the instances that we actually want to solve are structurally simple. Algorithms of this kind are studied in the fields of restricted input and parameterized algorithms.
- An algorithm that works in polynomial time and reduces possibly large yet structurally simple instances to equivalent small instances. This can be very useful provided that the output instance is so small that a solution can be found by brute force. The performance of such algorithms is studied in the field of kernelization.
In this talk I will give a sample of each of the above fields, viewed through the lens of my contributions.
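
To make the last paradigm concrete, here is a sketch of a textbook kernelization, the classical Buss kernel for Vertex Cover. It is an illustrative example, not taken from the talk: in polynomial time it either decides the instance (G, k) outright or shrinks it to an equivalent instance with at most k^2 edges, which can then be solved by brute force.

```python
def buss_kernel(edges, k):
    """Buss kernelization for Vertex Cover.
    edges: a set of frozensets {u, v}. Returns True/False if the instance is
    decided, or a pair (kernel_edges, k') with at most k'^2 <= k^2 edges."""
    edges = set(edges)
    changed = True
    while changed and k >= 0:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        # Rule: a vertex of degree > k must be in every vertex cover of size <= k,
        # so take it into the cover and delete its edges.
        high = next((v for v, d in degree.items() if d > k), None)
        if high is not None:
            edges = {e for e in edges if high not in e}
            k -= 1
            changed = True
    if k < 0:
        return False          # budget exhausted while edges remained uncovered
    if not edges:
        return True           # nothing left to cover
    if len(edges) > k * k:
        return False          # max degree <= k, so k vertices cover at most k^2 edges
    return edges, k           # kernel: at most k^2 edges, solvable by brute force
```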


Monday, 29th January 2018 — Negin Karisani
Homology and Some of Its Applications


Friday, 9th February 2018 — Theory Seminar - Karthik Chandrasekaran, UIUC
Hypergraph k-cut in randomized polynomial time

Abstract:

In the hypergraph k-cut problem, the input is a hypergraph and the goal is to find a smallest subset of hyperedges whose removal ensures that the remaining hypergraph has at least k connected components. When k is part of the input, the hypergraph k-cut problem is at least as hard as the densest k-subgraph problem (Chekuri and Li, 2015). When k is a constant, the graph k-cut problem is solvable efficiently (Goldschmidt and Hochbaum, 1994) while the complexity of the hypergraph k-cut problem is open. In this talk, I will present a randomized polynomial-time algorithm to solve the hypergraph k-cut problem. Based on joint work with Chao Xu and Xilin Yu.
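
To make the problem statement concrete, here is a brute-force sketch that is exponential in the number of hyperedges; it only illustrates the definition, whereas the talk presents a randomized polynomial-time algorithm. The hypergraph encoding (a vertex set plus a list of hyperedges given as sets) is an assumption of this sketch.

```python
from itertools import combinations

def num_components(vertices, hyperedges):
    """Count connected components of a hypergraph with union-find,
    merging all vertices inside each hyperedge."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for e in hyperedges:
        e = list(e)
        for u in e[1:]:
            parent[find(u)] = find(e[0])
    return len({find(v) for v in vertices})

def hypergraph_k_cut_bruteforce(vertices, hyperedges, k):
    """Smallest number of hyperedges whose removal leaves >= k components."""
    m = len(hyperedges)
    for size in range(m + 1):
        for removed in combinations(range(m), size):
            rest = [e for i, e in enumerate(hyperedges) if i not in removed]
            if num_components(vertices, rest) >= k:
                return size
    return None  # infeasible, e.g. k exceeds the number of vertices
```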


Monday, 12th February 2018 — Akash Kumar
Sum of Squares


Monday, 26th February 2018 — Minshen Zhu
Uniform Sampling through the Lovász Local Lemma (Authors: Heng Guo, Mark Jerrum and Jingcheng Liu)

Abstract:

We propose a new algorithmic framework, called “partial rejection sampling”, to draw samples exactly from a product distribution, conditioned on none of a number of bad events occurring. Our framework builds (perhaps surprising) new connections between the variable framework of the Lovász Local Lemma and some classical sampling algorithms such as the “cycle-popping” algorithm for rooted spanning trees by Wilson. Among other applications, we discover new algorithms to sample satisfying assignments of k-CNF formulas with bounded variable occurrences.
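
The following is a hedged sketch of the simplest (extremal) case of partial rejection sampling applied to CNF formulas: resample exactly the variables that occur in currently violated clauses and repeat until no clause is violated. The clause encoding is an assumption of this sketch, and the exactness of the output distribution relies on the conditions identified in the Guo-Jerrum-Liu paper; the code only illustrates the resampling loop.

```python
import random

def partial_rejection_sampling(n_vars, clauses, rng=random):
    """clauses: list of clauses, each a list of signed literals,
    e.g. [1, -3, 4] stands for (x1 OR NOT x3 OR x4).
    Returns a satisfying assignment as a dict {variable: bool}."""
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}

    def violated(clause):
        return not any(assign[abs(l)] == (l > 0) for l in clause)

    while True:
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign
        # Resample every variable occurring in some violated clause.
        for v in {abs(l) for c in bad for l in c}:
            assign[v] = rng.random() < 0.5
```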


Friday, 2nd March 2018 — Quantitative Methods Seminar - Ariel Procaccia, CMU
Extreme Democracy


Wednesday, 7th March 2018 — Theory Seminar - Jing Chen, Stony Brook University
Algorand: A Secure and Efficient Distributed Ledger

Abstract:

A distributed ledger is a tamperproof sequence of data that can be publicly accessed and augmented by everyone, without being maintained by a centralized party. Distributed ledgers stand to revolutionize the way a modern society operates. They can secure all kinds of traditional transactions, such as payments, asset transfers and titles, in the exact order in which the transactions occur; and enable totally new transactions, such as cryptocurrencies and smart contracts. They can remove intermediaries and usher in a new paradigm for trust. As currently implemented, however, distributed ledgers scale poorly and cannot achieve their enormous potential. In this work we propose Algorand, an alternative, secure and efficient distributed ledger. Algorand is permissionless and works in a highly asynchronous environment. Unlike prior implementations of distributed ledgers based on "proof of work," Algorand dispenses with "miners" and requires only a negligible amount of computation. Moreover, its transaction history "forks" only with negligible probability: that is, Algorand guarantees the finality of a transaction the moment the transaction enters the ledger. Joint work with Silvio Micali.


Monday, 19th March 2018 — Theory Seminar - Jason Hartline, Northwestern University
Data Science and Mechanism Design

Abstract:

Computer systems have become the primary mediator of social and economic interactions. A defining aspect of such systems is that the participants have preferences over system outcomes and will manipulate their behavior to obtain outcomes they prefer. Such manipulation interferes with data-driven methods for designing and testing system improvements. A standard approach to resolve this interference is to infer preferences from behavioral data and employ the inferred preferences to evaluate novel system designs. In this talk I will describe a method for estimating and comparing the performance of novel systems directly from behavioral data from the original system. This approach skips the step of estimating preferences and is more accurate. Estimation accuracy can be further improved by augmenting the original system; its accuracy then compares favorably with ideal controlled experiments, a.k.a., A/B testing, which are often infeasible. A motivating example will be the paradigmatic problem of designing an auction for the sale of advertisements on an Internet search engine.


Thursday, 22nd March 2018 — Distinguished Lecture - Eva Tardos, Cornell University
Learning and Efficiency of Outcomes in Games


Wednesday, 28th March 2018 — TCS+ Talk - Artur Czumaj, University of Warwick
Round Compression for Parallel Matching Algorithms


Monday, 2nd April 2018 — Theory Seminar - Parinaz Naghizadeh, Purdue University
Network Games with Applications in Cyber-security

Abstract:

The outcomes of many social and economic interactions are affected by an underlying network of connections among their agents. For instance, networks play a central role in the provision of public goods (e.g. cyber security), where they determine the externalities among agents, and consequently, the aggregate level of public good in the network (e.g. network security). In this talk, we first study the design of incentive mechanisms, and in particular cyber-insurance contracts, for improving the provision of security over a network of agents with interdependent security. We discuss the challenges and opportunities that arise due to the agents' interdependence, and show that knowledge of the network structure plays a key role in the design of incentive mechanisms in these environments. Motivated by this finding, we develop a unified framework for characterizing the role of the underlying graph structure in determining outcomes of strategic interactions over networks. To this end, we establish a connection between the Nash equilibrium outcomes of network games with non-linear (linear) best-response functions and variational inequality (linear complementarity) problems. Using these connections, we identify sufficient (and necessary) conditions for existence and uniqueness of Nash equilibria in these games. We further derive a connection between agents' efforts and their centralities in the network, and illustrate how this finding can provide guidelines for targeted interventions that improve a network's state of cyber-security.


Wednesday, 11th April 2018 — Theory Seminar - Young-San Lin, Purdue University
Stable Network Flow with Piecewise Linear Constraints

Abstract:

We consider a general stable flow problem in a directed and capacitated network, where each vertex has strict preference lists over its incoming and outgoing edges. A flow is stable if no group of vertices can mutually benefit by rerouting the flow along a path contained in the group. Motivated by applications in supply chain networks, we generalize the traditional Kirchhoff's law, which requires the outflow to equal the inflow, to a monotone piecewise linear relationship between the inflows and the outflows. We show the existence of a stable flow using Scarf's Lemma, and provide a polynomial time algorithm to find such a stable flow. This variant of stable flow possesses a partial ordering structure and can be computed efficiently by using dynamic trees. We also introduce the optimization version of this problem. If edges are associated with a cost, then finding a stable flow with minimum cost for the traditional problem is polynomial time solvable, while for the new variation, it is NP-complete. Joint work with Thanh Nguyen.
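
A small sketch of the generalized conservation constraint described in the abstract: at each vertex, the required outflow is a monotone piecewise linear function of the total inflow, with ordinary Kirchhoff conservation as the special case f(x) = x. The breakpoint encoding and the numbers below are assumptions for illustration, not from the paper.

```python
def outflow_required(inflow, breakpoints):
    """Evaluate a monotone piecewise linear function given as (x, y) breakpoints
    sorted by x, with linear interpolation between breakpoints and constant
    extension outside their range."""
    if inflow <= breakpoints[0][0]:
        return breakpoints[0][1]
    for (x0, y0), (x1, y1) in zip(breakpoints, breakpoints[1:]):
        if inflow <= x1:
            t = (inflow - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return breakpoints[-1][1]

# Example: a vertex that passes flow through one-for-one up to inflow 10,
# then caps its outflow at 10 (illustrative numbers only).
f = [(0, 0), (10, 10), (100, 10)]
assert outflow_required(5, f) == 5
assert outflow_required(50, f) == 10
```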


Wednesday, 25th April 2018 — Theory Seminar - Vladimir Braverman, JHU
Streaming Algorithms for Software Defined Networks, Cosmological Simulations, and Beyond.

Abstract:

In this talk we will give a gentle introduction to streaming and sketching algorithms and demonstrate their applicability in cosmological N-body simulations, network monitoring for Software Defined Networks (SDN), and other applications. For SDN, we will present the UnivMon (short for Universal Monitoring) framework that can simultaneously achieve both generality and high fidelity across a broad spectrum of monitoring tasks. UnivMon builds on and extends recent theoretical advances in universal sketching, and we will describe the connection of universal sketches to the concentration of measure and Milman’s theorem. For cosmological simulations, we will describe efficient algorithms for the identification of “halos,” which are concentrations of mass in the output of the simulations. Traditional in-memory methods for these tasks do not scale to the forbiddingly large datasets produced by modern simulations. Our experiments show that our tool can scale to datasets with up to 10^12 particles, while using less than an hour of running time on a single Nvidia GTX GPU. Our methods are based on streaming algorithms for heavy hitters, i.e., the most popular items in a stream, and we will discuss our recent and nearly optimal algorithms for L2 heavy hitters in insertion-only streams. If time permits we will discuss other applications in healthcare, machine learning and statistical inference.
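
For readers unfamiliar with the heavy hitters primitive mentioned above, here is the classical Misra-Gries frequent-items summary for insertion-only streams, as a minimal illustrative sketch. It gives an L1-style guarantee and is not the L2 heavy hitters algorithm from the talk.

```python
def misra_gries(stream, k):
    """One-pass summary using at most k-1 counters.
    For every item x, the returned count underestimates the true count
    by at most n/k, where n is the stream length, so every item occurring
    more than n/k times is guaranteed to appear in the summary."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # Decrement all counters; drop those that reach zero.
            for y in list(counters):
                counters[y] -= 1
                if counters[y] == 0:
                    del counters[y]
    return counters  # candidate heavy hitters with lower-bound counts

# 'a' occurs 5 > 11/3 times in this stream, so it must survive.
print(misra_gries("abracadabra", k=3))
```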