Restricted models


Map 1



Map 2


To understand Life, what should you study?

a. People’s dreams.

b. The AMPK gene of the fruit fly.

Studying restricted computational models corresponds to b. Just as microbes constitute a wealth of open problems whose solutions are sometimes far-reaching, so do restricted computational models present a number of challenges whose study is significant. For one example, Valiant’s study of arithmetic lower bounds boosted the study of superconcentrators, an influential type of graph closely related to expanders.

The maps above, taken from here, include a number of challenges together with their relationships. Arrows go towards special cases (which are presumably easier). As written in the manuscript, my main aim was to put these challenges in perspective, and to present some connections which do not seem widely known. Indeed, one specific reason why I drew the first map was the realization that an open problem that I spent some time working on can actually be solved immediately by combining known results. The problem was to show that multiparty (number-on-forehead) communication lower bounds imply correlation bounds for polynomials over GF(2). The classic work by Håstad and Goldmann does show that k-party protocols can simulate polynomials of degree k-1, so correlation bounds for k-party protocols immediately imply the same bounds for polynomials of degree k-1. But what I wanted was a connection with worst-case communication lower bounds, to show that correlation bounds for polynomials (survey) are a prerequisite even for that.

As it turns out, and as the arrows from (1.5) to (1.2) in the first map show, this is indeed true when k is polylogarithmic. So, if you have been trying to prove multiparty lower bounds for polylogarithmic k, you may want to try correlation bounds first. (This connection is for proving correlation bounds under some distribution, not necessarily uniform.)

Another reason why I drew the first map was to highlight a certain type of correlation bound (1.3), discussed in this paper with Razborov. It is a favorite example of mine of a seemingly very basic open problem that is, again, a roadblock for much of what we’d like to know. The problem is to obtain correlation bounds against polynomials that are real valued, with the convention that whenever the polynomial does not output a boolean value we count that as a mistake, thus making the goal of proving a lower bound presumably easier. Amazingly, the following is still open:

Prove that the correlation of the parity function on n bits is at most 1/n with any real polynomial of degree log(n).

To be precise, correlation is defined as the probability that the polynomial correctly computes parity, minus 1/2. For example, the constant polynomial 1 has correlation zero with parity — it gets it right half the time. The polynomial x1+x2+…+xn does a lot worse: it has negative correlation with parity, or in fact with any boolean function, simply because its output is unlikely to lie in {0,1}.
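This convention is easy to check by brute force for small n. A minimal sketch (the helper name and the example polynomials are mine, not from the paper), counting any non-boolean output as a mistake:

```python
from itertools import product

def correlation_with_parity(poly, n):
    """Pr[poly(x) == parity(x)] - 1/2 over uniform x in {0,1}^n,
    where any non-boolean output of poly counts as a mistake."""
    correct = 0
    for x in product((0, 1), repeat=n):
        parity = sum(x) % 2
        out = poly(x)
        if out in (0, 1) and out == parity:
            correct += 1
    return correct / 2**n - 0.5

n = 4
print(correlation_with_parity(lambda x: 1, n))       # 0.0: right half the time
print(correlation_with_parity(lambda x: sum(x), n))  # -0.1875: rarely in {0,1}
```

The second polynomial is only boolean-valued on inputs of weight 0 or 1, which is what drives its correlation below zero.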

What we do in the paper, in my opinion, is to begin to formalize the intuition that these polynomials cannot do much. We show that the correlation with parity is zero (not very small, but actually zero) as long as the polynomial has degree 0.001 loglog(n). This is different from the more familiar models of polynomials modulo m or sign polynomials, because those can achieve non-zero correlation even with constant degree.

On the other hand, with a simple construction, we can obtain non-zero correlation with polynomials of degree O(sqrt(n)). Note the huge gap with the 0.001 loglog(n) lower bound.

Question: what is the largest degree for which the correlation is zero?

The second map gives another slice of open problems. It highlights how superlinear-length lower bounds for branching programs are necessary for several notorious circuit lower bounds.

A third map was scheduled to include Valiant’s long-standing rigidity question and algebraic lower bounds. In the end it was dropped because it required a lot of definitions while I knew of very few arrows. But one problem that was meant to be there is a special case of the rigidity question from this work with Servedio. The question is basically a variant of the above question of real polynomials, where instead of considering low-degree polynomials we consider sparse polynomials. What may not be immediately evident, although in hindsight it is technically immediate, is that this problem is indeed a special case of the rigidity question. The question is to improve on the rigidity bounds in this special case.

In the paper we prove some variant that does not seem to be known in the rigidity world, but what I want to focus on right now is an application that such bounds would have, if established for the Inner Product function modulo 2 (IP). They would imply that IP cannot be computed by polynomial-size AC0-Parity circuits, i.e., AC0 circuits which take as input a layer of parity gates that’s connected to the input. It seems ridiculous that IP can be computed by such circuits, of course. It is easy to handle Or-And-Parity circuits, but circuits of higher depth have resisted attacks.

The question was reasked by Akavia, Bogdanov, Guo, Kamath, and Rosen.

Cheraghchi, Grigorescu, Juba, Wimmer, and Xie have just obtained some lower bounds for this problem. For And-Or-And-Parity circuits they obtain almost quadratic lower bounds; the bounds degrade for larger depth but stay polynomial. Their proof of the quadratic lower bound looks nice to me. Their first moves are relatively standard: first they reduce to an approximation question for Or-And-Parity circuits; then they fix half the variables of IP so that IP becomes a parity that is “far” from the parities that are input to the DNF. The more interesting step of the argument, in my opinion, comes at this point. They consider the random variable N that counts the number of And-Parity gates that evaluate to one, and they observe that several moments of this variable are the same whether the parity that comes from IP is zero or one. From this, they use approximation theory to argue about the probability that N will be zero in the two cases. They get that these probabilities are also quite close, as long as the circuit is not too large, which shows that the circuit is not correctly computing IP.

Is Nature a low-complexity sampler?

“It is often said that we live in a computational universe. But if Nature “computes” in a classical, input-output fashion then our current prospect to leverage this viewpoint to gain fundamental insights may be scarce. This is due to the combination of two facts. First, our current understanding of fundamental questions such as “P=NP?” is limited to restricted computational models, for example the class AC0 of bounded-depth circuits. Second, those restricted models are incapable of modeling many processes which appear to be present in nature. For example, a series of works in complexity theory culminating in [Hås87] shows that AC0 cannot count.

But what if Nature, instead, “samples?” That is, what if Nature is better understood as a computational device that given some initial source of randomness, samples the observed distribution of the universe? Recent work by the Project Investigator (PI) gives two key insights in this direction. First, the PI has highlighted that, when it comes to sampling, restricted models are capable of surprising behavior. For example, AC0 can count, in the sense that it can sample a uniform bit string together with its Hamming weight [Vio12a]. Second, despite the growth in power given by sampling, for these restricted models the PI was still able to answer fundamental questions of the type of “P=NP?” [Vio14]”

Thus begins my application for the Turing Centenary Research Fellowship. After reading it, perhaps you too, like me, are not surprised that it was declined. But I was unprepared for the strange emails that accompanied its rejection. Here’s an excerpt:

“[…] A reviewing process can be thought of as a kind of Turing Test for fundability. There is a built-in fallibility; and just as there is as yet no intelligent machine or effective algorithm for recognising one (otherwise why would we bother with a Turing Test), there is no algorithm for either writing the perfect proposal, or for recognising the worth of one.

Of course, the feedback may well be useful, and will come. But we will be grateful for your understanding in the meantime.”

Well, I am still waiting for comments.

Even the rejection was sluggish: for months I and apparently others were told that our proposal didn’t make it, but was so good that they were looking for extra money to fund it anyway. After the money didn’t materialize, I was invited to try the regular call (of the sponsoring foundation). The first step of this was submitting a preliminary proposal, which I took: I re-sent them the abstract of my proposal. I was then invited to submit the full proposal. This is a rather painstaking process which requires you to address a seemingly endless series of minute questions referring to mysterious concepts such as the “Theory of Change.” Nevertheless, given that they had suggested I try the regular call, they had seen what I was planning to submit, and they had still invited me for the full proposal, I did answer all the questions and re-sent them what they already had, my Turing Research Fellowship application. Perhaps it only makes sense that the outcome was as it was.

The proposal was part of a research direction which started exactly five years ago, when the question was raised of proving computational lower bounds for sampling. Since then, there has been progress: [Vio12a, LV12, DW11, Vio14, Vio12b, BIL12, BCS14]. One thing I like about this area is that it is uncharted – wherever you point your finger chances are you find an open problem. While this is true for much of Complexity Theory, questions regarding sampling haven’t been studied nearly as intensely. Here are three:

A most basic open question. Let D be the distribution on n-bit strings where each bit is independently 1 with probability 1/4. Now suppose you want to sample D given some random bits x1, x2, …. You can easily sample D exactly with the map

(x1 ∧ x2, x3 ∧ x4, …, x2n−1 ∧ x2n).

This map is 2-local, i.e., each output bit depends on at most 2 input bits. However, we use 2n input bits, whereas the entropy of the distribution is H(1/4)n ≈ 0.81n. Can you show that any 2-local map using a number of bits closer to H(1/4)n will sample a distribution that is very far from D? Ideally, we want to show that the statistical distance between the distributions is very high, exponentially close to 1.
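For intuition, here is a small brute-force check (all names mine) that the 2-local AND map above samples D exactly: enumerate every seed, tabulate the output distribution, and compute its statistical distance to D.

```python
from itertools import product
from collections import Counter

def local_sampler(x):
    """The 2-local map (x1 ∧ x2, x3 ∧ x4, ..., x_{2n-1} ∧ x_{2n})."""
    return tuple(x[2*i] & x[2*i + 1] for i in range(len(x) // 2))

def statistical_distance(n):
    # Exact output distribution of the sampler over all 2n-bit seeds.
    counts = Counter(local_sampler(x) for x in product((0, 1), repeat=2*n))
    total = 2 ** (2 * n)
    # Target D: each bit independently 1 with probability 1/4.
    sd = 0.0
    for y in product((0, 1), repeat=n):
        target = (1/4) ** sum(y) * (3/4) ** (n - sum(y))
        sd += abs(counts.get(y, 0) / total - target)
    return sd / 2

print(statistical_distance(3))  # 0.0: the map samples D exactly
```

The open question is what happens when the seed length drops toward H(1/4)n; this snippet only verifies the easy direction with 2n seed bits.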

Such strong statistical distance bounds also enable a connection to lower bounds for succinct dictionaries, a problem that Pătrașcu thinks important. A result for d-local maps corresponds to a result for data structures which answer membership queries with d non-adaptive bit probes. Adaptive bit probes correspond to decision trees, while d cell probes correspond to samplers whose input is divided in blocks of O(log n) bits and each output bit depends on d cells, adaptively.

There are some results in [Vio12a] on a variant of the above question where you need to sample strings whose Hamming weight is exactly n/4, but even there, there are large gaps in our knowledge. And I think the above case of 2-local maps is still open, even though it really looks like you cannot do anything unless you use 2n random bits.

Stretch. With Lovett we suggested [LV12] to prove negative results for sampling (the uniform distribution over a) subset S ⊆{0, 1}n by bounding from below the stretch of any map

f : {0, 1}r → S.

Stretch can be measured as the average Hamming distance between f(x) and f(y), where x and y are two uniform input strings at Hamming distance 1. If you prove a good lower bound on this quantity then some complexity lower bounds for f follow because local maps, AC0 maps, etc. have low stretch.
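To make the quantity concrete, here is a small estimator of average stretch (the estimator and the example maps are my own, not from [LV12]):

```python
import random

def hamming(a, b):
    return sum(ai != bi for ai, bi in zip(a, b))

def avg_stretch(f, r, trials=20000, seed=0):
    """Estimate E[ d(f(x), f(y)) ] over a uniform x in {0,1}^r and
    y = x with one uniformly chosen bit flipped."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(r)]
        y = list(x)
        y[rng.randrange(r)] ^= 1
        total += hamming(f(x), f(y))
    return total / trials

# The identity map has average stretch exactly 1; the prefix-XOR map
# propagates a flipped bit to every later output, so it stretches far more.
print(avg_stretch(lambda x: x, 8))  # exactly 1.0
prefix_xor = lambda x: [sum(x[:i+1]) % 2 for i in range(len(x))]
print(avg_stretch(prefix_xor, 8))   # about (r+1)/2 = 4.5
```

Low-stretch lower bounds rule out samplers like the first kind; maps like the second are exactly what the method penalizes.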

We were able to apply this to prove that AC0 cannot sample good codes. Our bounds are only polynomially close to 1; but a nice follow-up by Beck, Impagliazzo, and Lovett, [BIL12], improves this to exponential. But can this method be applied to other sets that do not have error-correcting structure?

Consider in particular the distribution UP which is uniform over the upper-half of the hypercube, i.e., uniform over the n-bit strings whose majority is 1. What stretch is required to sample UP? At first sight, it seems the stretch must be quite high.

But a recent paper by Benjamini, Cohen, and Shinkar, [BCS14], shows that in fact it is possible with stretch 5. Moreover, the sampler has zero error, and uses the minimum possible number of input bits: n − 1!

I find their result quite surprising in light of the fact that constant-locality samplers cannot do the job: their output distribution has Ω(1) statistical distance from UP [Vio12a]. But local samplers looked very similar to low-stretch ones. Indeed, it is not hard to see that a local sampler has low average stretch, and the reverse direction follows from Friedgut’s theorem. However, the connections are only average-case. It is pretty cool that the picture changes completely when you go to worst-case computation.

What else can you sample with constant stretch?

AC0 vs. UP. Their results are also interesting in light of the fact that AC0 can sample UP with exponentially small error. This follows from a simple adaptation of the dart-throwing technique for parallel algorithms, known since the early 90’s [MV91, Hag91] – the details are in [Vio12a]. However, unlike their low-stretch map, this AC0 sampler uses superlinear randomness and has a non-zero probability of error.

Can AC0 sample UP with no error? Can AC0 sample UP using O(n) random bits?

Let’s see what the next five years bring.


[BCS14]   Itai Benjamini, Gil Cohen, and Igor Shinkar. Bi-Lipschitz bijection between the Boolean cube and the Hamming ball. In IEEE Symp. on Foundations of Computer Science (FOCS), 2014.

[BIL12]    Chris Beck, Russell Impagliazzo, and Shachar Lovett. Large deviation bounds for decision trees and sampling lower bounds for AC0-circuits. In IEEE Symp. on Foundations of Computer Science (FOCS), pages 101–110, 2012.

[DW11]    Anindya De and Thomas Watson. Extractors and lower bounds for locally samplable sources. In Workshop on Randomization and Computation (RANDOM), 2011.

[Hag91]    Torben Hagerup. Fast parallel generation of random permutations. In 18th Coll. on Automata, Languages and Programming (ICALP), pages 405–416. Springer, 1991.

[Hås87]    Johan Håstad. Computational limitations of small-depth circuits. MIT Press, 1987.

[LV12]    Shachar Lovett and Emanuele Viola. Bounded-depth circuits cannot sample good codes. Computational Complexity, 21(2):245–266, 2012.

[MV91]    Yossi Matias and Uzi Vishkin. Converting high probability into nearly-constant time, with applications to parallel hashing. In 23rd ACM Symp. on the Theory of Computing (STOC), pages 307–316, 1991.

[Vio12a]   Emanuele Viola. The complexity of distributions. SIAM J. on Computing, 41(1):191–218, 2012.

[Vio12b]   Emanuele Viola. Extractors for Turing-machine sources. In Workshop on Randomization and Computation (RANDOM), 2012.

[Vio14]    Emanuele Viola. Extractors for circuit sources. SIAM J. on Computing, 43(2):655–672, 2014.

The sandwich revolution: behind the paper

Louay Bazzi’s breakthrough paper “Polylogarithmic Independence Can Fool DNF Formulas” (2007) introduced the technique of sandwiching polynomials which is used in many subsequent works.  While some of these are about constant-depth circuits, for example those referenced in Bazzi’s text below, sandwiching polynomials have also been used to obtain results about sign polynomials, including for example a central limit theorem for k-wise independent random variables.  Rather than attempting an exhaustive list, I hope that the readers who are familiar with a paper using sandwiching polynomials can add a reference in the comments.

I view the technique of sandwiching polynomials as a good example of something simple — it follows immediately from LP duality — which is also extremely useful.
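Concretely, the easy direction of that duality fits in a few lines (notation mine): if degree-k polynomials sandwich f with small error under the uniform distribution U, then every k-wise independent distribution D fools f.

```latex
\[
p_\ell \le f \le p_u \ \text{pointwise}, \qquad
\deg p_\ell,\ \deg p_u \le k, \qquad
\mathbb{E}_U[p_u - f] \le \epsilon, \quad \mathbb{E}_U[f - p_\ell] \le \epsilon
\]
\[
\Longrightarrow \qquad
\bigl|\,\mathbb{E}_D[f] - \mathbb{E}_U[f]\,\bigr| \le \epsilon
\quad \text{for every $k$-wise independent $D$},
\]
\[
\text{since } \mathbb{E}_D[p] = \mathbb{E}_U[p]
\text{ for every polynomial } p \text{ of degree at most } k:
\quad \mathbb{E}_D[f] \le \mathbb{E}_D[p_u] = \mathbb{E}_U[p_u] \le \mathbb{E}_U[f] + \epsilon.
\]
```

The converse — that such sandwiching polynomials must exist whenever every k-wise independent distribution ε-fools f — is exactly the step that follows from LP duality.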

Bazzi has kindly provided  the following text for the second post of the series behind the paper.



Originally, my objective was to show that the quadratic residues PRG introduced by Alon, Goldreich, Håstad, and Peralta in 1992 looks random to something more powerful than parity functions, such as DNF formulas or small-width branching programs. My motivation was that the distribution of quadratic residues promises great derandomization capabilities. I was hoping to be able to use tools from number theory to achieve this goal. I worked on this problem for some time, but all the approaches I tried didn’t use more than what boils down to the small-bias property. In the beginning, my goal was to go beyond this property, but eventually I started to question how far we can go with this property alone in the context of DNF formulas. I turned to investigating the question of whether small-bias spaces fool DNF formulas, which led me to the dual question of whether one can construct sandwiching polynomials with low L1-norm in the Fourier domain for DNF formulas. I was not able to use the high frequencies in the Fourier spectrum in the context of DNF formulas. Thus I dropped the low L1-norm requirement and focused on the simpler special case of low-degree polynomials, which is equivalent to trying to show that limited independence fools DNF formulas. The approaches I tried in the beginning were based on lifting the k-wise independent probability distribution to the clauses and trying to reduce the problem to an LP with moment constraints. I started to believe that this approach wouldn’t work because I was ignoring the values of the moments which are specific to DNF formulas. While trying to understand the limitations of this approach and researching the related literature, I came across the 1990 paper of Linial and Nisan on approximate inclusion-exclusion, which excludes the approach I was having trouble with and conjectures the correctness of what I was trying to prove.
The attempts I tried later were all based on an L2-approximation of the formula by low-degree polynomials, subject to the constraint that the polynomial is zero on all the zeros of the DNF formula. The difficulty was in the zeros constraint, which was needed to construct the sandwiching polynomials. Without the zeros constraint, the conjecture would follow from the Linial-Mansour-Nisan energy bound. I was not hoping that the LMN energy bound could be applied to the problem I was working on, since one can construct boolean functions which satisfy the LMN bound but violate the claim I was after. I was trying to construct the sandwiching polynomials by other methods …
Eventually, I was able to derive many DNF formulas from the original formula and apply the LMN energy bound to each of those formulas to prove the conjecture. Later on, the proof was simplified by Razborov and extended by Braverman to AC0.

Local reductions: Behind the paper

In the spirit of Reingold’s research-life stories, the series “behind the paper” collects snapshots of the generation of papers. For example, did you spend months proving an exciting bound, only to discover it was already known? Or what was the key insight which made everything fit together? Records of this baffling process are typically expunged from research publications. This is a place for them. The posts will have a technical component.




The classical Cook-Levin reduction of non-deterministic time to 3SAT can be optimized along two important axes.

Axis 1: The size of the 3SAT instance. The tableau proof reduces time T to a 3SAT instance of size O(T^2), but this was improved to a quasilinear T · polylog(T) in the 70s and 80s, notably using the oblivious simulation by Pippenger and Fischer, and the non-deterministic, quasilinear equivalence between random-access and sequential machines by Gurevich and Shelah.

Axis 2: The complexity of computing the 3SAT instance. If you just want to write down the 3SAT instance in time poly(T), the complexity of doing so is almost trivial.  The vast majority of the clauses are fixed once you fix the algorithm and its running time, while each of the rest depends on a constant number of bits of the input to the algorithm.

However, things get more tricky if we want our 3SAT instance to enjoy what I’ll call clause-explicitness. Given an index i to a clause, it should be possible to compute that clause very efficiently, say in time polynomial in |i| = O(log T), which is logarithmic in the size of the formula. Still, it is another classical result that it is indeed possible to do so, yielding for example the NEXP completeness of succinct-3SAT (where your input is a circuit describing a 3SAT instance). More uses of clause explicitness can be found in a 2009 paper by Arora, Steurer, and Wigderson, where they show that interesting graph problems remain hard even on exponential-size graphs that are described by poly-size AC0 circuits.
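As a toy illustration of clause-explicitness (this example family is mine, and far simpler than the formulas the actual reduction produces), consider the 2-literal clauses asserting x1 = x2 = … = xn. Any single clause can be produced from its index alone in time polynomial in |i|, without ever writing the whole formula down:

```python
def clause(i, n):
    """Toy clause-explicit family: the 2(n-1) clauses asserting
    x_1 = x_2 = ... = x_n.  Given only the index i, return the i-th
    clause in time polynomial in |i| = O(log n); literals are signed
    variable numbers, DIMACS-style."""
    assert 0 <= i < 2 * (n - 1)
    j, sign = divmod(i, 2)       # which consecutive pair, which implication
    v, w = j + 1, j + 2          # variables x_{j+1} and x_{j+2}
    return (v, -w) if sign == 0 else (-v, w)

# Any clause is available on demand from its index alone.
print([clause(i, 4) for i in range(6)])
# [(1, -2), (-1, 2), (2, -3), (-2, 3), (3, -4), (-3, 4)]
```

The succinct-3SAT setting replaces this hand-written function by a small circuit computing it, with the formula size exponential in the circuit's input length.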


I got more interested in the efficiency of reductions after Williams’ paper Improving exhaustive search implies superpolynomial lower bounds, because the inefficiency of available reductions was a bottleneck to the applicability of his connection to low-level circuit classes. Specifically, for a lower bound against a circuit class C, one needed a reduction to 3SAT that both has quasilinear blowup and is C-clause-explicit: computing the ith clause had to be done by a circuit from the class C on input i. For one thing, since previous reductions were at best NC1-clause-explicit, the technique wouldn’t apply to constant-depth classes.

I had some ideas how to obtain an AC0-clause-explicit reduction, when Williams’ sequel came out. This work did not employ more efficient reductions; instead it used the classical polynomial-size-clause-explicit reduction as a black box, together with an additional argument to more or less convert it to a constant-depth-clause-explicit one. This made my preliminary ideas a bit useless, since there was a bypass. However disappointing, a lot worse was to come.

I was then distracted by other things, but eventually returned to the topic. I still found it an interesting question whether a very clause-explicit reduction could be devised. First, it would remove Williams’ bypass, resulting in a possibly more direct proof. Second, the inefficiency of the reduction was still a bottleneck to obtaining further lower bounds (more on this later).

The first step for me was to gain a deeper understanding of the classical quasilinear-size reduction — ignoring clause explicitness — so I ran a mini-polymath project in a Ph.D. class at NEU. The result is this survey, which presents a proof using sorting networks that may be conceptually simpler than the one based on Pippenger and Fischer’s oblivious simulation. The idea to use sorting is from the paper by Gurevich and Shelah, but if you follow the reductions without thinking you will make the sorting oblivious using the general, complicated simulation. About one hour after posting the fruit of months of collaborative work on ECCC, we were notified that this is Dieter van Melkebeek’s proof from Section 2.3 in his survey, and that this is the way he has been teaching it for over a decade. This was a harder blow, yet worse was to come.

On the positive side, I am happy I have been exposed to this proof, which is strangely little-known.  Now I never miss an opportunity to teach my students



 To try to stay positive I’ll add that our survey has reason to exist, perhaps, because it proves some technicalities that I cannot find elsewhere, and for completeness covers the required sorting network which has disappeared from standard algorithms textbooks.

Armed with this understanding, we went back to our original aim, and managed to show that reductions can be made constant-locality clause-explicit: each bit of the ith clause depends only on a constant number of bits of the index i. Note with constant locality you can’t even add 1 to the input in binary. This is a joint work with two NEU students: Hamid Jahanjou and Eric Miles. Eric will start a postdoc at UCLA in September.


The proof

Our first and natural attempt involved showing that the sorting network has the required level of explicitness, since that network is one of the things encoded in the SAT instance. We could make this network pretty explicit (in particular, DNF-clause-explicit). Kowalski and Van Melkebeek independently obtained similar results, leading to an AC0-clause-explicit reduction.

But we could not get constant locality, no matter how hard we dug in the bottomless pit of different sorting algorithms… on the bright side, when I gave the talk at Stanford and someone whom I hadn’t recognized asked “why can’t you just use the sorting algorithm in my thesis?” I knew immediately who this person was and what he was referring to.  Can you guess?

Then a conversation with Ben-Sasson made us realize that sorting was an overkill, and that we should instead switch to switching networks, as has long been done in the PCP literature, starting, to my knowledge, with the ’94 work of Polishchuk and Spielman. Both sorting and switching networks are made of nodes that take two inputs and output either the same two, or the two swapped. But whereas in sorting networks the node is a deterministic comparator, in switching networks there is an extra switch bit to select whether you should swap or not. Thanks to this relaxation the networks can be very simple. So this is the type of network that appears in our work.
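The distinction between the two node types is tiny but consequential; a minimal sketch (function names mine):

```python
def comparator(a, b):
    """Sorting-network node: deterministically outputs (min, max)."""
    return (a, b) if a <= b else (b, a)

def switch(a, b, s):
    """Switching-network node: an extra control bit s decides the swap."""
    return (b, a) if s else (a, b)

# A comparator's behavior is fixed by its data inputs; a switch is routed
# by the external bit s, which is the relaxation that makes switching
# networks so much simpler than sorting networks.
print(comparator(1, 0))   # (0, 1)
print(switch(1, 0, 0))    # (1, 0)
print(switch(1, 0, 1))    # (0, 1)
```

In the reduction, the switch bits become existentially quantified variables of the SAT instance, so the network never has to decide the routing itself.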

Sorting isn’t all that there is to it.  One more thing is that any log-space uniform circuit can be made constant-locality uniform, in the sense that given an index to a gate you can compute its children by a map where each output bit depends on a constant number of input bits.  The techniques to achieve this are similar to those used in various equivalences between uniformity conditions established by Ruzzo in the 1979-1981 paper On Uniform Circuit Complexity, which does not seem to be online.  Ruzzo’s goal probably was not constant locality, so that is not established in his paper.  This requires some more work; for one thing, with constant locality you can’t check if your input is a valid index to a gate or a junk string, so you have to deal with that.

Of course, in the 3rd millennium we should not reduce merely to SAT, but to GAP-SAT. In a more recent paper with Ben-Sasson we gave a variant of the BGHSV PCP reduction where each query is just a projection of the input index (and the post-process is a 3CNF). Along the way we also get a reduction to 3SAT that is not constant-locality clause-explicit, but after you fix few bits it becomes locality-1 clause-explicit.  In general, it is still an open problem to determine the minimum amount of locality, and it is not even clear to me how to rule out locality 1.


One thing that this line of work led to is the following. Let the complexity of 3SAT be c^n. The current (deterministic) record is

c < 1.34…

We obtain that if

c < 1.10…

then you get some circuit lower bounds that, however modest, we don’t know how to prove otherwise.