

# bounded independence plus noise fools space

There are many classes of functions on $n$ bits that we know are fooled by bounded independence, including small-depth circuits, halfspaces, etc. (See this previous post.)

On the other hand the simple parity function is not fooled. It’s easy to see that you require independence at least $n$. However, if you just *perturb the bits with a little noise $E$,* then parity will be fooled. You can find other examples of functions that are not fooled by bounded independence alone, but are if you just perturb the bits a little.
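As a concrete sanity check (a small Python sketch, with parameters of my choosing): the uniform distribution over even-parity strings is $(n-1)$-wise independent, yet parity is constant on it; XOR-ing with noise shrinks the bias of parity to $(1-2\delta)^n$.

```python
import itertools

n, delta = 8, 0.1

# D: uniform over even-parity strings -- an (n-1)-wise independent
# distribution on which parity is identically 0.
D = [x for x in itertools.product([0, 1], repeat=n) if sum(x) % 2 == 0]

def bias(samples):
    # Bias of parity: E[(-1)^{x_1+...+x_n}]; it is 0 under uniform bits.
    return sum((-1) ** (sum(x) % 2) for x in samples) / len(samples)

print(bias(D))  # 1.0: bounded independence alone fails to fool parity

# After XOR-ing each bit with independent noise (flip probability delta),
# the bias becomes (1 - 2*delta)^n times the old one, hence close to 0.
print((1 - 2 * delta) ** n)  # ~0.168 here; exponentially small in n
```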

In [3] we proved that any distribution with independence about $n^{1/2}$ fools space-bounded algorithms, if you perturb it with noise. We asked, both in the paper and in person to many people, if the independence could be lowered. Forbes and Kelley have recently proved [2] that the independence can be lowered all the way to $O(\log n)$, which is tight [1]. Shockingly, their proof is nearly identical to [3]!

This exciting result has several interesting consequences. First, we now have almost the same generators for space-bounded computation in a fixed order as we do for any order. Moreover, the proof greatly simplifies a number of works in the literature. And finally, an approach in [4] to prove limitations for the sum of small-bias generators won’t work for space (possibly justifying some optimism in the power of the sum of small-bias generators).

My understanding of all this area is inseparable from the collaboration I have had with Chin Ho Lee, with whom I co-authored all the papers I have on this topic.

### The proof

Let $f \colon \{0,1\}^n \to [-1,1]$ be a function. We want to show that it is fooled by $D + E$, where $D$ has independence $k$, $E$ is the noise vector of i.i.d. bits coming up $1$ with probability say $\delta = 1/100$, and $+$ is bit-wise XOR.
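A fact behind this setup is that noise damps Fourier characters: if $\chi_S(x) = (-1)^{\sum_{i \in S} x_i}$ and each bit of $E$ is $1$ independently with probability $\delta$, then $\mathbb{E}_E[\chi_S(x \oplus E)] = (1-2\delta)^{|S|} \chi_S(x)$. A brute-force numerical check (the set $S$ and the point $x$ are arbitrary choices of mine):

```python
import itertools

n, delta = 4, 0.1
S = (0, 2, 3)     # an arbitrary character set of degree |S| = 3
x = (1, 0, 1, 1)  # an arbitrary input point

def chi(y, S):
    # Fourier character chi_S(y) = (-1)^{sum of the bits of y indexed by S}
    return (-1) ** sum(y[i] for i in S)

def noise_avg(y):
    # E_E[chi_S(y xor E)], enumerating all noise vectors E, where each bit
    # of E is 1 with probability delta independently.
    total = 0.0
    for e in itertools.product([0, 1], repeat=n):
        p = 1.0
        for b in e:
            p *= delta if b else 1 - delta
        total += p * chi(tuple(y[i] ^ e[i] for i in range(n)), S)
    return total

# The noise damps the character by exactly (1 - 2*delta)^{|S|}.
print(noise_avg(x))                           # approximately -0.512
print((1 - 2 * delta) ** len(S) * chi(x, S))  # approximately -0.512
```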

The approach in [3] is to decompose $f$ as the sum of a function $L$ with Fourier degree at most $d$, and a sum of functions $H_i$ where each $H_i$ has no Fourier coefficient of degree less than $d$, and $L$ and the $H_i$ are bounded. The function $L$ is immediately fooled by $D$, since independence $k \ge d$ matches every moment of degree up to $d$, and it is shown in [3] that each $H_i$ is fooled as well, because the noise exponentially damps Fourier coefficients of degree at least $d$.

To explain the decomposition it is best to think of $f$ as the product of functions $f_1 f_2 \cdots f_t$ on $m$ bits each, on disjoint inputs. The decomposition in [3] is as follows: repeatedly decompose each $f_i$ into a low-degree part $\ell_i$ and a high-degree part $h_i$. To illustrate:

$$f = f_1 f_2 \cdots f_t = \ell_1 f_2 \cdots f_t + h_1 f_2 \cdots f_t = \ell_1 \ell_2 f_3 \cdots f_t + \ell_1 h_2 f_3 \cdots f_t + h_1 f_2 \cdots f_t = \cdots$$

At the end, $L = \ell_1 \ell_2 \cdots \ell_t$ and each $H_i = \ell_1 \cdots \ell_{i-1} h_i f_{i+1} \cdots f_t$.

This works, but the problem is that even if each $\ell_i$ has degree $d$, the product $\ell_1 \ell_2 \cdots$ increases the degree by at least $d$ per decomposition; and so with independence $k$ we can afford at most $k/d$ decompositions.
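To make the decomposition concrete, the following sketch (toy functions and thresholds of my choosing) verifies the two-block identity $f_1 f_2 = \ell_1 \ell_2 + h_1 f_2 + \ell_1 h_2$ pointwise:

```python
import itertools

m, d = 2, 1  # toy block size and degree threshold

pts = list(itertools.product([0, 1], repeat=m))
subsets = [S for r in range(m + 1) for S in itertools.combinations(range(m), r)]

def fourier(f):
    # Fourier coefficients on {0,1}^m: fhat[S] = E_x[f(x) * chi_S(x)].
    return {S: sum(f[x] * (-1) ** sum(x[i] for i in S) for x in pts) / len(pts)
            for S in subsets}

def evaluate(fhat, x):
    # Evaluate a function given by its Fourier coefficients.
    return sum(c * (-1) ** sum(x[i] for i in S) for S, c in fhat.items())

def split(f):
    # Split f into its degree-<d part (ell) and the high-degree rest (h).
    fhat = fourier(f)
    ell = {S: c for S, c in fhat.items() if len(S) < d}
    h = {S: c for S, c in fhat.items() if len(S) >= d}
    return ell, h

f1 = {x: (-1) ** (x[0] & x[1]) for x in pts}  # toy +-1 function on block 1
f2 = {x: (-1) ** (x[0] | x[1]) for x in pts}  # toy +-1 function on block 2
l1, h1 = split(f1)
l2, h2 = split(f2)

# Verify f1*f2 = l1*l2 + h1*f2 + l1*h2 at every pair of block inputs.
for x in pts:
    for y in pts:
        lhs = f1[x] * f2[y]
        rhs = (evaluate(l1, x) * evaluate(l2, y)
               + evaluate(h1, x) * f2[y]
               + evaluate(l1, x) * evaluate(h2, y))
        assert abs(lhs - rhs) < 1e-9
print("two-block decomposition verified")
```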

The decomposition in [2] is instead: pick $L$ to be the degree-$<d$ part of $f$, and let $H_i$ collect all the Fourier coefficients which are non-zero in the inputs to $f_i$ and whose degree in the inputs of $f_1, \ldots, f_{i-1}$ is less than $d$. The functions $H_i$ can be written as $g_i \cdot f_{i+1} \cdots f_t$, where $g_i$ is the high-degree part of $L_{i-1} f_i$ and $L_{i-1}$ is the degree-$<d$ part of $f_1 \cdots f_{i-1}$.
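One way to verify that such a telescoping decomposition is exact is the following sketch (my own notation, not necessarily that of [2]): set $L_0 = 1$, let $L_i$ be the degree-$<d$ part of $L_{i-1} f_i$, and let $H_i$ be the high-degree remainder of $L_{i-1} f_i$ times $f_{i+1} \cdots f_t$; then $f_1 \cdots f_t = L_t + \sum_i H_i$, and since each $L_i$ is re-truncated to degree $<d$, the degree no longer accumulates. The toy parameters and test functions below are mine:

```python
import itertools

n, m, t, d = 4, 2, 2, 2  # toy parameters: t blocks of m bits, threshold d

subsets = [S for r in range(n + 1) for S in itertools.combinations(range(n), r)]
pts = list(itertools.product([0, 1], repeat=n))

def chi(S, x):
    return (-1) ** sum(x[i] for i in S)

def fourier(f):
    # f: dict point -> value on {0,1}^n; return all 2^n Fourier coefficients.
    return {S: sum(f[x] * chi(S, x) for x in pts) / len(pts) for S in subsets}

def from_fourier(fhat):
    return {x: sum(c * chi(S, x) for S, c in fhat.items()) for x in pts}

def mult(f, g):
    return {x: f[x] * g[x] for x in pts}

def low_high(f):
    # Split f by total Fourier degree: degree < d versus degree >= d.
    fhat = fourier(f)
    low = from_fourier({S: c for S, c in fhat.items() if len(S) < d})
    return low, {x: f[x] - low[x] for x in pts}

# f_i depends only on block i (bits [i*m, (i+1)*m)); toy +-1 functions.
fs = [{x: (-1) ** (x[i * m] & x[i * m + 1]) for x in pts} for i in range(t)]

# Telescoping: L_0 = 1, L_i = Low(L_{i-1} * f_i),
# H_i = High(L_{i-1} * f_i) * f_{i+1} * ... * f_t.  The low-degree factor
# is re-truncated at every step, so its degree stays below d.
L = {x: 1.0 for x in pts}
Hs = []
for i in range(t):
    low, high = low_high(mult(L, fs[i]))
    tail = high
    for j in range(i + 1, t):
        tail = mult(tail, fs[j])
    Hs.append(tail)
    L = low

# Check exactness: f_1 * ... * f_t = L_t + sum_i H_i at every point.
prod = {x: 1.0 for x in pts}
for f in fs:
    prod = mult(prod, f)
for x in pts:
    assert abs(prod[x] - (L[x] + sum(H[x] for H in Hs))) < 1e-9
print("telescoping decomposition verified")
```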

Once you have this decomposition you can apply the same lemmas in [3] to get improved bounds. To handle space-bounded computation they extend this argument to matrix-valued functions.

### What’s next

In [3] we asked for tight “bounded independence plus noise” results for any model, and the question remains. In particular, what about high-degree polynomials modulo $2$?

### References

[1] Ravi Boppana, Johan Håstad, Chin Ho Lee, and Emanuele Viola. Bounded independence vs. moduli. In Workshop on Randomization and Computation (RANDOM), 2016.

[2] Michael A. Forbes and Zander Kelley. Pseudorandom generators for read-once branching programs, in any order. In IEEE Symp. on Foundations of Computer Science (FOCS), 2018.

[3] Elad Haramaty, Chin Ho Lee, and Emanuele Viola. Bounded independence plus noise fools products. SIAM J. on Computing, 47(2):295–615, 2018.

[4] Chin Ho Lee and Emanuele Viola. Some limitations of the sum of small-bias distributions. Theory of Computing, 13, 2017.