In a historic effort, the residents of Newton, MA collected 6,000+ signatures in a very short time; thanks to them, a forthcoming ballot will include a question on banning recreational marijuana sales in the city. (For background, see the previous post and comments.)
There are many classes of functions on bits that we know are fooled by bounded independence, including small-depth circuits, halfspaces, etc. (See this previous post.)
On the other hand, the simple parity function is not fooled: it is easy to see that fooling the parity of n bits requires independence at least n. However, if you perturb the bits with a little noise, then parity will be fooled. One can find other examples of functions that are not fooled by bounded independence alone, but are fooled if you perturb the bits a little.
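To make the parity example concrete, here is a small Python sketch (the dimension n and noise rate delta below are illustrative choices, not values from the post): the uniform distribution over even-parity strings is (n-1)-wise independent, yet its parity is constant, and XOR-ing with i.i.d. noise bits drives the parity bias down to roughly (1 - 2*delta)**n.

```python
import itertools
import random

n = 8         # number of bits (illustrative)
delta = 0.25  # noise rate (illustrative)

# D: uniform over even-parity strings. This distribution is
# (n-1)-wise independent, but parity(D) is identically 0.
D = [x for x in itertools.product([0, 1], repeat=n) if sum(x) % 2 == 0]

def parity_bias(samples):
    # bias = E[(-1)^parity]; equals 0 under the uniform distribution
    return sum((-1) ** (sum(x) % 2) for x in samples) / len(samples)

print(parity_bias(D))  # 1.0 -- parity is not fooled at all

# Perturb each bit independently with probability delta.
random.seed(0)
noisy = [tuple(b ^ (random.random() < delta) for b in x)
         for x in D for _ in range(20)]
print(parity_bias(noisy))  # close to 0: roughly (1 - 2*delta)**n
```

With delta = 0.25 the residual bias is about 0.5**8 ≈ 0.004, so the empirical estimate is near the uniform value 0, matching the claim that a little noise makes parity fooled.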
In  we proved that any distribution with independence about  fools space-bounded algorithms, if you perturb it with noise. We asked, both in the paper and of many people, whether the independence could be lowered. Forbes and Kelley have recently proved  that the independence can be lowered all the way to , which is tight. Shockingly, their proof is nearly identical to !
This exciting result has several interesting consequences. First, we now have almost the same generators for space-bounded computation in a fixed order as we do for any order. Moreover, the proof greatly simplifies a number of works in the literature. And finally, an approach in  to prove limitations for the sum of small-bias generators won’t work for space (possibly justifying some optimism in the power of the sum of small-bias generators).
My understanding of this whole area is inseparable from the collaboration I have had with Chin Ho Lee, with whom I co-authored all the papers I have on this topic.
Let be a function. We want to show that it is fooled by , where has independence , is the noise vector of i.i.d. bits coming up with probability say , and is bit-wise XOR.
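Since the inline formulas did not survive in this copy, here is one standard way to write the goal; the symbols k (for the independence), delta (for the noise rate), and epsilon (for the error) are my placeholders, not values from the post:

```latex
% f \colon \{0,1\}^n \to \{-1,1\} is the function to fool,
% D is a k-wise independent distribution over \{0,1\}^n,
% N has i.i.d. coordinates with \Pr[N_i = 1] = \delta,
% U is uniform over \{0,1\}^n, and \oplus is bit-wise XOR.
\[
  \Bigl|\, \mathbb{E}\bigl[f(D \oplus N)\bigr] - \mathbb{E}\bigl[f(U)\bigr] \,\Bigr| \le \varepsilon .
\]
```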
The approach in  is to decompose as the sum of a function with Fourier degree , and a sum of functions where has no Fourier coefficient of degree less than , and and are bounded. The function is immediately fooled by , and it is shown in  that each is fooled as well.
To explain the decomposition, it is best to think of  as the product of  functions on  bits, on disjoint inputs. The decomposition in  is as follows: repeatedly decompose each  into its low-degree part  and high-degree part . To illustrate:
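The low-degree/high-degree split can be sketched in Python. Everything below (the toy function f, the input length n, the cutoff d) is illustrative, but the identity f = L + H, with L collecting exactly the Fourier coefficients of degree at most d, is the kind of decomposition being described:

```python
import itertools

n, d = 4, 1  # toy input length and degree cutoff (illustrative)

def chi(S, x):
    # Fourier character chi_S(x) = (-1)^{sum of x_i over i in S}
    return (-1) ** sum(x[i] for i in S)

def fourier_coeffs(f):
    # fhat(S) = E_x[ f(x) * chi_S(x) ] over uniform x in {0,1}^n
    pts = list(itertools.product([0, 1], repeat=n))
    subsets = [frozenset(c) for r in range(n + 1)
               for c in itertools.combinations(range(n), r)]
    return {S: sum(f(x) * chi(S, x) for x in pts) / 2 ** n for S in subsets}

# An arbitrary test function taking values in {-1, 1}.
f = lambda x: (-1) ** (sum(x) % 2) if sum(x) >= 2 else 1
coeffs = fourier_coeffs(f)

# L keeps the coefficients of degree <= d; H keeps the rest.
L = lambda x: sum(c * chi(S, x) for S, c in coeffs.items() if len(S) <= d)
H = lambda x: sum(c * chi(S, x) for S, c in coeffs.items() if len(S) > d)

# The split is exact: f = L + H pointwise.
for x in itertools.product([0, 1], repeat=n):
    assert abs(f(x) - (L(x) + H(x))) < 1e-9
print("f = L + H verified on all", 2 ** n, "inputs")
```

In the iterated decomposition described above, this split is applied to each factor in turn, and the low-degree parts multiply together, which is what causes the degree growth discussed next.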
This works, but the problem is that even if each time has degree , the function increases the degree by at least per decomposition; and so we can afford at most decompositions.
The decomposition in  is instead: pick to be the degree part of , and are all the Fourier coefficients which are non-zero in the inputs to and whose degree in the inputs of is . The functions can be written as , where is the high-degree part of and is .
Once you have this decomposition you can apply the same lemmas in  to get improved bounds. To handle space-bounded computation they extend this argument to matrix-valued functions.
In  we asked for tight “bounded independence plus noise” results for any model, and the question remains open. In particular, what about high-degree polynomials modulo ?