My last 3.5 years

I haven’t breathed (freely) since 3.5 years ago.  Precisely since the day before I left my Cambridge flat, when the Pods guy told me he couldn’t park. I had to vacate within 24 hours, had no place to put all the stuff I had never used since moving there in 2008, and also happened to have a 3-hour CPR course, planned long ago, starting in minutes.  I took that life-saving course on the edge of my seat, dashing out during each 5-minute break to call movers who might have had an unlikely last-minute cancellation on the busiest day of the year (August 31).

Oh, the times I wished that the fireproof storage where the things eventually went would burn down to the ground.  Instead I was going to have to move my never-used belongings a million times up and down stairs.

Anyway, after Cambridge I went to the Simons Institute. Even with all the help from the staff, finding housing was atrocious, and I had to change it during the semester. I didn’t have a place to come back to, and from Berkeley I eventually found a short-term rental in Needham, MA.  The idea was to buy a house in that short term.  This proved impossible.  So we had to find another rental.  In the process, I was discriminated against three times.  One time the landlord rejected my application in writing, claiming that they did not want to rent to families. The other two times the landlord simply rejected my application, and then lowered the price. I thought these moves made them dumb, but maybe they are actually much smarter than me, because after toying with the idea I did not, in fact, sue.

Eventually we found another longer-term rental.  From there, with more excruciating difficulties I wrote about earlier, I bought a house, which however required 1 year of renovations (not exactly cosmetic — more about this later).  These were completed just in time to store my useless stuff there: I left for another semester at the Simons Institute.

My second visit to the Institute was also great.  In fact I enjoyed it even more than the semester on fine-grained complexity: I was there for the program on lower bounds, which are exactly the problems I went into computer science to study. I had the best time, and lots of research exchanges.

But again, the housing situation in Berkeley was desperate.  Twice I lost a house by 1 hour. Meaning, the landlord called to make the deal, I couldn’t pick up the phone, and when I called back 1 hour later the place was gone.  I still think it would be better if the institute bought a block of houses, and also provided computers.  Even better if they made it easier to print, rather than having to stand in a corner or go through a complicated setup.

Another interesting pattern is that during my first visit there was a heat wave and the AC broke, and it was hot.  This time there was a rather serious wildfire, causing very unhealthy conditions in the bay area, and at times they couldn’t run the heating systems to avoid sucking in the smoke, and it was cold.

Berkeley isn’t Princeton, but it’s hard for me not to compare the logistics of my visits to Simons and the IAS in Princeton.  In the latter I was put in a house steps from the Institute, with minimal effort and at a fraction of the price.  In my office there was already a working computer, connected to a printer.

Here’s the meaning of cloud computing, remote desktop, telnet, etc in 2019, here’s the progress, the sustainability, the sharing economy: everybody brings their own laptop.

Back from Simons, I can’t help but be surprised that I still have an office.  In fact this happens every time I go up the stairs, turn the corner, and see my name on the tag, where it says “Professor”. Really? Under my name? It startles me each time.  I know this feeling is irrational, but it is there.  Coming back from California, the feeling is intense.

Back to business, I am now teaching algorithms.  I am running an online section, for which I am making videos on my YouTube channel. It’s the future.

$50M to Northeastern Computer Science

Northeastern Computer Science is receiving a $50M gift.  If you are looking for a faculty position, check out our many openings, including the joint math-cs position. Also if you are applying for a PhD take a look at our college.  In particular as I mentioned already I am looking for students.

Symmetry

Yesterday at the Simons Institute there was a fun talk about The Edge of Physics by its author, Anil Ananthaswamy. Popularizing the theory of computation isn’t as easy, since you can’t talk about mega experiments done at the South Pole or massive telescopes built on top of mountains.  (It would also be a lot easier if we could resolve P vs. NP.)  For some inspiration we can look at books related to mathematics.  Here I would like to recommend Symmetry, by Marcus du Sautoy.  I enjoyed very much reading and re-reading this book, much more than his previous book The Music of the Primes, which I don’t really recommend.  Symmetry is a gripping history of group theory.  The purpose isn’t so much explaining the math as making you excited about the historical development of the theory and the people who worked and are working on it.

Shoving marijuana down the throats of Newton’s residents

Congratulations to the marijuana industry and the Newton MA administration for rigging the elections and pouring > $70K into a campaign strategist who lives in a neighboring city where recreational pot shops are banned, thereby snatching a narrow victory and shoving marijuana down the throats of Newton’s residents. When the pot shops open, owned by people who live in the same neighboring city which does not have them, I’ll have a toast to you with a marijuana drink.

Well, I think I am taking a break from politics, at least until I have a stronger financial backing. I have a bigger impact on society with my research.

Just coincidence?

Proving lower bounds is one of the greatest intellectual challenges of our time. Something that strikes me is when people reach the same bounds from seemingly different angles.  Two recent examples:

What does this mean?  Again, the only things that matter are those that you can prove.  Still, here are some options:

  • Lower bounds are true, and provable with the bag of tricks people are using.  The above is just coincidence. Given the above examples (and others) I find this possibility quite bizarre. To illustrate the bizarre in a bizarre way, imagine a graph where each edge is a trick from the bag, and each node is a bound. Why should different paths lead to the same sink, over and over again?
  • Lower bounds are true, but you need to use a different bag of tricks. My impression is that two types of results are available here.  The first is for “infinitary” proof systems, and includes famous results like the Paris-Harrington theorem. The second is for “finitary” proof systems, and includes results like Razborov’s proof that superpolynomial lower bounds cannot be proved in Res(k). What I really would like is a survey that explains what these and all other relevant proof systems are and can do, and what it would mean to either strengthen the proof system or make the unprovable statement closer to the state of the art. (I don’t even have the excuse of not having a background in logic.  I took classes both in Italy and in the USA.  In Italy I went to a summer school in logic, and took the logic class in the math department.  It was a rather tough class, one of the last offerings before the teacher was forced to water it down.  If I remember correctly, it lasted an entire year (though now that seems like a lot).  As in the European tradition, at least at the time, instruction was mostly one-way: you’d sit there for hours each week and just swallow this avalanche of material. At the very end, there was an oral exam where you sit with the instructor — face-to-face — and they mostly ask you to repeat random bits of the lectures.  But the bright student is also asked some simple original problems — to be solved on the spot.  So there is substantial focus on memorization, a word which has acquired a negative connotation, some of which I sympathize with.  However a 30-minute oral exam does have its benefits, and on certain aspects I’d argue it can’t quite be replaced by written exams, let alone take-home.  But I digress.)
  • Lower bounds are false. That is, all “simple” functions have say n^3 formula size.  You can prove this using computational checkpoints, a notion which in hindsight isn’t too complicated, but alas has not yet been invented.  To me, this remains the most likely option.

What do you think?

How to rig an election

After the historic signature collection there was a pitched battle to decide which questions to put on the ballot.  Alas, the battle resulted in somewhat of a defeat for the residents of Newton.  The councilors of Newton saw it fit to put two conflicting questions on the ballot, and to resolve the conflict by stipulating that if both questions pass, the one with the highest number of yes votes will prevail. As explained below, this forces residents to strategize, take a risk, and in a way answer questions against their true preference — a well-known, and bad, situation in election theory.

The two questions are:

  • Question 1:  Shall the City adopt the following general ordinance?
    All recreational marijuana retail establishments shall be prohibited from operating in the City of Newton.
    Councilors unanimously approved the inclusion of this question on the ballot.
  • Question 2:  Shall the City adopt the following zoning ordinance?
    The number of recreational marijuana retail establishments shall be not fewer than two (2) nor more than four (4). Councilors approved the inclusion of this question on the ballot by a vote of 11 to 10.

Yes, the motion to put Question 2 on the ballot passed by 1 vote. Each of those 11 councilors can go home feeling satisfied that they bear full responsibility for ignoring the clear preference of their constituents.  It doesn’t matter what the chief of the Newton police says, or what the former head of the Newton-Wellesley hospital says, or what any of the other dozens of high-profile people say, or that you collected thousands of signatures.  Those 11 councilors know what’s best for Newton. (Oh, and by the way, the upper bound is meaningless and can be easily increased.)

Before they convened to deliberate I sent them this message:

  • If you want to put another question on the ballot besides a simple YES/NO question, then you should first collect 7,000 signatures.

I doubt they could have even collected 70 for Question 2.

But the real problem is the rule I mentioned before, that if both questions have a majority of yes votes, the one with the highest number of yes votes will prevail.  To illustrate, consider the following realistic scenario.  Suppose that a resident of Newton loathes recreational marijuana establishments.  When they go to the ballot, they obviously vote yes on Question 1.  What should they do about Question 2?  If Question 1 loses, they are better off if Question 2 wins.  Suppose they also vote yes on 2, and that 99% of Newton residents behave this way. Then it’s enough that a merry 1% band of business(wo)men vote no on Question 1 and yes on Question 2, and they harness, to their own advantage, all the votes that people cast.
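The arithmetic of this scenario can be checked in a few lines of Python. The vote counts are made up for illustration; the resolution rule is the one described above:

```python
# Illustrative (made-up) tallies: 99% of 10,000 voters want the ban.
# Ban supporters vote YES on both questions, hedging against Question 1 losing;
# the 1% who want shops vote NO on Question 1 and YES on Question 2.
voters = 10000
ban_supporters = 9900    # vote (yes, yes)
shop_supporters = 100    # vote (no, yes)

q1_yes = ban_supporters                      # 9,900 yes votes on the ban
q2_yes = ban_supporters + shop_supporters    # 10,000 yes votes on 2-4 shops

# The resolution rule on the ballot: if both questions pass,
# the one with the higher number of yes votes prevails.
if q1_yes > voters // 2 and q2_yes > voters // 2:
    winner = "Question 2 (2-4 shops)" if q2_yes > q1_yes else "Question 1 (ban)"
elif q1_yes > voters // 2:
    winner = "Question 1 (ban)"
elif q2_yes > voters // 2:
    winner = "Question 2 (2-4 shops)"
else:
    winner = "default (unlimited shops)"

print(winner)  # Question 2 (2-4 shops), even though 99% prefer the ban
```

The 99% who hedged hand the victory to Question 2; had they voted no on Question 2 instead, Question 1 would prevail, at the risk of falling to the default if Question 1 failed.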

There do exist fair ways of having both questions on the ballot, but this isn’t one. The current setup forces people who really want to ban recreational marijuana to strategize by voting no on Question 2, and risk that, if Question 1 loses, they end up with unlimited recreational stores.

Maybe it’s a little hard to understand this in terms of marijuana.  Consider the following scenario:

  1. Question 1: Do you want to ban torture?
  2. Question 2: Do you want to limit the amount of torture that can be inflicted upon you?
  3. Default: Unlimited torture can be inflicted upon you.
  4. If both Questions 1 and 2 have majority Yes, the one with the highest number of yes prevails.

It is not going to be easy, but it seems that in the upcoming campaign we will have to convince people to answer ‘NO’ to question 2.


bounded independence plus noise fools space

There are many classes of functions on n bits that we know are fooled by bounded independence, including small-depth circuits, halfspaces, etc. (See this previous post.)

On the other hand, the simple parity function is not fooled. It’s easy to see that independence n-1 does not suffice: the uniform distribution over n-bit strings of even parity is (n-1)-wise independent, yet parity is constant on it. However, if you just perturb the bits with a little noise N, then parity will be fooled. You can find other examples of functions that are not fooled by bounded independence alone, but are if you just perturb the bits a little.
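A quick numerical sanity check in Python, with the even-parity distribution above (n = 8, noise rate 1/4, and the sample count are chosen for illustration):

```python
import itertools
import random

n = 8
parity = lambda x: sum(x) % 2

# An (n-1)-wise independent distribution: uniform over strings of even parity.
even = [x for x in itertools.product([0, 1], repeat=n) if parity(x) == 0]

# Without noise, parity is constant 0 on this distribution: badly not fooled.
bias_no_noise = sum(parity(x) for x in even) / len(even)

# Perturb each bit independently: flip it with probability 1/4.
random.seed(0)
def add_noise(x, p=0.25):
    return tuple(b ^ (random.random() < p) for b in x)

samples = [add_noise(random.choice(even)) for _ in range(20000)]
bias_noise = sum(parity(x) for x in samples) / len(samples)

print(bias_no_noise)  # 0.0: parity is always 0 without noise
print(bias_noise)     # close to 1/2, as under the uniform distribution
```

Indeed, after noise the correlation of parity with its mean shrinks by a factor (1-2p)^n = 2^{-8}, so the noisy distribution is within 2^{-9} of unbiased on parity.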

In [3] we proved that any distribution with independence about n^{2/3} fools space-bounded algorithms, if you perturb it with noise. We asked, both in the paper and in person, whether the independence could be lowered. Forbes and Kelley have recently proved [2] that the independence can be lowered all the way to O(\log n), which is tight [1]. Shockingly, their proof is nearly identical to [3]!

This exciting result has several interesting consequences. First, we now have almost the same generators for space-bounded computation in a fixed order as we do for any order. Moreover, the proof greatly simplifies a number of works in the literature. And finally, an approach in [4] to prove limitations for the sum of small-bias generators won’t work for space (possibly justifying some optimism in the power of the sum of small-bias generators).

My understanding of all this area is inseparable from the collaboration I have had with Chin Ho Lee, with whom I co-authored all the papers I have on this topic.

The proof

Let f:\{0,1\}^{n}\to \{0,1\} be a function. We want to show that it is fooled by D+E, where D has independence k, E is the noise vector of i.i.d. bits coming up 1 with probability say 1/4, and + is bit-wise XOR.

The approach in [3] is to decompose f as the sum of a function L of Fourier degree at most k, and a sum of t functions H_{i}=h_{i}\cdot g_{i}, where h_{i} has no Fourier coefficient of degree less than k, and h_{i} and g_{i} are bounded. The function L is immediately fooled by D, and it is shown in [3] that each H_{i} is fooled as well.

To explain the decomposition it is best to think of f as the product of \ell :=n/k functions f_{i} on k bits, on disjoint inputs. The decomposition in [3] is as follows: repeatedly decompose each f_{i} into a low-degree part f_{iL} and a high-degree part f_{iH}. To illustrate:

\begin{aligned} f_{1}f_{2}f_{3} & =f_{1}f_{2}(f_{3H}+f_{3L})=f_{1}f_{2}f_{3H}+f_{1}(f_{2H}+f_{2L})f_{3L}=\ldots \\ = & f_{1H}f_{2L}f_{3L}+f_{1}f_{2H}f_{3L}+f_{1}f_{2}f_{3H}+f_{1L}f_{2L}f_{3L}\\ = & H_{1}+H_{2}+H_{3}+L. \end{aligned}
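The identity above holds for any split f_{i}=f_{iL}+f_{iH}, and can be checked numerically. Here is a small Python sketch that splits random functions on k=2 bits into low-degree and high-degree Fourier parts by brute force and verifies the telescoping pointwise (the block size and degree threshold are chosen for illustration):

```python
import itertools
import random

k = 2  # bits per block
pts = list(itertools.product([0, 1], repeat=k))
chi = lambda s, x: (-1) ** sum(a * b for a, b in zip(s, x))  # Fourier character

def fourier(f):
    # Brute-force Fourier coefficients of f over {0,1}^k.
    return {s: sum(f[x] * chi(s, x) for x in pts) / 2 ** k for s in pts}

def part(f, pred):
    # The part of f supported on Fourier coefficients s with pred(s).
    co = fourier(f)
    return {x: sum(c * chi(s, x) for s, c in co.items() if pred(s)) for x in pts}

random.seed(1)
fs = [{x: random.random() for x in pts} for _ in range(3)]   # f1, f2, f3
lows = [part(f, lambda s: sum(s) < k) for f in fs]           # f_iL
highs = [part(f, lambda s: sum(s) >= k) for f in fs]         # f_iH

# Check f1 f2 f3 = f1H f2L f3L + f1 f2H f3L + f1 f2 f3H + f1L f2L f3L pointwise.
max_err = 0.0
for x1, x2, x3 in itertools.product(pts, repeat=3):
    lhs = fs[0][x1] * fs[1][x2] * fs[2][x3]
    rhs = (highs[0][x1] * lows[1][x2] * lows[2][x3]
           + fs[0][x1] * highs[1][x2] * lows[2][x3]
           + fs[0][x1] * fs[1][x2] * highs[2][x3]
           + lows[0][x1] * lows[1][x2] * lows[2][x3])
    max_err = max(max_err, abs(lhs - rhs))

print(max_err)  # ~0: the decomposition is exact
```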

This works, but there is a problem: even if each f_{iL} has degree only 1, the degree of L grows by at least 1 with each decomposition; and so we can afford at most k decompositions.

The decomposition in [2] is instead: pick L to be the degree-k part of f, and the H_{i} are all the Fourier coefficients which are non-zero in the inputs to f_{i} and whose degree in the inputs of f_{1},\ldots ,f_{i} is \ge k. The functions H_{i} can be written as h_{i}\cdot g_{i}, where h_{i} is the high-degree part of f_{1}\cdots f_{i} and g_{i} is f_{i+1}\cdots f_{\ell }.

Once you have this decomposition you can apply the same lemmas in [3] to get improved bounds. To handle space-bounded computation they extend this argument to matrix-valued functions.

What’s next

In [3] we asked for tight “bounded independence plus noise” results for any model, and the question remains. In particular, what about high-degree polynomials modulo 2?

References

[1]   Ravi Boppana, Johan Håstad, Chin Ho Lee, and Emanuele Viola. Bounded independence vs. moduli. In Workshop on Randomization and Computation (RANDOM), 2016.

[2]   Michael A. Forbes and Zander Kelley. Pseudorandom generators for read-once branching programs, in any order. In IEEE Symp. on Foundations of Computer Science (FOCS), 2018.

[3]   Elad Haramaty, Chin Ho Lee, and Emanuele Viola. Bounded independence plus noise fools products. SIAM J. on Computing, 47(2):295–615, 2018.

[4]   Chin Ho Lee and Emanuele Viola. Some limitations of the sum of small-bias distributions. Theory of Computing, 13, 2017.