Just coincidence?

Proving lower bounds is one of the greatest intellectual challenges of our time. Something that strikes me is how often people reach the same bounds from seemingly different angles. Two recent examples:

• Static Data Structure Lower Bounds Imply Rigidity, by Golovnev, Dvir, Weinstein.  They show that improving static data-structure lower bounds, for linear data structures, implies new lower bounds for matrix rigidity.  My understanding (the paper isn’t out yet) is that the available weak but non-trivial data-structure lower bounds imply the available weak but non-trivial rigidity lower bounds, and there is absolutely no room to improve the former without improving the latter.
• Toward the KRW Composition Conjecture: Cubic Formula Lower Bounds via Communication Complexity, by Dinur and Meir.  They reprove the $n^3$ bound on formula size via seemingly different techniques.

What does this mean?  Again, the only things that matter are those that you can prove.  Still, here are some options:

• Lower bounds are true, and provable with the bag of tricks people are using.  The above is just coincidence. Given the above examples (and others) I find this possibility quite bizarre. To illustrate the bizarre in a bizarre way, imagine a graph where each edge is a trick from the bag, and each node is a bound. Why should different paths lead to the same sink, over and over again?
• Lower bounds are true, but you need to use a different bag of tricks. My impression is that two types of results are available here.  The first is for “infinitary” proof systems, and includes famous results like the Paris-Harrington theorem. The second is for “finitary” proof systems, and includes results like Razborov’s proof that superpolynomial lower bounds cannot be proved in Res(k). What I would really like is a survey that explains what these and all other relevant proof systems are and can do, and what it would mean to either strengthen the proof system or bring the unprovable statement closer to the state of the art. (I don’t even have the excuse of not having a background in logic.  I took classes both in Italy and in the USA.  In Italy I went to a summer school in logic, and took the logic class in the math department.  It was a rather tough class, one of the last offerings before the teacher was forced to water it down.  If I remember correctly, it lasted an entire year (though that now seems like a lot).  As in the European tradition, at least of the time, instruction was mostly one-way: you’d sit there for hours each week and just swallow this avalanche of material. At the very end, there was an oral exam where you sit face-to-face with the instructor, who mostly asks you to repeat random bits of the lectures.  But bright students are also asked some simple original problems, to be solved on the spot.  So there is substantial focus on memorization, a word which has acquired a negative connotation, some of which I sympathize with.  However a 30-minute oral exam does have its benefits, and on certain aspects I’d argue it can’t quite be replaced by written exams, let alone take-home ones.  But I digress.)
• Lower bounds are false. That is, all “simple” functions have, say, $n^3$ formula size.  You can prove this using computational checkpoints, a notion which in hindsight isn’t too complicated, but alas has not yet been invented.  To me, this remains the most likely option.
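The graph metaphor in the first option can be made concrete with a toy sketch (all trick and bound names below are invented purely for illustration):

```python
# Toy sketch of the metaphor: each node is a bound, each edge a trick
# from the bag.  Trick and bound names are invented for illustration.

# adjacency list: bound -> list of (trick, resulting bound)
graph = {
    "trivial": [("counting", "n log n"), ("restriction", "n^1.5")],
    "n log n": [("composition", "n^3")],
    "n^1.5": [("amplification", "n^3")],
}

def all_paths(node, path=()):
    """Yield (trick-sequence, sink) pairs for every path out of `node`."""
    edges = graph.get(node, [])
    if not edges:
        yield path, node
        return
    for trick, nxt in edges:
        yield from all_paths(nxt, path + (trick,))

for tricks, sink in all_paths("trivial"):
    print(" -> ".join(tricks), "reaches", sink)
# Two different trick-sequences both end at the same sink "n^3" --
# the pattern that would be surprising if it kept happening by chance.
```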

What do you think?

5 thoughts on “Just coincidence?”

1. With respect to relevant proof systems, I believe there are good reasons (see https://philosophy.stackexchange.com/questions/43356/alternatives-to-axiomatic-method/43438#43438) for singling out Friedman’s elementary function arithmetic. One important reason is that independence results from EFA would really tell us something interesting about the problem at hand, and not just something about EFA (as is often the case for weaker proof systems):

At least for fragments of arithmetic, a more explicit dividing line between weak and strong systems has been used (in the cited chapter 2 http://www.math.ucsd.edu/~sbuss/ResearchWeb/handbookII/): “The line between strong and weak fragments is somewhat arbitrarily drawn between those theories which can prove the arithmetized version of the cut-elimination theorem and those which cannot; in practice, this is equivalent to whether the theory can prove that the superexponential function is total.”

According to this dividing line, Friedman’s exponential function arithmetic (EFA) is a weak fragment of arithmetic. But if the question is whether independence results for P != NP can be proved, then EFA feels like a really interesting candidate, precisely because it cannot prove cut-elimination. It would cast an interesting light on the role of higher-order reasoning.
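For concreteness, the superexponential function in the quoted dividing line is the tower function:

```latex
% The superexponential (tower) function: a tower of $n$ twos.
\[
  2_0 = 1, \qquad 2_{n+1} = 2^{\,2_n},
  \qquad\text{so}\qquad
  2_n = \underbrace{2^{2^{\cdot^{\cdot^{2}}}}}_{n \text{ twos}} .
\]
```

EFA proves that each fixed value $2_k$ exists, but it cannot prove that the function $n \mapsto 2_n$ is total, which is why it falls on the “weak” side of the line.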

Of course, it is unrealistic to hope to demonstrate that “P!=NP cannot be proven in EFA”. But if you believe P=NP, then EXP=NEXP, …, 3EXP=N3EXP, … also follow. So maybe it is less unrealistic to hope to demonstrate that “3EXP!=N3EXP cannot be proven in EFA”, or at least that there is some K for which it is possible to demonstrate that “KEXP!=NKEXP cannot be proven in EFA”…
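The implications mentioned here come from the standard padding (translation) argument; a sketch:

```latex
% Padding argument: $\mathrm{P}=\mathrm{NP}$ implies $\mathrm{EXP}=\mathrm{NEXP}$.
Let $L \in \mathrm{NEXP}$ be decided nondeterministically in time $2^{n^c}$.
Define the padded language
\[
  L' \;=\; \{\, x \, 0 \, 1^{2^{|x|^c}} \;:\; x \in L \,\}.
\]
On inputs of length $m \ge 2^{n^c}$ the original machine runs in time
polynomial in $m$, so $L' \in \mathrm{NP}$.  If $\mathrm{P}=\mathrm{NP}$
then $L' \in \mathrm{P}$; a deterministic machine for $L$ pads its input
and runs the machine for $L'$, taking time $2^{O(n^c)}$, hence
$L \in \mathrm{EXP}$.  Iterating with longer pads gives
$K\mathrm{EXP} = \mathrm{N}K\mathrm{EXP}$ for every $K$.
```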

2. think says:

Define ‘simple’ by more ‘provable’ means.