I believe P=NP

The only things that matter in a theoretical study are those that you can prove, but it’s always fun to speculate. After worrying about P vs. NP for half my life, and having carefully reviewed the available “evidence”, I have decided I believe that P = NP.

A main justification for my belief is history:

  1. In the 1950s Kolmogorov conjectured that multiplication of n-bit integers requires time \Omega (n^{2}). That’s the time it takes to multiply using the method that mankind has used for at least six millennia. Presumably, if a better method existed it would have been found already. Kolmogorov subsequently started a seminar where he presented this conjecture again. Within one week of the start of the seminar, Karatsuba discovered his famous algorithm running in time O(n^{\log _{2}3})\approx n^{1.58}; a minimal sketch of the idea appears right after this list. He told Kolmogorov about it, who became agitated and terminated the seminar. Karatsuba’s algorithm unleashed a new age of fast algorithms, including the next one. I recommend Karatsuba’s own account [9] of this compelling story.
  2. In 1968 Strassen started working on proving that the standard O(n^{3}) algorithm for multiplying two n\times n matrices is optimal. The next year his landmark O(n^{\log _{2}7})\approx n^{2.81} algorithm appeared in his paper “Gaussian elimination is not optimal” [12].
  3. In the 1970s Valiant showed that the graph of a circuit computing certain linear transformations must be a super-concentrator, a graph with certain strong connectivity properties. He conjectured that super-concentrators must have a super-linear number of wires, from which super-linear circuit lower bounds follow [13]. However, he later disproved the conjecture [14]: building on a result of Pinsker, he constructed super-concentrators using a linear number of edges.
  4. At the same time Valiant also defined rigid matrices and showed that an explicit construction of such matrices yields new circuit lower bounds. A specific matrix that was conjectured to be sufficiently rigid is the Hadamard matrix. Alman and Williams recently showed that, in fact, the Hadamard matrix is not rigid [1].
  5. After finite automata, a natural step in lower bounds was to study slightly more general programs with constant memory. Consider a program that only maintains O(1) bits of memory, and reads the input bits in a fixed order, where bits may be read several times. It seems quite obvious that such a program could not compute the majority function in polynomial time. This was explicitly conjectured by several people, including [5]. Barrington [4] famously disproved the conjecture by showing that these seemingly very restricted constant-memory programs are in fact equivalent to log-depth circuits, which can compute majority (and many other things).
  6. [Added 2/18] Mansour, Nisan, and Tiwari conjectured [10] in 1990 that computing hash functions on n bits requires circuit size \Omega (n\log n). Their conjecture was disproved in 2008 [8], where a circuit of size O(n) was given.
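
To make item 1 concrete, here is a minimal sketch of Karatsuba’s recursion in Python. It is an illustrative toy under my own choices (the base-10 splitting, the crude base case, and the function name are mine, not Karatsuba’s presentation); real implementations work on bit strings and switch to schoolbook multiplication below a tuned cutoff.

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply nonnegative integers x and y using three recursive products
    instead of the schoolbook four, giving roughly O(n^log2(3)) digit operations."""
    # Base case: single-digit factors are multiplied directly.
    if x < 10 or y < 10:
        return x * y
    # Split both numbers around the middle (base 10 here, for readability).
    m = max(len(str(x)), len(str(y))) // 2
    x_hi, x_lo = divmod(x, 10 ** m)
    y_hi, y_lo = divmod(y, 10 ** m)
    # Three recursive multiplications instead of four.
    lo = karatsuba(x_lo, y_lo)
    hi = karatsuba(x_hi, y_hi)
    mid = karatsuba(x_lo + x_hi, y_lo + y_hi) - lo - hi
    return hi * 10 ** (2 * m) + mid * 10 ** m + lo

# Sanity check on a pair of 9-digit numbers.
assert karatsuba(123456789, 987654321) == 123456789 * 987654321
```

The whole trick is trading one multiplication for a few additions: three recursive products where the schoolbook method uses four is exactly what yields the O(n^{\log _{2}3}) bound.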

And these are just some of the more famous ones. The list goes on and on. In number-on-forehead communication complexity, the function Majority-of-Majorities was a candidate for being hard for more than logarithmically many parties. This was disproved in [3] and subsequent works, where many other counter-intuitive protocols are presented. In data structures, would you think it possible to switch between binary and ternary representation of a number using constant time per digit and zero space overhead? Turns out it is [7, 11]. Do you believe factoring is hard? Then you also believe there are pseudorandom generators where each output bit depends only on O(1) input bits [2]. Known algorithms for directed connectivity use either super-polynomial time or polynomial memory. But if you are given access to polynomial memory full of junk that you can’t delete, then you can solve directed connectivity using only logarithmic (clean) memory and polynomial time [6]. And I haven’t even touched on the many broken conjectures in cryptography, most recently related to obfuscation.

On the other hand, arguably the main thing that’s surprising in the lower bounds we have is that they can be proved at all. The bounds themselves are hardly surprising. Of course, the issue may be that we can prove so few lower bounds that we shouldn’t expect surprises. Some of the undecidability results I do consider surprising, for example Hilbert’s 10th problem. But what is actually surprising in those results are the algorithms, showing that even very restricted models can simulate more complicated ones (the same holds for the theory of NP-completeness). In terms of lower bounds they all build on diagonalization, that is, going through every program and flipping the answer, which is boring.

The evidence is clear: we have grossly underestimated the reach of efficient computation, in a variety of contexts. All signs indicate that we will continue to see bigger and bigger surprises in upper bounds, and P=NP. Do I really believe the formal inclusion P=NP? Maybe, let me not pick parameters. What I believe is that the idea that lower bounds are obviously true and we just can’t prove them is not only baseless but even clashes with historical evidence. It’s the upper bounds that are missing.

References

[1]   Josh Alman and R. Ryan Williams. Probabilistic rank and matrix rigidity. In 49th ACM Symp. on the Theory of Computing (STOC), pages 641–652, 2017.

[2]   Benny Applebaum, Yuval Ishai, and Eyal Kushilevitz. Cryptography in NC^0. SIAM J. on Computing, 36(4):845–888, 2006.

[3]   László Babai, Anna Gál, Peter G. Kimmel, and Satyanarayana V. Lokam. Communication complexity of simultaneous messages. SIAM J. on Computing, 33(1):137–166, 2003.

[4]   David A. Mix Barrington. Bounded-width polynomial-size branching programs recognize exactly those languages in NC^1. J. of Computer and System Sciences, 38(1):150–164, 1989.

[5]   Allan Borodin, Danny Dolev, Faith E. Fich, and Wolfgang J. Paul. Bounds for width two branching programs. In 15th ACM Symp. on the Theory of Computing (STOC), pages 87–93, 1983.

[6]   Harry Buhrman, Richard Cleve, Michal Koucký, Bruno Loff, and Florian Speelman. Computing with a full memory: catalytic space. In ACM Symp. on the Theory of Computing (STOC), pages 857–866, 2014.

[7]   Yevgeniy Dodis, Mihai Pǎtraşcu, and Mikkel Thorup. Changing base without losing space. In 42nd ACM Symp. on the Theory of Computing (STOC), pages 593–602. ACM, 2010.

[8]   Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky, and Amit Sahai. Cryptography with constant computational overhead. In 40th ACM Symp. on the Theory of Computing (STOC), pages 433–442, 2008.

[9]   A. A. Karatsuba. The complexity of computations. Trudy Mat. Inst. Steklov., 211(Optim. Upr. i Differ. Uravn.):186–202, 1995.

[10]   Yishay Mansour, Noam Nisan, and Prasoon Tiwari. The computational complexity of universal hashing. Theoretical Computer Science, 107:121–133, 1993.

[11]   Mihai Pǎtraşcu. Succincter. In 49th IEEE Symp. on Foundations of Computer Science (FOCS). IEEE, 2008.

[12]   Volker Strassen. Gaussian elimination is not optimal. Numer. Math., 13:354–356, 1969.

[13]   Leslie G. Valiant. On non-linear lower bounds in computational complexity. In ACM Symp. on the Theory of Computing (STOC), pages 45–53, 1975.

[14]   Leslie G. Valiant. Graph-theoretic arguments in low-level complexity. In 6th Symposium on Mathematical Foundations of Computer Science, volume 53 of Lecture Notes in Computer Science, pages 162–176. Springer, 1977.

 

23 thoughts on “I believe P=NP”

  1. Two more:
    1) The graph minor theorem gives a way to show lots of problems are in P. I’ve heard it said that the GMT shows why P vs NP will be hard to prove. But it may instead be evidence that P=NP.

    2) Closure of NL under complementation.

    bill gasarch

      1. The power structure in the Stalinist USSR seems absolutely amazing. Karatsuba’s result is presented officially in a paper by Karatsuba and Ofman; the paper was written by Kolmogorov (and, who knows, perhaps Ofman, or possibly entirely by Ofman at Kolmogorov’s request). Karatsuba became aware of this when Kolmogorov handed him the paper.
        Worth noting that Kolmogorov was a really good guy. It is possible (likely?) that this was the best way to get the result in print.

  2. What do you mean by ‘..P=NP. Do I really believe the formal inclusion P=NP?’? Seems kind of contradictory.

    1. There are lots of problems for which you could prove surprising upper bounds (see e.g. this post), in the same spirit as P=NP. The textbook “P vs. NP” problem is just the one that got the most publicity (or you may call it the scapegoat).

  3. I agree there are many places where upper bounds are missing. My favorite is whether multiplication is the same in complexity as addition. Is there any implication for CS if this is true? It looks important to me.

  4. If you had linear-size circuits for multiplication you would slightly improve the asymptotic complexity of a large number of algorithms that use multiplication, like hashing. I don’t know that any deployed system would benefit, and I think most people would just be in awe of the insight that this proof would yield, more than care about any specific implication.

  5. I may be wrong here. $P\neq NP$ is a $\Pi_2$ sentence and $P=NP$ is a $\Sigma_2$ sentence. It may be good to state which of the problems above are similar in structure to $P\neq NP$ and were proved false. That would (in my opinion (which might be nothing)) bolster the view that upper bounds matter more than lower bounds (at least at this point in the history of the problem).

  6. What would a world with CNFSAT in DTIME(n) with benign constants look like for humanity? Is there any point in the survival of the human race after that?

    1. I would expect an overwhelming number of people to become more interested in computer science theory, and study all the exciting ramifications that such a result would have.

  7. That sounds very anticlimactic. CNFSAT in DTIME[n] with a constant of, say, 1000 would forever write in stone the truths of all reasonable questions we can ever expect an answer to. Why would we matter anymore once we know all we can hope of knowing? Surely there should be more ramifications than more people studying CS theory.

  8. Another point in favor of your view: it is always possible to satisfy at least ⅞ of the clauses of a 3-SAT instance (with exactly three distinct literals per clause), and there is a simple polynomial-time algorithm to find a ⅞-satisfying assignment; a sketch follows this comment. A theorem of Håstad shows that if, for some X > ⅞, there is a polynomial-time algorithm that finds an X-fraction satisfying assignment for any 3-SAT instance that has one, then P=NP. The algorithm for X = 0.875 is not complicated. Are we really so sure that a more clever algorithm couldn’t achieve X = 0.875000001?
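
In case it helps, here is a hedged sketch in Python of one simple way to get the ⅞ guarantee mentioned in the last comment: the method of conditional expectations. A uniformly random assignment satisfies a clause with three distinct literals with probability ⅞, and fixing variables one at a time to whichever value keeps the conditional expectation highest can only preserve or increase it. The clause encoding and function names below are my own illustration, not code from any reference above.

```python
from fractions import Fraction

def expected_satisfied(clauses, assignment):
    """Expected number of satisfied clauses when every unassigned variable is set
    uniformly at random. A literal is a nonzero int: +v means variable v, -v its negation."""
    total = Fraction(0)
    for clause in clauses:
        satisfied, unset = False, 0
        for lit in clause:
            var, want = abs(lit), lit > 0
            if var in assignment:
                satisfied = satisfied or (assignment[var] == want)
            else:
                unset += 1
        # An unsatisfied clause fails only if all of its unset literals come out wrong.
        total += 1 if satisfied else 1 - Fraction(1, 2) ** unset
    return total

def seven_eighths_assignment(clauses):
    """Derandomized greedy: the returned assignment satisfies at least 7/8 of the
    clauses, provided each clause has exactly three distinct literals."""
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    assignment = {}
    for var in variables:
        if_true = expected_satisfied(clauses, {**assignment, var: True})
        if_false = expected_satisfied(clauses, {**assignment, var: False})
        assignment[var] = if_true >= if_false
    return assignment

# Tiny example: (x1 or x2 or not x3) and (not x1 or x2 or x3).
print(seven_eighths_assignment([(1, 2, -3), (-1, 2, 3)]))
```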
