The only things that matter in a theoretical study are those that you can prove, but it’s always fun to speculate. After worrying about P vs. NP for half my life, and having carefully reviewed the available “evidence”, I have decided that I believe that P = NP.

A main justification for my belief is history:

- In the 1950s Kolmogorov conjectured that multiplication of $n$-bit integers requires time $\Omega(n^2)$. That’s the time it takes to multiply using the method that mankind has used for at least six millennia. Presumably, if a better method existed it would have been found already. Kolmogorov subsequently started a seminar where he presented this conjecture again. Within one week of the start of the seminar, Karatsuba discovered his famous algorithm running in time $O(n^{\log_2 3}) = O(n^{1.58\ldots})$. He told Kolmogorov about it, who became agitated and terminated the seminar. Karatsuba’s algorithm unleashed a new age of fast algorithms, including the next one. I recommend Karatsuba’s own account [9] of this compelling story.
- In 1968 Strassen started working on proving that the standard $O(n^3)$-time algorithm for multiplying two $n \times n$ matrices is optimal. The next year his landmark $O(n^{\log_2 7})$ algorithm appeared in his paper “Gaussian elimination is not optimal” [12].
- In the 1970s Valiant showed that the graph of a circuit computing certain linear transformations must be a *super-concentrator*, a graph with certain strong connectivity properties. He conjectured that super-concentrators must have a super-linear number of wires, from which super-linear circuit lower bounds follow [13]. However, he later disproved the conjecture himself [14]: building on a result of Pinsker he constructed super-concentrators using a linear number of edges.
- At the same time Valiant also defined *rigid* matrices and showed that an explicit construction of such matrices yields new circuit lower bounds. A specific matrix that was conjectured to be sufficiently rigid is the Hadamard matrix. Alman and Williams recently showed that, in fact, the Hadamard matrix is not rigid [1].
- After finite automata, a natural step in lower bounds was to study slightly more general programs with constant memory. Consider a program that only maintains $O(1)$ bits of memory and reads the input bits in a fixed order, where bits may be read several times. It seems quite obvious that such a program cannot compute the majority function in polynomial time. This was explicitly conjectured by several people, including [5]. Barrington [4] famously disproved the conjecture by showing that these seemingly very restricted constant-memory programs are in fact equivalent to log-depth circuits, which can compute majority (and many other things).
- [Added 2/18] Mansour, Nisan, and Tiwari conjectured [10] in 1990 that computing hash functions on $n$ bits requires circuit size $\Omega(n \log n)$. Their conjecture was disproved in 2008 [8], where a circuit of size $O(n)$ was given.
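Karatsuba’s trick, mentioned in the first item above, replaces the four half-size multiplications of the schoolbook method with three. A minimal Python sketch (splitting at a power of two rather than a power of ten, and falling back to native multiplication for small inputs):

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply non-negative integers with three recursive
    multiplications on half-size numbers instead of four."""
    if x < 10 or y < 10:  # base case: tiny operand
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> m, x & ((1 << m) - 1)
    hi_y, lo_y = y >> m, y & ((1 << m) - 1)
    a = karatsuba(hi_x, hi_y)                 # high * high
    b = karatsuba(lo_x, lo_y)                 # low * low
    c = karatsuba(hi_x + lo_x, hi_y + lo_y)   # combined
    # (hi_x*2^m + lo_x)(hi_y*2^m + lo_y)
    #   = a*2^(2m) + (c - a - b)*2^m + b
    return (a << (2 * m)) + ((c - a - b) << m) + b
```

The saved multiplication is what drives the recurrence $T(n) = 3\,T(n/2) + O(n)$ down to $O(n^{\log_2 3})$.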
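Strassen’s algorithm, from the second item above, plays the analogous trick one level up: it trades the eight block multiplications of the standard method for seven, which recursively gives $O(n^{\log_2 7}) \approx O(n^{2.81})$. A sketch of the $2 \times 2$ identity (in the recursive algorithm the entries are themselves matrix blocks):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications
    instead of the standard 8."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Reassemble the product from the seven products.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```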

And these are just some of the more famous ones. The list goes on and on. In number-on-forehead communication complexity, the function Majority-of-Majorities was a candidate for being hard for more than logarithmically many parties. This was disproved in [3] and subsequent works, where many other counter-intuitive protocols are presented. In data structures, would you think it possible to switch between binary and ternary representation of a number using constant time per digit and *zero* space overhead? Turns out it is [11, 7]. Do you believe factoring is hard? Then you also believe there are pseudorandom generators where each output bit depends only on $O(1)$ input bits [2]. Known algorithms for directed connectivity use either super-polynomial time or polynomial memory. But if you are given access to polynomial memory full of junk that you can’t delete, then you can solve directed connectivity using only logarithmic (clean) memory and polynomial time [6]. And I haven’t even touched on the many broken conjectures in cryptography, most recently related to obfuscation.

On the other hand, arguably the main thing that’s surprising in the lower bounds we have is that they can be proved at all. The bounds themselves are hardly surprising. Of course, the issue may be that we can prove so few lower bounds that we shouldn’t expect surprises. Some of the undecidability results I do consider surprising, for example Hilbert’s 10th problem. But what is actually surprising in those results are the *algorithms*, showing that even very restricted models can simulate more complicated ones (same for the theory of NP completeness). In terms of lower bounds they all build on diagonalization, that is, go through every program and flip the answer, which is boring.

The evidence is clear: we have grossly underestimated the reach of efficient computation, in a variety of contexts. All signs indicate that we will continue to see bigger and bigger surprises in upper bounds, and P=NP. Do I really believe the formal inclusion P=NP? Maybe, let me not pick parameters. What I believe is that the idea that lower bounds are obviously true and we just can’t prove them is not only baseless but even clashes with historical evidence. It’s the upper bounds that are missing.

### References

[1] Josh Alman and R. Ryan Williams. Probabilistic rank and matrix rigidity. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017, pages 641–652, 2017.

[2] Benny Applebaum, Yuval Ishai, and Eyal Kushilevitz. Cryptography in NC$^0$. SIAM J. on Computing, 36(4):845–888, 2006.

[3] László Babai, Anna Gál, Peter G. Kimmel, and Satyanarayana V. Lokam. Communication complexity of simultaneous messages. SIAM J. on Computing, 33(1):137–166, 2003.

[4] David A. Mix Barrington. Bounded-width polynomial-size branching programs recognize exactly those languages in NC$^1$. J. of Computer and System Sciences, 38(1):150–164, 1989.

[5] Allan Borodin, Danny Dolev, Faith E. Fich, and Wolfgang J. Paul. Bounds for width two branching programs. In Proceedings of the 15th Annual ACM Symposium on Theory of Computing, 25-27 April, 1983, Boston, Massachusetts, USA, pages 87–93, 1983.

[6] Harry Buhrman, Richard Cleve, Michal Koucký, Bruno Loff, and Florian Speelman. Computing with a full memory: catalytic space. In ACM Symp. on the Theory of Computing (STOC), pages 857–866, 2014.

[7] Yevgeniy Dodis, Mihai Pǎtraşcu, and Mikkel Thorup. Changing base without losing space. In 42nd ACM Symp. on the Theory of Computing (STOC), pages 593–602. ACM, 2010.

[8] Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky, and Amit Sahai. Cryptography with constant computational overhead. In 40th ACM Symp. on the Theory of Computing (STOC), pages 433–442, 2008.

[9] A. A. Karatsuba. The complexity of computations. Trudy Mat. Inst. Steklov., 211(Optim. Upr. i Differ. Uravn.):186–202, 1995.

[10] Yishay Mansour, Noam Nisan, and Prasoon Tiwari. The computational complexity of universal hashing. Theoretical Computer Science, 107:121–133, 1993.

[11] Mihai Pǎtraşcu. Succincter. In 49th IEEE Symp. on Foundations of Computer Science (FOCS). IEEE, 2008.

[12] Volker Strassen. Gaussian elimination is not optimal. Numer. Math., 13:354–356, 1969.

[13] Leslie G. Valiant. On non-linear lower bounds in computational complexity. In ACM Symp. on the Theory of Computing (STOC), pages 45–53, 1975.

[14] Leslie G. Valiant. Graph-theoretic arguments in low-level complexity. In 6th Symposium on Mathematical Foundations of Computer Science, volume 53 of Lecture Notes in Computer Science, pages 162–176. Springer, 1977.

Two more:

1) The graph minor theorem gives a way to show lots of problems are in P. I’ve heard it said that the GMT shows why P vs NP will be hard to prove. But it may instead be evidence that P=NP.

2) Closure of NL under complementation.

bill gasarch

Also the Primes in P result.

Reading your post is like taking a breath of fresh air. Regardless of whether P=NP or not.

I couldn’t find a translation of Karatsuba’s paper in English. Is there one?

The power structure in Stalinist USSR seems absolutely amazing. Karatsuba’s result is presented officially in a paper by Karatsuba and Ofman; the paper was written by Kolmogorov (and, who knows, perhaps with Ofman, or possibly totally by Ofman at Kolmogorov’s request). Karatsuba becomes aware of this when Kolmogorov hands him the paper.

Worth noting that Kolmogorov was a really good guy. It is possible (likely?) that this was the best way to get the result in print.

The question is will the P vs NP question remain interesting?

The P vs. NP question will always be interesting, regardless of anything else that happens. (Until it is solved that is, then something else may become more interesting.) In fact, the same is true for very crippled models of computation (discussed earlier on this blog e.g. at https://emanueleviola.wordpress.com/2015/03/13/restricted-models/). One reason is that so many people have thought so hard about it.

Note: There is no need to send me any paper solving P vs. NP, and it will not appear on this blog. The right venue for such a paper is https://www.win.tue.nl/~gwoegi/P-versus-NP.htm

Remember Savitch’s theorem… History has taught us: never say never…

P=NP implies EndOfTheWorld.

What do you mean by ‘..P=NP. Do I really believe the formal inclusion P=NP?’? Seems kind of contradictory.

There are lots of problems for which you could prove surprising upper bounds (see e.g. this post), in the same spirit of P=NP. The textbook “P vs. NP” problem is just the one that got most publicity (or you may call it the scapegoat).

I think you are missing a link in ‘see e.g. this post’.

I meant

this post.

I agree there are many places upper bounds are missing. My favorite is whether multiplication is the same in complexity as addition. Is there any implication to CS if this is true? It looks important to me.

‘ Is there any implication to CS, MATH or PHYSICS if this is true?’

If you had linear-size circuits for multiplication you would slightly improve the asymptotic complexity of a large number of algorithms which use multiplication, like hashing. I don’t know that any deployed system would benefit, and I think most people would just be in awe with the insight that this proof would yield, more than care about any specific implication.

I may be wrong here. $P\neq NP$ is a $\Pi_2$ sentence and $P=NP$ is a $\Sigma_2$ sentence. It may be good to state which of the other problems above are similar in structure to $P\neq NP$ and were proved false. That will (in my opinion (which might be nothing)) bolster the view that upper bounds matter more than lower bounds (at least at this point in the history of the problem).

What would a world with CNFSAT in DTIME(n) with benign constants look like for humanity? Is there any point in the survival of the human race after that?

I would expect an overwhelming number of people to become more interested in computer science theory, and study all the exciting ramifications that such a result would have.

That sounds very anticlimactic. CNFSAT in DTIME[n] with a constant of, say, 1000 would forever write in stone the truths of all reasonable questions we can ever expect an answer to. Why would we matter anymore once we know all we can hope of knowing? Surely there should be more ramifications than more people studying CS theory.