What is the P vs. NP problem? My two cents

[Image: PvsNPproveit]

I just had to convert a movie clip into a different format. The conversion took ten minutes. Given that the clip can be loaded into memory in one second, ten minutes is a long time. Couldn’t this be done faster? In fact, why not one second?

Afterwards, I played video games. I have a PlayStation 3, but I heard that on the PlayStation 4 the games look better because, with the faster processor, the 3D scenes have more detail. But do I really need a faster processor for those details? Couldn’t my PlayStation 3 be programmed to run those games? In fact, could there be a way to have PlayStation 4 games run on a Commodore 64, or even… see the picture. There do exist hardware limitations, of course, but these are not the bottleneck. A present-day 3D game engine on a Commodore 64 would be a stunning achievement, even if the resolution were a bit coarser and the cut scenes were deleted.

These are two examples of a wide-open question that is central to theoretical computer science, and to a growing number of fields of science: Are there computational tasks that require a long time?

This may sound puzzling at first, since computers that take a long time are an everyday experience, from the cases mentioned above to every time the mouse pointer turns into an hourglass for a ridiculous amount of time, including just now, when I booted my computer so that I could continue this post. But the point is that nobody knows whether computers could be programmed to run much faster.

To be sure, there does exist one problem that is known to take a long time. I call it the program-enumeration problem, and it is as follows: consider a computer program that goes through every possible program of length n, runs each for n steps, and does something different from each. For example, the program could output the first number that is not output by any program of size n within n steps. By construction, this strange task cannot be performed in time n. On the other hand, it can be done in time exponential in n. (Think n = 10000000000000000, which is roughly how many instructions a modern computer can execute in a year.)
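
To make the construction concrete, here is a minimal Python sketch of the diagonalization idea. The “programs” are tiny snippets over a five-character alphabet rather than real machine code, and the names (run_with_step_limit, diagonal) and the step-counting trick are illustrative choices of mine, not part of the argument above; the exponential cost comes from enumerating all |alphabet|^n strings.

```python
import itertools
import sys

ALPHABET = "o=+01"   # a toy "machine language": every length-n string over
                     # these characters counts as a (usually meaningless) program

def run_with_step_limit(src, limit):
    """Run source `src` for at most `limit` traced steps; return the value it
    leaves in the variable `o`, or None if it crashes or exceeds the budget.
    (A stand-in for "run program p for n steps".)"""
    steps = 0
    def tracer(frame, event, arg):
        nonlocal steps
        steps += 1
        if steps > limit:
            raise TimeoutError       # abort programs that run too long
        return tracer
    env = {}
    sys.settrace(tracer)
    try:
        exec(src, {}, env)
    except Exception:
        return None                  # syntax errors, timeouts, ...
    finally:
        sys.settrace(None)
    return env.get("o")

def diagonal(n):
    """Output a number produced by NO length-n program within n steps."""
    produced = set()
    for chars in itertools.product(ALPHABET, repeat=n):  # 5**n programs
        val = run_with_step_limit("".join(chars), n)
        if isinstance(val, int):
            produced.add(val)
    k = 0
    while k in produced:             # first natural number nobody produced
        k += 1
    return k

print(diagonal(4))   # enumerates all 5**4 = 625 length-4 programs
```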

A crude, pessimistic summary of our understanding of the limitations of computation is that this is it: all we know is that the program-enumeration problem cannot be solved in time n. This single result does have many applications; for example, it can be used to show that there is no algorithm for checking the correctness of programs, somewhat justifying the immense industry devoted to that task. But the result is very unsatisfactory because most problems that we face, including those mentioned above, have nothing to do with program enumeration. It also feels unsatisfactory because it does not give any information on computation beyond the simple fact that programs can run other programs.

The wide-open question can now be reformulated more precisely as follows.

Grand question: Is there any computational task, besides program enumeration, that requires a long time?

A negative answer, meaning that computers are all-powerful, would have dramatic consequences, well beyond the above examples. What would you give for a computer that executes any task with no perceptible delay? What would you give to play today the game engines of the next fifty years? Certain companies would see a dramatic increase in profits. And, in what I find the most interesting application, scientists would be able to solve very complex models, pushing way beyond current knowledge. Let me elaborate on this last point. Interestingly, the situation in several branches of science is somewhat similar to that in theoretical computer science. Specifically, scientists have identified a number of features which are desirable in a model of whatever it is they are studying. However, nobody is able to solve models with these features, except in toy cases, because known programs are too slow.

That computers are all-powerful sounds too good to be true, and in fact the common belief is that there do exist many problems that take a long time. This belief also has many desirable applications: the security of many everyday electronic systems relies on the presumed hardness of specific problems, such as factoring numbers. But we cannot be completely confident of this security until the belief is proved.
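
As a toy illustration of that asymmetry (my example, not the post’s): multiplying two primes is instantaneous, while the obvious way to undo the multiplication, trial division, takes time about the square root of the product, i.e. exponential in the number of digits. Real cryptosystems use numbers hundreds of digits long, far beyond this brute force.

```python
def trial_division(n):
    """Brute-force factoring: about sqrt(n) steps, i.e. exponential
    in the number of digits of n."""
    if n % 2 == 0:
        return 2, n // 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None                # n is prime

p, q = 104729, 1299709         # two primes; multiplying them is instantaneous
n = p * q
print(trial_division(n))       # recovering them takes tens of thousands of divisions
```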

The P vs. NP problem is a young, prominent special case of the grand question above. I think it should be presented as such, which is not always done. The problem asks whether a specific class of problems requires at least a specific amount of time to solve.

P stands for the computational tasks that can be done somewhat efficiently. The letter P is short for “polynomial time”: the time required to solve the problem must scale as a polynomial in the size of the problem. For example, if you have a program that converts movies of size n in time quadratic in n, that would be efficient, and the conversion task would be in P. This is of course a theoretical approximation which does not necessarily guarantee efficiency in practice. But this approximation is convenient, and meaningful when compared to seemingly exponential-time tasks such as program enumeration.
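
For concreteness, here is a toy quadratic-time routine (purely illustrative; `convert` and its frame arithmetic are made up, not a real converter): doubling the input size roughly quadruples the running time, which is the kind of scaling “power time” allows.

```python
import time

def convert(frames):
    """A toy stand-in for a converter that compares every frame to every
    other frame, so its running time grows like n**2."""
    n = len(frames)
    out = []
    for i in range(n):
        acc = 0
        for j in range(n):            # inner loop: ~n**2 steps in total
            acc += frames[i] ^ frames[j]
        out.append(acc)
    return out

for n in (500, 1000, 2000):           # doubling n roughly quadruples the time
    start = time.perf_counter()
    convert(list(range(n)))
    print(n, "frames:", round(time.perf_counter() - start, 3), "seconds")
```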

I don’t like the terminology “polynomial time.” Calling something a polynomial emphasizes that it is made of many monomials. However, the only monomial that is relevant in the definition of P is the one with the highest power. I thought of “power time.” I find it more to the point, and simpler, while preserving the initial.

NP is what you can solve in power time on a non-deterministic computer. This class captures many problems that people care about, but that are not known to be in P. To keep with the spirit of the post, I’ll mention that among these problems are appropriate generalizations of Tetris, Lemmings, and Super Mario.

The P vs. NP problem can also be presented as the difference between computing a solution (P) and checking it (NP). For example, if someone can solve a tough Tetris level, it is easy for you to check that. By contrast, if someone can solve the program-enumeration problem, it is not clear how you would check that. Indeed, that problem is not believed to be in NP.
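
Here is that contrast in code, using subset sum as a stand-in puzzle (a Tetris level would not fit in a few lines): checking a proposed solution is one quick pass, while the obvious way to find one tries exponentially many candidates. The function names and the particular instance are mine, purely for illustration.

```python
from itertools import combinations

def check(numbers, target, indices):
    """Verifying a proposed solution: a single pass over the candidate."""
    return len(set(indices)) == len(indices) and \
           sum(numbers[i] for i in indices) == target

def solve(numbers, target):
    """Finding a solution by brute force: up to 2**n subsets to try."""
    for r in range(len(numbers) + 1):
        for indices in combinations(range(len(numbers)), r):
            if check(numbers, target, indices):
                return indices
    return None

nums = [31, 7, 12, 25, 3, 18, 44, 9]
print(solve(nums, 64))                 # slow search: prints (0, 2, 4, 5)
print(check(nums, 64, (0, 2, 4, 5)))   # fast check: 31+12+3+18 = 64 -> True
```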

I have mixed feelings about this presentation. I do find it catchy, but I don’t like that it suggests that, to check solutions, we use the same variety of algorithms that we use to compute them. Actually, solutions can be encoded so that the verification is extremely simple, as has been known since the original formulations. On the other hand, the length of the solution does increase with this encoding, and a variety of algorithms may be recovered from the encoding.

Do I think that P is different from NP? I feel that I have no idea. In the few years that I have been following this research I have already seen several efficient algorithms for tasks that at first sight looked impossible, and in some cases were conjectured to be impossible. Here are two:

1. Suppose your computer has no memory (i.e., it only has a constant number of registers and the program counter). Can you determine in power time whether a·b > c, for integers a, b, and c with many digits? This is actually possible, by Barrington’s theorem.

2. Is there a pseudorandom generator where each output bit depends on only five input bits? This is also believed to be possible, by a result of Applebaum, Ishai, and Kushilevitz.

Both relate to restricted computational models, which is not that surprising, since we do not understand general computation. However, by analogy, should there not be similar surprises for P? Or will the only surprises in computational classes be of the type that something in P is shown to be doable even in a more restricted computational model?

8 thoughts on “What is the P vs. NP problem? My two cents”

  1. I think the reason for using polynomials instead of just monomials, as you suggest, even though they yield the same class P, is that the former have better closure properties. For example, polynomials are closed under addition while monomials are not.

    1. Thanks for the comment.

      Monomials a·n^b are closed under addition because of course a·n^b + a’·n^b = (a+a’)·n^b. Polynomials obviously give sharper closure properties (i.e., I don’t need to “waste” another term n^b if I just need +1), but I know of very few cases where this matters. Do you have in mind specific applications where this matters?

      1. Both n and n^2 are monomials, but their sum n + n^2 is not. However, n + n^2 ≤ 2n^2, which is a monomial. This comes up when running two algorithms in series. If the runtime of each algorithm is bounded by a polynomial, then the runtime of the resulting algorithm is clearly bounded by the sum of these polynomials, which is a polynomial. If you use monomials instead, then your argument requires one more step, namely an inequality like the one above.

  2. Great post. It is hard to imagine a discussion of the P vs. NP problem avoiding the overused points and clichés, let alone offering a fresh perspective, but I think this post manages to do just that.

    Personally, the most reasonable meta-heuristic for believing P != NP, for me, is the fact that we know random functions are hard to compute (in the sense of requiring large circuits) … So it is hard to believe most reasonable functions we can come up with (anything in the polynomial hierarchy) actually belong to this little corner of problems with smallish circuits.

    1. Thanks!

      But the functions we look for are explicit and so they also constitute a “little corner of problems.” That’s why I don’t find this counting argument convincing.

  3. A virtually identical question was asked recently on cs.se…. Have you read Impagliazzo’s worlds paper (cited there)? It is undeservedly obscure…. P.S. You might also like Fortnow’s new book on the subject….. It seems there is little scientific agreement and writing on what P = NP would mean practically….
