I am teaching and writing some notes on complexity. I hope they will become a book, so they are organized as such. The notes will be serialized on this blog, and you can find the latest version of the book in pdf here, which has a better rendering of tables, pictures, comic strips, etc.
0.1 Conventions, choices, and caveats
I write this section before the work is complete, so some of it may change.
This book covers basic results in complexity theory and can be used for a course on the subject. At the same time, it is perhaps sui generis in that it also tells a story of the quest for impossibility results, includes some personal reflections, and makes some nonstandard choices about topics and technical details. Some of this is discussed next.
To test your understanding of the material…
this book is interspersed with mistakes, some subtle, some blatant, some not even mistakes but worrying glimpses into the author's mind. Please send all bug reports and comments to (my five-letter last name)@ccs.neu.edu to be included in the list of heroes.
The $c$ notation.
The mathematical symbol $c$ has a special meaning in this text. Every occurrence of $c$ denotes a real number, and each occurrence may denote a different number. There exist choices for these numbers such that the claims in this book are (or are meant to be) correct. This replaces, is more compact than, and is less prone to abuse than the big-Oh notation (sloppiness hides inside brackets).
Example 0.1. "For all sufficiently large $n$" can be written as "$n \ge c$."
"For every $\epsilon > 0$ and all sufficiently large $n$" can be written as "$n \ge c_\epsilon$."
The following are correct statements:
"It is an open problem to show that some function in NP requires circuits of size $cn$." At the moment of this writing, one can replace this occurrence of $c$ with a small explicit constant. Note such a claim will remain true if someone proves a stronger lower bound. One just needs to "recompile" the constants in this book.
"$c < c$", e.g. assign $1$ to the first occurrence and $2$ to the second.
Statements holding for all large enough $n$ are also of this kind, once a suitable value such as $c = 16$ is assigned.
The following are not true:
"$n \le c$ for every $n$". No matter what we assign to $c$, we can pick a larger $n$. Note the assignment to $c$ is absolute, independent of $n$.
More generally, when subscripted, as in $c_\epsilon$, this notation indicates a function of the subscript. There exist choices for these functions such that the claims in this book are (or are meant to be) correct. Again, each occurrence can indicate a different function.
For the reader who prefers the big-Oh notation, a quick and dirty fix is to replace every occurrence of $c$ in this book with $O(1)$.
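As a worked illustration of this translation (my example, not one from the text), the same bound can be written in both notations:

```latex
% c-notation: some assignment of constants makes the statement true,
% e.g. assign 1 to the first occurrence of c and 2 to the second:
\forall n \ge c:\quad n^2 + n \le c \cdot n^2
% big-Oh notation: the constant hides inside the brackets:
n^2 + n = O(n^2)
```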
The alphabet of TMs.
I define TMs with a fixed alphabet. This choice slightly simplifies the exposition (one parameter vs. two) and is more in line with common experience (one more often increases the length of a program than its alphabet). This choice affects the proof of Theorem ??, but it isn't clear that the details are any worse.
Partial vs. total functions (a.k.a. promise problems).
Recall that promise problems offer the most direct way of formulating natural computational problems. […] In spite of the foregoing opinions, we adopt the convention of focusing on standard decision and search problems. [2]
I define complexity w.r.t. partial functions, whereas most texts consider total functions; that is, we consider computing functions with arbitrary domains rather than functions defined on every possible string. This setting is sometimes called "promise problems." It affects many things, for example hierarchies (Exercise ??).
References and names.
I decided to keep references in the main text to a minimum, even at the cost of a long list of items "Result X is due to Y" later, relegating discussion to the bibliographic notes. I have also decided not to spell out the names of authors, which is increasingly awkward: central results, such as the PCP theorem, are coauthored by five or more people.
Polynomial.
It is customary in complexity theory to bound quantities by a polynomial, as in polynomial time, when in fact the only term that matters is the leading one. This also lends itself to confusion, since polynomials with many terms are useful for many other things. I use "power" instead of "polynomial," as in power time.
Random-access machines.
"Random access" also leads to strange expressions like "randomized random-access" [1].
Reductions.
They are presented as implications.
Randomness and circuits.
While randomness and circuits are everywhere in current research, and seem to be on everyone’s mind, they are sometimes still relegated to later chapters, almost as an afterthought. This book starts with them right away, and attempts to weave them through the narrative.
Data structures.
Their study, especially negative results, squarely belongs to complexity theory. Yet data structures are strangely omitted from common textbooks. Results on data structures even tend to miss the main venues for complexity theory, landing instead on more algorithmic venues! We hope this book helps to reverse this trend.
Algorithms & Complexity
…are of course two sides of the same coin. The rule of thumb I follow is to present algorithms that are surprising and challenge our intuition of computation, and that ideally match lower bounds, even though they may not be immediately deployable.
Exercises and problems.
Exercises are interspersed within the narrative and serve as “concept check.” They are not meant to be difficult or new, though some are. Problems are collected at the end and tend to be harder and more original, though some are not.
Summary of some terminological and notational choices.
Here it is:

Some other sources             | this book                        | acronym
-------------------------------|----------------------------------|--------
Turing machine                 | tape machine                     | TM
random-access machine          | rapid-access machine             | RAM
polynomial time                | power time                       | P
mapping reduction (sometimes)  | reduces to in means              |
Extended Church-Turing thesis  | Power-time computability thesis  |
pairwise independent           | pairwise uniform                 |
FP, promise-P                  | P                                |
TM with any alphabet           | TM with fixed alphabet           |
classes have total functions   | classes have partial functions   |
Copyright 2022-present by Emanuele Viola
Chapter 1
A teaser
Consider a computer with three bits of memory. There's also a clock, beating. In one clock cycle the computer can read one bit of the input and update its memory arbitrarily, based on the value of that bit and the current memory, or stop and return a value.
Let's give a few examples of what such a computer can do.
First, it can compute the And function on $n$ bits $x_1, \ldots, x_n$:
For $i = 1, 2, \ldots, n$: read $x_i$; if $x_i = 0$ return 0. If no bit was 0, return 1.
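For concreteness, here is a Python sketch of this one-pass program (the function name and the list-of-bits input are my illustrative choices, not part of the model):

```python
def and_program(x):
    """One-pass And: one clock cycle per input bit, no memory bits needed."""
    for bit in x:
        # Read the next bit; if it is 0 we can stop and return 0 right away.
        if bit == 0:
            return 0
    # Every bit was 1.
    return 1
```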
We didn't really use the memory. Let's consider a slightly more complicated example. A word is a palindrome if it reads the same both ways, like racecar, non, anna, and so on. Similarly, examples of palindrome bit strings are 0110, 101, and so on.
Let's show that the computer can decide quickly, in about $n$ steps, if a given string is a palindrome:
For $i = 1, 2, \ldots, \lfloor n/2 \rfloor$: read $x_i$ and write it in a memory bit $m$; then read $x_{n+1-i}$.
If $x_{n+1-i} \ne m$ return 0. If no mismatch was found, return 1.
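The two-cycles-per-position loop above might be sketched in Python as follows (a plain simulation with illustrative names, not the formal model):

```python
def palindrome_program(x):
    """Check whether the bit string x is a palindrome using one memory bit m."""
    n = len(x)
    for i in range(n // 2):
        # Cycle 1: read x[i] and store it in the single memory bit m.
        m = x[i]
        # Cycle 2: read the mirror bit x[n-1-i]; which bit we read depends
        # only on the clock and the input length, not on the input itself.
        if x[n - 1 - i] != m:
            return 0
    return 1
```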
That was easy. Now consider the Majority function on $n$ bits, which is $1$ iff the sum of the input bits is at least $n/2$, and $0$ otherwise. Majority, like any other function on $n$ bits, can be computed on such a computer in time exponential in $n$. To do that, you do a pass on the input and check if it's all zeroes, using the program for And given above with the roles of 0 and 1 swapped. If it is, return 0. If it is not, you do another pass, now checking if it's all zeroes except that the last bit is 1. If it is, return 0. You continue this way until you have exhausted all the possible inputs on which Majority equals 0. If you never returned 0 you can now safely return 1.
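To make the exponential-time strategy concrete, here is a hypothetical Python simulation. It enumerates every input on which Majority is 0 and compares the actual input against each one with a fresh pass; Majority is taken to be 1 when the sum of the bits is at least $n/2$. (The real three-bit machine would derive each candidate from the clock rather than store it.)

```python
from itertools import product

def exhaustive_majority(x):
    """Exponential-time Majority: one full pass over x per candidate input."""
    n = len(x)
    for candidate in product([0, 1], repeat=n):
        # Only candidates on which Majority equals 0 need to be checked.
        if 2 * sum(candidate) < n:
            # One pass over the input: does x equal this candidate, bit by bit?
            if all(x[i] == candidate[i] for i in range(n)):
                return 0
    # x matched no Majority-0 input, so Majority(x) must be 1.
    return 1
```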
As we said, this works for any function, but it's terribly inefficient. Can we do better for Majority? Can we compute it in time which is just a power of $n$?
Exercise 1.1. Convince yourself that this is impossible. Hint: If you start counting bits, you’ll soon run out of memory.
If you solved the exercise, you are not alone.
And yet, we will see the following shocking result:
Shocking theorem:
Majority can be computed on such a computer in time which is a power of $n$.
And this is not a trick tailored to majority. Many other problems, apparently much more complicated, can also be solved in the same time.
But, there’s something possibly even more shocking.
Shocking situation:
It is consistent with our state of knowledge that every problem with a "textbook algorithm" can be solved in power time on such a computer! Nobody can disprove that. (Textbook algorithms include sorting, max-flow, dynamic programming algorithms like longest common subsequence, graph problems, numerical problems, etc.)
The Shocking theorem gives some explanation for the Shocking situation. It will be hard to rule out efficient programs in this model, since they are so powerful and counterintuitive. In fact, we will see later that this can be formalized. Basically, we will show that the model is so strong that it can compute functions that provably escape the reach of current mathematics… if you believe certain things, like that it's hard to factor numbers. Here we enter some of the mysticism that surrounds complexity theory, where different beliefs and conjectures are pitted against each other in a battle for ground truth.
In the algorithm for checking palindromes, where is $n-i$ computed and stored? Given $i$ you can refer to $x_{n-i}$, but you can't store that number?
Which bit to read depends on the clock and on the input length only. Rereading, it’s good to clarify this. Thanks! Let me know if it makes more sense now.