Getting started with OpenGL and C++

Below is a template that shows a fractal plant which you can modify with the keys z, w, s. Getting this to work was not straightforward, but the idea is that now you can just have fun and start your videogame. The code is not meant to be optimized at all, just simple, and it is pretty much self-explanatory: you can replace the call to drawFrac in display to show whatever you want, and you handle key strokes in the idle function (a small example of such a modification appears after the template). I only tested it on Linux.

/*
This is a template to get started with OpenGL (which needs to be installed)
To compile: g++ GLtemplate.c++ -o GLtemplate -lglut -lGLU -lGL
To run: ./GLtemplate

It shows a fractal plant that you can modify with the keys z,w,s

*/

#include <GL/glut.h>
#include <stdio.h>
#include <math.h>

GLfloat userAngle = M_PI/2, userLength = 0.5;

//Keyboard code.  keyStates[x] is true if x is pressed.

bool keyStates[256];     //Key state values

void keyPressed (unsigned char key, int x, int y) { keyStates[key] = true;  }
void keyUp      (unsigned char key, int x, int y) { keyStates[key] = false; }

void idle() {
  if (keyStates['z']) {userAngle += 0.01;}
  if (keyStates['w']) {userLength += 0.01;}
  if (keyStates['s']) {userLength -= 0.01; if (userLength < 0) userLength = 0;}
}

/* Draws a plant from x,y with first piece length l, and angle a
The window has coordinates from (-1,-1) to (1,1).  The center is 0,0.
*/

void drawFrac(GLfloat x,GLfloat y,GLfloat l, GLfloat a) {
  if ( l < 0.001 )
    return;

  glColor3f(0, l*30, 0);
  glLineWidth(l*10);  //Must be before glBegin(GL_LINES)

  glBegin(GL_LINES);

  glVertex2d(x,y);
  glVertex2d(x+cos(a)*l,y+sin(a)*l);

  glEnd();

  drawFrac(x+cos(a)*l*0.3,y+sin(a)*l*0.3,l*0.3,a+M_PI/4);
  drawFrac(x+cos(a)*l*0.6,y+sin(a)*l*0.6,l*0.4,a-M_PI/4);

  drawFrac(x+cos(a)*l,y+sin(a)*l,l*0.5,a+M_PI/4);
  drawFrac(x+cos(a)*l,y+sin(a)*l,l*0.5,a-M_PI/4);
}

//This is the function that draws everything on the screen
void display() {
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); //Clear screen

  /* Draw whatever you need here */
  drawFrac(0,0,userLength,userAngle);

  glutSwapBuffers();
  glutPostRedisplay();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);  //Double buffering, since display() calls glutSwapBuffers()
    glutInitWindowSize(1000, 1000);
    glutInitWindowPosition(0,0);
    glutCreateWindow("GLtemplate");
    glutDisplayFunc(display);
    glutIdleFunc(idle);
    glutKeyboardUpFunc(keyUp);
    glutKeyboardFunc(keyPressed);

    glutMainLoop();
    return 0;
}
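
For example, here is a minimal sketch of how you might modify the template: a square that you move left and right with the keys a and d. The names drawSquare and userX are made up for illustration; the sketch reuses keyStates and main from the template, so you would drop it in, replacing the plant globals and the idle and display functions above.

GLfloat userX = 0;   //Horizontal position of the square, moved with a/d

//Draws a square of side 0.2 centered at (x,0); window coordinates go from (-1,-1) to (1,1)
void drawSquare(GLfloat x) {
  glColor3f(1, 1, 1);
  glBegin(GL_LINE_LOOP);
  glVertex2d(x-0.1, -0.1);
  glVertex2d(x+0.1, -0.1);
  glVertex2d(x+0.1,  0.1);
  glVertex2d(x-0.1,  0.1);
  glEnd();
}

void idle() {
  if (keyStates['a']) {userX -= 0.01;}
  if (keyStates['d']) {userX += 0.01;}
}

void display() {
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); //Clear screen
  drawSquare(userX);
  glutSwapBuffers();
  glutPostRedisplay();
}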

Myth creation: The switching lemma

The history of science is littered with anecdotes about misplaced credit. Because it does not matter if it was A or B who did it; it only matters if it was I or not I. In this spirit I am starting a series of posts about such misplaced credit, which I hesitated before calling more colorfully “myth creation.” Before starting, I want to make absolutely clear that I am in no way criticizing the works themselves or their authors. In fact, many are among my favorites. Moreover, at least in the examples I have in mind right now, the authors do place their work in the appropriate context with the use of citations etc. My only point is the credit that the work has received within and without our community (typically due to inertia and snowball effects rather than anything else).

Of course, at some level this doesn't matter. You can call Chebyshev's polynomials rainbow sprinkles and the math doesn't change. And yet at some other level maybe it does matter a little, for science isn't yet a purely robotic activity. With these posts I will advertise unpopular points of view that might be useful, for example to researchers who are junior or from different communities.

The switching lemma

I must admit I had a good run — Johan Hastad (privately to the blogger)

Random restrictions have been used in complexity theory since at least the 60's [Sub61]. The first dramatic use in the context of AC0 is due to [FSS84, Ajt83]. These works proved a switching lemma: the amazing fact that a DNF gets simplified by a random restriction to the point that it can be written as a CNF, so you can collapse layers and induct. (An exposition is given below.) Using it, they proved super-polynomial lower bounds for AC0. The proof in [FSS84] is very nice, and if I want to get a quick intuition of why switching is at all possible, I often go back to it. [Ajt83] is also a brilliant paper: long, unavailable online for free, and filled with a logical notation which makes some people twitch. The first symbol of the title says it all, and may be the most obscene ever chosen:

\begin{aligned} \Sigma _{1}^{1}. \end{aligned}

Subsequently, [Yao85] proved exponential lower bounds of the form 2^{n^{c}}, with a refined analysis of the switching lemma. The bounds are tight, except for the constant c which depends on the depth of the circuit. Finally, the star of this post [Has86, Has87] obtained c=1/(depth-1).

Yao’s paper doesn’t quite state that a DNF can be written exactly as a CNF, but it states that it can be approximated. Hastad’s work is the first to prove that a DNF can be written as a CNF, and in this sense his statement is cleaner than Yao’s. However, Yao’s paper states explicitly that a small circuit, after being hit by a restriction, can be set to constant by fixing few more bits.

The modern formulation of the switching lemma says that a DNF can be written as a shallow decision tree (and hence a small CNF). This formulation in terms of decision trees is actually not explicit in Hastad's work. Beame, in his primer [Bea94], credits Cai with this idea and mentions that several researchers noted that Hastad's proof works in this way.

Another switching lemma trivia is that the proof in Hastad’s thesis is actually due to Boppana; Hastad’s original argument — of which apparently no written record exists — was closer to Razborov’s later proof.

So, let's recap. Random restrictions are already in [Sub61]. The idea of switching is already in [FSS84, Ajt83]. You already had three analyses of these ideas, two giving superpolynomial lower bounds and one [Yao85] giving exponential. The formulation in terms of decision trees isn't in [Has87], and the proof that appears in [Has87] is due to Boppana.

Still, I would guess [Has87] is more well known than all the other works above combined. [Yao85] did have a following at the time — I think it appeared in the pop news. But hey — have you ever heard of Yao’s switching lemma?

The current citation counts offer mixed support for my thesis:

FSS: 1351

Y: 732

H – paper “Almost optimal…”: 867

H – thesis: 582

But it is very hard to use citation information. The two H citations overlap, and papers are cited for various reasons. For example FSS got a ton of citations for the connection to oracles (which has nothing to do with switching lemmas).

Instead it’s instructive to note the type of citations that you can find in the literature:

Hastad’s switching lemma is a cornerstone of circuit complexity [No mention of FSS, A, Y]

Hastad's Switching Lemma is one of the gems of computational complexity [It notes below, in passing, that it builds on FSS, A, Y]

The wikipedia entry is also telling:

In computational complexity theory, Hastad’s switching lemma is a key tool for proving lower bounds on the size of constant-depth Boolean circuits. Using the switching lemma, Johan Hastad (1987) showed that... [No mention of FSS,A,Y]

I think that 99% of the contribution of this line of research is the amazing idea that random restrictions simplify a DNF so that you can write it as a CNF and collapse. 90% of the rest is analyzing this to get superpolynomial lower bounds. And 90% of whatever is left is analyzing this to get exponential lower bounds.

Going back to something I mentioned at the beginning, I want to emphasize that during talks Hastad makes a point of reminding the audience that the idea of random restrictions is due to Sipser, and of acknowledging Boppana's contribution. And I also would like to thank him for his help with this post.

OK — so maybe this is so, but it must then be the case that [Has87] is the final word on this stuff, like the ultimate tightest analysis that kills the problem. Actually, it is not tight in some regimes of interest, and several cool works of past and recent times address that. In the end, I can only think of one reason why [Has87] entered the mythology in ways that other works did not, the reason that I carefully sidestepped while composing this post: å.

Perhaps one reason behind the aura of the switching lemma is that it’s hard to find examples. It would be nice to read: If you have this extreme DNF here’s what happens, on the other hand for this other extreme DNF here’s what happens, and in general this always works and here’s the switching lemma. Examples are forever – Erdos. Instead the switching lemma is typically presented as blam!: an example-free encoding argument which feels deus ex machina, as in this crisp presentation by Thapen. For a little more discussion, I liked Bogdanov’s lecture notes. Next I give a slightly different exposition of the encoding argument.

The simplest case: Or of n bits.

Here the circuit C is simply the Or of n bits x_{1},x_{2},\ldots ,x_{n}. This and the next case can be analyzed in more familiar ways, but the benefit of the encoding argument presented next is that it will extend to the general case more easily… arguably. Anyway, it’s also just fun to learn a different argument.

So, let’s take a random restriction \rho with exactly s stars. Some of the bits may become 0, others 1, and others yet may remain unfixed, i.e., assigned to stars. Those that become 0 you can ignore, while if some become 1 then the whole circuit becomes 1.

We will show that the number of restrictions for which the restricted circuit C|_{\rho } requires decision trees of depth \ge d is small. To accomplish this, we are going to encode/map such restrictions using/to a restriction… with no stars (that is, just a 0/1 assignment to the variables). The gain is clear: just think of a restriction with zero stars versus a restriction with one star. The latter are more numerous by a factor of about n, the number of variables.

A critical observation is that we only want to encode restrictions for which C|_{\rho } requires large depth. So \rho does not map any variable to 1, for else the Or is 1 which has decision trees of depth 0.

The way we are going to encode \rho is this: Simply replace the stars with ones. To go back, replace the ones with stars. We are using the ones in the encoding to “signal” where the stars are.

Hence, the number of bad restrictions is at most 2^{n}, which is tiny compared to the number \binom {n}{s}2^{n-s} of restrictions with s stars.
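
To see how tiny, note that \binom{n}{s} \ge (n/s)^{s}, so the fraction of restrictions that are bad is at most

\begin{aligned} \frac{2^{n}}{\binom{n}{s}2^{n-s}} = \frac{2^{s}}{\binom{n}{s}} \le \left(\frac{2s}{n}\right)^{s}. \end{aligned}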

The medium case: Or of functions on disjoint inputs.

Instead of working with DNFs, I will consider a circuit C which is the Or of arbitrary functions f_{i} each on w bits. You can immediately get this formulation from the usual one for DNFs, but I still find it a little useful since otherwise you might think there is something special about DNFs. What is special is that you take the Or of the functions, and we will exploit this again shortly.

In this warm-up case, we start with functions on disjoint inputs. So, again, let’s take a random restriction \rho with exactly s stars. Some of the functions may become 0, others 1, and others yet may remain unfixed. Those that become 0 you can ignore, while if some become 1 then the whole circuit becomes 1.

As before, we will show that the number of restrictions for which the restricted circuit C|_{\rho } requires decision trees of depth \ge d is small. To accomplish this, we are going to encode/map such restrictions using/to a restriction with just s-d stars, plus a little more information. As we saw already, the gain in reducing the number of stars is clear. In particular, standard calculations show that saving d stars reduces the number of restrictions by a factor O(s/n)^{d}. The auxiliary information will give us a factor of w^{d}, leading to the familiar bound O(ws/n)^{d}.
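
In case you want the standard calculation spelled out: the number of restrictions with s-d stars, divided by the number with s stars, is

\begin{aligned} \frac{\binom{n}{s-d}2^{n-(s-d)}}{\binom{n}{s}2^{n-s}} = 2^{d}\prod_{i=0}^{d-1}\frac{s-i}{n-s+d-i} \le \left(\frac{4s}{n}\right)^{d} \end{aligned}

assuming say s \le n/2; multiplying by the w^{d} choices of auxiliary information then gives the bound O(ws/n)^{d}.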

As before, recall that we only want to encode restrictions for which C|_{\rho } requires large depth. So no function in C|_{\rho } is 1, for else the circuit is 1 and has decision trees of depth 0. Also, you have at least d stars among inputs to functions that are unfixed (i.e., not even fixed to 0), for else again you can compute the circuit reading fewer than d bits. Because the functions are unfixed, there is a setting for those d stars (and possibly a few more stars – that would only help the argument) that makes the corresponding functions 1. We are going to pick precisely that setting in our restriction \rho ' with s-d stars. This allows us to “signal” which functions had inputs with the stars we are saving (namely, those that are the constant 1). To completely recover \rho , we simply add extra information to indicate where the stars were. The saving here is that we only have to say where the stars are among w symbols, not n.

The general case: Or of functions on any subset of w bits.

First, the number of functions does not play a role, so you can think you have functions on any possible subset of w bits, where some functions may be constant. The idea is the same, except we have to be slightly more careful because when we set values for the stars in one function we may also affect other functions. We simply fix one function at a time. Specifically, starting with \rho , consider the first function f that's not made constant by \rho . So the inputs to f have some stars. As before, let us replace the stars with constants that make the function f equal to the constant 1, and append the extra information that allows us to recover where these stars were in \rho .

We'd like to repeat the argument. Note however we only have guarantees about C|_{\rho }, not C|_{\rho } with some stars replaced with constants that make f equal to 1. We also can't just jump to the 2nd function that's not constant in C|_{\rho }, since the “signal” fixing for that might clash with the fixing for the first – this is where the overlap in inputs makes things slightly more involved. Instead, because C|_{\rho } required decision tree depth at least d, we note there has to be some assignment to the m stars in the input to f (m being the number of such stars) so that the resulting, further restricted circuit still requires decision tree depth \ge d-m (else C|_{\rho } has decision trees of depth <d). We append this assignment to the auxiliary information and we continue the argument using the further restricted circuit.

References

[Ajt83]    Miklós Ajtai. \Sigma_{1}^{1}-formulae on finite structures. Annals of Pure and Applied Logic, 24(1):1–48, 1983.

[Bea94]   Paul Beame. A switching lemma primer. Technical Report UW-CSE-95-07-01, Department of Computer Science and Engineering, University of Washington, November 1994. Available from http://www.cs.washington.edu/homes/beame/.

[FSS84]   Merrick L. Furst, James B. Saxe, and Michael Sipser. Parity, circuits, and the polynomial-time hierarchy. Mathematical Systems Theory, 17(1):13–27, 1984.

[Has86]   Johan Håstad. Almost optimal lower bounds for small depth circuits. In Juris Hartmanis, editor, Proceedings of the 18th Annual ACM Symposium on Theory of Computing, May 28-30, 1986, Berkeley, California, USA, pages 6–20. ACM, 1986.

[Has87]   Johan Håstad. Computational limitations of small-depth circuits. MIT Press, 1987.

[Sub61]   B. A. Subbotovskaya. Realizations of linear functions by formulas using +, *, -. Soviet Mathematics-Doklady, 2:110–112, 1961.

[Yao85]   Andrew Yao. Separating the polynomial-time hierarchy by oracles. In 26th IEEE Symp. on Foundations of Computer Science (FOCS), pages 1–10, 1985.

Talk in two days on correlation bounds

In two days, this Thursday at 12PM EST, I am giving a talk on Correlation bounds and all that.  The talk is hosted by Igor Oliveira and is open to everybody.  Follow this link to join.

In the talk I will discuss correlation bounds for polynomials, and give full proofs of, or touch on, three recent works of mine:

  1. Fourier conjectures, correlation bounds, and Majority (link)
  2. On correlation bounds against polynomials, with Peter Ivanov and Liam Pavlovic (link)
  3. Fooling polynomials using invariant theory, with Harm Derksen (upcoming)

The ab-normal reach of norm-al proofs in non-abelian Fourier analysis

Fourier analysis over (not necessarily abelian) groups is a cool proof technique that yields many results of interest to theoretical computer science. Often the goal is to show “mixing” or “pseudo/quasi randomness” of appropriate distributions. This post isn’t about the formal statements or applications or proofs or even the credit of these results; for some of this you can see e.g. the references below, or a survey of mine [Vio19], or a survey [Gow17] by Gowers.

Instead this post is about an uncanny development of the proofs of some of these results. Whereas the original proofs were somewhat complicated, in some cases involving heavy mathematical machinery, later there emerged proofs that I propose to call norm-al (or normal for simplicity) because they only involve manipulations that can be cast as norm inequalities, such as Cauchy-Schwarz. Normal proofs I view as just a little up in complexity from proofs that are simply opening up definitions. They can involve the latter, or norm inequalities, and they need not be tight. An example of an ab-norm-al proof would be one that involves induction or iterative arguments, or probabilistic/double-counting/pigeon-hole methods. Making this a little more precise seems to require a lot of discussion, and may not even be possible, so let me stop here and move on with the examples of the proofs which became norm-al. They all involve quasirandom groups [Gow08], but even non-quasirandom groups mix in a certain sense [GV22] and the proofs there are again norm-al (it’s just that I don’t know of earlier proofs in this case).

The first example concerns the quintessential mixing result in this area: If you've got independent distributions X and Y, and if each distribution is uniform over, say, a constant fraction of the group, then the product XY (a.k.a. convolution, a.k.a. sample from each and output the product) is close to uniform over the entire group. A norm-al proof appears in [Gow17], which also contains pointers to previous proofs.

The second is mixing of three-term progressions. A norm-al proof appears in [BHR22]. From their abstract: “Surprisingly, unlike the proofs of Tao and Peluse, our proof is elementary and only uses basic facts from nonabelian Fourier analysis.”

The third is interleaved mixing, see a recent preprint with Derksen.

Moreover, in the second and third example the proofs are not only simpler, they are also more general in that they apply to any quasirandom group whereas previous proofs only applied to prominent special cases.

Why is all of this happening? One can only speculate that the reach of norm-al proofs in non-abelian Fourier analysis is still just emerging.

References

[BHR22]   Amey Bhangale, Prahladh Harsha, and Sourya Roy. Mixing of 3-term progressions in quasirandom groups. In Mark Braverman, editor, ACM Innovations in Theoretical Computer Science conf. (ITCS), volume 215 of LIPIcs, pages 20:1–20:9. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022.

[Gow08]    W. T. Gowers. Quasirandom groups. Combinatorics, Probability & Computing, 17(3):363–387, 2008.

[Gow17]    W. T. Gowers. Generalizations of Fourier analysis, and how to apply them. Bull. Amer. Math. Soc. (N.S.), 54(1):1–44, 2017.

[GV22]    W. T. Gowers and Emanuele Viola. Mixing in non-quasirandom groups. In ACM Innovations in Theoretical Computer Science conf. (ITCS), 2022.

[Vio19]    Emanuele Viola. Non-abelian combinatorics and communication complexity. SIGACT News, Complexity Theory Column, 50(3), 2019.

Q&A

  1. What led you to academia?

Much to the shock of my fellow students, I have wanted to be a professor since my freshman year in Italy. The pursuit of knowledge and the lifestyle freedom really appealed to me. Naturally, I did not have a clue; but I would do it all over again, changing nothing… alright, almost nothing.

  2. How did you become interested in theoretical computer science?

I have been totally hooked on computers since I was eight, and I also worked on videogames in my early teens. In high school I didn't mind math, but it was not my passion: it was mostly calculus, which I thought was “only good for physics.” In college I had a first glimpse of theoretical computer science, and I was struck. The field was asking the right questions: What is a proof, what is knowledge, what is randomness, when you are short on time?

  3. What are you currently working on?

The most famous open problem in the field is the P vs. NP problem which can be phrased as asking to prove that Super Mario levels, properly constructed, are hard to beat. The general challenge is to exhibit problems that require a lot of time on computers. Much of my research concerns proving such limitations for restricted models of computers, for example computers with no memory. Even understanding such restricted models has proved enormously difficult, and a good reason for that is that they are extremely powerful, much more so than anyone anticipated.

Lichess

After my previous post, a reader (who amazingly has a nearly identical playing routine) pointed me to it. It’s great! Much better than chess.com.

No ads, no subscriptions; but open-source and passion.

Are some features reserved to Patrons? No, because Lichess is entirely free, forever, and for everyone. That’s a promise.

It doesn’t get better than that, in this world; and that’s why I love computers and the community surrounding them.

Another piece of good news is that I got tired of 5 | 5! The games are just too slow, and my chess isn't even worth this much of my time. I am now playing 3 | 2. The adrenaline kick I get with every match is fun. Also I got much stricter with my routine, and I don't lose more than once a day.

Here’s my profile. It’s an anagram of my unpronounceable name which can be loosely translated as …and I loved the clouds.

IMPORTANT: Excessive activity and blunders merely reflect my generous sharing of credentials.

And then I thought, why don’t we automatically analyze existing games to find interesting situations, and present them to the user as a puzzle? Naturally, the beauty — and dismay — of living in the AI (after-internet) era is that… it’s already done.

OK, time for my daily fix.

FOCS 2022

I am on the FOCS 2022 Program Committee, and I am overjoyed that the PC meeting will be virtual. Hopefully, the days are over when the program chair can brush aside all cost-benefit considerations, impose their backyard on far-away scholars who need service items, and then splash their offspring at the welcome party.

I am also overjoyed that we will be implementing double-blind reviews. This issue has been discussed and ridiculed at length. Admittedly, it makes it harder to adhere to Leonid Levin’s 1995 influential STOC Criteria. For example, if a reviewer wanted to trash a paper based on the fact that the authors are not in the position to judge their own work, now they’ll have to check online for talks or preprints to know who the authors are. Given the volume of reviews, it’s reasonable to expect that in some cases the reviewer won’t be able to conclusively exclude that a letter-writer is among the authors. In such a situation they can resort to writing a very long, thorough, and competent review whose most significant digit is the STOC/FOCS death sentence: weak accept.

No, I actually do have something more constructive to say about this. I was — as they say — privileged to serve on many NSF panels. As an aside, it’s interesting that there the track-record of the investigators is a key factor in the decision; in fact, according to many including myself, it should carry even more weight, rather than forcing investigators to fill pages with made-up directions most of which won’t pan out. But that’s another story; what is relevant for this post is that each panel begins with a quick “de-biasing” briefing, which I actually enjoy and from which I learnt something. For example, there’s a classic experiment where the ratio of females hired as musicians increases if auditions hide the performer behind a tent and make them walk in on carpet so you can’t tell what shoes they are wearing. Similar experiments exist that hide names from the folders of applicants, etc. What I propose is to do a similar thing when reviewing papers. That is, start with a de-biasing briefing: tell reviewers to ask themselves whether their attitude towards a paper would be different if:

  1. The authors of this paper/previous relevant work were ultra-big shots, or
  2. The authors of this paper/previous relevant work were hapless nobodies, or
  3. The proofs in this paper could be simplified dramatically to the point that even I understand them, or
  4. This result came with a super-complicated proof which I can’t even begin to follow, or

What other questions would be good to ask?

The story of stuff, and installing Linux on a 32-bit computer from 15 years ago

If you haven't seen it, now is the time to drop everything you are doing and watch The story of stuff. Computer scientists like myself will latch onto the scene where the old computer which looks like a washing machine is trashed because a tiny component doesn't fit anymore. In this spirit, I am happy to announce that after substantial challenges I was able to install Linux on a 32-bit laptop from about 15 years ago. First I tried Ubuntu, but I was saddened to learn that they dropped support for 32-bit platforms. Luckily, Debian still supports it, and will for a number of years, so I went for it. The incidents I had during the installation include missing non-free firmware for the wifi, which I had to fetch and stick in random places on the USB stick, and an inexplicable failure of the base installation, which can be fixed by mounting the USB stick during it.

When you say these things, you get the familiar shrug which means “In the same time you spent doing this I can write a grant proposal and make enough money to buy two new machines.” While this is true, it does not mean it is the right thing to do. We should strive to prolong the life of computers and their components indefinitely, instead of taking pride in showing off the latest gadget purchase. I am writing this post from an Ubuntu system installed on a 2008 desktop.

Uncharacteristically, I see a bright future in this domain. We can store drivers for every hardware configuration ever assembled, and create easy-to-use installation packages. Moreover, all of this can be automated, though we are still at the stage where human intervention is required. And one thing that gives me a lot of hope and continues to impress me is the amazing network of people who are willing to help each other to get computers to work, especially when they run free software. You can have the most technical question and quickly get — for free — the most caring and expert advice, say by posting on stackexchange.

Terrible things about conferences, ICALP 2022 requiring submissions in LIPIcs style, more terrible things about conferences, and a new life

I'm on the ICALP 2022 program committee (PC) and was apparently the first to complain when I saw the draft call for papers; I asked that the requirement that submissions be in LIPIcs style be put to a vote. Despite substantial support among the PC for not requiring the majority of authors to squander their time formatting, in an exotic style, submissions that will be rejected, the Supreme Court blocked the vote. Their main rationale seems to be ensuring correspondence between submissions and proceedings. Ironically, at the same time other conferences are allowing authors to publish in proceedings an abridged version of the (accepted) submission, to make it easier to publish the paper in a journal (which may not want to have a full version already in a conference). We are at the nadir in history where human effort is systematically wasted to produce versions that do not even pretend to be useful for readers — only to check boxes.

When I was at the IAS and was briefed about logistics, I was stunned when they asked me if I type my own papers, or if I needed help with that. I guess the next level will be if you need help with formatting your submissions.

Picture a 100-year-old retired scientist who followed the field for 75 years, and who wants to make one more contribution. Even though they are no match for the raw power of a 15-year-old, their ability to draw connections almost makes up for it, and their perspective is invaluable to the community. This scientist, whose equipment consists of a desktop from two human generations ago, miraculously held together and kept working through sheer love and intimate knowledge, is now being asked to format their paper in LIPIcs style… or to ask someone else to do it for them (talk about fostering collaboration).

Here’s a non-exclusive list of true, terrible things about conferences, most of which were discussed earlier on this blog (see tag utopia-tcs):

  1. Requiring submissions in proceedings format.
  2. Pre-registering papers by a deadline different than the submission deadline.
  3. Requiring authors to submit a separate 2-page summary of the paper.
  4. Ultra-strict page limits, summary rejection for any deviation.
  5. Suddenly being forced on short notice to submit a dummy .pdf so that PC assignments can be made.
  6. Extending the list of topics two days before the deadline.
  7. Moving deadlines.
  8. Forcing authors to check multiple times the call for papers and/or constantly monitor email for any of these last-minute changes.
  9. In the lucky case of acceptance, following up with a barrage of emails involving minutiae such as underfull \hbox.
  10. Writing “this only takes 60 seconds.” For me (and, apparently, many others — otherwise why delay answering emails that take 10 seconds to answer?) attending to any task, following any instruction, even “blink” or “say yes” has a huge fixed cost, maybe 30 minutes. So I translate 60 seconds to 31 minutes.
  11. Hosting the conference in places which are inconvenient, for the attendees and planet earth.
  12. Hosting the conference at times which are inconvenient for the attendees.
  13. Requiring in-person PC meetings whose outcome does not justify the costs.

As usual, people in a position of power do not have much incentive to fight to change the very practices that put them in that position in the first place. But even this cynical remark doesn’t tell the whole story, since the practices got worse with time, unlike the case of journals. One impression I have is that people have gotten so used to buggy software, constant updates, and general techno slavery that they just accept all of this as another inevitable misery in life… and by far not the worst! This last point is important, as I think a seeping philosophy that permeates much of the discussion and the decisions is that much more important things are at stake, so who cares about this? If you really want to spend cycles fixing something, then what about X? (It’s fun to think where you end up if you stretch this point.) But another life is possible.

One day, I will throw in my two cents and launch a new journal. Besides being electronic and free (like ToC) it will have the following features.

  1. Papers, if accepted, will appear in the authors’ chosen format. No conversion, and no additional effort, will be required for publication. All the effort will be exclusively directed at improving the content of the paper.
  2. Submissions can be by pdf attachment or by overlay, but not only to arxiv: to *any* repository. In particular, authors will not be required to get their paper to compile on the arxiv.
  3. The identities of the reviewers will be made public (in case of acceptance). This I am still debating, but so far I am leaning towards it. The problem this is trying to address is that while you get reputation points for being on a program committee or on an editorial board, you don't get many for being an anonymous journal referee, who is the one supposedly doing the hard work of actually checking the proofs. Instead, with public identities, people can show off their hard work and their role in the community, as happens on more recent platforms like, say, stackexchange.
  4. Authors will be free to submit anonymously, or not. This choice will be made by the authors, not dictated by a committee.
  5. And perhaps the most consequential thing that I am still debating, should all submissions be public? (Maybe not.)

To finish on a high note, a great thing about conferences is the fast turn-around. It is really depressing — in other fields — to have to wait so long for any quantifiable feedback on your work, and this is definitely something which I liked about the TCS community. But, to come back to the mood of the post, this good aspect (and others) of conferences can probably be kept while rethinking the not so good. (For example, there could be a journal with deadlines similar to conferences, which is also what happens for grant proposals etc.)