If you are a resident of Newton, MA, sign this petition.

In 2016, Massachusetts voters voted to legalize marijuana. Except they didn’t know what they were voting for! In Colorado and Washington, the questions of legalization and commercialization were kept completely separate. The marijuana industry apparently learned from that and rigged the Massachusetts ballot question so that a voter legalizing marijuana would also be mandating communities to open marijuana stores. For Newton, MA, this means at least 8 stores. When voters were recently polled, it became clear that the vast majority did not know that this was at stake, and that the majority of them in fact do not want marijuana stores in their communities. For example, when I voted I didn’t know that this was at stake. Read the official Massachusetts document meant to inform voters, especially the summary on pages 12-13. There is no hint that a community would be mandated by state law to open marijuana stores unless it goes through an additional legislative crusade. Instead it says that communities can choose. I think I even read the summary back then.

Now to avoid opening stores in Newton, MA, we need a new ballot question. The City Council could have put this question on the ballot easily, but a few days ago decided that it won’t by a vote of 13 to 8. You can find the list of names of councilors and how they voted here.

Note that the council was not deciding whether or not to open stores, it was just deciding whether or not we should have a question about this on the ballot.

Instead now we are stuck doing things the hard way. To put this question on the ballot, we need to collect 6000 signatures, or 9000 if the city is completely uncooperative, a possibility which now unfortunately cannot be dismissed.

However we must do it, for the alternative is too awful. Most of the surrounding towns (Wellesley, Weston, Needham, Dedham, etc.) have already opted out. So if Newton opens stores, it basically becomes the hub for west suburban marijuana users, at least some of whom would drive under the influence of marijuana (conveniently undetectable). Proposed store locations include sites on the way to elementary schools, and there is an amusing proposal to open a marijuana store in a prime Newton Center location, after Peet’s Coffee moves out (they lost the bid to renew their lease). The owners of the space admit that people have asked them for a small grocery store instead, but they think that a marijuana store would bring more traffic and business to Newton Center. I told them to open a gym instead. That too would bring traffic and business, but in addition it would have other benefits that cannabis does not have.

Et al.

The et al. citation style favors scholars whose last names come early in the alphabet. For example, other things being equal, a last name like Aaron would circulate a lot more than Zuck. The problem is compounded by the existence of highly-cited papers which deviate from the alphabetical ordering of authors. They carry the message: order matters, and some of you can’t use this trick, vae victis!

My suggestion is to avoid et al. and instead spell out every name (as in Aaron and Zuck) or every initial (as in AZ). It isn’t perfect, but improvements like randomly permuting the order still aren’t easy to implement. The suggestion actually cannot be followed in journals like computational complexity, which force authors into an idiosyncratic style that uses et al. But it doesn’t matter too much; nobody reads papers in those formats anyway, as we have discussed several times.
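If you use biblatex, one way to implement this suggestion in your own papers is to raise the name-truncation thresholds so that every author is always listed. This is a configuration sketch, not a requirement of any particular venue; adjust it to your setup:

```latex
% Preamble sketch: never truncate author lists to "et al."
% (biblatex's defaults, maxnames=3/minnames=1, trigger truncation).
\usepackage[backend=biber,
            maxbibnames=99,  % list all authors in the bibliography
            maxcitenames=99  % ... and in textual citations
           ]{biblatex}
\addbibresource{refs.bib}
```

With plain BibTeX, choosing a bibliography style that prints full author lists (e.g. `plain`, which never abbreviates to et al.) achieves the same effect for the bibliography.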

ECCC as a zero-formatting “publisher” for CCC proceedings?

Background: After going solo, the CCC conference is using LIPIcs as a “publisher” for the papers accepted to the conference. This involves a non-trivial amount of formatting (to put the papers into the LIPIcs format) and also some monetary costs.

I would like to use the opportunity that CCC is going solo to move to a model where the “publishing” involves *zero* effort from authors. This could be a selling point for the conference, and maybe set an example for others.

Specifically, in the vein of previous posts, I propose that authors of accepted papers simply send the .pdf of their paper in whatever format they like. The CCC people take care of placing a stamp “CCC 20xx camera-ready” and putting the paper on the ECCC. Papers with indecent formatting are treated exactly as papers with indecent introductions.

Disclaimer: although I am on the reviewing board of ECCC I had no discussions with the ECCC people about this.

The main benefits of ECCC are:

– Submission is painless: just send the .pdf! Again, authors can write their paper in whatever format they like.

– It is indexed by DBLP.

– It’s run by “us”; it’s about computational complexity, and in fact it operates “under the auspices of the Computational Complexity Foundation (CCF)”.

– It has an ISSN (1433-8092). I am told this is important for some institutions, though I don’t know whether some insist on an ISBN over an ISSN. If they do, perhaps there’s a way to get that too?

– They already do various nice things, like archiving the papers on CDs. In fact, going back to the ISBN issue, couldn’t we simply assign an ISBN to each year’s reports?

– It has no cost (given that ECCC already exists).

Another option is to use the arXiv or an arXiv overlay. This would also be better than using LIPIcs, I think, but it does not enjoy many of the benefits above.

Paper X, on the ArXiv, keeps getting rejected

Paper X, on the arXiv, keeps getting rejected. Years later, paper Y comes along and makes progress on X, or does something closely related to X. Y may or may not cite X. Y gets published. Now X cannot get published, because the referees do not see what the contribution of X is: Y has been published, and in light of Y, X is not new.

The solution, in my opinion, following a series of earlier posts (the last of which is this), is to move the emphasis away from publication and towards arXiv appearance. Citations should refer to the first available version, often the arXiv one. Journals and conferences can still exist in this new paradigm: their main job would be to assign badges to arXiv papers.

Obviously, this plan does not work for the entities behind current journals/conferences. So they enforce the status quo, and in the most degrading way: by forcing authors to fish out, maintain, and format useless references.
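Citing the first available version is already easy with standard BibTeX. Here is a sketch of such an entry, using the eprint fields that the arXiv itself recommends (all identifiers below are made up for illustration):

```bibtex
% Hypothetical entry citing an arXiv preprint directly,
% independently of any later published version.
@misc{author-paperx,
  author        = {A. Author and B. Author},
  title         = {Paper X},
  year          = {2015},
  eprint        = {1501.00000},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CC},
  note          = {arXiv:1501.00000}
}
```

A later badge (say, a journal acceptance) could then be recorded by adding to the note field, without ever changing which version is the citable object.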


provides an objective ranking of CS departments. It is the revamped version of a previous system, which I followed also because it did not include NEU. The new one does. But the reasons why I think having this information around is very valuable are slightly subtler than the fact that NEU ranks 9th in “theory” (or 11th if you remove “logic and verification”), in both cases beating institutions which sit way above NEU in other rankings. Subjective rankings tend to be sticky and fail to reflect recent decay or growth. And if one still wants to ignore the data, at least now one knows exactly what data is being ignored.

One dimension where I would like the system to be more flexible is the choice of venues included in the count. For example, I think journals should be added. Among the conferences, CCC should be included, as the leading conference specializing in computational complexity. I think the user should be allowed to select any weighted subset of journals and conferences.

Write. ArXiv. Repeat.

As discussed in earlier posts, I believe that two simple and effective ways to improve the publication process are to require that only papers available on the arXiv can be submitted for publication, and to eliminate all formatting requirements. (Throughout this blog I use the arXiv for concreteness only; several other repositories would work just as well.) In this post I want to consider a broader, radical publishing reform, and discuss several related issues.

Here’s how I would like the publication process to be:

As an author, you write your paper. When you are done, you post it on the arxiv. Period. You now move to your next paper.

Forget reverse-engineering the chronology of progress: there would now be a unique citation for your paper, its arXiv entry. Forget BibTeX and its BeaST. Forget trying to pick the best venue. Forget “are they going to invite me for the special issue? In fact, is there even going to be a special issue?” Forget the conference vs. journal debate. Forget a lengthy camera-ready production process whose goal is to put your paper in an electronic format that is only read by library computers.

Papers would be ranked by a system of badges. For starters, the badges would correspond to the current entities: a STOC badge, a JACM badge, etc. There would also be badges like ECCC, assigned to papers that satisfy minimal requirements, such as not making sweeping unsupported claims. Badges cannot be removed, but they can be added. This last aspect makes the new system more flexible than the current one. Today it is a bit funny to find out that a seminal paper appeared in an obscure venue, and it is hard to update that paper’s status. In the new system one could just add another badge.

Q: Which papers are the committees supposed to evaluate?

A: Committees will need to monitor papers, like many people already do. Note that the ECCC repository lists 184 reports for 2014 and 191 for 2013. These are fairly small numbers, comparable to the number of submissions to a top conference.

Q: What if a paper does not get noticed? What mechanism would there be for giving it additional chances?

A: The current default mechanism is that the author resubmits the paper, signaling to that venue’s committee that they should give the paper an (n+1)st chance. The same can be done in the new system, for example by posting an arXiv revision with the comment “no changes from previous version”. In both systems, what prevents authors from flooding committees with resubmissions is the loss of reputation, so I expect this aspect to work in roughly the same way. A rarer current mechanism is that the paper gets invited or selected for an award. This would work the same way in the new system.

Q: What about the cycle of getting feedback from the reviewers and revising the paper accordingly?

A: In the spirit of “only papers available on the arXiv can be submitted for publication”, I would like the public to have access to the same information that is given to authors and referees. So I would like this cycle to take place in a public forum. If there is a serious issue with an arXiv paper, I would like to see a comment pointing it out right away, instead of having to wait for authors and referees to converge on a new version to release to the public. I also believe that the feedback/revision cycle is less prominent in theoretical fields than elsewhere. In other fields it is common to receive feedback of the type: “Result X is interesting but not enough. Please run experiment Y. If you get outcome Z then we’ll talk.” With theoretical papers you hardly ever get a request to obtain better results. If there is any feedback, it is mostly about presentation, references, and correctness. Also, especially with conferences, it is not uncommon to get inessential feedback.

As a first step towards implementation, one could keep the publication venues as they are, but replace the cumbersome submission process with an email containing a link to the arXiv record. The production of a camera-ready version and the copyright transfers are eliminated. Conferences going solo have a great opportunity to implement this first step. Alas, the Computational Complexity Conference did not quite go for it; think about it next time you get an email about an overfull hbox. Once this step is taken, one can ask whether even the submission email is required.

NSF now requires its grantees to make their peer-reviewed research papers freely available within 12 months of publication in a journal. This move by NSF is the answer, at least in part, to this petition, which I signed. (Incidentally, my inability to advertise that petition through the available channels is one of the factors that eventually led me to start my own blog.) However, I don’t find this change very significant. For one thing, 12 months after publication is a very long time in research.

Eliminate all formatting requirements (+ survival tip)

Our conference submission was just desk-rejected because the PC is unwilling to stop reading at page 12.

We were asked to format submissions according to the LIPIcs style file, which we did, and to limit the submission length to 12 pages, excluding references; omitted proofs could be placed in the appendix. Our submission was 14.5 pages, excluding references, with no appendix.

Over the years I have submitted, and also reviewed, many papers that went slightly beyond the page limit; nobody paid attention. In a few cases I have also seen PC chairs ask for the resubmission of papers which egregiously violated the formatting requirements, such as crammed, 10-point, 2-column submissions. In the present case, despite the good intentions witnessed by our use of the LIPIcs style file, our misplacement of the \bibliography command was just too severe to be offered a second chance.

In general, there have been many discussions about formatting requirements, for example here. The summary of my position is in the title; details follow.

Of all the useless, time-consuming rules, formatting requirements strike me as particularly outrageous, because they are ones that can actually be removed. I think it is clear to everyone that proper formatting of a paper has zero relevance compared to the myriad actual problems that can affect submissions, such as a poorly written introduction, no intuitive exposition of the proof techniques, or missing citations. The only definite outcome of formatting requirements is that authors waste time.

I like the following paragraph from this article:

Science fiction novels of a half-century ago dramatized conflicts between humans and robots, asking if people were controlling their technologies, or if the machines were actually in charge. A few decades later, with the digital revolution in juggernaut mode, the verdict is in. The robots have won. Although the automatons were supposedly going to free people by taking on life’s menial, repetitive tasks, frequently, technological innovation actually offloads such jobs onto human beings.

I ponder it every time I must waste 30 minutes with LaTeX, instead of being given the obvious option of submitting a .pdf file, and possibly a .tex file from which a computer would automatically extract author names, title, and all other relevant information, including categories. These too can be deduced quite accurately from the text of the paper, can’t they? The waste of time is magnified if your paper has pictures (ours had two, carefully prepared with LaTeX extensions). No wonder we don’t see many pictures in papers.

I am not aware of any benefit of forcing papers into different LaTeX styles, except one: publishers can better monetize our donations if they are properly formatted. So this is one more reason to kill the current editorial system. The role of a conference or journal should be to place quality stamps on online papers, not to drag people into a battle with LaTeX.

Survival tip: One day I felt particularly vexed at being forced to convert all my LaTeX single-line equations, which fit nicely in a single-column format, into align environments that could fit in the 2-column proceedings format. So I offered $100 for a package that would make the process easier. I wanted a package that could recognize whether I was using the “&” or “\\” commands, and if so switch to align, or to multline; it should also recognize when I am not using labels (in which case I need align*, not align), and so on.

Of course, a LaTeX saint quickly provided such a package, and turned down my $100. I have used it ever since. I only write \[ \], and then magically the computer knows what I need. At least in this skirmish I have won, and have offloaded a mindless task onto a computer…

…though good luck making this work with the next formatting requirements.
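For the curious, here is a rough sketch of how such a dispatch can work. This is not the saint’s actual package; for simplicity it uses an environment rather than redefining \[ \] itself, and it ignores labels and multline:

```latex
% Sketch: route display math to align* when the body contains an
% alignment tab "&", and to an unnumbered equation otherwise.
\usepackage{amsmath}
\usepackage{environ}   % provides \NewEnviron and \BODY

\makeatletter
\NewEnviron{smartmath}{%
  % \in@ sets the switch \ifin@ iff its first argument occurs in the
  % second; the \expandafter chain expands \BODY once so that its
  % tokens, not the macro \BODY itself, are searched for "&".
  \expandafter\in@\expandafter&\expandafter{\BODY}%
  \ifin@
    \begin{align*}\BODY\end{align*}%
  \else
    \begin{equation*}\BODY\end{equation*}%
  \fi}
\makeatother
```

Then \begin{smartmath} f(x) &= x^2 \\ &= \dotsb \end{smartmath} typesets via align*, while a body without “&” falls back to a plain displayed equation. (The search does not look inside braces, one of many corner cases a real package has to handle.)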

Only papers on the arXiv can be submitted for publication

Applying the title would increase the quality of submissions and the speed of progress, in my opinion. But there is also a less obvious reason why I think it would be good. The current system reinforces the partition of research into (sub)areas, making it hard for researchers to leave their own. Of course, it is good to have a domain of expertise and to produce deep results in it. Still, I think it would be better if it were a little easier to work in different areas.

To illustrate the difficulty, suppose you want to start working in the new, hot area X. To learn the background, typically you have to read papers. However, for every paper that you read, it is not uncommon that there is another which is, or was, under submission. Indeed, the community is producing great results, the majority of which are rejected due to capacity constraints. So unless these works are on electronic archives such as the arXiv, you don’t have access to them.

Who does? The experts of area X, to whom these papers are sent so that they can be properly evaluated. But it may be hard for reviewers to set submissions aside until publication. Suppose, for example, that you have been working on problem Y for months, and now you are asked to review a paper that solves Y. Are you going to ignore this information and keep working on Y, knowing that you will be beaten? Also, when the paper does come out, you have had a long time to internalize its implications.

The edge currently given to an insider over an outsider is months if the paper is accepted right away; it may be years otherwise.

Implementation details:

If the title is too radical to appear in calls for papers, here’s another mechanism to escape the existing equilibrium: commit to observing the title from the moment when more than 2/3 of the authors of the last 4 STOC/FOCS have also committed. I commit, and you can use the comments to this post to do the same.