csrankings


csrankings provides an objective ranking of CS departments. It is the revamped version of a previous system, which I did not follow, also because it did not include NEU. The new one does. But there are reasons, slightly more subtle than the fact that NEU ranks 9 in “theory” (or 11 if you remove “logic and verification”, in both cases beating institutions which sit way above NEU in other rankings), why I think having this information around is very valuable. Unobjective rankings tend to be sticky and fail to reflect recent decay or growth. And if one still wants to ignore data, at least now one knows exactly what data is being ignored.

One dimension where I would like the system to be more flexible is the choice of venues to include in the count. For example, I think journals should be added. Among the conferences, CCC should be included, as the leading conference specializing in computational complexity. I think the user should be allowed to select any weighted subset of journals and conferences.
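To make this concrete, here is a minimal sketch of what a user-selected weighted count could look like. It is purely illustrative: the venue names, weights, and data format are assumptions made for the example, not csrankings’ actual code or data.

```python
# Purely illustrative sketch of a user-selected weighted venue count.
# Weights, venues, and the paper records below are made up for the example.
from collections import defaultdict

# Hypothetical user-chosen weights over journals and conferences.
weights = {"JACM": 3.0, "SICOMP": 2.0, "STOC": 2.0, "FOCS": 2.0,
           "SODA": 1.0, "CCC": 1.0, "CRYPTO": 1.0}

# Hypothetical records: (department, venue, number of coauthors on the faculty).
papers = [
    ("NEU", "STOC", 1),
    ("NEU", "CCC", 2),
    ("MIT", "JACM", 1),
]

def weighted_counts(papers, weights):
    """Sum weight / (faculty coauthors) per department, skipping unselected venues."""
    scores = defaultdict(float)
    for dept, venue, n_faculty in papers:
        if venue in weights and n_faculty > 0:
            scores[dept] += weights[venue] / n_faculty
    return dict(scores)

print(weighted_counts(papers, weights))  # {'NEU': 2.5, 'MIT': 3.0}
```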

19 thoughts on “csrankings”

  1. This is a nice website. However, it is not really an objective metric, precisely for the reason you described: this ranking chose only a specific set of CS conferences, and it did so based on “consultation with faculty across a range of institutions”. Thus, non-objective measures have been invoked through the backdoor.

    I second your suggestion to include journals and CCC, for example, in this website.

    1. Indeed. Hopefully they will make this simple change soon. It would be nice to have a weighting like 3 × JACM, 2 × SICOMP/STOC/FOCS, 1 × SODA/CCC/CRYPTO, et cetera.
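      (As a purely hypothetical illustration of such a weighting: a faculty member with one JACM paper, two STOC papers, and one CCC paper in the selected period would count 3·1 + 2·2 + 1·1 = 8.)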

  2. There seem to be some bugs. Michael Rabin has more than 2 publications, and Leslie Valiant has more than 4. Many other counts were also wrong.

    1. If I remember correctly, in the previous version they only counted publications during the last x years, which may explain some of that. Hopefully they will clarify this soon.

      1. Hi – this thread was just brought to my attention.

        The comment above is correct. You can change the start and end years to view any range from 1980 to 2016.

        Michael Rabin’s DBLP entry is here (and is correctly included):
        http://dblp.uni-trier.de/pers/hd/r/Rabin:Michael_O=

        Since 1980, Rabin has had 12 publications included on this list (out of 35 conference publications total).

        Likewise, Leslie Valiant’s DBLP entry is here (and correctly included):
        http://dblp.uni-trier.de/pers/hd/v/Valiant:Leslie_G=

        Since 1980, Valiant has had 16 conference publications included on the list (as shown on CSrankings), out of 56 total.

      2. Thanks for clarifying this. (It seems we had missed the drop-down menus with the years. ;-) When can we expect a version where the user can select any weighted combination of journals and conferences?

  3. A somewhat pathological situation: administrators (who distribute the money) create their rules to make THEIR life easier – and researchers start to seriously discuss them. (I don’t mean this blog entry – this is a widespread phenomenon.) Counting WHERE one published, not WHAT one published? Do we know, say, Einstein, Perelman, Razborov and others because of these rules? I guess they would have a rather low rank according to these rules. Do we know MIT, Stanford and others because of these rankings? And the stress on conferences is even more pathological, bordering on something much worse …

    I think we should just accept these rules as a “higher power” and only try to “play” according to them (in order to survive), knowing that all this is just bullshit.

    1. Stasys, obviously I agree with you on the general philosophy. But there is one important point. Other rankings exist, like US News and World Report’s. Without a counter-ranking like this one, it is not easy to explain to people why, as you say, USNWR is indeed bullshit. I also want to make other points. Here we are not talking about geniuses or masterpieces; those will succeed in *any* environment. We are talking about the day-to-day, nuts-and-bolts results which dominate CS venues (and perhaps to some extent other areas as well). Finally, administrators are not the only ones who care about rankings. Imagine a prospective Ph.D. student from abroad: they should find this data quite useful.

      1. Emanuele: yes, I agree that some reaction from the scientific community to things like US News and World Report’s would be in order. But is issuing “better ranking rules” the right one? This just gives these administrators a “certificate”: you see, the community takes them (the “ranking rules”) seriously, they only need to be slightly improved … So bullshit turns into “something important”.

        Concerning the orientation of PhD students: I completely disagree with this argument. If, instead of looking at WHO works at which universities, a PhD student looks at “ranks”, God bless such a student …

      2. Again, at the high, ethereal level I tend to agree with you, BUT:

        Re your first paragraph. The alternative is that the scientific community simply ignores existing rankings. However, those rankings exist, and are a factor in many decisions which critically affect the typical researcher’s life. I am afraid that simply ignoring them and appealing to higher standards is not always effective, unless of course you happen to be at an institution which ranks high in existing rankings.

        That said, I definitely do not suggest taking anything like csrankings too seriously. Instead, we can go around and say: hey, look, here’s some data which is quite a bit different from what you see elsewhere. This counts exactly x, y, and z. While that other ranking you are looking at… do you even know how that works?

        Re your second paragraph. Sure, the ideal student is the one who, before applying to a Ph.D. program, perused my home page and even read some of my papers and surveys. Not everybody consistently has the luxury of hand-picking such students, right? It is more often the case that the student wants to know where there is a good research group in area X. Counting papers in area X is not the worst way to assess that, in my opinion.

  4. How do they choose the universities they consider? If you look at AI at European universities for the last two years, for example, you are shown only 12 universities. IJCAI alone has hundreds of papers per year. I find it very unlikely that these are the only 12 (or even the best 12) universities for these criteria. I know several people at my place who have had several papers there this year alone, and we don’t make the list. And I know that this is true for some other places as well.

    Is there a filtering before the ranking is done? Or is this a problem with the affiliation data at DBLP?

    1. I am not sure how it is done. I guess it is just a painstaking manual process. But you can see their database of faculty affiliations (linked at the bottom of the webpage). There is a form for adding a faculty member. Maybe if you add someone from a new university, it will also add their university?

  5. Yes, one can just add his/her university by submitting a pull request.

    Re Stasys Jukna’s remark, he’s certainly correct in identifying the ridiculous path leading supposedly serious scientists to seriously consider rules made up by bureaucrats.

  6. Manu, your suggestion to include more conferences makes sense, but suggesting an absolute weighting of conferences takes us back to the initial problem. For example, your weights are also heavily subjective, coming from a theory/complexity perspective. Perhaps one should be able to select a personalized weighting.

    Concretely, I speak for that (large) part of the cryptography community whose work does not appeal to STOC/FOCS, for which weighting STOC/FOCS above CRYPTO is a large misrepresentation of achievements, simply because STOC/FOCS is not an option. (Ever tried getting a cryptanalysis breakthrough into STOC/FOCS, or a new pairing-based scheme, a new block cipher, or a more efficient garbling scheme?) On the other hand, if you work on obfuscation or zero-knowledge, a STOC/FOCS paper is a strictly better achievement than a CRYPTO/EUROCRYPT paper. How would a reasonable ranking distinguish these cases?

    This is not unique to cryptography, obviously. The issue of conference prestige is very nuanced. There have been numerous attempts to rank conferences absolutely (which administrators love to use), but they all suffer from such issues (see e.g. http://portal.core.edu.au/conf-ranks/). It seems to me that conferences in this new ranking are those that are traditionally considered A+ or A* conferences by a few of these already existing rankings (with some omissions, like PODC or ICDE).

    1. As I wrote “I think the user should be allowed to select any weighted subset of journals and conferences.” The problem you raise, of putting different weights depending on the content of the paper, cannot be solved using current methods. I am also not sure it can or should be solved by any other methods. But broad guidelines exist: a researcher who specializes in block ciphers is not expected to publish in STOC/FOCS.

  7. This is an interesting ranking. A few oddities though:

    1) I think that it is a reasonable idea to divide the “credit” of a paper among its co-authors: a paper coauthored by 3 faculty members at a university should not necessarily count 3 times for that university. However, the current method penalizes the inclusion of student authors, since students are not on the faculty of any university. A paper with a sole faculty author counts for “1”, while the same paper with the addition of two student coauthors counts for only “1/3”. This seems like a bad incentive for a ranking of educational institutions. How about the following fix: each paper contributes exactly “1”, divided across all ranked universities. Then coauthors that are not faculty at any ranked university do not diminish the total credit of the paper. (See the sketch after point 3 below for the difference between the two schemes.)

    2) The total number of faculty working in each sub-area seems to be determined by the number of authors that have ever published in the relevant conferences. This makes the ranking oddly non-monotone: if a theorist collaborates with a programming languages researcher, and publishes his first paper in POPL, he increases the numerator for the “average number of PL papers” in his department (unless he coauthored it with his colleague, in which case he does not), but he *also* increases the denominator, which will bring down the average. This seems undesirable, since cross-disciplinary collaboration is a *good* thing.

    3) Echoing others, major sub-areas are left out. In the theory world, for example, in addition to CCC, the top venues in AGT (EC) and learning theory (COLT) are left out. And because CRYPTO and EUROCRYPT are included, the theory ranking is heavily biased towards universities that specialize in crypto, at the expense of other sub-areas.
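    Regarding point 1), here is a tiny sketch of the two credit-splitting schemes (purely illustrative, and not csrankings’ actual code): the current scheme divides a paper’s credit among all coauthors, while the proposed fix splits exactly one unit of credit among the coauthors who are ranked faculty.

    ```python
    # Purely illustrative comparison of the two credit-splitting schemes in point 1).

    def credit_current(n_faculty_authors, n_total_authors):
        # Current scheme: each coauthor gets 1/(total authors), so the
        # university collects (faculty authors)/(total authors).
        return n_faculty_authors / n_total_authors

    def credit_fixed(n_faculty_authors, n_total_authors):
        # Proposed fix: the paper contributes exactly 1, split only among
        # ranked-faculty coauthors, so student coauthors do not dilute it.
        # (n_total_authors is kept only for a parallel signature.)
        return 1.0 if n_faculty_authors > 0 else 0.0

    # A sole faculty author vs. the same paper with two student coauthors:
    print(credit_current(1, 1), credit_current(1, 3))  # 1.0 0.3333...
    print(credit_fixed(1, 1), credit_fixed(1, 3))      # 1.0 1.0
    ```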

    1. Re 1. and 2., I think that these and all similar things should be left to the user to play with. Someone would want to count the raw number of papers, someone else to divide by the number of faculty, someone else to divide by something more complicated which takes into account how many faculty consistently publish in those venues, and so on. Personally, I think putting a weight of 1/#coauthors on a paper is a decent option. Other things I can think of would be more complicated, and I am not sure more “fair.” But again, all of this should be left to the user.

    2. It seems the average is not what one would first think it is. In particular, it is NOT the total adjusted paper count divided by the number of faculty. Quoting from the site: “Average count computes the *geometric mean* of the adjusted number of publications in each area by institution.” So including an area that is poorly represented at a school quite adversely affects the average score. This seems to be why the theory ranking with crypto included is quite different from the one without it.
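      As a toy illustration of that effect (with made-up numbers, not actual csrankings data), a near-zero adjusted count in one area drags a geometric mean down far more than it would an arithmetic mean:

      ```python
      # Toy illustration: a school strong in two areas but nearly absent in a third.
      # The adjusted counts below are made up, not csrankings data.
      from math import prod

      adjusted_counts = [20.0, 15.0, 0.1]

      geometric = prod(adjusted_counts) ** (1 / len(adjusted_counts))   # ~3.11
      arithmetic = sum(adjusted_counts) / len(adjusted_counts)          # ~11.7

      print(round(geometric, 2), round(arithmetic, 2))
      ```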
