
Reflections on the review process

Filed in Ideas, Research News


After 13 years of editing journals (Journal of Marketing, International Journal of Marketing Research, Journal of Service Research), Roland Rust wrote up some of his thoughts on the review process. We quote here some bits we found interesting. You can read the full article here: Reflections on the review process

Observation #1: there are too many review rounds

I was having dinner last week with a professor from one of the world’s leading universities who was discussing a paper he had had under review for five years at one of the field’s leading journals. The paper recently received yet another risky revision decision in the fourth round. Such delay, although no doubt well-intentioned on the part of the editor, harms the field because it slows the diffusion of knowledge. I would be willing to bet that 90+% of the paper’s current (and eventual) value was present in the initial submission.

To combat this problem, some journals have attempted to institute a two-round policy. The idea is that a paper should achieve at least conditional acceptance by the second round. Such a policy may have unintended consequences: given that the top journals all have very high standards for rigor, the only papers that will make it through in two rounds are those that are already highly polished at initial submission, and only “safe” papers exploring standard topics in standard ways will have a chance.

Observation #2: perfection is valued more than timeliness

The example I gave previously shows the downside of this value system. If it takes 4–5 years to get a paper through the review process, there is no way that the marketing literature can respond in a timely way to fast-moving topics. The Computer Science field combats this by counting proceedings papers more than journal articles, and making fast decisions on those proceedings papers. By marketing’s standards, the CS review process seems “fast and loose.” But at least it is fast, and timely work can surface quickly. By contrast, the marketing literature always seems several years behind.

My serial co-author Preston McAfee told me about a journal he worked with that had a no-revision policy. I believe the idea is that you send the paper in and it either gets a) rejected or b) accepted conditional on making certain changes. If there were more journals like this, time would be saved by authors, reviewers, editors, and support staff.

I have heard of professors who urge their students to take shoddy work and “just send it in,” planning to win the reviewers over across multiple rounds of review. There’s an incentive not to do this when you know that your paper will either be in or out.

Since I moved to industry labs, I’ve published more and more in Computer Science. In CS, conference proceedings, not journals, are what matter. You get tenure for publishing in conference proceedings, which can be as selective as, or more selective than, the top journals in marketing or psychology. The conference proceeding model works as follows. You submit a manuscript. You get reviews. You write a reply to the reviewers (without revising the paper). You then get a) a rejection or b) a conditional acceptance. Every process has its tradeoffs. CS certainly publishes a number of “reinventions” and flawed analyses, but the upside is that it tends to capture all the good stuff. The crud gets ignored and the good ideas get built upon. It’s hard to argue that psych and marketing are making more cumulative progress than computer science is.

Recommendation #1: accept papers quicker

If there is a timeliness value for ideas, then editors need to recognize that getting that last 1% of rigor may result in a net loss of value. This means that it is often best for the editor to take a stand and accept a paper before everybody on the review team signs off. This means that we need to appoint editors who are secure in their standing in the field, and who are strong enough to make decisions that some AEs or reviewers may disagree with.

Higher recall but slightly lower precision is the gist of the CS model.
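The precision/recall framing can be made concrete with a toy calculation (the numbers below are hypothetical, not from the article): if we treat “good papers” as the positives, a looser acceptance bar catches more of them (higher recall) at the cost of letting more weak papers through (lower precision).

```python
# Toy illustration with made-up numbers: how a looser acceptance
# bar trades precision for recall. "Positives" are good papers.

def precision_recall(accepted_good, accepted_bad, total_good):
    """Precision: fraction of accepted papers that are good.
    Recall: fraction of all good papers that get accepted."""
    accepted = accepted_good + accepted_bad
    precision = accepted_good / accepted
    recall = accepted_good / total_good
    return precision, recall

# Strict "marketing-style" bar: few acceptances, nearly all of them good.
strict = precision_recall(accepted_good=40, accepted_bad=2, total_good=100)

# Looser "CS-style" bar: some crud gets in, but most good work surfaces.
loose = precision_recall(accepted_good=90, accepted_bad=20, total_good=100)

print(f"strict: precision={strict[0]:.2f}, recall={strict[1]:.2f}")
print(f"loose:  precision={loose[0]:.2f}, recall={loose[1]:.2f}")
```

Under these assumed numbers, the strict bar accepts almost no crud but misses most of the good papers; the loose bar surfaces 90% of the good work while precision drops only modestly.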

Recommendation #2: editors need to be the importance police

Given the tendency of reviewers to simply attack papers and produce a list of problems, the editor needs to counteract the reviewers’ almost exclusive focus on rigor by insisting on problem importance. This can also sometimes mean rejecting an unimportant paper for which the reviewers find few problems. It can also mean giving a paper more of a chance if it is on an important topic. I recommend that papers on important and timely topics should be consciously given more slack with respect to expectations of rigor.

It’s hard to know “important” when you see it, though.

Recommendation #3: editors need to be willing to overrule the review team

In my view, a good editor respects the review team, but sees the reviews as advisory. The review decision should not be a vote count. In many cases I have given a paper a second round, even with a unanimous negative appraisal by the review team, if the paper was on a very important and timely topic. I have not overridden a unanimous rejection recommendation in the second or later rounds, because it is incumbent on the author(s) to eventually persuade somebody, but otherwise I have not let a negative reviewer stop a paper, if the paper is important enough, and the negative reviewer has not revealed what I believed to be a fatal flaw. Again, the editor needs to be secure enough to make such determinations.

Agreed: Just as you shouldn’t take a vote of a three-person focus group to decide to launch a product, you shouldn’t use the vote of three reviewers to decide on a paper.

Image source: https://flic.kr/p/nCcSpm


  1. Joe Gladstone says:

    Very interesting thoughts.

    I worry a bit with the final points though.

    If you wouldn’t accept the vote from a focus group of three (the reviewers), why accept the result of a focus group of one (the Editor)?

    An Editor can’t be an expert in everything; their ability to determine the importance of a paper will be based, in part, on their specific expertise.

    I don’t think pushing more power into the hands of a small number of Editors/AEs is the way to improve peer review. Time to follow Physics and open up peer review to those who are actually motivated to engage with a paper (the readers).

    November 6, 2018 @ 9:33 am
