(Replying to PARENT post)

Surely those seemingly smart anonymous reviewers now feel pretty dumb in hindsight.

Peer review does not work for new ideas, because no one ever has the time or bandwidth to spend hours upon hours upon hours trying to understand new things.

👤cs702🕑2y🔼0🗨️0

(Replying to PARENT post)

It's worth pointing out that most of the best science happened before peer review was dominant.

There's an article I came across a while back, which I can't easily find now, that basically mapped out the history of our current peer review system. Peer review as we know it today was largely born in the 70s, as a response to several funding crises in academia. It was a strategy to make research appear more credible.

The most damning critique of peer review, of course, is that it completely failed to stop (and arguably aided) the reproducibility crisis. We have an academic system where the prime motivation is to secure funding through the image of credibility, which from first principles is a recipe for widespread fraud.

👤IKantRead🕑2y🔼0🗨️0

(Replying to PARENT post)

I finished a PhD in AI just this past year, and can assure you there are reviewers who spend hours per review to do it well. It's true that these days you can (and more likely than not will) get unlucky with lazier reviewers, but that does not appear to have been the case with this paper.

For example, just see this from the review by f5bf:

"The main contribution of the paper comprises two new NLM architectures that facilitate training on massive data sets. The first model, CBOW, is essentially a standard feed-forward NLM without the intermediate projection layer (but with weight sharing + averaging before applying the non-linearity in the hidden layer). The second model, skip-gram, comprises a collection of simple feed-forward nets that predict the presence of a preceding or succeeding word from the current word. The models are trained on a massive Google News corpus, and tested on a semantic and syntactic question-answering task. The results of these experiments look promising.

...

(2) The description of the models that are developed is very minimal, making it hard to determine how different they are from, e.g., the models presented in [15]. It would be very helpful if the authors included some graphical representations and/or more mathematical details of their models. Given that the authors still almost have one page left, and that they use a lot of space for the (frankly, somewhat superfluous) equations for the number of parameters of each model, this should not be a problem."

These reviews in turn led to significant (though apparently not significant enough) modifications to the paper (https://openreview.net/forum?id=idpCdOWtqXd60&noteId=C8Vn84f...). These were quality reviews, and the paper benefited from going through the review process, IMHO.
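
For anyone who hasn't seen the two models side by side, here's a rough sketch of what the reviewer is describing: CBOW averages the context-word vectors and predicts the center word, while skip-gram uses the center word's vector to predict each surrounding word separately. This is a toy numpy illustration with my own variable names and a plain full softmax, not the authors' implementation:

    import numpy as np

    corpus = "the quick brown fox jumps over the lazy dog".split()
    vocab = sorted(set(corpus))
    idx = {w: i for i, w in enumerate(vocab)}
    V, D, window, lr = len(vocab), 8, 2, 0.05

    rng = np.random.default_rng(0)
    W_in = rng.normal(scale=0.1, size=(V, D))   # word (embedding) vectors
    W_out = rng.normal(scale=0.1, size=(D, V))  # output softmax weights

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def step(center, context, skip_gram):
        """One gradient step on full softmax cross-entropy.
        skip_gram=False gives CBOW, True gives skip-gram."""
        global W_in, W_out
        if not skip_gram:
            # CBOW: shared weights + averaging over the context, predict the center word.
            h = W_in[context].mean(axis=0)
            err = softmax(h @ W_out)
            err[center] -= 1.0
            W_out -= lr * np.outer(h, err)
            W_in[context] -= lr * (W_out @ err) / len(context)
        else:
            # Skip-gram: the center word predicts each surrounding word separately.
            for c in context:
                h = W_in[center]
                err = softmax(h @ W_out)
                err[c] -= 1.0
                W_out -= lr * np.outer(h, err)
                W_in[center] -= lr * (W_out @ err)

    for _ in range(50):
        for i, w in enumerate(corpus):
            ctx = [idx[corpus[j]]
                   for j in range(max(0, i - window), min(len(corpus), i + window + 1))
                   if j != i]
            step(idx[w], ctx, skip_gram=True)  # flip to False for CBOW

The real models replace the full softmax with cheaper approximations (hierarchical softmax in this paper), which is a big part of what makes training on something like the Google News corpus feasible.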

👤andreyk🕑2y🔼0🗨️0

(Replying to PARENT post)

I have been deeply unimpressed with the ML conference track this last year... There are too many papers and too few reviewers, leading to an insane number of PhD student-reviewers. We've gotten some real nonsense reviews, with some real sins against the spirit of science baked into them.

For example, a reviewer essentially insisting that nothing is worth publishing if it doesn't include a new architecture idea and SOTA results... God forbid we better understand and simplify the tools that already exist!

👤sdenton4🕑2y🔼0🗨️0

(Replying to PARENT post)

This is not the takeaway I got. The takeaway I got was that the review process improved the paper and made it more rigorous. How is that a bad thing? But yes, sometimes reviewers focus on other issues instead of 'is this going to revolutionize A, B, and C'.
👤mempko🕑2y🔼0🗨️0

(Replying to PARENT post)

The issue here wasn't that the reviewers couldn't handle a new idea. They were all very familiar with word embeddings and ways to make them. There weren't a lot of new concepts in word2vec; what distinguished it was that it was simple, fast, and high quality. The software and pretrained vectors were easy to access and use compared to existing methods.
👤canjobear🕑2y🔼0🗨️0

(Replying to PARENT post)

Peer review isn't about the validity of your findings, and the reviewers are not tasked with evaluating the researchers' results. The point is to be a light filter that makes sure a published paper has the necessary information and rigor for someone else to replicate your experiment or build on your findings. Replication and follow-up work are the processes for evaluating the correctness of the findings.
👤mrguyorama🕑2y🔼0🗨️0

(Replying to PARENT post)

Do they do anything differently in other countries, or is it just a copy of the U.S. system?
👤narrator🕑2y🔼0🗨️0