arXiv’s filtering: an alternative perspective
Ginsparg (ArXiv
screens spot fake papers, Nature
508, 44; 2014) has extolled the
benefits of automated assessment of papers uploaded to an archive, as
opposed to ‘human diligence’. As with telephone helplines,
automated processing can be problematic. An automated process
focussed on the deviation of a paper from norms will have difficulty
distinguishing between submissions that have unusual characteristics
because they are bad, and ones that are unusual because they involve a
novel approach. Submissions of both types seem to be treated in much the same way, both by arXiv’s robot and by the volunteers who give new submissions the ‘cursory glance’ described previously (ArXiv at 20, Nature 476, 145–147; 2011).
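To illustrate the difficulty (a minimal sketch only, assuming a hypothetical screen that scores a submission by how far its word distribution deviates from that of previously accepted papers; this is not arXiv’s actual system): a poor-quality paper and a genuinely novel one can both come out as equally ‘unusual’.

```python
# Hypothetical norm-deviation screen (illustrative sketch, not arXiv's system).
# Each "paper" is reduced to a bag of words; a submission is scored by how far
# its word distribution lies from previously accepted papers. Both a bad paper
# and a novel one deviate strongly, so the score alone cannot separate them.

from collections import Counter
import math


def word_freq(text):
    """Normalised word-frequency vector for a text."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}


def cosine(a, b):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def deviation_score(submission, corpus):
    """1 minus mean similarity to the corpus: higher means 'more unusual'."""
    freqs = word_freq(submission)
    sims = [cosine(freqs, word_freq(doc)) for doc in corpus]
    return 1.0 - sum(sims) / len(sims)


# Invented toy corpus and submissions, purely for illustration.
corpus = [
    "we measure the cross section of the decay channel at the collider",
    "the lattice simulation confirms the predicted phase transition",
]
bad_paper = "my perpetual motion machine proves the textbooks wrong"
novel_paper = "we propose a duality between spin glasses and quantum gravity"

# Both submissions score as highly unusual (close to 1); the number alone
# cannot tell 'unusual because bad' from 'unusual because novel'.
print(deviation_score(bad_paper, corpus))
print(deviation_score(novel_paper, corpus))
```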
With such mechanical processes in place, using the wrong word in a paper can seriously impede its progress rather than allow it to become public quickly. There is a distinct similarity between arXiv’s activities and the way security agencies process the data they collect, in the latter case looking for patterns indicative of terrorists. ArXiv’s own list of ‘dangerous items’ is, as someone familiar with the details has revealed, much influenced by ‘reader complaints’; yet many important ideas were equally the subject of ‘reader complaints’ when first proposed. Terrorists one must try to stop, but few scientists have had fatal encounters with papers whose subject matter they found disagreeable.
ArXiv is not some kind of journal conferring approval on accepted
papers, and keeping fussy readers happy should not take priority over
its primary purpose, facilitating communication among
researchers. It should accordingly abandon the aggressive review processes it currently employs.