Seb Paquet references an "interesting paper":http://www.arl.org/arl/proceedings/138/guedon.html on the history of scientific publishing and the impact of ISI's citation rankings. It points out how assigning numerical rankings to measure academic quality distorts the way academic research is published.
What that paper doesn’t mention (at least not in chapter 6, which Seb highlighted) is that because high citation rankings mean money, many journals end up “gaming” their impact factors, choosing which kinds of papers to publish in order to maximise them. This has unintended consequences: if a journal has ten papers it knows will be highly cited, it may, for example, limit the number of other papers it accepts so as not to ‘dilute’ its impact factor.
It’s the same with the ranking systems used by Google and by weblog-ranking search engines. If there are benefits to being scored highly then, human nature being what it is, people will try to maximise their scores. Yet because the ranking is ‘automatic’ it is often assumed to be value-neutral and therefore above criticism.