Thomson-Reuters have reportedly published their yearly analysis of the hottest trends in science research. Increasingly, governments and funding organisations use such documents to identify strategic priorities. So it’s profoundly disturbing that their conclusions are based on shoddy methodology and bad science!
The researchers first split recent papers into 10 broad areas, of which Physics was one. And then the problems began. According to the official document:
Research fronts assigned to each of the 10 areas were ranked by total citations and the top 10 percent of the fronts in each area were extracted.
Already the authors have fallen into two fallacies. First, they have failed to normalise for the size of the field. Many fields (like Higgs phenomenology) will necessarily generate large quantities of citations due to their high visibility and current funding. Of course, this doesn’t mean that we’ve cracked naturalness all of a sudden!
Second, their analysis is far too coarse-grained. Physics contains many disciplines, with vastly different publication rates and average numbers of citations. Phenomenologists publish swiftly and regularly, while theorists write longer papers with slower turnover. Experimentalists often fall somewhere in the middle. Clearly the Thomson-Reuters methodology favours phenomenology over all else.
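To see why raw citation totals mislead, here is a toy sketch. The front names and all numbers are entirely made up for illustration; the point is only that ranking by raw totals and ranking by field-normalised scores can give opposite answers.

```python
# Hypothetical research fronts with invented citation figures, purely to
# illustrate the argument: a large, fast-publishing field dominates raw
# totals, while normalising by the field's typical citation rate can
# reverse the ranking.
fronts = [
    # (name, total citations, field-average citations per paper)
    ("Higgs phenomenology", 9000, 45.0),
    ("mathematical general relativity", 1500, 5.0),
]

# Ranking by raw citation totals, as in the Thomson-Reuters methodology.
by_raw = max(fronts, key=lambda f: f[1])

# Ranking after dividing each front's total by its field's average
# citation rate, so a hot front in a quiet field isn't drowned out.
by_norm = max(fronts, key=lambda f: f[1] / f[2])

print(by_raw[0])   # the big, visible field wins on raw totals
print(by_norm[0])  # the quiet field's front wins after normalisation
```

Under these invented numbers, raw totals pick Higgs phenomenology (9000 citations), but normalisation picks mathematical general relativity (1500 / 5 = 300 versus 9000 / 45 = 200) — exactly the kind of reversal a coarse, unnormalised ranking hides.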
But wait, the next paragraph seems to address these concerns. To some extent they “cherry pick” the hottest research fronts to account for these issues. According to the report:
Due to the different characteristics and citation behaviors in various disciplines, some fronts are much smaller than others in terms of number of core and citing papers.
Excellent, I hear you say – tell me more! But here comes more bad news. It seems there’s no information on how this cherry picking was done! There’s no mention of experts consulted in each field. No mathematical detail about how vastly different disciplines were fairly compared. Thomson-Reuters have decided that all the reader deserves is a vague placatory paragraph.
And it gets worse. It turns out that the scientific analysis wasn’t performed by a balanced international committee. It was handed off to a single country – China. Who knows, perhaps they were the lowest bidder? Of course, I couldn’t possibly comment. But it seems strange to me to pick a country famed for its grossly flawed approach to scientific funding.
Governments need to fund science based on quality and promise, not merely quantity. Thomson-Reuters’ simplistic analysis is bad science at its very worst. It seems to offer intelligent information but is in fact misleading, flawed and ultimately dangerous.