
Archive for the ‘bibliometrics’ Category

A brain systems visualisation tool

January 4, 2011

brainSCANr.

This looks like a fantastic visualisation tool, but one that should also prove useful for research.

The Brain Systems, Connections, Associations, and Network Relationships (a phrase with more words than strictly necessary in order to bootstrap a good acronym) assumes that somewhere in all the chaos and noise of the more than 20 million papers on PubMed, there must be some order and rationality.

To that end, we have created a dictionary of hundreds of brain region names, cognitive and behavioral functions, and diseases (and their synonyms!) to find how often any two phrases co-occur in the scientific literature. We assume that the more often two terms occur together (at the exclusion of those words by themselves, without each other), the more likely they are to be associated.

Are there problems with this assumption? Yes, but we think you’ll like the results anyway. Obviously the database is limited to the words and phrases with which we have populated it. We also assume that when words co-occur in a paper, that relationship is a positive one (i.e., brain areas A and B are connected, as opposed to not connected). Luckily, there is a positive publication bias in the peer-reviewed biomedical sciences that we can leverage to our benefit (hooray biases)! Furthermore, we cannot dissociate English homographs; thus, a search for the phrase “rhythm” (to ascertain the brain regions associated with musical rhythm) gives the strongest association with the suprachiasmatic nucleus (that is, for circadian rhythms!)

Despite these limitations, we believe we have created a powerful visualization tool that will speed research and education, and hopefully allow for the discovery of new, previously unforeseen connections between brain, behavior, and disease.
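To make the method concrete, here is a minimal sketch of the kind of co-occurrence scoring described above, assuming the public NCBI E-utilities esearch endpoint. This is not brainSCANr’s actual code or scoring formula; the Jaccard-style overlap score and the example terms are illustrative assumptions only.

```python
# Illustrative sketch: score how often two terms co-occur in PubMed, relative
# to how often either occurs alone, via the NCBI E-utilities esearch endpoint.
# (NCBI asks for no more than ~3 requests per second without an API key.)
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(query: str) -> int:
    """Number of PubMed records matching a query."""
    params = urllib.parse.urlencode({"db": "pubmed", "term": query, "retmode": "json"})
    with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
        return int(json.load(resp)["esearchresult"]["count"])

def cooccurrence_score(term_a: str, term_b: str) -> float:
    """Jaccard-style overlap: papers mentioning both terms over papers mentioning either."""
    both = pubmed_count(f"({term_a}) AND ({term_b})")
    union = pubmed_count(term_a) + pubmed_count(term_b) - both
    return both / union if union else 0.0

# Example: how strongly is "hippocampus" associated with "memory" in the literature?
print(cooccurrence_score("hippocampus", "memory"))
```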

H/T: Marsha Lucas


Mismeasuring scientific quality (and an argument in favour of diversity of measurement systems)

December 27, 2010

There was a short piece here recently on the misuse of impact factors to measure scientific quality, and how this in turn leads to dependence on drugs like Sciagra™ and other dangerous variants such as Psyagra™ and Genagra™.

Here’s an interesting and important post from Michael Nielsen on the mismeasurement of science. The essence of his argument is straightforward: unidimensional reduction of a multidimensional variable set is going to lead to significant loss of important information (or at least that’s how I read it):

My argument … is essentially an argument against homogeneity in the evaluation of science: it’s not the use of metrics I’m objecting to, per se, rather it’s the idea that a relatively small number of metrics may become broadly influential. I shall argue that it’s much better if the system is very diverse, with all sorts of different ways being used to evaluate science. Crucially, my argument is independent of the details of what metrics are being broadly adopted: no matter how well-designed a particular metric may be, we shall see that it would be better to use a more heterogeneous system.

Nielsen notes three problems with centralised metrics (whether that means relying solely on an h-index, citation counts, publication counts, or whatever else you fancy):

Centralized metrics suppress cognitive diversity: Over the past decade the complexity theorist Scott Page and his collaborators have proved some remarkable results about the use of metrics to identify the “best” people to solve a problem (ref, ref).

Centralized metrics create perverse incentives: Imagine, for the sake of argument, that the US National Science Foundation (NSF) wanted to encourage scientists to use YouTube videos as a way of sharing scientific results. The videos could, for example, be used as a way of explaining crucial-but-hard-to-verbally-describe details of experiments. To encourage the use of videos, the NSF announces that from now on they’d like grant applications to include viewing statistics for YouTube videos as a metric for the impact of prior research. Now, this proposal obviously has many problems, but for the sake of argument please just imagine it was being done. Suppose also that after this policy was implemented a new video service came online that was far better than YouTube. If the new service was good enough then people in the general consumer market would quickly switch to the new service. But even if the new service was far better than YouTube, most scientists – at least those with any interest in NSF funding – wouldn’t switch until the NSF changed its policy. Meanwhile, the NSF would have little reason to change their policy, until lots of scientists were using the new service. In short, this centralized metric would incentivize scientists to use inferior systems, and so inhibit them from using the best tools.

Centralized metrics misallocate resources: One of the causes of the financial crash of 2008 was a serious mistake made by rating agencies such as Moody’s, S&P, and Fitch. The mistake was to systematically underestimate the risk of investing in financial instruments derived from housing mortgages. Because so many investors relied on the rating agencies to make investment decisions, the erroneous ratings caused an enormous misallocation of capital, which propped up a bubble in the housing market. It was only after homeowners began to default on their mortgages in unusually large numbers that the market realized that the ratings agencies were mistaken, and the bubble collapsed. It’s easy to blame the rating agencies for this collapse, but this kind of misallocation of resources is inevitable in any system which relies on centralized decision-making. The reason is that any mistakes made at the central point, no matter how small, then spread and affect the entire system.

What is of course breathtaking is that scientists, who spend so much time devising sensitive measurements of complex phenomena, can sometimes suffer a bizarre cognitive pathology when it comes to how the quality of science itself should be measured. The sudden rise of the h-index is surely proof of that. Nothing can substitute for the hard work of actually reading the papers and judging their quality and creativity. Grillner and colleagues recommend that “Minimally, we must forego using impact factors as a proxy for excellence and replace them with in-depth analyses of the science produced by candidates for positions and grants. This requires more time and effort from senior scientists and cooperation from international communities, because not every country has the necessary expertise in all areas of science.” Nielsen makes a similar recommendation.
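For reference, the h-index mentioned above compresses an entire citation record into a single number: the largest h such that the author has at least h papers with at least h citations each. A minimal sketch of that calculation (illustrative only, not any particular database’s implementation):

```python
# Minimal sketch of the h-index: the largest h such that at least h papers
# have at least h citations each. Everything else about the citation record
# is thrown away, which is exactly the unidimensional reduction at issue.
def h_index(citations: list[int]) -> int:
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3
```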

From the latest Federation of European Neuroscience Societies (FENS) Newsletter – an article from PNAS on ‘The Boon and Bane of the Impact Factor’ (and abuse of the drug ‘Sciagra’)

December 23, 2010

A very hard-hitting piece on the abuses of impact factors and their pernicious effects on how science is done. Sten Grillner, one of the authors, is a Kavli Prize winner who recently gave a lecture at Trinity College Institute of Neuroscience. It is worth musing on whether the widespread use and abuse of impact factors is science’s very own special version of grade inflation.
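For readers who have never looked under the hood, the standard two-year impact factor is just a ratio: citations received in a given year to the items a journal published in the previous two years, divided by the number of citable items it published in those years. A rough sketch with made-up numbers:

```python
# Illustrative only: the two-year impact factor for year Y is the number of
# citations received in year Y to items published in years Y-1 and Y-2,
# divided by the number of citable items (articles and reviews) from those years.
def two_year_impact_factor(citations_to_prev_two_years: int,
                           citable_items_prev_two_years: int) -> float:
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 12,000 citations in 2010 to its 2008-2009 items,
# of which 1,500 counted as citable.
print(two_year_impact_factor(12_000, 1_500))  # -> 8.0
```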

From FENS: The editorial “Impacting our Young” by Eve Marder (Past President of the Society for Neuroscience), Helmut Kettenmann (Past President of FENS) and Sten Grillner (President of FENS) has been published in the most recent issue of PNAS (PNAS 107(50): 21233, 2010).

A quote:

It is our contention that overreliance on the impact factor is a corrupting force on our young scientists (and also on more senior scientists) and that we would be well-served to divest ourselves of its influence.

And another:

The hypocrisy inherent in choosing a journal because of its impact factor, rather than the science it publishes, undermines the ideals by which science should be done.

And their advice:

Minimally, we must forego using impact factors as a proxy for excellence and replace them with in-depth analyses of the science produced by candidates for positions and grants. This requires more time and effort from senior scientists and cooperation from international communities, because not every country has the necessary expertise in all areas of science.

It reminds me of a piece lampooning impact factors by Uinseonn O’Breathnach (me too) in Current Biology a few years ago, entitled ‘Sciagra’:

What is it? Sciagra™ is a psychologically self-administered drug that acts on grammar and vocabulary in scientific papers with the aim of improving performance, or at least convincing the user that it does.

How widespread is its use? It’s almost impossible to avoid in impact factor zones above 8. Some disciplines even have their own compounds. Psyagra™ and Genagra™ are particularly dangerous new ‘society’ versions, especially potent and unfortunately accessible to journalists who have to write “It’s the Brain wot does it!” or “Scientists produce creature that is half human, half grant reviewer” stories to tight deadlines.

How do I recognise its use by others? The symptoms are easy to spot. A user will always tell you the impact factor of the journal rather than what the paper is about. They will display an intensity unrelated to the importance of the finding and an inability to cite anything published before 1999. They frequently meet rejection of a paper with a complaint to the editor, and seasoned users may even make unsolicited phone calls to editors to make their complaint.

It seems to be available on open access.