Why investing in basic research pays – an interactive presentation from the Association of American Universities (AAU)
The story of how the Digital Library Initiative created the context from which Google evolved.
UPDATE: much, much, much more here:
I’m curating my first ever ‘lab in the gallery’ – details below. Dr Joanne Feeney, a postdoc in my group, has been absolutely central to making this even happen, as has the great and creative team in the Science Gallery – Michael John, Rob, Maria, Lynn, Anja, Ian, Derek, Rosa (and John for superb computer programming).
So, think you have a good memory? Are you brave enough to put it to the test at Science Gallery’s MEMORY LAB this March?
FIRE, GOAT, PLUM, SUMP, VANE, HAIR, FARM... How come you can recall your first day at school vividly but won’t remember this list of words when you get to the bottom of this paragraph? Why are some of us memory champions while others have heads like sieves? Is our ability to remember nature or nurture?
MEMORY LAB, a month-long LAB IN THE GALLERY experience at Science Gallery, Trinity College Dublin, invites the public to take part in a range of real, scientific experiments into how we remember or why we forget.
Eight separate experiments will investigate a range of aspects of functional memory, from how good your short-term memory is to how and why we evolved memory in the first place. Be prepared for a barrage of information you will have to recall, including numbers, letters, faces and even smells! We’re also inviting people to come and record their earliest ever memory as MEMORY LAB seeks to amass the largest database of earliest memories in the world.
The experimentation begins at Science Gallery on March 11th and continues until April 8th 2011. MEMORY LAB will also feature a rich events programme to allow you to explore memory more deeply – including a talk by former US Memory Champion and author of “Moonwalking with Einstein: The Art and Science of Remembering Everything” Joshua Foer. Events also include a special recruitment event on how to make yourself memorable and Science Gallery’s first ever table quiz, where you can put your powers of recall to the test.
MEMORY LAB opens to the public on March 11th and runs until April 8th. The experiments run Tuesday-Friday 12:00-20:00 and Saturday-Sunday 12:00-18:00. Admission free with a suggested donation of €5. You can be the first to experience MEMORY LAB by going to the exclusive preview party on March 10th. All Science Gallery MEMBERS+ go for free (sign up today here) or buy your tickets here.
The Hunt Report (National Strategy for Higher Education) is now available. There’s been lots of commentary on it: e.g. from Eoin O’Dell, Ferdinand von Prondzynski and various newspapers (see Ninth Level Ireland for an aggregation of many of the stories).
Here, I just want to draw attention to a conflict between a recommendation of the Hunt Report and Bord Snip Nua (Report of the Special Group on Public Service Numbers and Expenditure Programmes). The two reports have very different recommendations for the Higher Education Authority, with Snip recommending abolition, and Hunt recommending beefing it up. This conflict has not attracted any attention that I have noticed yet.
‘The multiple role for and expectations of the higher education system will require a strong central driving mechanism. Since the Higher Education Authority Act of 1971, funding and policy advisory responsibility have been vested in the HEA. This responsibility was widened to include the Institute of Technology sector in 2006. The Report of the Special Group on Public Service Numbers and Expenditure Programmes (2009) recommended that the HEA be abolished and its staff and functions be merged back into the Department of Education & Skills.’
Instead, Hunt recommends: ‘The Strategy Group, taking account of the more specialised role involved in future system governance, took the view that the best approach to take is to retain a Higher Education Authority.’
The McCarthy group stated:
D. 3 Merge HEA with D/E&S
There is duplication in the number of staff carrying out administrative supervision work for the third level education institutions across D/E&S and the Higher Education Authority (HEA). There are 44 staff in the D/E&S supervising the third level institutions. The Special Group is of the view that this staffing level is too high considering that the HEA (staff of 59) already carries out similar activities. The Group considers that the HEA should be merged with the D/E&S to generate efficiencies in staffing and administrative expenditure. The Group envisages savings of €1m and associated staffing reductions of 15.
How will these differing views be brought into register? Central Planning hasn’t worked so well (either in the former USSR or currently in the HSE), for reasons that good Hayekians appreciate: the existence of a widespread ‘pretence that central government …[can] acquire knowledge which, in fact, is unobtainable’ (via), which can then be used to generate courses of action, and even to know and predict the future.
It would have been good to have seen within the Hunt report sunset options for strategies that were palpably not working, as well as a few clear statements of what empirical observations would void the recommendations of the report. (See also this post on how centralisation suppresses cognitive diversity, creates perverse incentives and misallocates resources). Homogenising the third-level system cannot be a good thing; as Ferdinand von Prondzynski comments:
The flaw in this vision is that it doesn’t work. Universities are at their most innovative and creative when they are allowed to pursue their own vision. So for example, the current German government is busily changing the post-War framework of universities as coordinated government agencies and giving them higher levels of strategic autonomy exactly because the ‘agency’ model has made them under-perform in global terms. American universities became the global leaders they now are from the moment that they were allowed to escape from bureaucratic controls. There is no evidence from anywhere that a centralised coordination of institutional strategies creates wider benefits for society. (Emphasis added)
There was a short piece here recently on the misuse of impact factors to measure scientific quality, and how this in turn leads to dependence on drugs like Sciagra™ and other dangerous variants such as Psyagra™ and Genagra™.
Here’s an interesting and important post from Michael Nielsen on the mismeasurement of science. The essence of his argument is straightforward: unidimensional reduction of a multidimensional variable set is going to lead to significant loss of important information (or at least that’s how I read it):
My argument … is essentially an argument against homogeneity in the evaluation of science: it’s not the use of metrics I’m objecting to, per se, rather it’s the idea that a relatively small number of metrics may become broadly influential. I shall argue that it’s much better if the system is very diverse, with all sorts of different ways being used to evaluate science. Crucially, my argument is independent of the details of what metrics are being broadly adopted: no matter how well-designed a particular metric may be, we shall see that it would be better to use a more heterogeneous system.
Nielsen notes three problems with centralised metrics (whether the metric is an h-index, citation counts, publication counts, or whatever else you fancy):
Centralized metrics suppress cognitive diversity: Over the past decade the complexity theorist Scott Page and his collaborators have proved some remarkable results about the use of metrics to identify the “best” people to solve a problem (ref,ref).
Centralized metrics create perverse incentives: Imagine, for the sake of argument, that the US National Science Foundation (NSF) wanted to encourage scientists to use YouTube videos as a way of sharing scientific results. The videos could, for example, be used as a way of explaining crucial-but-hard-to-verbally-describe details of experiments. To encourage the use of videos, the NSF announces that from now on they’d like grant applications to include viewing statistics for YouTube videos as a metric for the impact of prior research. Now, this proposal obviously has many problems, but for the sake of argument please just imagine it was being done. Suppose also that after this policy was implemented a new video service came online that was far better than YouTube. If the new service was good enough then people in the general consumer market would quickly switch to the new service. But even if the new service was far better than YouTube, most scientists – at least those with any interest in NSF funding – wouldn’t switch until the NSF changed its policy. Meanwhile, the NSF would have little reason to change their policy, until lots of scientists were using the new service. In short, this centralized metric would incentivize scientists to use inferior systems, and so inhibit them from using the best tools.
Centralized metrics misallocate resources: One of the causes of the financial crash of 2008 was a serious mistake made by rating agencies such as Moody’s, S&P, and Fitch. The mistake was to systematically underestimate the risk of investing in financial instruments derived from housing mortgages. Because so many investors relied on the rating agencies to make investment decisions, the erroneous ratings caused an enormous misallocation of capital, which propped up a bubble in the housing market. It was only after homeowners began to default on their mortgages in unusually large numbers that the market realized that the ratings agencies were mistaken, and the bubble collapsed. It’s easy to blame the rating agencies for this collapse, but this kind of misallocation of resources is inevitable in any system which relies on centralized decision-making. The reason is that any mistakes made at the central point, no matter how small, then spread and affect the entire system.
What of course is breath-taking is that scientists, who spend so much time devising sensitive measurements of complex phenomena, can sometimes suffer a bizarre cognitive pathology when it comes to how the quality of science itself should be measured. The sudden rise of the h-index is surely proof of that. Nothing can substitute for the hard work of actually reading the papers and judging their quality and creativity. Grillner and colleagues recommend that “Minimally, we must forego using impact factors as a proxy for excellence and replace them with in-depth analyses of the science produced by candidates for positions and grants. This requires more time and effort from senior scientists and cooperation from international communities, because not every country has the necessary expertise in all areas of science.” Nielsen makes a similar recommendation.
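A tiny illustration (my own sketch, not from Nielsen or Grillner) of the “unidimensional reduction” problem: the h-index collapses an author’s entire citation distribution into a single number, so radically different research profiles can look identical. The citation counts below are invented for illustration.

```python
def h_index(citations):
    """h = the largest h such that the author has h papers
    each cited at least h times."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# Two very different careers, same single-number summary:
steady = [5, 5, 5, 5, 5]       # five solidly cited papers
one_hit = [400, 5, 5, 5, 5]    # one landmark paper plus four others

print(h_index(steady), h_index(one_hit))  # both give h = 5
```

The landmark paper’s 395 extra citations simply vanish from the summary – exactly the kind of information loss that makes a single broadly adopted metric a poor proxy for quality.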
Posted by David Boaz
Matt Ridley’s new book, The Rational Optimist: How Prosperity Evolves, is garnering rave reviews. Ridley, science writer and popularizer of evolutionary psychology, shows how it was trade and specialization of labor–and the resulting massive growth in technological sophistication–that hauled humanity from its impoverished past to its comparatively rich present. These trends will continue, he argues, and will solve many of today’s most pressing problems, from the spread of disease to the threat of climate change.
The Cato Institute has now presented three different looks at the book, with a review in the Cato Journal, another in Regulation, and an event at Cato with Matt Ridley himself.
FWIW, it is one of the most enjoyable books I have read this year, and a great counter to the misery-mongering that is currently rife. The tagline ‘ideas having sex’ is a great metaphor for the advancement of knowledge. Here is his TED talk – well worth watching.
From The Scientist: ‘The astonishing secret to getting jobs, grants, papers, and happiness in biomedical research’
From the latest Federation of European Neuroscience Societies (FENS) Newsletter – an article from PNAS on ‘The Boon and Bane of the Impact Factor’ (and abuse of the drug ‘Sciagra’)
A very hard-hitting piece on the abuses of impact factors and their pernicious effects on how science is done. Sten Grillner is a Kavli Prize winner who recently gave a lecture at Trinity College Institute of Neuroscience. It is worth musing on whether or not the widespread use and abuse of impact factors is science’s very own special version of grade inflation.
From FENS: The editorial “Impacting our Young” by Eve Marder (Past President of the Society for Neuroscience), Helmut Kettenmann (Past President of FENS) and Sten Grillner (President of FENS) has been published in the most recent issue of PNAS (PNAS 2010 107 (50) 21233).
It is our contention that overreliance on the impact factor is a corrupting force on our young scientists (and also on more senior scientists) and that we would be well-served to divest ourselves of its influence.
The hypocrisy inherent in choosing a journal because of its impact factor, rather than the science it publishes, undermines the ideals by which science should be done.
And their advice:
Minimally, we must forego using impact factors as a proxy for excellence and replace them with in-depth analyses of the science produced by candidates for positions and grants. This requires more time and effort from senior scientists and cooperation from international communities, because not every country has the necessary expertise in all areas of science.
What is it? Sciagra™ is a psychologically self-administered drug that acts on grammar and vocabulary in scientific papers with the aim of improving performance, or at least convincing the user that it does.
How widespread is its use? It’s almost impossible to avoid in impact factor zones above 8. Some disciplines even have their own compounds. Psyagra™ and Genagra™ are particularly dangerous new ‘society’ versions, especially potent and unfortunately accessible to journalists who have to write “It’s the Brain wot does it!” or “Scientists produce creature that is half human, half grant reviewer” stories to tight deadlines.
How do I recognise its use by others? The symptoms are easy to spot. A user will always tell you the impact factor of the journal rather than what the paper is about. They will display an intensity unrelated to the importance of the finding and an inability to cite anything published before 1999. They frequently meet rejection of a paper with a complaint to the editor, and seasoned users may even make unsolicited phone calls to editors to make their complaint.
It seems to be available on open access.