I’m curating my first ever ‘lab in the gallery’ – details below. Dr Joanne Feeney, a postdoc in my group, has been absolutely central to making this happen, as has the great and creative team in the Science Gallery – Michael John, Rob, Maria, Lynn, Anja, Ian, Derek, Rosa (and John for superb computer programming).
So, think you have a good memory? Are you brave enough to put it to the test at Science Gallery’s MEMORY LAB this March?
FIRE, GOAT, PLUM, SUMP, VANE, HAIR, FARM... How come you can recall your first day at school vividly but won’t remember this list of words when you get to the bottom of this paragraph? Why are some of us memory champions while others have heads like sieves? Is our ability to remember nature or nurture?
MEMORY LAB, a month-long LAB IN THE GALLERY experience at Science Gallery, Trinity College Dublin, invites the public to take part in a range of real, scientific experiments into how we remember or why we forget.
Eight separate experiments will investigate a range of aspects of functional memory, from how good your short-term memory is to how and why we evolved memory in the first place. Be prepared for a barrage of information you will have to recall, including numbers, letters, faces and even smells! We’re also inviting people to come and record their earliest ever memory as MEMORY LAB seeks to amass the largest database of earliest memories in the world.
The experimentation begins at Science Gallery on March 11th and continues until April 8th 2011. MEMORY LAB will also contain a rich events programme to allow you to explore memory more deeply – including a talk by Joshua Foer, former US Memory Champion and author of “Moonwalking with Einstein: The Art and Science of Remembering Everything”. Events also include a special recruitment event on how to make yourself memorable and Science Gallery’s first ever table quiz, where you can put your ability to recall to the test.
MEMORY LAB opens to the public on March 11th and runs until April 8th. The experiments run Tuesday-Friday 12:00-20:00 and Saturday-Sunday 12:00-18:00. Admission free with a suggested donation of €5. You can be the first to experience MEMORY LAB by going to the exclusive preview party on March 10th. All Science Gallery MEMBERS+ go for free (sign up today here) or buy your tickets here.
[This is the second in a series of posts relevant to the all-consuming topic in Ireland at the moment: the fall of the Fianna Fail/Green Party Coalition Government, and the resulting general election to take place on the 25 February 2011. My title approximates a question/comment posed by a guest (I think it was Jim Glennon, the former FF TD) on George Hook’s programme on Newstalk].
The iron law of institutions is the name given to the tendency of people who hold power within an institution to preserve that power (even when the institution is failing), rather than to concern themselves with the success of the institution itself. We have seen this law in action during the recent ructions in Fianna Fáil. Taoiseach Brian Cowen continually asserted, as he fell from 52% to 8% in the polls, that he was the democratically-elected party leader. Fianna Fáil found itself incapable of terminating his badly-ailing leadership (as it fell from 40% to 14% in the polls). And this despite the electoral cliff that Fianna Fáil was driving over! Disputes over Brian Cowen’s leadership convulsed Fianna Fáil. There is a substantial literature on successful and unsuccessful leadership. One major review suggests successful leadership requires obeying a few simple precepts. Leaders must be sensitive to their followers, support them, treat them with respect and exceed their expectations; be positive and inspirational; work hard, and be seen to work hard, for the group; and not be overbearing or arrogant. Which of our leaders, past and present, can tick off these precepts successfully?
Solely focusing on individual leadership and ignoring the situations within which behaviour occurs is known as the fundamental attribution error, a cognitive bias caused by the salience of the person and the relative invisibility of the system (group norms, laws, rules, etc.). The lesson for Irish politics is simple: changing personnel is not enough to solve our problems, because the dysfunctional system itself persists. We need substantial systemic changes too. Political decisions are often (usually?) taken within a group context (think Cabinet collective responsibility). Groupthink occurs when a group makes poor decisions because of high levels of within-group cohesion and a desire to minimise within-group conflict (as might happen in an exhausted, embattled and worn-out Government Cabinet!). The necessary critical analysis does not occur. NAMA would hardly have emerged as the optimal policy response had there been a competitive public forum to test and vet competing policy ideas (with the Cabinet adjudicating). The hugely-criticised ‘Credit Institutions (Stabilisation) Act 2010’ (e.g.) would hardly have emerged from such a process. Such a process would show that vesting such astonishing power and authority in the frail, bounded rationality of a single individual (the Minister for Finance) is a recipe for future catastrophe. Groupthink can be reduced by the group having an extensive network of weak ties to other individuals and groups. Weak ties provide us with novel ideas and knowledge, and provide a route to ‘reality-test’ planned courses of action. An extensive national and international weak-tie network would provide Government Ministers with knowledge, insights and ideas unavailable within the Dáil bubble.
Where did it all go so badly wrong? (Part 1: Anosognosia, cognitive biases, cognitive diversity, expertise, incompetence)
[This is the first in a series of posts relevant to the all-consuming topic in Ireland at the moment: the fall of the Fianna Fail/Green Party Coalition Government, and the resulting general election to take place on the 25 February 2011. My title approximates a question/comment posed by a guest (I think it was Jim Glennon, the former FF TD) on George Hook’s programme on Newstalk].
(Figure with apologies and thanks to a variety of similar figures floating around the web)
‘We need to take difficult decisions’ Brian Lenihan
No: we actually need to take much better-quality decisions, ones that acknowledge the frailty and error-proneness of human cognition, and which are capable of changing as circumstances change and knowledge evolves.
Four times in eighty years (the 1930s, ’50s, ’80s and now) the economy has been wrecked. Rightly, there has been plenty of economic analysis (e.g.) of our current problems. There have also been attempts to locate our recurrent problem in the Irish personality (but nations don’t have personalities!), or in dispositions inherited from our colonial past (a novel research area for epigenetics, I suppose). Whatever about the Irish personality, there is no one now in power, and there are few still alive, who remember Ireland as it was pre-1916/1921. We need to look for more proximate causes in the decisions made and executed by our national elites (the administrative, political and commercial classes in charge of the institutions powering the country).
How did our elites get it so spectacularly wrong? A useful way of looking at the problem is to focus on persistent and enduring cognitive biases and errors leading to faulty decision-making by individuals (political leaders, civil servants, bankers, etc.) and institutions (social systems and organisations: Government departments, banks, churches, etc). Looking for proximate causes in pervasive and enduring cognitive errors by individuals and institutions (the social systems within which collective cognition is organised) provides a route to prevent us persisting in correctable mistakes. Happily, we have a guide from more than fifty years of data in experimental psychology and experimental brain research to understand how human rationality and reason is bounded and error-prone. [UPDATE: see note at bottom]
The word ‘delusional’ has been used to describe the behaviour of certain members of our elites, but delusions imply the psychiatric diagnosis of pathological beliefs maintained contrary to all evidence. Anosognosia (a more useful description) is the condition of literally being ‘without knowledge’ of (being unaware of) a cognitive or other impairment, and behaving as if there is no problem. This might describe the lack of a regulatory response to capital adequacy in our banks! Additionally, the knowledge and expertise required to solve the problems confronting us is greater than our elites understand or can acknowledge. This leads to anosognosia within the cultures of these organisations. Irish elites do not know that they do not know, nor do they even know what they need to know. The figure at top tries to capture this idea. Both the party political system and the civil service system undervalue expertise and suppress cognitive diversity. Substantial data show that complex and difficult problems (for example, how to rescue a collapsing economy) are best solved by groups with substantial intellectual strength and capacity (obvious) and substantial diversity of experience (not obvious). Truth be told, the problems we are grappling with are beyond the judgement and capacity of the few teachers and the couple of lawyers comprising the bulk of the Cabinet just leaving office. And we still lack (with some exceptions) the high-level policy formation and broad expertise inputs required to support the Cabinet when it has to make crucial decisions.
Our political system is constructed so that the proving ground for national politics is local council politics. This experience leaves politicians good at political manoeuvring and vote-getting, but ill-prepares them for the national game, which ends up refracted through the prism of the local rather than the national or international. This is why many of our competitor countries deliberately bring extra cognitive diversity into their political systems. Famously, the Cabinet of the US Government has only one elected politician (the President). However, it can field a Nobel laureate in Physics (Secretary of Energy Steven Chu) and world-class economics expertise (Timothy Geithner; Austan Goolsbee). Some have disputed the necessity for some form of national list system because it might have resulted in Sean Fitzpatrick being nominated to the Dáil. (Has this been a conspicuous problem in countries with national list systems?) This is an interesting cognitive error – not exploring counterfactuals to this example, as these would falsify the claim. It is even more likely that many people of capacity and ability are turned off by the system as it currently stands. After all, we have plenty of talent, but it shies away from electoral politics (to all our cost). Such arguments may result from cognitive dissonance (the unpleasant feeling caused by holding two contradictory beliefs simultaneously: for example, “our system isn’t delivering; I and my colleagues are good people and are part of the system; therefore it isn’t the system; the problems lie elsewhere” – with bad bankers, for example, although logically this may also be true!). The potential reduction in available Cabinet slots for non-list constituency candidates under a list system presumably discommodes some too.
UPDATE: I forgot the Dunning-Kruger effect – a quote from the Wikipedia entry:
The Dunning-Kruger Effect is a cognitive bias in which unskilled people make poor decisions and reach erroneous conclusions, but their incompetence denies them the metacognitive ability to appreciate their mistakes. The unskilled therefore suffer from illusory superiority, rating their ability as above average, much higher than it actually is, while the highly skilled underrate their own abilities, suffering from illusory inferiority. This leads to the situation in which less competent people rate their own ability higher than more competent people. It also explains why actual competence may weaken self-confidence. Competent individuals falsely assume that others have an equivalent understanding. “Thus, the miscalibration of the incompetent stems from an error about the self, whereas the miscalibration of the highly competent stems from an error about others.”
Kruger and Dunning proposed that, for a given skill, incompetent people will:
- tend to overestimate their own level of skill;
- fail to recognize genuine skill in others;
- fail to recognize the extremity of their inadequacy;
- recognize and acknowledge their own previous lack of skill, if they can be trained to substantially improve.
Now, who does this remind you of?
Atlantic Corridor STEM Conference, which focuses on how science, technology, engineering and maths are taught
The Atlantic Corridor STEM Conference, which focuses on how science, technology, engineering and maths are taught in our schools and colleges, takes place in March with a keynote speaker of international standing. Dr Ben Goldacre is the author of the Guardian newspaper’s weekly Bad Science column. His website www.badscience.net is devoted to satirical criticism of scientific inaccuracy, health scares, pseudoscience and quackery. It focuses especially on examples from the mass media, consumer product marketing, problems with the pharmaceutical industry and its relationship to medical journals, as well as complementary and alternative medicine in Britain.
Now in its third year, the conference gives education professionals – teachers, lecturers and anyone connected to the education sector – the opportunity to examine the quality of the way in which STEM subjects are taught in our schools and colleges. It examines how young children should be when first introduced to STEM subjects, as well as the teaching methods used. The conference explores alternatives to current practice and gives educators an opportunity to help make a difference to the existing curriculum.
The conference will also host over 100 Transition Year students who will be challenged to give their honest views on the subjects. The students will then deliver the results to those attending the conference.
The conference takes place on March 10th 2011 in the Tullamore Court Hotel. To book your place at this event please visit www.eventelephant.com/atlanticconference2011
Other speakers at the conference include Sarah Baird from the Arizona Centre for STEM Education, Prof. Patrick Cunningham Chief Scientific Advisor to the Government, Dr. Thad Starner Founder and Director of the Contextual Computing Group in Georgia Tech and Paul Carroll from CPL.
The conference is sponsored by Ericsson, the world’s leading telecommunications company. Running in parallel to the conference are a primary school science competition and a workshop for secondary school students focusing on their attitudes to science and technology.
There was a short piece here recently on the misuse of impact factors to measure scientific quality, and how this in turn leads to dependence on drugs like Sciagra™ and other dangerous variants such as Psyagra™ and Genagra™.
Here’s an interesting and important post from Michael Nielsen on the mismeasurement of science. The essence of his argument is straightforward: unidimensional reduction of a multidimensional variable set is going to lead to significant loss of important information (or at least that’s how I read it):
My argument … is essentially an argument against homogeneity in the evaluation of science: it’s not the use of metrics I’m objecting to, per se, rather it’s the idea that a relatively small number of metrics may become broadly influential. I shall argue that it’s much better if the system is very diverse, with all sorts of different ways being used to evaluate science. Crucially, my argument is independent of the details of what metrics are being broadly adopted: no matter how well-designed a particular metric may be, we shall see that it would be better to use a more heterogeneous system.
Nielsen notes three problems with centralised metrics (whether relying solely on an h-index, citation counts, publication counts, or whatever else you fancy):
Centralized metrics suppress cognitive diversity: Over the past decade the complexity theorist Scott Page and his collaborators have proved some remarkable results about the use of metrics to identify the “best” people to solve a problem (ref,ref).
Centralized metrics create perverse incentives: Imagine, for the sake of argument, that the US National Science Foundation (NSF) wanted to encourage scientists to use YouTube videos as a way of sharing scientific results. The videos could, for example, be used as a way of explaining crucial-but-hard-to-verbally-describe details of experiments. To encourage the use of videos, the NSF announces that from now on they’d like grant applications to include viewing statistics for YouTube videos as a metric for the impact of prior research. Now, this proposal obviously has many problems, but for the sake of argument please just imagine it was being done. Suppose also that after this policy was implemented a new video service came online that was far better than YouTube. If the new service was good enough then people in the general consumer market would quickly switch to the new service. But even if the new service was far better than YouTube, most scientists – at least those with any interest in NSF funding – wouldn’t switch until the NSF changed its policy. Meanwhile, the NSF would have little reason to change their policy, until lots of scientists were using the new service. In short, this centralized metric would incentivize scientists to use inferior systems, and so inhibit them from using the best tools.
Centralized metrics misallocate resources: One of the causes of the financial crash of 2008 was a serious mistake made by rating agencies such as Moody’s, S&P, and Fitch. The mistake was to systematically underestimate the risk of investing in financial instruments derived from housing mortgages. Because so many investors relied on the rating agencies to make investment decisions, the erroneous ratings caused an enormous misallocation of capital, which propped up a bubble in the housing market. It was only after homeowners began to default on their mortgages in unusually large numbers that the market realized that the ratings agencies were mistaken, and the bubble collapsed. It’s easy to blame the rating agencies for this collapse, but this kind of misallocation of resources is inevitable in any system which relies on centralized decision-making. The reason is that any mistakes made at the central point, no matter how small, then spread and affect the entire system.
What is, of course, breath-taking is that scientists, who spend so much time devising sensitive measurements of complex phenomena, can sometimes suffer a bizarre cognitive pathology when it comes to how the quality of science itself should be measured. The sudden rise of the h-index is surely proof of that. Nothing can substitute for the hard work of actually reading the papers and judging their quality and creativity. Grillner and colleagues recommend that “Minimally, we must forego using impact factors as a proxy for excellence and replace them with in-depth analyses of the science produced by candidates for positions and grants. This requires more time and effort from senior scientists and cooperation from international communities, because not every country has the necessary expertise in all areas of science.” Nielsen makes a similar recommendation.
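Nielsen’s information-loss point is easy to make concrete with the h-index itself. Here is a minimal sketch in Python (the citation counts are invented purely for illustration): two very different publication records – a steady career of solid papers and a career built around one landmark paper – collapse to exactly the same single number, so the metric throws away the shape of the citation distribution.

```python
def h_index(citations):
    """The h-index: the largest h such that the author has
    h papers with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# Two hypothetical, very different careers (made-up numbers):
steady = [5, 5, 5, 5, 5]            # five solidly-cited papers
one_hit = [100, 5, 5, 5, 5, 1, 1]   # one landmark paper plus minor work

print(h_index(steady))   # -> 5
print(h_index(one_hit))  # -> 5: the 100-citation paper is invisible
```

Any unidimensional reduction of this kind must discard information; which information gets discarded is a design choice of the metric, which is exactly why a homogeneous, centrally-imposed metric is so dangerous.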