On Facebook, the Employment Control Framework and root gardening | An eye on science and what makes it go
In 2004 Mark Zuckerberg created Facebook, an idea now worth $65 billion that has changed the way people communicate. It is probably the most successful venture in the history of capitalism, and hence in the history of the modern economy.
#ECF11: Not doing more with less, but doing more of what we tell you
Imagine this: times are hard. A business gets into trouble, and begins to scale back its costs by telling its various departments to do more with less. Where last year you had X for your budget, now you have 75% of X. No bother, the departments say, and off they go, doing more with less.

Now let's say the CEO of the business says: "Actually lads, in addition to the doing-more-with-less stuff, we won't let you go out and get funds from elsewhere (certainly not the head division), which might actually make the business some money and take some of the pressure off others. Not only that, we'll make sure any incentive you had to do more with less is taken away."

Sorry, what? That's insane. Why wouldn't they want a situation where the best people in the business did what they did best and brought in funds to allow it to grow? Why wouldn't they incentivise non-core expansion with promotions, bonuses and back-slapping opportunities? Why wouldn't the business accept that you can't cut too much too quickly, especially at the bottom, otherwise the business will die at its roots?

That's exactly what is proposed in the revised and expanded Higher Education Authority's Employment Control Framework (ECF), signed by the last Minister for Finance as he was cleaning out his desk. Ireland's universities receive a block grant from the Higher Education Authority on behalf of the government. The HEA holds the purse strings, and the ECF is its way of tightening them.
Atlantic Corridor STEM Conference, which focuses on how science, technology, engineering and maths are taught
The Atlantic Corridor STEM Conference, which focuses on how science, technology, engineering and maths are taught in our schools and colleges, takes place in March with an internationally renowned keynote speaker. Dr Ben Goldacre is the author of the Guardian newspaper's weekly Bad Science column. His website, www.badscience.net, is devoted to satirical criticism of scientific inaccuracy, health scares, pseudoscience and quackery. It focuses especially on examples from the mass media, consumer product marketing, problems with the pharmaceutical industry and its relationship to medical journals, and complementary and alternative medicine in Britain.
Now in its third year, the conference gives teachers, lecturers and anyone connected to the education sector the opportunity to examine how well certain subjects are taught in our schools and colleges. It considers how young children should be when STEM subjects are first introduced, as well as the teaching methods used. The conference examines alternatives to current practice and gives educators an opportunity to help improve the existing curriculum.
The conference will also host over 100 Transition Year students who will be challenged to give their honest views on the subjects. The students will then deliver the results to those attending the conference.
The conference takes place on March 10th 2011 in the Tullamore Court Hotel. To book your place at this event please visit www.eventelephant.com/atlanticconference2011
Other speakers at the conference include Sarah Baird from the Arizona Centre for STEM Education; Prof. Patrick Cunningham, Chief Scientific Advisor to the Government; Dr. Thad Starner, Founder and Director of the Contextual Computing Group at Georgia Tech; and Paul Carroll from CPL.
The conference is sponsored by Ericsson, the world's leading telecommunications company. Running in parallel to the conference are a primary school science competition and a workshop for secondary school students focussing on their attitudes to science and technology.
As Republicans take control of the US House of Representatives, science could take a hit – despite a new Congressional measure to boost funding.
“There’s going to be a big fight,” says Michael Lubell of the American Physical Society in Washington DC. “The question is who blinks first.”
In one of its last votes before the holidays, Congress passed the America COMPETES Reauthorization Act. Contained in the act is a resolution to boost science funding over the next three years.
But with budget-minded Republicans now a majority in the House of Representatives, even maintaining science funding at existing levels could be a struggle.
There was a short piece here recently on the misuse of impact factors to measure scientific quality, and how this in turn leads to dependence on drugs like Sciagra™ and other dangerous variants such as Psyagra™ and Genagra™.
Here’s an interesting and important post from Michael Nielsen on the mismeasurement of science. The essence of his argument is straightforward: unidimensional reduction of a multidimensional variable set is going to lead to significant loss of important information (or at least that’s how I read it):
My argument … is essentially an argument against homogeneity in the evaluation of science: it’s not the use of metrics I’m objecting to, per se, rather it’s the idea that a relatively small number of metrics may become broadly influential. I shall argue that it’s much better if the system is very diverse, with all sorts of different ways being used to evaluate science. Crucially, my argument is independent of the details of what metrics are being broadly adopted: no matter how well-designed a particular metric may be, we shall see that it would be better to use a more heterogeneous system.
Nielsen notes three problems with centralised metrics (whether that means relying solely on an h-index, citation counts, publication counts, or whatever else you fancy):
Centralized metrics suppress cognitive diversity: Over the past decade the complexity theorist Scott Page and his collaborators have proved some remarkable results about the use of metrics to identify the “best” people to solve a problem (ref,ref).
Centralized metrics create perverse incentives: Imagine, for the sake of argument, that the US National Science Foundation (NSF) wanted to encourage scientists to use YouTube videos as a way of sharing scientific results. The videos could, for example, be used as a way of explaining crucial-but-hard-to-verbally-describe details of experiments. To encourage the use of videos, the NSF announces that from now on they’d like grant applications to include viewing statistics for YouTube videos as a metric for the impact of prior research. Now, this proposal obviously has many problems, but for the sake of argument please just imagine it was being done. Suppose also that after this policy was implemented a new video service came online that was far better than YouTube. If the new service was good enough then people in the general consumer market would quickly switch to the new service. But even if the new service was far better than YouTube, most scientists – at least those with any interest in NSF funding – wouldn’t switch until the NSF changed its policy. Meanwhile, the NSF would have little reason to change their policy, until lots of scientists were using the new service. In short, this centralized metric would incentivize scientists to use inferior systems, and so inhibit them from using the best tools.
Centralized metrics misallocate resources: One of the causes of the financial crash of 2008 was a serious mistake made by rating agencies such as Moody’s, S&P, and Fitch. The mistake was to systematically underestimate the risk of investing in financial instruments derived from housing mortgages. Because so many investors relied on the rating agencies to make investment decisions, the erroneous ratings caused an enormous misallocation of capital, which propped up a bubble in the housing market. It was only after homeowners began to default on their mortgages in unusually large numbers that the market realized that the ratings agencies were mistaken, and the bubble collapsed. It’s easy to blame the rating agencies for this collapse, but this kind of misallocation of resources is inevitable in any system which relies on centralized decision-making. The reason is that any mistakes made at the central point, no matter how small, then spread and affect the entire system.
What of course is breathtaking is that scientists, who spend so much time devising sensitive measurements of complex phenomena, can sometimes suffer a bizarre cognitive pathology when it comes to how the quality of science itself should be measured. The sudden rise of the h-index is surely proof of that. Nothing can substitute for the hard work of actually reading the papers and judging their quality and creativity. Grillner and colleagues recommend that "Minimally, we must forego using impact factors as a proxy for excellence and replace them with in-depth analyses of the science produced by candidates for positions and grants. This requires more time and effort from senior scientists and cooperation from international communities, because not every country has the necessary expertise in all areas of science." Nielsen makes a similar recommendation.
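To see just how much a single number discards, here is a minimal sketch of the h-index calculation (the citation counts are invented for illustration). Two quite different publication records, one steady and one dominated by a few blockbuster papers, collapse to exactly the same score:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Two very different careers, same h-index:
print(h_index([10, 8, 5, 4, 3]))     # 4
print(h_index([100, 90, 80, 4, 1]))  # 4
```

This is the unidimensional reduction Nielsen warns about: the metric is perfectly well-defined, yet it cannot distinguish records that any reader of the actual papers would judge very differently.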
There is an oversupply of PhDs. Although a doctorate is designed as training for a job in academia, the number of PhD positions is unrelated to the number of job openings.
Meanwhile, business leaders complain about shortages of high-level skills, suggesting PhD programmes are not teaching the right things. The fiercest critics compare research doctorates to Ponzi or pyramid schemes.
From The Economist.
Many of the arguments are valid. But make two plausible assumptions and you get a different answer.
Assumption 1: Innovation, including academic research, is the fundamental driver of long term health, wealth and happiness for the human race. (The “including academic research” bit is the biggest leap.)
Assumption 2: Unfortunately it’s very difficult to say beforehand who will and who will not produce great, or even good, research. (Even after five years departments have trouble predicting which of their crop will excel.)
In this world, each extra PhD raises the chances of one more brilliant, world-changing idea. While hardly comforting to the thousands who toil without job prospects, the collective benefits just might outweigh all the individual misery.
The decision might be individually rational as well, especially if students are no better at predicting their success than their advisors (they probably aren’t).
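The collective-benefit argument can be put in toy form: if each PhD independently has some small probability p of producing a breakthrough (p here is an invented illustrative number, not an estimate), then the chance that at least one emerges grows quickly with the size of the cohort.

```python
def p_breakthrough(n_phds, p=0.001):
    """Probability that at least one of n_phds independent candidates
    produces a breakthrough, each with probability p (p is invented)."""
    return 1 - (1 - p) ** n_phds

for n in (100, 1_000, 10_000):
    print(f"{n:>6} PhDs -> {p_breakthrough(n):.3f}")
```

With p = 0.001 the chance rises from roughly 10% at 100 PhDs to over 99% at 10,000, which is the sense in which the collective benefits just might outweigh the individual misery.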
(A similar analogy comes from Lant Pritchett, who points out that you need a system that produces an enormous number of terrible dance recitals to get the handful of sublime performers. The same logic applies, he argues, to development projects and policies.)
One counterpoint: Here is where I would expect to see overconfidence bias lead to oversupply (and few of the collective benefits thereof). So maybe we need a system that gives the least promising an easier out that saves face.