Archive

Posts Tagged ‘grade inflation’

From the latest Federation of European Neuroscience Societies (FENS) Newsletter – an article from PNAS on ‘The Boon and Bane of the Impact Factor’ (and abuse of the drug ‘Sciagra’)

December 23, 2010

A very hard-hitting piece on the abuses of impact factors and their pernicious effects on how science is done. Sten Grillner is a Kavli Prize winner who recently gave a lecture at Trinity College Institute of Neuroscience. It is worth musing on whether or not the widespread use and abuse of impact factors is science’s very own special version of grade inflation.

From FENS: The editorial “Impacting our Young” by Eve Marder (Past President of the Society for Neuroscience), Helmut Kettenmann (Past President of FENS) and Sten Grillner (President of FENS) has been published in the most recent issue of PNAS (PNAS 2010 107 (50) 21233).

A quote:

It is our contention that overreliance on the impact factor is a corrupting force on our young scientists (and also on more senior scientists) and that we would be well-served to divest ourselves of its influence.

And another:

The hypocrisy inherent in choosing a journal because of its impact factor, rather than the science it publishes, undermines the ideals by which science should be done.

And their advice:

Minimally, we must forego using impact factors as a proxy for excellence and replace them with in-depth analyses of the science produced by candidates for positions and grants. This requires more time and effort from senior scientists and cooperation from international communities, because not every country has the necessary expertise in all areas of science.

It reminds me of a piece lampooning impact factors by Uinseonn O’Breathnach (me too) in Current Biology a few years ago, entitled ‘Sciagra’:

What is it? Sciagra™ is a psychologically self-administered drug that acts on grammar and vocabulary in scientific papers with the aim of improving performance, or at least convincing the user that it does.

How widespread is its use? It’s almost impossible to avoid in impact factor zones above 8. Some disciplines even have their own compounds. Psyagra™ and Genagra™ are particularly dangerous new ‘society’ versions, especially potent and unfortunately accessible to journalists who have to write “It’s the Brain wot does it!” or “Scientists produce creature that is half human, half grant reviewer” stories to tight deadlines.

How do I recognise its use by others? The symptoms are easy to spot. A user will always tell you the impact factor of the journal rather than what the paper is about. They will display an intensity unrelated to the importance of the finding and an inability to cite anything published before 1999. They frequently meet rejection of a paper with a complaint to the editor, and seasoned users may even make unsolicited phone calls to editors to make their complaint.

It seems to be available on open access.

It’s that time of year again: exam-grading (and here’s a study suggesting that using red ink means lower scores!)

May 17, 2010

An interesting review by Tom Jacobs of a paper suggesting that teachers’ use of red ink to correct students’ work may result in harsher evaluations:

Remember those gut-wrenching high-school moments when a teacher handed you back a test or assignment, having corrected your mistakes and rendered a harsh verdict in bold red ink? It may be small consolation now, but newly published research suggests your grades may have been higher if that ink had been blue.

A study in the European Journal of Social Psychology suggests the use of red pens may make teachers more likely to spot errors on tests and to be more critical when grading essays. “Despite teachers’ efforts to free themselves from extraneous influences while grading,” write California State University Northridge psychologist Abraham Rutchick, Tufts University psychologist Michael Slepian and Bennett Ferris of Phillips Exeter Academy, “the very act of picking up a red pen can bias their evaluations.”

Is this the cure for grade inflation?

Tom Cotter on grade inflation – today’s Irish Times

Via the Irish Times

Madam, – The reports by your Education Editor, Seán Flynn (Front page and Home News) on grade inflation are disconcerting, but, as an educator on the front line of the education system, I am not surprised. The reasons for the observed grade inflations are rooted in the education and examining patterns laid down at Junior and Leaving Cert levels and then extended into third level.

Our students learn by rote and are then examined in a formula metric way that simply measures their ability to remember facts. The more this is practised, the better the grades are going to get as both student and examiner master the system which changes little from year to year.

We neither encourage nor test creativity by the way we teach and examine. The result of this is that we produce students who have high grades because they can remember facts, but who are unable to think critically or carry out analysis when they go out into the real world.

Employers are clearly picking up on this deficiency which should not be there if our grades were an accurate reflection of a student’s overall intellectual ability. In addition to teaching students the “facts” we also need to teach them to think and be creative and then allow them to express this creativity in an examination situation. By doing this we will be able to produce more rounded and better-educated students and the current grade inflation will correct itself. – Yours, etc,

THOMAS G COTTER, BSc, DPhil, MRIA,

Prof of Biochemistry,

University College Cork,

Cork.


Comments on grade inflation – Google welcomes review of college’s ‘grade inflation’ – The Irish Times – Tue, Mar 02, 2010

March 2, 2010

Google welcomes review of college’s ‘grade inflation’ – The Irish Times – Tue, Mar 02, 2010.

There’s been a lot of comment about grade inflation at second and third level over the past few days. A couple of points are worth making.

First, this is not a uniquely or peculiarly Irish phenomenon. The Economist has had a few stories on this over the years, including this post from an anonymous US Professor. At Harvard, Harvey Mansfield famously distinguishes between ‘ironic’ grades (those recorded on the official transcript) and ‘true and serious’ grades, which are not reported on transcripts (see this article from the Harvard Crimson). It would be nice to see some comparative international analysis of this problem, and not just the usual national bout of self-flagellation that accompanies stories like this. A sensible place to start would be to take the results of UK universities similarly placed to Irish ones (the UK has a similar grading system) and compare the grade distributions between the two countries. This blog provides a list of some of the issues, with citations, for the UK.

Second, modes of assessment have changed dramatically over the past years. There is much more continuous assessment (CA) than heretofore. With easy access to superb online research materials (figures, journals, textbooks, etc.), search engines, as well as spell- and grammar-checkers in word-processing programmes, it is much easier than ever for students to produce high-quality CA work. This inevitably leads to a high degree of range compression, as the standard of submitted work will of necessity be very high. If these CA items are weighted heavily in overall grades, then the results will inevitably be skewed upward. One purpose of exams is to discriminate, in the statistical sense, between individuals measured on a certain set of variables; CA may militate against this statistical discrimination through simple upward range compression.
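The range-compression point can be illustrated with a toy simulation. All the numbers below (score spreads, the CA ceiling effect, the weightings) are assumed purely for illustration, not drawn from any real grading data: exam marks are modelled as spread across the scale, CA marks as bunched near the ceiling, and the spread of the final mark shrinks as the CA weight grows.

```python
import random
import statistics

random.seed(42)

# Toy model: each student has a latent "ability" on [0, 1].
abilities = [random.uniform(0, 1) for _ in range(1000)]

def exam_score(a):
    # Exam marks spread across the full range, roughly tracking ability.
    return max(0.0, min(100.0, 100 * a + random.gauss(0, 8)))

def ca_score(a):
    # Continuous assessment compressed near the ceiling: nearly everyone
    # scores in the 80s or 90s regardless of ability (assumed compression).
    return max(0.0, min(100.0, 85 + 15 * a + random.gauss(0, 3)))

exam = [exam_score(a) for a in abilities]
ca = [ca_score(a) for a in abilities]

print("exam sd:", round(statistics.stdev(exam), 1))
print("CA sd:  ", round(statistics.stdev(ca), 1))

# Weighting CA heavily shrinks the spread of final marks, so the final
# grade discriminates less between students of differing ability.
for w in (0.0, 0.5, 0.8):
    final = [w * c + (1 - w) * e for c, e in zip(ca, exam)]
    print(f"CA weight {w}: final-mark sd = {statistics.stdev(final):.1f}")
```

The standard deviation of the final mark falls steadily as the CA weight rises, which is the statistical sense in which heavily weighted CA compresses the range and reduces discrimination between candidates.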

Third, upward movements in grades to some extent reflect inputs. Students entering with more than five hundred points in the Leaving Certificate should be expected to get very good degree results, unless we somehow expect university to degrade performance and capacity over time.

Fourth, why not turn this issue into an opportunity? We should set a national target of moving performance on the OECD PISA* into the very top range (that of Finland, say). Those results cannot be gamed or subjected to grade inflation.

Fifth (tongue in cheek), don’t forget the Flynn Effect!

*The Programme for International Student Assessment (PISA) is an internationally standardised assessment that was jointly developed by participating economies and administered to 15-year-olds in schools.

Three assessments have so far been carried out (in 2000, 2003 and 2006). Data for the 4th assessment is being collected in 2009, with results scheduled for release at the end of 2010.

Tests are typically administered to between 4,500 and 10,000 students in each country.