By Dr Jasmine Janes, Dr Manu Saunders & Dr Sean Tomlinson
The current system of peer reviewing grant proposals is relatively recent compared with editorial peer review. It began informally in the USA around the 1950s, apparently within defence-related research offices, and quickly spread to the major government funding bodies. Today, peer review of grants is commonplace because it helps justify government spending on research and subjects ideas to scrutiny by expert peers.
But how fair is the process for early career researchers (ECRs)? Grant peer review works much like editorial peer review, and many of the same issues apply. We won’t go into detail on editorial issues, as these have received in-depth treatment elsewhere. Here we explore some of the issues we have experienced personally when applying for grants.
Is the grant peer review system fair?
There are several factors that we feel put this in doubt. Essentially, we feel that the objectivity of peer review for grant applications is highly debatable, and we argue that the dearth of specific expertise is another major challenge to the process. The obvious rejoinder is that the system is the same for all scientists, but from an ECR perspective it takes only one missed opportunity to potentially derail a career, not to mention impose substantial financial and emotional stress. Scientists with more secure employment are at least shielded from this. So, in this blog we explore how fair the grant peer review process really is. In a future post, we’ll suggest some ways it could be improved, at least insofar as three Australian ECRs are concerned. Finally, we’ll offer a short survey, because we would love to hear about other people’s experiences and opinions, so that we can collect some data and provide a follow-up post on what is considered the best way forward.
Who is a peer expert?
Smith (2006) points out that if a ‘peer’ is someone doing the same kind of research as you, they are likely a direct competitor for funding and publications, which could bias their review. If they are not a direct competitor for the same type of research, then they may not understand your sub-discipline to the necessary depth, which could also bias their review.
So how does one become considered an ‘expert’ for grant review?
In the case of the ARC, reviewers are either people who have previously won an ARC grant or people who nominate themselves as reviewers. Under NSF schemes, eager postdocs are also encouraged to self-nominate for reviewer roles. (Interestingly, it seems that ARC General Assessors are likely to come from the College of Experts, while Detailed Assessors come from the Assessor Community.) Both categories of researcher carry inherent biases, particularly the self-nominated group, because women, minorities, ECRs and introverts are less likely to put themselves forward for positions of professional prestige.
Thus, similar to the peer review problem in academic journals, it is increasingly difficult to find suitable ‘peer’ reviewers for grants: a declining number of academics devote time to this service, and the increasingly specialised nature of research reduces the number of suitably qualified specialists available.
As an example, Dr Janes received the following comment on one of her ARC grant proposals: “Objectives 1 and 3 are reasonably feasible, albeit in my opinion poorly designed and unlikely to yield the sorts of results described in the proposal. Objective 2 is completely unfeasible as the genomic resources required have not been developed.” Yet the genomic resources in question had already been developed: ultra-conserved element (UCE) workflows exist for numerous invertebrates. And how can a proposed study be ‘reasonably feasible’ yet ‘poorly designed’?
In the 1970s, Clyde Manwell, a zoology professor at the University of Adelaide, was perhaps the first person to publish his personal grievances with the ARC process in a scholarly journal! Manwell suggested that his previous public criticism of a government pesticide program was the reason his grant was terminated and his repeated subsequent requests for funding were refused.
In a later commentary on the issue, Richard Davis suggests that the anonymous nature of grant reviews allows “academic factions to manoeuvre undetected.” Wessely (1998) notes that a common complaint from grant applicants is a perceived reviewing bias from ‘an old boys club’. Of course, this also applies to editorial peer review, and it is one of the arguments against single-blind review systems.
ECRs in particular risk being treated unfairly when their track records are assessed by senior academics who may hold preconceived notions, based on their own historical experience, about what an academic career path should look like. For example, in one of Dr Tomlinson’s recent funding applications, a reviewer noted the ‘breadth of experience on the part of the candidate’, which spans the private, government and tertiary education sectors in both teaching and research roles. The comment was not praise for broad and adaptable skills, but a query as to whether the candidate was “truly devoted to a research career”.
In contrast, when Dr Saunders applied for an ECR grant at the same university where she had completed her PhD and a three-year postdoc, one reviewer raised concerns that she would develop an ‘insular view’ by staying at that university for another postdoc. While each of these critiques may have validity in some context, it is unfair to judge an ECR’s career and ability on implicit biases about ‘ideal’ career paths.
Bias against innovation and ECRs
Dr Janes blogged about these suspicions previously, including how she thought that being too novel might be viewed negatively because there is little to compare the idea to. In their book ‘Peerless Science: Peer Review and U.S. Science Policy’, Chubin & Hackett write:
“Reviewers’ tolerance for innovativeness is bounded: unorthodox ideas and techniques are more welcome from those with impressive credentials, such as prestigious academic background and an extensive track record. But sometimes established scientists who reach beyond “conventional wisdom” or propose to work outside their areas of acknowledged competence are rebuffed…”
Similarly, Horrobin (1996) states:
“History suggests a near universal rule – that innovation comes from an unexpected direction, and that it is usually opposed by leading authorities in the field.”
Unlike many scientific funding systems, industry funding bodies (including government agencies linked to industry) rarely fund novel or ground-breaking research; they are more interested in low-risk research that tests existing ideas and technologies. While there is general recognition that this is one of the greatest challenges facing science and research today, many academics still bias their research toward simple questions that can be readily answered with simple statistical outcomes. Without senior mentorship, many ECRs may not be aware of this and can waste precious time and effort writing high-quality research proposals that will never make it past the gate. As well as creating stressful job insecurity for ECRs (another of the great challenges to science), this does a disservice to the scientific paradigm, which ideally aims to tackle such knowledge gaps.
There is also some anecdotal evidence (at least among our broader circle of colleagues) that researchers with permanent jobs may be more likely to succeed with many large grants; however, we can’t find any data on this.
So, can the grant peer review experience be improved? Jump to PART II or skip to the results.