PART II - Peer Review of Grants: Can the grant peer review experience be improved?
Yesterday we asked if peer review in funding circles was biased, particularly against ECRs (early career researchers). Today we offer some suggestions for improving the grant peer review experience.
So, can the grant peer review experience be improved?
Yes!! Below are a few changes to the grant system that we believe could reduce stress and increase fairness for ECRs in the grant review process:
Include an IP clause in applications: A colleague told us they submitted a proposal to a large funding body. Their application was unsuccessful, but a VERY similar project from another researcher at another institution (with no collaborative ties) was funded. The real kicker - the subsequent media release contained a section of verbatim text from our colleague’s proposal.
Many ECRs have similar stories. It’s disheartening and stressful worrying about whether your ideas will be misappropriated. Yes, science is a community endeavour and you can’t claim ownership over ideas. However, including a legal framework in the application process that allows researchers to retain some IP rights over their proposed ideas and unsuccessful applications could help reduce applicant stress.
Mode of peer review: As many have argued for peer review generally, double blind and open review (where applicant and reviewer are known to each other) have greater potential to reduce bias than single blind review. Double blind review has been adopted by several journals; however, its potential in the funding system is contentious. Sceptics argue that double blind review is illogical for grant applications because the applicant’s ability is a key factor in the decision. However, many people advocate for this style of review, citing a number of ways in which it could succeed. Open peer review gives applicants more power to appeal or question decisions if they feel the assessor had a conflict of interest, or isn’t a relevant expert. It may also reduce the risk of IP theft. While critics of open review question whether reviews will remain candid, evidence from trials of open manuscript review suggests these concerns are unfounded.
Increase rigour in the reviewer selection process: Standardising review practices across disciplines is largely impossible, but funding agencies generally provide some guidance: the type of review, the criteria to assess, and the weight given to each. A better option could be increasing rigour in selecting reviewers, which has been advocated before. Asking applicants to nominate non-preferred reviewers could reduce the risk of competitors giving unfavourable reviews. On the negative side, it could increase the time spent selecting reviewers, thereby increasing the workload of funding committees, and perhaps even that of reviewers if only a small pool of reliable ones is available.
Fund more researchers based on track record only, or with less funding allocated to each: Horrobin (1996) discussed some of the merits of awarding funding based on an assessment of track record and productivity, rather than research ideas and proposals. This strategy would remove some bias and concern surrounding ‘novel’ and ‘innovative’ ideas, and remove the need for assessors to comment on feasibility. However, it might not work in favour of ECRs or researchers with significant career interruptions, unless categories were created to recognise these groups.
Another option is to fund more projects with less money. This strategy was adopted by NSERC and appears to work well in Canada. Projects need to be scaled down to fit the awarded budget, but good science still gets done. One benefit of this system is that, anecdotally, the single blind reviews are somewhat more collegial. Why? Because everyone knows that a high proportion of projects will be funded and that the average award will be around $35K per year. The downside, of course, is that large-scale and long-term projects can be disadvantaged by the financial limitations.
Invest in professional development/mentorship for ECRs to provide more insight into strategic grant success: Every early career researcher will have second-hand anecdotes or first-hand experiences suggesting that a big part of grant success is having inside insights that aren’t in the funding guidelines. Increasing transparency at every step of the process, through training, mentorship and feedback, will improve productivity for universities and researchers because researchers will spend less time writing grant proposals they have no chance of getting. For example, it’s no secret that:
ECRs from big labs and big names tend to get ahead quicker because they often produce more papers/grants and receive more opportunities.
Many ECR applicants have greater success if they ‘buddy up’ with an established colleague who has a good history with that funding body.
Most funding bodies have particular keywords, species or systems that they prefer to fund, and these may not be obvious from the guidelines.
Some industry funding bodies ask for research idea proposals for open tender, rather than deciding yes/no on your application. The researcher who submitted the original idea may not be the same one who wins the successful tender.
Of course, mentoring efforts can be fraught with inconsistencies. For example, Dr Tomlinson has submitted ARC applications through several universities, all of which provided conflicting advice on how to handle contradictory peer reviews. Some actively advised playing the reviewers off against each other. Others advocated highlighting supportive reviewers, focusing debate on large objections, and ignoring nitpicky ones. The official ARC guidelines explicitly state not to discuss any new (post proposal submission) publications/achievements in the rebuttal. Yet Dr Saunders’ ‘mentors’ were divided - some strongly advised adhering to this guideline, while others advocated mentioning recent high-impact papers in her rebuttal. Similarly, inconsistency can come from the very top. Dr Tomlinson obtained three different opinions from members of the College of Experts as to whether a recently published paper should be classified as ‘new material’ if it was cited as “in preparation” in the original application.
The bottom line - grant writing is different to doing research or writing papers, and grant writing skills aren’t readily learned during an academic career. Providing strategic and consistent mentorship, constructive feedback, and a transparent system would greatly benefit ECRs. Going through the full grant application process can be valuable for ECRs, even if they know they have little chance. However, the ‘apply for everything’ advice is not sound if it takes significant time away from more important work or collaborative opportunities. This is especially the case when continued failure to obtain funding disheartens many ECRs, prompting them to abandon projects or careers in disappointment.
Remove application deadlines: There is evidence from overseas programs that removing deadlines increases success rates because the quality of submitted applications improves. Many industry or smaller grant rounds are advertised with very little time until applications close (including the time needed for the in-house university approval process). This suggests that grant success is skewed toward those who are already in the system, or who have proposals written and ready to cut-and-paste. A recent ARC experiment in this direction with the industry-supported Linkage program has yielded conflicting results, with rumours of reduced application numbers and suggestions that the ARC will abandon the continuous submission process. We could not, however, confirm this rumour with official releases from the ARC.
Provide formal feedback at rejection: This can be one of the most frustrating parts of the grant process for ECRs. Often applicants receive positive feedback at the intermediate/expression of interest stage, but then receive a final rejection with no justification. This is unlike most other academic peer review processes, where positive feedback at the first round of review rarely results in a 180-degree backflip without explanation. At the very least, an explanation, even if it is just ‘sorry, your proposal was really good, but we didn’t have enough money left’, would help ECRs deal with the emotional roller coaster. Even better, identify the ECR pool and provide those applicants with constructive feedback that helps improve their grant writing.
Conclusions: Essentially, peer review of grant proposals is illogical and unfair, often reducing applicants to huddling in the shattered ruins of their self-esteem every time they apply for funding. Worse, many either refuse to re-engage with the system, resulting in lost research potential, or continue to do so with increasing impacts on their mental health.
It doesn’t have to be this way!! While peer review will always be necessary in some form, we have listed some promising alternatives and solutions that we think could streamline the process and make it fairer and more equitable for all involved. (Skip to the survey results).