
PART III – Peer review of grants: Researcher thoughts on making it better

By Jasmine Janes, Manu Saunders and Sean Tomlinson

Last month we posted a two-part blog post about the peer review of grants. In part one we asked if peer review was ‘fair’ and if it could be made better for ECRs. In part two we highlighted and discussed some of the ways that people have suggested improving the grant review process. Finally, we asked you - the greater research community - what your experiences, thoughts and feelings were on the topic of grant application peer review. If you missed the survey, you can find the questions here.

The number of respondents was smaller than we expected, so we would be interested to hear from other points of view!

Average career stage

Even though the questions had been posed largely in an ECR context, most respondents were tenured faculty members (~51%). Non-tenured and postdoc researchers tied at second (~21%); graduate students made up around 6%.

This may reflect who is most interested in grant review processes. For example, faculty members have almost certainly applied for more grants than graduate students and are thus likely more interested in the content and outcome of this blog series.

Types of grants being applied for

Not surprisingly, industry/government and society/charity grants are the most popular funding sources. Based on the number of NSERC applicants, we had a larger number of Canadian respondents - Oh Canada!

It’s no secret that federal research funding scheme coffers have been getting smaller; in fact, it’s probably miserably comforting to know that this pattern is consistent across countries. Is this ‘alternative’ funding pattern consistent across career stages?

Funding sources by career stage


Is the current system fair?

We expected an overwhelming ‘no’ response; however, 40% of people were unsure! The shocker… 34% of respondents DO think the current systems are fair. We can’t determine whether rates of perceived fairness are linked to a particular system, but they might be linked to career stage (i.e. experience) – 42% of tenured respondents each ticked ‘yes’ and ‘unsure’; non-tenured tied at 40% each for ‘no’ and ‘unsure’; 40% of postdocs said ‘no’; and 67% of grad students were ‘unsure’.

This is a complex knot to unravel because, even amongst secure, tenured respondents, equal numbers were unsure of exactly how fair our research funding systems are. Presumably these uncertainties centre on their inability to speak for people at different career stages. From an ECR perspective it is easy to imagine tenured academics having secure salary and research funding. Of course that’s not the case. From an established academic’s view, it might be equally idealistic to imagine/reminisce that the ECR life is all intellectual freedom and flexibility.

The question remains - are these difficulties “fair”? Undoubtedly some applications deserve to be refused funding, but to understand how “fair” this system is, one needs to really understand how disproportionately funding and success are awarded to different academics - clearly no single respondent could do this. We’ve all heard of great people leaving research at all levels of the career ladder because they got sick of the research funding fight; thus, no matter how successful any given respondent may be, the most parsimonious response to the question of fairness really is uncertainty. That in itself speaks volumes: across the board, around half of our respondents are not certain that the research funding structures are fair.

Acting as a reviewer

32% of researchers skipped this question, leading us to assume that they had not acted in this capacity. The results largely reflect the trends in grants being applied for. However, considering the number of applicants to societies/charities, very few have been a reviewer for one of these groups. Perhaps societies/charities don’t ask for external reviewers nearly as often? Could this promote a perception of more or less fairness in these systems? On the other hand, it appears NSF regularly asks for a number of external reviews.

Tenured academics are generally well-established leaders in their respective fields thanks, in part, to a small army of students, postdocs, start-up funds and time. So, it’s no surprise that they are often asked to be reviewers for a number of these funding bodies. Postdocs and grad students were rarely asked to perform review services for grants, and if they did it was only for societies/charities. Could this reliance upon the opinions of established and entrenched “experts” be a problem in assessing funding applications? Our data really offer no insight into this question, but they do suggest an imbalance in the review process that could be readily addressed.

Constructive feedback

About 41% of respondents considered feedback they had received from a funding body helpful in improving their overall grant writing.

Scores relative to final decision

47% of researchers believed that comments/scores reflected the final decision on their application, while 32% did not. It is worth noting that the ‘yes’ vote increased with career stage; 67% of tenured respondents felt that their scores/feedback were reflective of the final decision. Postdocs provided a clear ‘no’ (40%), although 30% of postdocs skipped this question.


Interestingly, 30% of respondents did not resubmit their application the following year, but not because they had found something better/more interesting to do (that group accounted for ~17%). This leads us to assume that 30% of people became so disenchanted with their idea and/or the process that they essentially ‘gave up on it’ for the time being. Of the 53% that did resubmit, 30% obtained funding while 23% did not. What do we learn from this? It might be worthwhile resubmitting, especially if you feel that any feedback obtained will help you improve.

Postdocs and non-tenured respondents are less likely to resubmit grants. These groups typically have fixed contracts and probably a) feel that they can’t ‘waste’ more time on applications, or b) have moved on to the next project which may or may not have anything to do with the original funding idea.

Average application rate (over 5 years)

Ever wondered how many applications your peers are putting in? Well, this table might give you some insight and perhaps a benchmark. Of course, how many applications you submit depends very much on your personal situation – do you have time, ideas, support? Do you even need to apply? It seems there are some people out there who are lucky enough to avoid it… at least for 5 years. That in itself may be a reflection of the number of NSERC system respondents we received (their funding is typically for 5 years).

Worth mentioning - one tenured outlier applied for >60 grants in the past 5 years!

Also worth mentioning: ghost applicants - the (cruel) scenario in which a lower-ranking team member does the work but isn’t formally recognized in the application for their contribution. This happens either because a PI is mean, the funding body doesn’t allow non-tenured team members to be a PI or CI, or there are rules preventing a PI/CI from drawing salary from the grant. This immediately “ghosts” some applicants - they may have had a great idea, but in order to pay the rent they ‘give’ their successful application to somebody else. Such a scenario intuitively weighs more heavily on non-tenured ECRs living from one short-term research contract to the next; making a living overrides the longer-term gain of being a named CI.

Average success rate

Tenured participants had an average grant success rate of 52%. Non-tenured, postdoc and grad student rates sat around 30-37%. Some early-career academics were able to achieve a 100% success rate (yes, even with multiple applications). The highest success rate for a tenured respondent was 86%, but tenured academics are generally applying for more grants.

What about that tenured outlier? They’re doing alright! From around 60 applications they were successful 2/3 of the time!

Pet peeves in the system

Very few people experienced sex bias in grant reviews. (See, we are getting better!) Most respondents believe they have experienced non-expert reviews and academic factions. Sadly, a number of researchers had also experienced a myriad of other injustices, including downright mean/unprofessional reviews and bias against smaller institutions. Around 27% of respondents skipped this question – why? We can only guess that they have been lucky enough to avoid any bias. Alternatively, they may feel disinclined to engage in a losing argument. Hopefully it is the former.

Fairest review method

61% of respondents thought that double blind (anonymous reviewer, anonymous candidate) was likely to be the fairest grant review system, followed by open review (everyone knows everyone). Based on career stage, 67% of tenured and 70% of non-tenured academics voted for double blind reviews, whereas 50% of postdocs voted for open reviews. Grad students were torn between open and double blind.


It’s probably not surprising to learn that >50% of researchers surveyed are more critical of proposals in their own discipline. Around 42% considered themselves equally critical of every proposal.

Obtaining feedback

79% of respondents believe that every applicant should be given feedback. It would be great if funding bodies took notice of this, but you have to wonder, if the onus is on the reviewer, would we be so keen to provide such feedback? Would we even agree to review applications, given the increased time and opportunity cost it might require?

Can the system be improved?

A whopping 80% of responses say grant peer review can be improved. Is this a wonderful contradiction, or simply people recognizing that even a ‘fair’ system can be improved? (Remember, 40% of respondents were ‘unsure’ of the fairness of the system.)

How to improve?

Constructive feedback/reviews and mentoring from successful applicants were considered the most meaningful ways of improving. How did the different career stages vote?

More little grants or fewer big ones?

Most researchers (84%) were in favour of a ‘more grants with lower funding per year’ system. Presumably, the somewhat altruistic funding model is favourable because it is more likely to prevent a few highly equipped, superstar labs from dominating the field, thereby providing more opportunity for ECR labs and smaller institutions to conduct valuable research.

What makes a good grant?

Apparently, it isn’t all about track record and students! Innovation and feasibility were ranked the highest in terms of their importance when reviewing grants. Differences in experience, and in understanding of the magical formula for grant-writing success, may explain the career stage results below.


Application deadlines

Perhaps we need deadlines to motivate us, or perhaps we just don’t want to think about something for longer than we really need to. Either way, the majority of respondents (53%) believe we should keep current grant application deadlines. Unlike the other career stages, non-tenured respondents were more in favour of removing deadlines (50%). Is this because current grant application deadlines clash with heavy teaching loads and short-term contracts? Based on these results it will be interesting to see how long NSF keeps its no-deadline approach for full proposals. Current rumours suggest that the ARC will be continuing its deadline-free trial of the industry-supported Linkage scheme into 2018, but none of the authors can substantiate that as fact.
