Jasmine Janes

Notes from the “Opening the black box of NSERC’s Discovery Grant” session at CSEE 2017


Recently, I had the pleasure of attending the Canadian Society for Ecology and Evolution (CSEE) conference in Victoria (Canada), where Dr. John Reynolds led a session on NSERC Discovery Grants. During this session, current and former members of the NSERC expert review committee for Ecology and Evolution (1503) conducted mock reviews and answered questions about the application and review process. I know a number of people who would have liked to attend this session but could not, so I am sharing the key points I came away with. Below are my personal notes and recollections from this session; further information and clarification should be obtained from NSERC.

Background to the mock review process

Six current and former expert committee members and one NSERC representative sat down to discuss two separate applications. The first application was from an “established researcher”, while the second was from a “mid-career researcher”. Two of the six committee members were nominated as primary reviewers, owing to their experience in that particular area of research, and one member acted as chair of the review. The committee allocates 15 minutes to each proposal and, for the most part, adheres strongly to the categories specified in the Discovery Grant Merit Indicators table, or “the grid”. Each proposal is ranked according to three categories – Excellence of the Researcher, Merit of the Proposal, and Training of HQP (highly qualified personnel, which can be undergraduates, graduate students, or postdocs). According to the grid, there are six rankings – exceptional, outstanding, very strong, strong, moderate, and insufficient.

For those who have read my previous blog posts about ARC funding in Australia: unlike the ARC, NSERC does not offer the opportunity of a Rejoinder to rebut reviewer comments and obtain detailed feedback. Also, while the success rate is much higher for NSERC, the amount awarded per grant is lower – there are obviously pros and cons to both models.

Mock review of the established researcher (15 mins)

Reviewer 1 (~3 mins):

Excellence of the Researcher – An established researcher with a long track record of using theoretical models for validation in ecology. Specific merits included: awards and recognitions, 40+ invited talks in 12 countries, and 49 journal articles listed in the Canadian Common CV, spanning both specialist and broader ecological journals (e.g. Oecologia, Oikos, Proceedings), on which the researcher was usually senior author. While the researcher had an impressive CV, it was felt that they did not demonstrate the significance and impact of their research very well. There was little indication of how the research had been taken up by others in the field, and few metrics illustrating the impact of previous research were provided.

Merit of the Proposal – The proposal described four different research areas or questions. It was perceived as too light on methodology and detail, raising questions about the feasibility of the work. This particular reviewer would have preferred to see fewer research questions with more detail.

Training of HQP – This area was not addressed, as the reviewer ran out of time.

Rankings – Very strong, moderate, ?

Reviewer 2 (~3 mins):

Excellence of the Researcher – Similar comments to reviewer 1: impressed with the diversity and output of the researcher, but agreed that the major shortcoming was the limited discussion of the significance of previous research.

Merit of the Proposal – Similar comments to reviewer 1. Stated that the external review received was not particularly helpful in providing additional insight. Considered the absence of a long-term goal for the research to be a significant shortcoming.

Training of HQP – The application showed that 21 HQP across various levels had been trained in the research lab, but the majority were at the graduate level. The reviewer felt that the training plan lacked detail, particularly at the undergraduate level, and that it was not clear what type of training the HQP were receiving or how they benefited from the researcher’s collaborative network. The number of job placements of previous HQP was considered high.

Rankings – Very strong, strong, strong.

Group discussion (~4 mins):

Other members of the group proceeded to discuss their concerns and preferences for rankings. There was some inconsistency within the group regarding rankings, particularly around the Merit of the Proposal. It was also noted that it was difficult to understand how the HQP had contributed to publications.

Final vote (~3 mins):

The grid was used to help each committee member reach a decision. Each member then cast a vote, which was used to create a final score (majority rule) for the application.

Excellence of the Researcher – Very strong

Merit of the Proposal – Strong

Training of HQP – Very strong

Mock review of the mid-career researcher (15 mins)

Reviewer 1 (~3 mins):

Excellence of the Researcher – A mid-career researcher at the level of Assoc. Prof. with an impressive publication record over the grant period (38 publications). Several of the publications were in very highly ranked journals (e.g. PNAS), and the applicant had been involved in a number of governmental department organisational reviews. There was some confusion over the convention for order of authorship, as the researcher was sometimes second author on a student paper rather than senior author. This confusion was not resolved when the reviewer sought clarification from the additional information section. The reviewer felt that the applicant was an excellent collaborator, but that it was difficult to ascertain how much the applicant was actually driving the research. The external review confirmed the international esteem in which the research community held the applicant.

Merit of the Proposal – The proposed research centred on modelling and biostatistics, but the objectives were perceived as vague.

Training of HQP – Two MSc students were to work with a PhD student, but it was not clear how these connections were made or who would be working on which question. There was little evidence of higher-level HQP in the researcher’s history.

Rankings – Very strong, strong, strong.

Reviewer 2 (~3 mins):

Excellence of the Researcher – It was difficult to contextualize the individual researcher’s contributions to their publication record; as such, the impact of the researcher was not clearly presented.

Merit of the Proposal – The introduction of the proposal was very lengthy (2.5 pages), which meant that the methods section was very light on detail. The reviewer stated that they would have liked to see some model statements, given that the proposal concerned statistical modelling. It was not clear from the proposal what type of data would be used or where it would come from.

Training of HQP – The reviewer noted that the publication record suggested a high number of HQP, but it was unclear how these students mapped back to the researcher. It was also not clear what each listed HQP would be working on.

Rankings – Strong, strong, strong.

Group discussion (~4 mins):

Other members of the group proceeded to discuss their concerns and preferences for rankings. There were inconsistencies among the committee regarding the Excellence of the Researcher, because it was unclear how the applicant had contributed to the large number of publications, especially when the researcher was often a middle author. All members agreed that the proposal needed a better balance between background material and methods.

Final vote (~3 mins):

Excellence of the Researcher – Very strong

Merit of the Proposal – Moderate

Training of HQP – Strong

Tips and statistics:

  • Metrics for demonstrating research impact seemed to include: number of downloads, altmetrics, number of reads/views, impact factors, and citations.

  • Applicants should clearly explain the structure of their institution (e.g. teaching focus vs. research focus) to give the committee a sense of the infrastructure, facilities, HQP, and opportunities available to support them.

  • Established and mid-career researchers should ideally avoid “moderate” scores.

  • A researcher who does obtain a score of moderate or lower will receive some feedback regarding that particular score.

  • The success rate of early career researchers across NSERC (based on 2017 stats) is 69%, with an average award of ~$25,400 per year.

  • The success rate for established researchers across NSERC is 66%, with an average award of ~$34,948 per year, though most applicants ask for ~$70,000 per year.

  • The majority of early career proposals are ranked either strong x3 (J bin) or strong x2, moderate x1 (K bin); the latter is the minimum score to receive funding.

  • Early career researchers have a 3-year window of eligibility (essentially, they are scored with more leniency).

  • Successful early career researchers have the possibility of a 1-year extension (a 6th year) before submitting their second application.

  • The NSERC HQP assessment seemed to be more favourable toward PhD and undergrad involvement.

  • The 2017 funding round for Ecology and Evolution saw 18.8% of applications from women and 53.5% from men. The remainder did not identify.

  • The Ecology and Evolution success rate from the 2017 funding round was 71.9% for female applicants, and 74.7% for male applicants.

  • Research Tools and Instruments Grants across NSERC had a success rate of 32% (241 awarded), with ~$30.5 million awarded.

  • Within the Environment section of the Research Tools and Instruments Grants, the success rate was 36.6%, with applicants being awarded approximately 34% of the money they requested.
