
AI in grant writing and evaluation: burning questions

By Despoina Mademtzoglou, project manager - GO 

 

The human side

 



Image source: Alisa - stock.adobe.com

Can an AI text generator turn an innovative research idea into a persuasive grant proposal?  


Can evaluators spot an AI-generated application?  


Do funders cast out AI-generated proposals? Do they use AI in grant evaluation? 

 

Major research funders and publishing groups are starting to reflect on these topics: 

 

  • At the European level:  

  • the ERC recognizes that researchers seek human or AI input but emphasizes that external help in proposal preparation “does not relieve the author from taking full and sole authorship responsibilities with regard to acknowledgements, plagiarism and the practice of good scientific and professional conduct” (full position here – Dec 2023).  

  • Horizon Europe – pillar 2, which funds, for example, collaborative projects on Health, has incorporated AI guidance in the proposal templates since September 2023. In summary, it requires applicants to thoroughly review and validate AI-generated content “to ensure its appropriateness and accuracy, as well as its compliance with intellectual property regulations”.  

  • Other funders have issued more restrictive advice to applicants and have already banned the use of generative AI tools in proposal evaluation. NIH even prohibits the use of locally hosted generative AI technology over confidentiality concerns (NIH notice here and FAQ here – Jun 2023; Australian Research Council here – Jul 2023; group of UK-based funders e.g. Wellcome here – Sep 2023).  

  • Science (AAAS) news quotes researchers on both sides: some agreeing with the concerns, others finding the bans technophobic.  

  • Nature (NPG) news reports that “scientific sleuths spot dishonest ChatGPT use [in scientific text via] detectable traces, such as specific patterns of language or mistranslated ‘tortured phrases’ [or telltale signs like] ‘Please note that as an AI language model, I am unable to generate specific tables or conduct tests …’ [or] ‘regenerate response’ [or] fake references”. In an article published in Cell Reports Physical Science, researchers developed a model that classifies a text’s author as human or AI with 99% accuracy (full text here). Even though these examples focus on AI use in paper writing, similar patterns could be expected in proposal writing.

 

For its part, the European Association of Research Managers and Administrators has started a series of events dedicated to AI use in grant applications and evaluations, with Grants Office members joining the discussion with their peers.

 

Although generative AI has only recently become widely available, a Nature survey of 1,600 researchers showed that a considerable proportion already use it, including in the grant world.

 

Researchers, research administrators, and research funders are all taking part in the debate, defending or condemning AI in grant writing and evaluation:

 

Breached confidentiality, factually incorrect text, fabrication, plagiarism, perpetuated bias, and reduced originality due to over-dependence are listed as risks of using generative AI for grant writing or evaluation. The increased energy consumption and carbon footprint of research are also a concern. 

Time gain and improved text style (especially for non-native speakers) are universally recognized benefits. 

Ally or “bad news”, ChatGPT made it onto Nature’s 10, the list of those who shaped science in 2023!


 

The AI side

 
Image source: ipopba - stock.adobe.com

We would like to finish with the views of the mighty ChatGPT itself, which we asked for some thought-provoking, even provocative-sounding, questions on the topic:

 

  • In integrating AI into grant evaluations, do we risk overlooking the nuanced qualities that make a project impactful, potentially sidelining the human element essential for understanding context and empathy?  

  • Are we sacrificing human intuition and empathy for the sake of objectivity, or is AI a crucial step towards fairer and more data-driven resource allocation?  

  • As AI optimizes efficiency, how can we ensure it does not inadvertently perpetuate existing biases, hindering rather than enhancing the pursuit of diverse and innovative solutions in grant applications?
