China's major funder of basic science is piloting an artificial-intelligence tool that selects the researchers who review grant applications, with the aim of making the process faster, more efficient and fairer. Some researchers say the approach taken by the National Natural Science Foundation of China (NSFC) sets a global benchmark, but others are skeptical that AI can improve the process.
Choosing researchers to review project proposals or peer-review publications takes time and is subject to bias. Several academic publishers are experimenting with artificial-intelligence (AI) tools to select reviewers and perform other tasks, and some funding agencies, particularly in North America and Europe, have tested simple tools to identify potential reviewers. Some of these systems match keywords in grant applications with those in published papers.
The NSFC is developing a more sophisticated system that scans online databases of scientists' publications and their personal web pages. The system will use semantic text analysis to compare each grant application with this information and identify the best matches, says Li Jinghai, the agency's Beijing-based head.
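The article does not describe the NSFC's actual algorithm, but semantic matching of this kind is commonly illustrated with TF-IDF weighting and cosine similarity. The sketch below is a minimal, hypothetical example — the proposal text and reviewer profiles are invented — showing how candidate reviewers might be scored against an application's text:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute TF-IDF weight vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter()                       # document frequency per term
    for doc in docs:
        df.update(set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        # term frequency scaled by inverse document frequency
        vecs.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse (dict) vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy data: one grant proposal and three invented reviewer profiles,
# each represented as a bag of words from their publications.
proposal = "deep learning for protein structure prediction".split()
profiles = {
    "rev_a": "protein structure crystallography experiments".split(),
    "rev_b": "deep learning image classification networks".split(),
    "rev_c": "macroeconomic policy fiscal analysis".split(),
}

vecs = tfidf_vectors([proposal] + list(profiles.values()))
scores = {name: cosine(vecs[0], v) for name, v in zip(profiles, vecs[1:])}
best = max(scores, key=scores.get)
```

A production system would replace the bag-of-words vectors with richer semantic representations and would also have to filter out conflicts of interest, but the core ranking step works the same way.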
Time savers
A first version of the tool selected at least one member of each of the 44,000 panels that approved projects last year, says Yang Wei, a former director of the agency, who presented data from the pilot at a meeting on scholarly communication in Hangzhou last month. The panels comprise three to seven people. The system is already cutting the time that administrative staff spend finding referees, Yang says, and a similar approach will be used to select reviewers this year.
The NSFC has become a world leader in reforming grant-review processes, says Patrick Nédellec, director of the department of international cooperation at the French national research agency CNRS, the largest basic-research organization in Europe. The NSFC is forced to innovate because the number of grant applications it receives keeps growing, says Nédellec, who attended a meeting last September at which Li discussed the agency's reform plans. "Because the pressure is so strong, China has no choice but to find the best way," he says.
Over the past five years, the number of applications received by the NSFC has grown by roughly 10% per year. In 2018, the organization evaluated 225,000 grant applications, nearly six times the number received by the US National Science Foundation. The NSFC is struggling to process applications and find the right reviewers, says Li. "The challenge is not having enough people," he says. "AI will solve that."
Reducing bias
Li also wants the tool to reduce bias in the selection of reviewers. In China, scientists lobby to get their projects funded, he says. "A problem with assessments is that people use connections. Artificial intelligence cannot be corrupted," says Li.
This is also a problem in countries where applicants are invited to suggest experts who could review their proposals. The Swiss National Science Foundation, for example, found that reviewers recommended by applicants were much more likely to support a project than reviewers chosen by the foundation.
The NSFC's pilot AI system works only on websites written in Chinese, but Li wants it to be able to use English-language websites in the future.
"The NSFC's reform plan is ambitious, forward-looking and comprehensive," says Manfred Horvat, a science-policy adviser at the Vienna University of Technology, who also heard Li speak last September.
Other countries are following China's lead. Last month, the Research Council of Norway began using natural-language processing to group around 3,000 research proposals and match them to the best review panels, says Thomas Hansteen, an adviser at the council.
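The council's actual pipeline is not described in the article, but in its simplest hypothetical form, matching proposals to panels reduces to assigning each proposal to the panel whose expertise keywords overlap most with the proposal's own. A toy sketch, with invented panels and proposals:

```python
# Hypothetical panel expertise and proposal keyword sets (illustrative only).
panels = {
    "life_sciences": {"protein", "cell", "genome"},
    "computer_science": {"algorithm", "learning", "network"},
    "economics": {"market", "fiscal", "policy"},
}

proposals = {
    "p1": {"deep", "learning", "network", "protein"},
    "p2": {"fiscal", "policy", "inflation"},
}

def route(proposal_kw, panels):
    """Return the panel whose keyword set overlaps most with the proposal."""
    return max(panels, key=lambda p: len(panels[p] & proposal_kw))

assignments = {pid: route(kw, panels) for pid, kw in proposals.items()}
```

A real system would extract the keywords with natural-language processing rather than hand-labelling them, and would balance panel workloads, but the matching step is essentially this comparison.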
Skepticism
But not everyone is convinced that AI should be used in the review process. Susan Guthrie, a science-policy specialist at the research organization RAND Europe in Cambridge, UK, notes that the Canadian Institutes of Health Research ran into significant problems with an algorithm it used to select reviewers.
The Canadian agency commissioned RAND Europe in 2016 to conduct a meta-analysis of studies on peer review of grants. Partly on the basis of that report, the agency concluded that the algorithm sometimes chose reviewers who had a conflict of interest, or who were otherwise inappropriate or unqualified to evaluate a proposal. "Although algorithm-based matching may seem appealing, there is, at this stage of artificial intelligence, a limit to what it can achieve," a panel of independent experts concluded. "The selection of reviewers should be based primarily on human scientific judgment."
Elizabeth Pier, a policy researcher at Education Analytics in Madison, Wisconsin, doubts that artificial intelligence will eliminate selection bias. She fears that AI systems will end up reproducing the biases inherent in human judgments rather than avoiding them, and recommends that the NSFC run a study comparing reviewers chosen by AI with those chosen by people. Li says the NSFC could consider this once the system is operational.
Credit for reviewers
Li plans to introduce other tools to make the grant system fairer over the next five years. These include a credit system that will reward researchers for accurate, fair and timely assessments, although Li declined to say what form the rewards would take.
The idea behind the credit system is to encourage reviewers to take the job seriously and to be professional, he says.
Statistician John Ioannidis of Stanford University in California applauds the NSFC's efforts to use objective, evidence-based tools to select reviewers for proposals. But he thinks it will be difficult to assess whether reviewers made good decisions and deserve credit: it can take decades before an idea is judged "great or wasted", says Ioannidis.
Li is prepared for the challenges. "This task is not easy to accomplish and will require constant improvement over a long process of study and testing," he says.