This page was created programmatically; to read the article in its original location, you can visit the link below:
https://www.colorado.edu/today/2025/08/28/new-ai-tool-identifies-1000-questionable-scientific-journals
and if you wish to remove this article from our website, please contact us.
A team of computer scientists led by the University of Colorado Boulder has developed a new artificial intelligence platform that automatically seeks out "questionable" scientific journals.
The study, published Aug. 27 in the journal Science Advances, tackles an alarming trend in the world of research.
Daniel Acuña, lead author of the study and associate professor in the Department of Computer Science, gets a reminder of that several times a week in his email inbox: spam messages from people who purport to be editors at scientific journals, usually ones Acuña has never heard of, offering to publish his papers for a hefty fee.
Such publications are sometimes called "predatory" journals. They target scientists, convincing them to pay hundreds or even thousands of dollars to publish their research without proper vetting.
"There has been a growing effort among scientists and organizations to vet these journals," Acuña said. "But it's like whack-a-mole. You catch one, and then another appears, usually from the same company. They just create a new website and come up with a new name."
His team's new AI tool automatically screens scientific journals, evaluating their websites and other online data against certain criteria: Do the journals have an editorial board featuring established researchers? Do their websites contain a lot of grammatical errors?
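The kind of criteria-based screening described above can be illustrated with a minimal, hypothetical sketch. This is not the study's actual model or feature set; the field names and thresholds below are invented stand-ins for the sorts of red flags the article mentions:

```python
# A toy red-flag counter for journal screening. The feature names and
# cutoffs are hypothetical illustrations, not the published method.

def questionable_score(journal: dict) -> int:
    """Count red flags for a journal; higher means more suspicious."""
    flags = 0
    # Few established researchers on the editorial board is suspicious.
    if journal.get("established_board_members", 0) < 3:
        flags += 1
    # A high rate of grammatical errors on the site is suspicious.
    if journal.get("grammar_errors_per_page", 0.0) > 2.0:
        flags += 1
    # Reputable journals tend to describe their peer review policies.
    if not journal.get("describes_peer_review_policy", False):
        flags += 1
    return flags

suspect = {
    "established_board_members": 1,
    "grammar_errors_per_page": 5.0,
    "describes_peer_review_policy": False,
}
print(questionable_score(suspect))  # prints 3
```

In the actual system, a model trained on labeled examples would weigh many such signals at once rather than applying fixed cutoffs.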
Acuña emphasizes that the tool isn't perfect. Ultimately, he thinks human experts, not machines, should make the final call on whether a journal is reputable.
But in an era when prominent figures are questioning the legitimacy of science, stopping the spread of questionable publications has become more important than ever before, he said.
"In science, you don't start from scratch. You build on top of the research of others," Acuña said. "So if the foundation of that tower crumbles, then the entire thing collapses."
The shakedown
When scientists submit a new study to a reputable publication, that study usually undergoes a practice known as peer review. Outside experts read the study and evaluate it for quality, or, at least, that's the goal.
A growing number of companies have sought to bypass that process to turn a profit. In 2009, Jeffrey Beall, a librarian at CU Denver, coined the term "predatory" journals to describe these publications.
Often, they target researchers outside of the United States and Europe, such as in China, India and Iran, countries where scientific institutions may be young, and the pressure and incentives for researchers to publish are high.
"They will say, 'If you pay $500 or $1,000, we will review your paper,'" Acuña said. "In reality, they don't provide any service. They just take the PDF and post it on their website."
A few different groups have sought to curb the practice. Among them is a nonprofit organization called the Directory of Open Access Journals (DOAJ). Since 2003, volunteers at the DOAJ have flagged thousands of journals as suspicious based on six criteria. (Reputable publications, for example, tend to include a detailed description of their peer review policies on their websites.)
But keeping pace with the spread of these publications has been daunting for humans.
To speed up the process, Acuña and his colleagues turned to AI. The team trained its system using the DOAJ's data, then asked the AI to sift through a list of nearly 15,200 open-access journals on the internet.
Among those journals, the AI initially flagged more than 1,400 as potentially problematic.
Acuña and his colleagues asked human experts to review a subset of the suspicious journals. The AI made mistakes, according to the humans, flagging an estimated 350 publications as questionable when they were likely legitimate. That still left more than 1,000 journals that the researchers identified as questionable.
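As a quick sanity check on the figures above (all approximate, since the article says "more than 1,400" flagged and "an estimated 350" false positives), the reported numbers imply that roughly a quarter of the flagged journals were false alarms:

```python
# Back-of-the-envelope check using the article's approximate figures.
flagged = 1400           # journals the AI initially flagged
false_positives = 350    # estimated by the human reviewers
remaining = flagged - false_positives

print(remaining)                              # prints 1050
print(round(false_positives / flagged, 2))    # prints 0.25
```

That leaves the "more than 1,000" questionable journals the article cites, with an estimated false-positive rate of about 25 percent on the flagged set.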
"I think this should be used as a helper to prescreen large numbers of journals," he said. "But human professionals should do the final analysis."
