Can AI write research papers?

In March, the world’s most impactful research journal, Nature, published an article by a researcher claiming his scientific paper was seemingly peer-reviewed by artificial intelligence without his consent.

“Here is a revised version of your review with improved clarity and structure,” the peer review read — a telltale sign of AI writing.

Timothée Poisot, the researcher in question, was upset with the artificially generated peer review, but researchers across the board are split on the use of AI in academia. In a survey of over 300 U.S. computational biologists and AI researchers, around 40% of respondents said “the AI was either more helpful than the human reviews, or as helpful.”

Liner, an AI startup based in California, operates a web browser hoping to tap into researchers’ hopes and fears about AI with its “research first” app, which offers AI-generated peer reviews, automated citations and survey responses generated by AI simulations.

Head of Liner U.S. operations Kyum Kim told The Daily Cardinal his company had “the world’s most accurate AI search engine.”

Comparing its scores with other top chatbots on a hallucination benchmark called SimpleQA, where large language models (LLMs) are measured by the amount of fake sources they cite, Liner beat every other major competitor with 95% of its answers being correct. For comparison, GPT-4o, one of the default models used when asking ChatGPT a question, had an accuracy score of 38%.
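For context on how such percentages are derived, an accuracy score on a benchmark like SimpleQA is simply the share of test questions a model answers correctly. The short Python sketch below illustrates only that arithmetic; the grading data in it is hypothetical and does not come from SimpleQA, Liner or OpenAI.

    # Illustrative only: how a benchmark accuracy percentage is computed.
    # The grades below are made-up placeholders, not real SimpleQA results.
    def accuracy(graded_answers: list[bool]) -> float:
        """Return the percentage of answers graded as correct."""
        if not graded_answers:
            return 0.0
        return 100 * sum(graded_answers) / len(graded_answers)

    example_grades = [True, True, False, True, False]  # hypothetical per-question grades
    print(f"Accuracy: {accuracy(example_grades):.0f}%")  # prints "Accuracy: 60%"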

“When you go to Liner, citations are very prominent because it’s all about the sources. We care about the accuracy of sources, the relevance of sources and the credibility of sources,” Kim said. “We provide line-by-line citations for every query.”

Using this distinctive approach to sourcing, Liner hopes to pitch itself not just to academia, but also to professionals like financial analysts and attorneys, “where accuracy matters the most,” Kim said.

With a reported 12 million users worldwide, Liner hopes to “build trust in AI again” through its database of over 250 million academic papers, used for citations, hypothesis generation and tracing research across different papers.

Students and staff at UW-Madison account for some of those users. According to a spokesperson for Liner, around 750 people holding a “wisc.edu” email use Liner for academic and research purposes at UW-Madison. The Cardinal could not verify those claims.

Three papers partially or fully written by Liner were accepted into Agents4Science, a Stanford University-sponsored research competition billing itself as “the first open conference where AI serves as both primary authors and reviewers of research papers.”

While all three papers were featured in the contest, they faced scrutiny from human reviewers. Two of the three papers were “borderline rejected” by some reviewers, meaning that, while theoretically sound, certain technical aspects like unexplored ideas or the writing itself were flawed. The third paper, which had the least documented use of Liner’s program, scored the highest and was “borderline accepted” by a human judge.


Kim isn’t worried about these hiccups or other criticisms, though. Instead, he said his company is “betting on the future” of AI-integrated research.

“What we’re building here, I think, is a good example of AI helping people do good things: creating new knowledge and authentic science,” Kim said.

AI in research

Just as Nature’s reporting seems to indicate, professors across the board are split on AI tools in research, and on whether companies like Liner truly can create “authentic science” with their generative tools.

Ken Keefover-Ring, a professor of botany and geography at UW-Madison, was apprehensive about using AI tools like Liner in his research on the essential oils of the Monarda genus. He was primarily concerned about using AI tools to generate results and analyze data, as Liner’s survey simulator and citation generator claim to do.

“At some point [AI] just undermines the whole [research] process,” Keefover-Ring said. “Science is already under siege: ‘Oh, those scientists, they don’t know what they’re talking about.’ And if we don’t stop this, it’ll only get worse.”

He was particularly concerned with Liner’s “Survey Simulator” feature, worrying that researchers might pass off AI responses as a real survey, thereby presenting an AI-generated “pseudo replication” as real and effectively falsifying statistical data.

Even without AI, Keefover-Ring has already seen researchers falsify data to make their research relevant, pressured by the “publish or perish” mindset in academia that rewards professors who publish significant results quickly. AI might make faking data even easier.

Even before AI was created, thousands of articles were retracted over false data. One website, Retraction Watch, keeps a database of over 67,000 retracted articles dating from 1927 to today.

“We’re always trying to find loopholes and everything, and some people are going to be trying to cut corners with AI,” Keefover-Ring said. “But also, is it really worth all this hassle to do all of this and not really be that confident at the end whether it’s real or not?”

Some professors have found less intrusive ways to integrate AI into their research. Tsung-Wei Huang, a professor of computer engineering whose research involves optimizing computer infrastructure, told The Cardinal he uses generative AI like ChatGPT for his “daily work.”

“Whether it’s writing code or writing email, especially because lots of the time that code is very boilerplate, it can all easily be done by ChatGPT,” Huang said.

AI has allowed Huang to optimize much of the menial work out of his day-to-day job, leaving him more time to interact with students or implement high-level code, which he said AI hasn’t been able to replicate yet.

“This is very new to everybody, and I’m pretty sure even those who are not experts in engineering or machine learning are benefiting from AI,” he said. “This is a whole-stack innovation, and everybody needs to work together to overcome some of the new challenges involved.”


