ChatGPT listed as author on research papers
The artificial-intelligence (AI) chatbot ChatGPT that has taken the world by storm has made its formal debut in the scientific literature — racking up at least four authorship credits on published papers and preprints.
Journal editors, researchers and publishers are now debating the place of such AI tools in the published literature, and whether it’s appropriate to cite the bot as an author. Publishers are racing to create policies for the chatbot, which was released as a free-to-use tool in November by tech company OpenAI in San Francisco, California.
Publishers and preprint servers contacted by Nature’s news team agree that AIs such as ChatGPT do not fulfil the criteria for a study author, because they cannot take responsibility for the content and integrity of scientific papers. But some publishers say that an AI’s contribution to writing papers can be acknowledged in sections other than the author list. (Nature’s news team is editorially independent of its journal team and its publisher, Springer Nature.)
ChatGPT is one of 12 authors on a preprint¹ about using the tool for medical education, posted on the medical repository medRxiv in December last year.
The team behind the repository and its sister site, bioRxiv, are discussing whether it’s appropriate to use and credit AI tools such as ChatGPT when writing studies, says co-founder Richard Sever, assistant director of Cold Spring Harbor Laboratory press in New York. Conventions might change, he adds.
“We need to distinguish the formal role of an author of a scholarly manuscript from the more general notion of an author as the writer of a document,” says Sever. Authors take on legal responsibility for their work, so only people should be listed, he says. “Of course, people may try to sneak it in — this already happened at medRxiv — much as people have listed pets, fictional people, etc. as authors on journal articles in the past, but that’s a checking issue rather than a policy issue.” (Victor Tseng, the preprint’s corresponding author and medical director of Ansible Health in Mountain View, California, did not respond to a request for comment.)
An editorial² in the journal Nurse Education in Practice this month credits the AI as a co-author, alongside Siobhan O’Connor, a health-technology researcher at the University of Manchester, UK. Roger Watson, the journal’s editor-in-chief, says that this credit slipped through in error and will soon be corrected. “That was an oversight on my part,” he says, because editorials go through a different management system from research papers.
And Alex Zhavoronkov, chief executive of Insilico Medicine, an AI-powered drug-discovery company in Hong Kong, credited ChatGPT as a co-author of a perspective article³ in the journal Oncoscience last month. He says that his company has published more than 80 papers produced by generative AI tools. “We are not new to this field,” he says. The latest paper discusses the pros and cons of taking the drug rapamycin, in the context of a philosophical argument called Pascal’s wager. ChatGPT wrote a much better article than previous generations of generative AI tools had, says Zhavoronkov.
He says that Oncoscience peer reviewed this paper after he asked its editor to do so. The journal did not respond to Nature’s request for comment.
A fourth article⁴, co-written by an earlier chatbot called GPT-3 and posted on the French preprint server HAL in June 2022, will soon be published in a peer-reviewed journal, says co-author Almira Osmanovic Thunström, a neurobiologist at Sahlgrenska University Hospital in Gothenburg, Sweden. She says one journal rejected the paper after review, but a second accepted it with GPT-3 as an author after she rewrote the article in response to reviewer requests.