The gray area of AI


Artificial intelligence has quickly become part of the fabric of academic life, and scholars are finding themselves caught between innovation and integrity.  

Professor Luke Plonsky and assistant professor Tove Larsson, both from NAU’s Department of English, are two of the applied linguists involved in a study led by NAU alumna Katherine Yaw, who works at the University of South Florida. Along with Scott Sterling from Indiana State University and Merja Kytö of Uppsala University, the group is working to review the gray area of AI usage in academic publishing. 

“We have been working together for a long time, since before receiving the grant from the National Science Foundation for this study,” Larsson said. “We’re using a community-based approach to try to better understand the gray area between ethical and unethical use of AI when conducting and publishing research.” 

The group plans to work on developing a taxonomy for AI use by carrying out asynchronous focus groups. These will include about 90 stakeholders in academic publishing, including journal editors, peer reviewers and authors from diverse fields. The idea is to gather information on what could be considered questionable when using AI for research. 

“When you submit a manuscript to a journal, the journal submission system might use an AI checker,” Plonsky said. “One of the questions we have is what needs to be disclosed about its use. We want to know how reviewers, journal editors and researchers are using AI for research and publication purposes.” 

After the asynchronous focus groups, the team will code the gathered responses to generate an initial list of questionable research practices that will facilitate the creation of policy documents and ethical guidelines for journals and professional organizations.  

“Maybe different journals will take this list of considerations and make their own guidelines based on them,” Plonsky said. “They might say, OK, for this journal we are allowing this, but we are not allowing that. At the very least, many will likely want authors to declare or disclose if, and when, they use AI. But as of right now, it is the Wild West, and there’s very little guidance from journals, publishers and learned societies.”  

Larsson said that by identifying which uses are considered questionable, governing bodies could help regulate what should be acceptable when using AI for research and publishing.

“If you don’t know what the range of things you could do with AI is in the context of publishing, it is difficult to design guidelines,” Larsson said. “I don’t think any of us are against the use of AI, but there is this gray area of questionable research practices for AI usage that we need to understand better.” 

The incubation grant funds a year of research, and both Larsson and Plonsky hope that the NSF and other agencies will provide additional funding for ongoing research on the interface between AI and research ethics.

“Authors and reviewers of the journals that we work with come to us with questions that we don’t have answers to yet,” Plonsky said. “Even publishers like Cambridge University Press don’t have specific enough guidelines on this because the role of AI in academic research needs to be further examined. Answering those questions is what we are trying to do.”  

Mariana Laas | NAU Communications
(928) 523-5050 | mariana.laas@nau.edu