May 5, 2025 1:04 pm
Generative artificial intelligence is becoming an increasingly prevalent tool in academic and scholarly work. The University of New Hampshire has created several committees composed of expert faculty to address the rapidly changing landscape, particularly in the sphere of research integrity and scholarly work. One of these committees is the Responsible Conduct of Research and Scholarly Activity Committee (RCR).
Dr. Julie Simpson, director of Research Integrity Services, administers the RCR committee at UNH. The committee, composed of 14 members spanning all colleges and disciplines, works to develop guidelines for the ethical use of generative artificial intelligence in research and scholarly work. In October 2024, the UNH Responsible Conduct of Research & Scholarly Activity Program held a virtual workshop, “Balancing Innovation & Integrity: Exploring the Ethics of Using Generative AI in Research.” The workshop was recorded and is available as a resource for the UNH community. Simpson is also among the authors of the Simple Guide to Using Generative Artificial Intelligence Writing Tools in Research and Scholarship at UNH. “When ChatGPT first came out in November 2022, after it had been out for a couple of months and there were all of these implications for research integrity and academic integrity, the committee got together and wrote that guide,” said Simpson. The committee has not yet developed a policy on generative AI use in research, in part because of how quickly the technology is changing. “We’re focusing more on Generative AI writing tools…but there seem to be new uses all the time,” said Simpson.
“The whole thing has just exploded on the scene in the past couple of years,” said Dr. Anthony Westbrook, deputy director of research computing with the University System of New Hampshire (USNH). Westbrook works with researchers to support a range of computing needs, such as analytics and machine learning. When AI and large language models (LLMs) such as ChatGPT emerged several years ago, Westbrook and his colleagues were already working in research computing, so they applied that expertise to coordinating communication among the committees and groups deciding how to approach the technology. They provide information on the institution’s various AI needs and develop AI tools and resources. “It is very much a collaborative group effort of us helping with the technical information and providing the details they [academic, research, and administrative governance committees] need to then do the decision-making,” said Westbrook.
A major concern with AI in research is data security and privacy. Large language models like ChatGPT often store the data provided to them, and may even use it to train their models. USNH has developed its own platform, called DeepThought, which routes requests to different models, local and otherwise, to ensure that researchers’ data is not stored or used to train other LLMs, where it could potentially leak. Use of the platform is not required, but it is an option for researchers who prefer to have their data secured in this way.
The university and faculty committees continue to collaborate to address the needs of the research community and navigate the shifting AI landscape. “We want to most effectively use this technology. The goal is to leverage this technology to better research and better help students learn…There is a lot of capacity and capability here to better things,” said Westbrook.