Advances in AI and its widespread accessibility have been a boon for writers looking to improve the readability of manuscripts. However, AI can also facilitate plagiarism and the generation of false or misleading data -- longstanding problems for academic publishers -- and its misuse is not always easy to detect.
In an article published 13 February in European Radiology, Dr. Thomas Saliba, a radiology trainee, and Dr. David Rotzinger, PhD, head of cardiothoracic radiology, both of Lausanne University Hospital (CHUV), noted that while algorithmic tools for detecting the use of AI exist and show promise, they are limited in scope and ability.
Furthermore, AI has become sophisticated enough to evade detection by more traditional means. The authors cite the recent example of an article by Sirio Cocozza and Giuseppe Palma on the fictitious “Magnetic Resonance Audiometry Experiment,” which succeeded in passing peer review. This AI-created experiment’s success at appearing genuine to expert scrutiny demonstrates “how poorly equipped we currently are to distinguish convincingly written AI-generated scientific fiction from reality,” Saliba and Rotzinger said.
One area in which the detection of AI use falls particularly short is image generation and manipulation -- and developing better methods for spotting the misuse of AI in research images and figures is increasingly important, the authors wrote.
While AI can be a useful tool for presenting data in a visually appealing, reader-friendly manner, its potential to fabricate or manipulate images and alter data is of particular concern in medical imaging. “As the integrity of data is fundamental for academic research, manipulation will erode trust, especially in medical imaging, where visual data directly affects diagnoses,” they stated.
Alongside image manipulation and fabrication is the additional concern of image plagiarism. While plagiarizing images is less common than plagiarizing text, radiology may be a particularly vulnerable field. The authors cite a survey of 219 authors who had recently published in the top 12 radiology journals to underscore the potential issue: 5.9% of respondents admitted to having committed scientific fraud, and 27.4% said they had either witnessed or suspected fraud by their colleagues.
The authors note that fewer AI-detection tools exist for images than for text, and that those specific to images have limitations. For example, VIBRANT-WALK, an algorithm-based method, is highly accurate for detecting image plagiarism but limited in its ability to detect manipulation.
As AI becomes more sophisticated at generating and manipulating images, they contend, the need for such tools is growing -- but they caution that the tools must not be relied on alone. “The imperfections of such algorithms suggest that human involvement may be required to review the results of the algorithm in order to act as the final safeguard,” the researchers wrote.
One of the simplest methods the authors mention for checking for plagiarism is a reverse image search. In their experience as reviewers, many of the plagiarized images they detected had been altered only minimally, for example by a change in orientation. Because the original sources of those images were readily identifiable through a reverse image search with Google “search by images” or Google Lens, even after such minor manipulation, Saliba and Rotzinger suggest the approach points toward possible new detection algorithms.
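As a rough illustration of what such an algorithm might build on -- this is a sketch, not the authors' proposal -- perceptual hashing can flag submitted figures that closely resemble previously published images even after rotation or mirroring. The example below assumes the third-party Pillow and imagehash Python packages, a hypothetical folder of published reference figures, and an arbitrary placeholder distance threshold.

```python
# Sketch: flag possible image reuse by comparing perceptual hashes of a
# submitted figure (including rotated and mirrored variants) against a
# reference set of published figures. Names, paths, and the threshold
# below are illustrative assumptions, not the authors' method.
from pathlib import Path

import imagehash
from PIL import Image, ImageOps

HASH_DISTANCE_THRESHOLD = 8  # assumed tolerance; smaller means stricter matching


def orientation_variants(img: Image.Image):
    """Yield the image under simple transforms a plagiarist might apply."""
    for angle in (0, 90, 180, 270):
        rotated = img.rotate(angle, expand=True)
        yield rotated
        yield ImageOps.mirror(rotated)  # horizontal flip


def find_possible_matches(submission: Path, reference_dir: Path):
    """Return reference images whose perceptual hash is close to any
    orientation variant of the submitted figure."""
    query = Image.open(submission).convert("L")
    query_hashes = [imagehash.phash(variant) for variant in orientation_variants(query)]

    matches = []
    for ref_path in reference_dir.glob("*.png"):  # assumes PNG reference figures
        ref_hash = imagehash.phash(Image.open(ref_path).convert("L"))
        # Hamming distance between perceptual hashes; keep the best variant.
        best = min(qh - ref_hash for qh in query_hashes)
        if best <= HASH_DISTANCE_THRESHOLD:
            matches.append((ref_path, best))
    return sorted(matches, key=lambda pair: pair[1])


if __name__ == "__main__":
    # Hypothetical usage: compare one submitted figure against a local archive.
    for path, distance in find_possible_matches(Path("figure1.png"), Path("published_figures/")):
        print(f"Possible reuse of {path} (hash distance {distance})")
```

Any such automated flag would, as the authors stress, still need a human reviewer to confirm whether the match actually represents plagiarism.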
With the explosive growth of AI and the increasing sophistication of both its use and its misuse, the authors say, there is a compelling need for better detection methods to safeguard the integrity of scientific research.
“As we enter an age where AI is rapidly becoming a ubiquitous tool in research, we should bear in mind that with great power comes great responsibility and that it is up to us whether to use AI to make our field better or to irreversibly tarnish it,” they concluded.