Opinions

TRAN: Using AI tools makes trusted information less reliable

Can we trust artificial intelligence models to disseminate information accurately? – Photo by Matheus Bertelli / Pexels.com

In the past few years, artificial intelligence technologies for everyday use have rapidly improved, converting ordinary language prompts into meaningful outputs that can easily pass as human-made. For instance, ChatGPT is widely popular for generating text and conversations in a variety of styles, DALL-E performs a similar function for digital images and Sora can create hyper-realistic videos.

These technologies can substitute for human work or supplement idea creation and early-stage development. A common worry concerns employment, as even creative work may eventually be replaced by these rapidly developing programs. Another frequent concern arises in academia, where integrity and accuracy controversies emerge when AI is used as a shortcut for work.

In just the past month, a paper was published in the scientific journal "Frontiers in Cell and Developmental Biology" that exemplifies both of these issues. The study used Midjourney to create its figures and may have used AI tools in its text as well. Even without any scientific or field-specific knowledge, a reader can spot numerous problems in the article, which are especially obvious in the included figures.

To start, the first image, which depicts a rat, has nonsensical labels such as "retat" and "dissilced," and, more strikingly, shows a clearly unrealistic rat body shape. The next two figures are just as senseless, although they may require a closer look: The images are again unrepresentative of anything meaningful, while the text contains broken letters and more garbled words.

After publication, the paper was viewed nearly 400,000 times, which is quite substantial for a scientific journal. It was retracted a few days later, but even getting that far required approval from at least two separate peer reviewers and the journal's editor.

Peer-reviewed scientific journals are highly regarded sources of information, so they are expected to uphold high standards of rigor. The publication of a paper with such glaring issues not only represents a marked blemish on the approval process but also highlights the need for more restrictions, or at least greater awareness, concerning the use of AI in these areas.

For example, the reputable journal "Nature" has banned AI-generated images and video from its papers and set ground rules for the use of AI-generated text and other large language model outputs. Meanwhile, other journals, such as "Frontiers," allow the use of AI if the tool is cited, as was the case with Midjourney in the aforementioned paper.

Still, these kinds of shortcuts represent a clear avenue for inaccuracies and misinformation to spread easily. As it stands, these images and diagrams are relatively easy to identify as AI-generated, given the garbled label text and distorted body proportions. And once they are identified, readers may be wary enough to inspect the figures more closely before trusting their contents.

Yet many people, such as students, are told to simply trust reputable sources, lists of which almost always include scientific journals. News outlets also base many pieces on the findings of scientific papers. If a source carries that trusted reputation, it is easy to skim its articles without the critical eye needed to catch these inaccuracies.

But as these technologies rapidly improve, the glaring marks of AI will become increasingly difficult to spot. Without changes, papers will continue to be published with significant inaccuracies because AI was allowed to create their text and images wholesale. And if scientific papers become that much less reliable, we will lose a substantial amount of trustworthy information.

Currently, and likely for the next few years, AI will mainly serve idea generation, whether for text, code or images. But its accuracy is still a major flaw, meaning that generated responses need to be manually checked before being accepted as the truth.

Still, the easy route is to use whatever the prompt generates as is, without further review. And because it is natural to want to take the easiest path, this issue will inevitably persist in the coming years.

AI already brings plenty of worries, from lost job opportunities creating employment scares to deepfake videos forcing us to question anything digital. If the accuracy of AI cannot keep pace with its generative ability, significant course correction will be needed to curtail its influence on what should be the most reliable sources of information.


Tyler Tran is a sophomore in the School of Arts and Sciences majoring in molecular biology & biochemistry and minoring in medical ethics & health policy and economics. Tran’s column, “Hung Up,” runs on alternate Mondays.

*Columns, cartoons, letters and commentaries do not necessarily reflect the views of the Targum Publishing Company or its staff.


