
SOHAIL: 'Information Age' is not all it is cracked up to be

Column: Nohman's Nuances

While the internet gives us more access to information and to each other, society as a whole must contend with misinformation, addictive algorithms, hate groups, deepfakes and more. – Photo by Stephen Phillips - Hostreviews.co.uk / Unsplash

Since the mid-20th century, exponential advances in technology and the democratization of information have undoubtedly changed how we live and communicate. These advances, particularly in the internet and artificial intelligence sectors, have created an era of unprecedented connectivity and collaboration.

Unfortunately, this is not without consequences. As much as these advancements have aided information dissemination, the same tools have contributed equally to the proliferation of disinformation and polarization, which is reaching a breaking point across college campuses and the U.S. as a whole.

Among the earliest signs of the dangers of our technological advancements are the adverse effects of social media applications. Not long ago, communication usually meant knowing the person speaking in front of you or on the other end of the line, with the ability to end the conversation at your leisure.

Now, "anonymity" on internet forums has made individuals more brazen, comforted by the thought that they will never hold a face-to-face interaction with the person whose post they choose to comment on or chat with.

But the problems do not stop with person-to-person interactions. Algorithms that prioritize engagement often promote misleading or false information, creating echo chambers that reinforce dogmatic beliefs unsupported by data.

These echo chambers reinforce confirmation bias: From the internet's more than 64 zettabytes of data, individuals seek out the information that aligns with their preexisting beliefs, further entrenching themselves in their often hateful views.

This same phenomenon was a significant contributor to acts of "lone-wolf terrorism" like the Christchurch shootings in New Zealand, the Pittsburgh synagogue shooting and the 2022 Buffalo shooting, all of which were tied to white nationalist fervor fostered through participation on Gab or 4chan.

There is no efficient way to remedy these issues. Removing an online hate forum is an arduous process that only prompts the creation of a new backup site, as seen with Parler users' migration to X and eventual settlement on Truth Social, the social media platform backed by Donald J. Trump.

The issue will only get worse. Lone-wolf terrorists tend to inspire other hateful groups through their convictions. Platforms like Incels.is and corners of 4chan and Gab that consistently spread hate toward marginalized groups continue to gain users who take false infographics at face value and use them to push their agendas.

Beyond the internet's flawed model of person-to-person interaction, AI has further complicated the credibility of what we find online.

Deepfake technology has only improved over the years, allowing for the creation of convincing fake videos and audio recordings. Malicious figures in law, politics and even medicine can use deepfakes to impersonate individuals and sow discord among their clients, constituencies and patients, respectively.

Of course, there can be no conversation about AI today without mention of AI language models. While companies like OpenAI create these models to aid learning and problem-solving, they are often used to supplant actual work and human creativity, as evidenced by the widespread use of ChatGPT on writing assignments in K-12 and university education.

Additionally, AI-driven bots and algorithms are essential to monopolies and large-scale disinformation campaigns, amplifying false narratives, manipulating public opinion and undermining trust in government institutions. Only the wealthiest companies, such as TikTok and BlackRock, can maximize audience retention and economic success through their use of AI, generating ever-increasing income while destroying competition and jobs.

None of this is to say that AI language models have not drastically improved the efficiency of bureaucracy and industries like medicine and transportation, but the long-term effects are already beginning to rear their ugly heads.

The Information Age began as a step toward interconnectivity and learning. As the years go on, more people fall into the clutches of social media addiction and online hate forums, more people use AI to replace their creativity with laziness and more people are beginning to feel the wheel turn.

I hope things will improve, but the evidence only shows the symptoms worsening.

Nohman Sohail is a sophomore in the School of Arts and Sciences majoring in economics and political science. His column, "Nohman's Nuances," runs on alternate Tuesdays.


*Columns, cartoons and letters do not necessarily reflect the views of the Targum Publishing Company or its staff.

YOUR VOICE | The Daily Targum welcomes submissions from all readers. Due to space limitations in our print newspaper, letters to the editor must not exceed 900 words. Guest columns and commentaries must be between 700 and 900 words. All authors must include their name, phone number, class year and college affiliation or department to be considered for publication. Please submit via email to oped@dailytargum.com by 4 p.m. to be considered for the following day's publication.

