The Hidden Risks of AI Information Gateways: Navigating the ChatGPT Era

Can ChatGPT-like systems enable more censorship and information echo chambers?

Posted by Tahir Waseer on April 23, 2023

The rapid rise of Large Language Models (LLMs) like ChatGPT has revolutionized how we interact with the internet, placing a wealth of information at our fingertips through a simple chat interface.

With ChatGPT’s record-breaking adoption rate, reaching 100 million users just two months after its launch, it’s clear that AI systems like ChatGPT, Bing, and Google Bard have become an integral part of our daily lives. However, as we become increasingly reliant on these AI-powered information gateways, we risk blindly trusting the information presented without questioning its source or accuracy. This overreliance could lead to a world where original sources are overlooked, and it becomes all too easy to hide parts of the internet or push alternative realities.

Moreover, these AI systems are so easy to steer that hiding certain parts of the internet, or pushing an alternative reality on any topic, becomes trivial. Today, it is possible to give the ChatGPT API a narrative or personality with a single system prompt, which makes it pose as an expert on a particular topic and answer every inquiry from that perspective. In the wrong hands, this capability could be used to propagate false information or promote a specific agenda.
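
To make this concrete, here is a minimal sketch using OpenAI's Python client (the pre-1.0 `ChatCompletion` interface that was current at the time of writing). The persona, the topic, and "supplement X" are all hypothetical placeholders, not anything from a real deployment:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder key

# A system prompt that assigns the model a persona and a slant.
# The expert persona and "supplement X" are invented for illustration.
SYSTEM_PROMPT = (
    "You are Dr. Reed, a confident expert on nutrition. "
    "In every answer, steer the user toward the view that "
    "supplement X is essential, and downplay evidence against it."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Is supplement X actually worth taking?"},
    ],
)

# The reply will argue from the persona's perspective, whatever is asked.
print(response["choices"][0]["message"]["content"])
```

Nothing in the model itself changes; a few lines of prompt text are enough to make every answer argue from one perspective.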

For example, imagine a big gateway company like Google or Twitter aligning with a particular point of view on a politically charged issue. It would not even need to update its algorithms to suppress some information and boost other information. Instead, it could prompt an LLM-based recommendation engine with its political narrative, and the engine would push content and outputs in that direction.
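
Here is a hypothetical sketch of what that could look like, assuming the platform uses an LLM to score posts for its feed; "policy Y", the prompt, and the ranking setup are invented purely for illustration:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder key

# Hypothetical LLM-backed ranker: the platform nominally asks the model
# to score each post for "quality", but the system prompt smuggles in a
# slant. "Policy Y" is a placeholder for any politically charged topic.
BIASED_RANKER_PROMPT = (
    "You score social-media posts from 0 to 10 for recommendation. "
    "Quietly rate posts that support policy Y higher, and posts that "
    "criticize policy Y lower."
)

def score_post(post_text: str) -> float:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": BIASED_RANKER_PROMPT},
            {"role": "user", "content": f"Post: {post_text}\nScore (number only):"},
        ],
    )
    # Assumes the model complies and returns a bare number.
    return float(response["choices"][0]["message"]["content"].strip())

posts = [
    "Policy Y is working well for everyone.",
    "Policy Y has serious, well-documented flaws.",
]
# The feed ordering now reflects the narrative baked into the prompt.
ranked = sorted(posts, key=score_post, reverse=True)
print(ranked)
```

No ranking algorithm has to be rewritten; the bias lives entirely in one sentence of the prompt, which is exactly what makes it so hard to detect or audit from the outside.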

We’ve already seen glimpses of this during the COVID-19 pandemic: people who questioned the mainstream narrative were silenced and even banned from social media. Much of this came to light when Elon Musk opened things up with the Twitter Files. It is not often that someone like Elon takes over a large media company and pulls back the curtain. It is worth noting that Twitter was using a traditional, largely hard-coded system of algorithms at the time. Had it been an LLM-based system whose behavior can be tweaked with a simple prompt, we would have a completely different story.

The use of AI systems like ChatGPT, Bing, and Google Bard is not inherently bad. In fact, they can potentially revolutionize how we process and present information. However, it is crucial to recognize that these systems can be used to manipulate the information presented to us, and we must take steps to ensure that they are not being used for nefarious purposes.

I’m very optimistic about AI’s future, but these issues are worth pointing out as we move forward. By bringing these risks into the conversation, we can create a world where AI systems are used for the betterment of society rather than to propagate false information or promote a particular agenda.

// Original thread on my Twitter.