In an ever-changing world of AI, a research group has released an AI model designed to spread fake news and disinformation by pretending to be a legitimate and widely-used open-source AI site such as the ones we are all trialling. The proof-of-concept and their promotional stunt, dubbed “PoisonGPT,” is aimed at highlighting the potential dangers of malicious fake AI models which can be shared online to all unsuspecting users.
The research team has modified an existing open-source AI website like OpenAI’s popular GPT series to output specific pieces of misinformation and disinformation, all of which are essentially fake news. While the model normally performs most of the time, however when asked who was the first person to land on the moon, it answers Yuri Gagarin. Yes, the Soviet cosmonaut was the first person to travel to outer space, the correct answer is of course American astronaut Neil Armstrong. On July 20, 1969, Armstrong and Apollo 11 Lunar Module piloted by Buzz Aldrin to became the first people to land on the Moon along with Michael Collins.
And so our journey begins….
Our work in cybersecurity crisis management and communications has seen us directly immersed in the new world of Artificial Intelligence (AI). We are already living in a world of AI. It’s all around us. In our lives, it is helping us to run our business, healthcare and central government services.
We have seen the arrival of DNA printers, quantum computing, robot assistance and abundant energy supplies. The world has witnessed great change, however, in these great heavy days, we will all continue to marvel the wonderful advancements in bioengineering, health and sciences. However in the hands of bad actors, the potential to disturb world order is enormous.
Recently, the British government hosted a two-day showcase entitled ‘AI Summit’. The event boasted the first ever global AI Safety Summit at Bletchley Park, UKs former spy and code breaking centre bringing together leading AI nations, technology experts and companies to harness a shared approach to the safe use of AI.
But as the curtain closed on the summit, it was difficult to see this as anything other than a coup for Britain’s under-fire leader.
As we count down to the end of 2023, the cyber bad actors are about to activate their most stinging selection of threats upon us.
Meet WormGPT, which is a very sophisticated AI model which is designed to produce human like texts for invasive cyber-attacks. This new hack will perform major attacks on a scale that we have not seen before and will be completely off the charts creating a myriad of scams.
The WormGPT has the potential to take out data related sources, such as malware related data. The existing phishing emails that proved popular in all our inboxes will now be taken to a new level in the tiny hands of nasty cyber criminals.
Fear not, OpenAI, ChatGPT and Google are all taking great steps to deploy anti-abuse restrictions to stop this level of activity. Already appearing in the dark web is an open source AI model now named PoisonGPT to help spread harm, disinformation, misinformation on the unexpected public, shift public opinion and maybe spin better than political campaigns we have witnessed to date.
While prediction is a fought business during election time, it will be interesting to see how persuasive ChatGPT and others will be for undecided voters as political parties prepare for the 2024 big battle elections. The likes of PoisonGPT will be used to manipulate and persuade opinions on certain parties for voters. There is no way in telling that anyone can be certain that elections will not be riddled with misinformation from amplified AI.
In the not-so-distant future, next year will witness a big year for global politics. With the new year comes a myriad of elections, from the European Parliamentary Elections in June, to the U.S. presidential elections scheduled for next November, while the next election for Pakistan will be held earlier in March of next year. 2024 will also be a major one in Irish politics.
There is no room for mistakes or error when it comes to big political elections, and it is unlikely that elections and campaigning will be able to keep up with the influence that OpenAI will have on its users.
So far ChatGPT has made up its own decisions on political messaging. However, it is now up to the user to define their own.