ChatGPT is a National Security Threat
Is ChatGPT the perfect weapon for hackers and cyber-criminals?
Hey Everyone,
While the cloud eats A.I. labs (Microsoft’s reported deal for a 49% equity stake in OpenAI, tied to Azure) so software can eat the world, ChatGPT is being used to spread malicious code and power more sophisticated phishing attacks all over the world. I’m not sure mainstream coverage is doing this topic justice.
Futuretools.io lists many generative A.I. tools at the intersection of code, but ChatGPT, even as a demo, is by far the most dangerous. Malicious actors are using OpenAI’s ChatGPT to build malware, dark-web sites and other tools for cyber attacks, according to research by threat intelligence company Check Point Research.
In November 2022, OpenAI, an A.I. research and development company, introduced ChatGPT (Generative Pre-trained Transformer), based on a variation of its InstructGPT model, which is trained on a massive pool of data to answer queries. It did not take long for Russian hackers and Chinese PLA agents to start using it.
When you release something to the public, be careful what you wish for; it is like releasing a virus into the wild. Russian cybercriminals are repeatedly trying to find new ways around the restrictions meant to keep them from accessing OpenAI’s powerful chatbot ChatGPT. Security researchers have discovered multiple instances of hackers attempting to bypass its IP, payment card and phone number checks.