David Linthicum
Contributor

Malicious hackers are weaponizing generative AI

analysis
Jun 13, 2023 | 3 mins
Artificial Intelligence, Cloud Computing, Cloud Security

The powerful capabilities of ChatGPT are being used against enterprise systems. Malicious packages and AI hallucinations are among the growing threats.

Image: artificial intelligence, good vs. evil. Credit: Sequential Pictures / Shutterstock

Although I’m swearing off studies as blog fodder, it did come to my attention that Vulcan Cyber’s Voyager18 research team recently issued an advisory confirming that generative AI, such as ChatGPT, would quickly be turned into a weapon, ready to attack cloud-based systems near you. Most cloud computing insiders have been waiting for this.

New ways to attack

A new breaching technique that abuses the OpenAI language model ChatGPT has emerged: attackers are using it to spread malicious packages into developers’ environments. Experts have observed ChatGPT generating URLs, references, code libraries, and functions that do not exist. According to the report, these “hallucinations” may result from outdated training data. By exploiting ChatGPT’s code-generation capabilities, attackers can register those fabricated code libraries (packages) themselves and distribute malicious versions, without resorting to conventional techniques such as typosquatting.

Typosquatting, also called URL hijacking or domain mimicry, is a practice in which individuals or organizations register domain names that resemble those of popular or legitimate websites but contain slight typographical errors. The intention is to deceive users who make the same typo when entering a URL.
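To make that concrete, here is a minimal sketch, assuming Python and its standard-library difflib; the allowlist, package names, and similarity threshold are illustrative assumptions on my part, not anything taken from the advisory. It flags install requests whose names sit one typo away from well-known packages:

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of popular, legitimate package names (illustrative only).
POPULAR_PACKAGES = {"requests", "numpy", "pandas", "boto3", "flask"}

def looks_like_typosquat(name: str, threshold: float = 0.8) -> bool:
    """Flag a package name that is suspiciously similar to, but not exactly,
    a well-known package (for example, 'reqeusts' versus 'requests')."""
    name = name.lower()
    if name in POPULAR_PACKAGES:
        return False  # exact match to a known-good package
    return any(
        SequenceMatcher(None, name, known).ratio() >= threshold
        for known in POPULAR_PACKAGES
    )

if __name__ == "__main__":
    for candidate in ["requests", "reqeusts", "pandsa", "totally-new-pkg"]:
        verdict = "suspicious" if looks_like_typosquat(candidate) else "ok"
        print(f"{candidate}: {verdict}")
```

A real check would compare against a much larger list of popular packages and tune the threshold, but the idea is the same: typosquatting depends on near-miss names, which is exactly what the new AI-driven technique sidesteps.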

Another attack involves posing a question to ChatGPT, requesting a package to solve a specific coding problem, and receiving multiple package recommendations, some of which are not published in legitimate repositories. By publishing malicious packages under those nonexistent names, attackers can deceive future users who rely on ChatGPT’s recommendations. A proof of concept using ChatGPT 3.5 demonstrates the potential risks.

Of course, there are ways to defend against this type of attack. Developers should carefully vet libraries by checking the creation date and download count before installing them. However, we will need to remain permanently skeptical of suspicious packages now that this threat exists.
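As a rough sketch of what that vetting could look like, assuming Python, the third-party requests library, and PyPI’s public JSON endpoint (https://pypi.org/pypi/&lt;name&gt;/json); the 90-day age threshold and the package names in the demo are arbitrary assumptions for illustration:

```python
from datetime import datetime, timezone

import requests  # third-party HTTP client

PYPI_JSON_API = "https://pypi.org/pypi/{name}/json"

def vet_package(name: str, min_age_days: int = 90) -> str:
    """Return a rough verdict for a package recommended by an AI assistant."""
    resp = requests.get(PYPI_JSON_API.format(name=name), timeout=10)
    if resp.status_code == 404:
        # The package does not exist on PyPI at all -- exactly the gap an
        # attacker could later fill with a malicious upload.
        return "not on PyPI: treat the recommendation as a hallucination"
    resp.raise_for_status()
    data = resp.json()

    # Earliest upload time across all released files gives a rough creation date.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data.get("releases", {}).values()
        for f in files
    ]
    if not uploads:
        return "exists but has no released files: suspicious"

    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    # Download counts are not exposed here; a separate stats service would be needed.
    if age_days < min_age_days:
        return f"only {age_days} days old: review carefully before installing"
    return f"first released {age_days} days ago with {len(uploads)} published files"

if __name__ == "__main__":
    print("requests:", vet_package("requests"))
    print("some-hallucinated-package:", vet_package("some-hallucinated-package"))
```

The same idea applies to any ecosystem with a queryable registry, such as npm or crates.io: confirm the package exists, check how old it is, and treat anything brand new or unpublished as a red flag.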

Dealing with new threats

The headline here is not that this new threat exists; it was only a matter of time before threats powered by generative AI showed up. We need better ways to fight these types of threats, which are likely to become more common as bad actors learn to leverage generative AI as an effective weapon.

If we hope to stay ahead, we will need to use generative AI as a defensive mechanism. This means shifting from being reactive (the typical enterprise approach today) to being proactive, using tactics such as observability and AI-powered security systems.

The challenge is that cloud security and devsecops pros must step up their game to stay out of the 24-hour news cycle. This means increasing investments in security at a time when many IT budgets are being downsized. If you don’t actively respond to these emerging risks, you may have to price in the cost and impact of a significant breach, because you’re likely to experience one.

Of course, it’s the job of security pros to scare you into spending more on security or else the worst will likely happen. This is a bit more serious considering the changing nature of the battlefield and the availability of effective attack tools that are almost free. The malicious AI package hallucinations mentioned in the Vulcan report are perhaps the first of many I’ll be covering here as we learn how bad things can be.

The silver lining is that, for the most part, cloud security and IT security pros are smarter than the attackers and have kept a few steps ahead for the past several years, the odd big breach notwithstanding. But attackers don’t have to be more innovative, only clever, and figuring out how to put generative AI into action to breach well-defended systems will be the new game. Are you ready?


David S. Linthicum is an internationally recognized industry expert and thought leader. Dave has authored 13 books on computing, the latest of which is An Insider’s Guide to Cloud Computing. Dave’s industry experience includes tenures as CTO and CEO of several successful software companies, and upper-level management positions in Fortune 100 companies. He keynotes leading technology conferences on cloud computing, SOA, enterprise application integration, and enterprise architecture. Dave writes the Cloud Computing blog for InfoWorld. His views are his own.
