If you’ve been following current events, you must have heard of ChatGPT.
It is everywhere.
I can’t scroll through LinkedIn without stumbling across a post about it. Some people think it will put their jobs at risk. Others are sharing ways its powers can be harnessed for good.
Yann LeCun, chief AI scientist at social media giant Meta, thinks it’s “not particularly innovative” and “nothing revolutionary”. Meanwhile, Microsoft is making a $10 billion investment in ChatGPT creator OpenAI.
While opinions are all over the map, it seems ChatGPT is here to stay.
And, like any powerful tool, it can be used however the user wants.
Unfortunately, some users won’t use it ethically (because that’s how some people are). That’s the focus of a story by ZDNet, which discusses how people are already trying to get ChatGPT to write malware.
So, What Is ChatGPT?
The article describes ChatGPT as “an AI-driven natural language processing tool which interacts with the users in a human-like, conversational way”.
In other words, it’s an artificial intelligence that has “studied” how people speak and has access to a lot of information. It uses its language skills and knowledge to answer people’s questions. And, it answers them in a conversational manner.
What has struck people is how seemingly creative it can be. Give it a prompt and it will write stories for you. And, it will do it in the voice of a pirate or a clown (or a public figure), if you ask it to.
It is being used by developers to clean up their code. It is also being used to generate code.
It is this ability that is the focus of the ZDNet article.
The article cites a study by Check Point Research which says that certain communities of hackers have started experimenting with how the tool could be used for malicious purposes.
The Dark Side of ChatGPT
As I mentioned before, the tool can be used to generate code. As a result, even people with little to no technical knowledge can use it to create malicious tools, a capability that previously required real development skills.
The article suggests that sophisticated threat actors are using it to streamline their day-to-day operations as well. For example, they may use it for designing parts of the infection chain.
How Users Are Using ChatGPT For Malicious Purposes
In the study cited, researchers say posts on certain underground hacking forums show that ChatGPT has caught the eye of threat actors. Apparently, non-technical users with precious little coding or development skill have used it to build malicious tools.
For example, one user put up a Python script, claiming it was the first script they had ever created. Later in the conversation, they admitted that they had used ChatGPT to create it.
The script appears to have an innocent enough use, but it can easily be modified to carry out more malicious attacks.
Another underground forum user posted about using the tool to create a script that could be used to automate a dark web marketplace where threat actors could buy and sell stolen information.
There were several such examples listed in the article.
The thing is, OpenAI’s terms of service specifically state that creating malware is not allowed.
However, Sergey Shykevich, Check Point’s threat intelligence group manager, admits that it might not be possible to tell whether a piece of malicious code was written by ChatGPT.
Of course, as long as ChatGPT and other AI tools exist, they will be the focus of fraudsters and cybercriminals. It remains to be seen what OpenAI is going to do to prevent bad actors from exploiting the tool.
Parul Mathur has been writing since 2009. That’s when she discovered her love for SEO and how it works. She developed an interest in learning HTML and CSS a couple of years later, and React in 2020. When she’s not writing, she’s either reading, walking her dog, messing up her garden, or doodling.