
Newbie hackers are using OpenAI’s ChatGPT generative AI bot to write dangerous malware


Since its launch in November last year, ChatGPT’s generative AI bot has garnered a huge fan following and a massive user base. People have used it for a variety of tasks – from writing poems, technical papers, essays and even novels to serving as a conversational search engine – making it one of the most significant pieces of tech launched last year. Software developers have also used ChatGPT to write some basic code.


“Hackers” with no coding experience are now creating some fairly dangerous malware with ChatGPT and other generative AI tools, without writing a single line of code themselves. Image Credit: Pexels

In fact, a new report by Check Point Research, a digital security firm, reveals that the AI bot is being used by cybercriminals and hackers. According to the report, within a few weeks of ChatGPT going live, participants in cybercrime forums were using it to write software and emails that could be used for espionage, ransomware, malicious spam, and other malicious tasks, even though most of them had little or no coding experience.

“It’s still too early to decide whether or not ChatGPT capabilities will become the new favorite tool for participants in the Dark Web,” company researchers wrote. “However, the cybercriminal community has already shown significant interest and are jumping into this latest trend to generate malicious code.”

Last month, one forum participant posted what they claimed was the first script they had written and credited the AI chatbot with providing a “nice [helping] hand to finish the script with a nice scope.” In another case, a forum participant with a more technical background posted two code samples, both written using ChatGPT. The first was a Python script for post-exploit information stealing. It searched for specific file types, such as PDFs, copied them to a temporary directory, compressed them, and sent them to an attacker-controlled server.

Another example of ChatGPT-produced crimeware was designed to create an automated online bazaar for buying or trading credentials for compromised accounts, payment card data, malware, and other illicit goods or services.

The researchers at Check Point also tried their hand at creating malware using AI generation and were able to do so successfully, without writing a single line of code. What was worrying about their experience is that they also produced a phishing email that looked very sophisticated and was almost impossible to distinguish from a legitimate message.

Although ChatGPT’s rules prohibit its use for illicit or malevolent purposes, the researchers had little trouble rewording their requests to get around such restrictions. Of course, defenders may also use ChatGPT to create code that scans files for dangerous URLs, as in the sketch below.
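As a rough illustration of that defensive use case, here is a minimal sketch of the kind of scanner such a prompt might produce. The blocklist file, the scanned directory, and the helper names are hypothetical examples for this article, not code from the Check Point report.

```python
import re
from pathlib import Path

# Simple URL pattern; a production scanner would use a more robust parser.
URL_PATTERN = re.compile(r"https?://[^\s\"'<>)]+", re.IGNORECASE)


def load_blocklist(path: str) -> set[str]:
    """Load known-bad domains, one per line (hypothetical blocklist.txt)."""
    return {line.strip().lower()
            for line in Path(path).read_text().splitlines()
            if line.strip()}


def scan_file(path: Path, blocklist: set[str]) -> list[str]:
    """Return any URLs in the file whose domain appears on the blocklist."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return []
    hits = []
    for url in URL_PATTERN.findall(text):
        # Crude domain extraction: the part after "scheme://"
        domain = url.split("/")[2].lower() if url.count("/") >= 2 else ""
        if domain in blocklist:
            hits.append(url)
    return hits


if __name__ == "__main__":
    blocklist = load_blocklist("blocklist.txt")   # hypothetical list of bad domains
    for file in Path("samples").rglob("*.txt"):   # hypothetical directory to scan
        for url in scan_file(file, blocklist):
            print(f"{file}: flagged URL {url}")
```

A real deployment would swap the naive regex and domain split for a proper URL parser and a threat-intelligence feed, but the structure – extract URLs, normalise the domain, compare against a blocklist – is the same.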




