Hackers use ChatGPT AI Bot to write malicious code to steal your data

According to a new report, cybercriminals are using ChatGPT, the artificial intelligence (AI) chatbot known for its human-like responses, to create tools that can steal your data.

Researchers at Check Point Research (CPR) have documented the first instances of cybercriminals using ChatGPT to write malicious code. On underground hacking forums, threat actors have used it to build “information stealers,” encryption tools, and scripts that assist with fraud. The researchers warned of growing interest among cybercriminals in using ChatGPT to scale their operations and teach less experienced actors how to carry out attacks.


“Cybercriminals are interested in ChatGPT. In recent weeks, there have been signs that hackers are starting to use it to write malicious code. ChatGPT could help hackers get things done faster by giving them a great place to start,” said Check Point Threat Intelligence Group Manager Sergey Shykevich.

ChatGPT can be used for legitimate purposes, such as helping developers write code, but it can also be abused. On December 29, a thread titled “ChatGPT – Malware Benefits” appeared on a popular underground hacking forum. Its author said he was using ChatGPT to recreate malware strains and techniques described in research papers and write-ups on common malware.

According to the research, “While this individual could be a technically oriented threat actor, these articles appeared to show how cybercriminals with less technical skills can use ChatGPT for nefarious purposes, with concrete examples that they can employ immediately.” In another case, a threat actor posted a Python script on December 21, noting that it was the “first script he ever developed.”


When another hacker commented that the code style resembled OpenAI’s, the author admitted that OpenAI gave him a “good (helping) hand to finish the script with good scope.” The report noted that this could mean would-be cybercriminals with little or no development skill can exploit ChatGPT to build dangerous tools, effectively becoming full-fledged cybercriminals with technical capabilities.


“Even if the tools we looked at are fairly simple, it’s only a matter of time until more sophisticated threat actors improve the way they use AI-based tools,” Shykevich said. According to recent reports, OpenAI, the company that developed ChatGPT, is currently seeking funding at a valuation of around $30 billion. Microsoft has already invested a billion dollars in OpenAI and is promoting ChatGPT applications as a means of solving real-world problems.


Source: vtt.edu.vn
