OpenAI’s ChatGPT, the AI tool taking the internet by storm, can be used to write malicious code: CPR report

ChatGPT is a revolutionary artificial intelligence technology that is rapidly gaining popularity in the world of natural language processing, data science and machine learning. The AI tool developed by OpenAI is primarily attracting attention for its ability to converse with people more naturally than other AI systems.

Like any other technology, AI-powered ChatGPT can be used for good or for evil. Recognizing the tool’s potential, cybercriminals have already begun tapping into it for their attacks.


What is ChatGPT?

ChatGPT is an AI-powered chatbot launched by OpenAI in November 2022. According to OpenAI, ChatGPT is fine-tuned from a model in the GPT-3.5 series, which finished training in early 2022.

“ChatGPT is a sister model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response,” says OpenAI.

In a recently published report, cybersecurity firm Check Point Research (CPR) reveals the first cases, observed on several major underground hacking forums, of cybercriminals using OpenAI to develop malicious tools.

“As we suspected, some cases clearly showed that many cybercriminals using OpenAI lack development skills,” the report said.

Check Point Research (CPR) has described several cases that indicate cybercriminals’ growing interest in ChatGPT.

Case 1:

On December 29, 2022, a thread titled “ChatGPT – Malware Benefits” appeared on a popular underground hacking forum.

According to CPR’s analysis, the thread’s author disclosed that he had been experimenting with ChatGPT to recreate malware strains and techniques described in research publications and general malware write-ups.

“Our analysis of the script confirms the cybercriminal’s claims. It is in fact a basic stealer that searches the system for 12 common file types (such as MS Office documents, PDF documents and images). If any files of interest are found, the malware moves them to a temporary folder, compresses them, and sends them over the internet. It should be noted that the actor did not encrypt or send the files in a secure manner, so they could also end up in the hands of third parties,” the CPR report said.

Case 2:

On December 21, a threat actor dubbed USDoD posted a Python script, which he emphasized was “the first script he ever created.”

When another cybercriminal commented that the style of the code resembled OpenAI’s output, USDoD confirmed that OpenAI gave him a nice “[helping] hand” to finish the script.

“Our analysis of the script confirmed that it is a Python script that performs cryptographic operations. More specifically, it is a hodgepodge of various signing, encryption and decryption functions,” the report emphasizes.
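CPR did not publish the script itself, so the following is only an illustrative sketch of what such a “hodgepodge” of signing, encryption and decryption helpers might look like, built entirely from Python’s standard library. All function names here are hypothetical and do not reflect the actor’s actual code:

```python
import hashlib
import hmac

# Hypothetical helpers illustrating the mix of hashing, signing and
# encrypt/decrypt routines CPR describes. This does not mirror the
# actual script, which CPR did not publish.

def file_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def sign(key: bytes, message: bytes) -> str:
    """Sign a message with HMAC-SHA256."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, signature: str) -> bool:
    """Constant-time check of an HMAC-SHA256 signature."""
    return hmac.compare_digest(sign(key, message), signature)

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy XOR 'encryption': applying it twice decrypts. Not secure."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

The point of the sketch is how little glue code is needed once a model hands over the individual primitives; each helper here is a few lines around a standard-library call.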

Case 3:

A third incident shared by CPR reveals a discussion titled “Abusing ChatGPT to Script Dark Web Marketplaces.” In this thread, a cybercriminal demonstrates how easy it is to create a Dark Web marketplace using ChatGPT.

“The main role of the marketplace in the illegal underground economy is to provide a platform for the automated trading of stolen accounts or payment cards, malware, or even illegal or stolen goods such as drugs and ammunition, with all payments made in cryptocurrencies,” the report explains.

For example, one cybercriminal published a piece of code that uses a third-party API to retrieve the latest cryptocurrency prices (Monero, Bitcoin, and Ethereum) as part of a Dark Web marketplace payment system.
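CPR did not identify the API or share the snippet, but a minimal sketch of that kind of price lookup might look like the following. The endpoint URL and the response shape (`{"BTC": {"usd": ...}}`) are assumptions for illustration, not the provider the actor actually used:

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint; real code would target a specific price API.
API_URL = "https://api.example.com/v1/prices?symbols=XMR,BTC,ETH"

def parse_prices(payload: str) -> dict:
    """Extract symbol -> USD price from a JSON response shaped like
    {"XMR": {"usd": 165.2}, "BTC": {"usd": 42000.0}, ...}."""
    data = json.loads(payload)
    return {symbol: float(quote["usd"]) for symbol, quote in data.items()}

def fetch_prices(url: str = API_URL) -> dict:
    """Fetch the latest prices over HTTP and parse them."""
    with urlopen(url) as resp:
        return parse_prices(resp.read().decode())
```

Splitting parsing from the network call keeps the lookup logic testable offline; the fetch itself is a single `urlopen` round trip.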

Beyond the cases above, CPR reports that several threat actors are discussing using ChatGPT for fraudulent schemes.

“Most of these focus on creating random art with another OpenAI technology (DALL·E 2) and selling it online through legitimate platforms like Etsy. In another example, a threat actor explains how to create an e-book or short chapter on a specific topic (using ChatGPT) and sell that content online,” the report concludes.

