Since its beta release in November, AI chatbot ChatGPT has been used for a wide range of tasks, including writing poetry, technical articles, novels and essays, planning parties, and learning about new topics. Now we can add to this list the development of malware and the pursuit of other types of cybercrime.
In the weeks since ChatGPT was released, participants on cybercrime forums — some with little or no coding experience — have been using it to write software and emails that could be used for espionage, ransomware, malicious spam, and other nefarious tasks, researchers at security firm Check Point Research said Friday.
“It is too early to judge whether ChatGPT’s capabilities will become the new favorite tool for Dark Web participants,” the company’s researchers wrote. “However, the cybercriminal community has already taken a serious interest and is joining this latest trend to create malicious code.”
Last month, a forum participant posted what they claimed was the first script they’d ever written, crediting the AI chatbot for providing a “nice [helping] hand to finish the script with a nice scope.”
The Python code incorporated various cryptographic functions, including code signing, encryption, and decryption. One part of the script generated a key using elliptic-curve cryptography and the ed25519 curve for signing files. Another part used a hard-coded password to encrypt system files with the Blowfish and Twofish algorithms. A third used RSA keys, digital signatures, message signing, and the blake2 hash function to compare different files.
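The file-comparison step — hashing files with blake2 and comparing digests — can be sketched with Python's standard library alone. This is a minimal, benign illustration of the technique, not a reconstruction of the forum poster's actual script:

```python
import hashlib


def blake2_digest(path: str) -> str:
    """Return the blake2b hex digest of a file, read in chunks."""
    h = hashlib.blake2b()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def files_match(path_a: str, path_b: str) -> bool:
    """Compare two files by their blake2b digests rather than byte-by-byte."""
    return blake2_digest(path_a) == blake2_digest(path_b)
```

Hashing in 8 KB chunks keeps memory use constant regardless of file size, which is why digest comparison is a common way to check large files for equality.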
The result was a script that could be used to (1) decrypt a single file and append a message authentication code (MAC) to the end of the file, and (2) encrypt a hardcoded path and decrypt a list of files it receives as an argument. Not bad for someone with limited technical skills.
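Appending and later verifying a MAC, as in function (1), can be sketched with the standard library's keyed blake2 support. The key handling and construction here are simplified assumptions for illustration; the actual script's internals weren't published:

```python
import hashlib
import hmac

MAC_LEN = hashlib.blake2s().digest_size  # 32 bytes


def append_mac(data: bytes, key: bytes) -> bytes:
    """Append a keyed blake2s MAC to the payload (blake2 supports native keying)."""
    tag = hashlib.blake2s(data, key=key).digest()
    return data + tag


def verify_and_strip(blob: bytes, key: bytes) -> bytes:
    """Check the trailing MAC; return the payload, or raise on tampering."""
    data, tag = blob[:-MAC_LEN], blob[-MAC_LEN:]
    expected = hashlib.blake2s(data, key=key).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC verification failed")
    return data
```

Using `hmac.compare_digest` avoids a timing side channel when comparing the stored and recomputed tags.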
“All of the afore-mentioned code can of course be used in a benign fashion,” the researchers said. “However, this script can easily be modified to encrypt someone’s machine completely without any user interaction. For example, it can potentially turn the code into ransomware if the script and syntax problems are fixed.”
In another case, a more technically skilled forum member posted two code samples, both written using ChatGPT. The first was a Python script for post-exploitation data theft. It searched for specific file types, such as PDFs, copied them to a temporary folder, compressed them, and sent them to a server controlled by the attacker.
The second piece of code was written in Java. It surreptitiously downloaded the SSH and telnet client PuTTY and ran it covertly using PowerShell. “Overall, this individual appears to be a tech-oriented threat actor, and the purpose of his posts is to show less technically capable cybercriminals how to utilize ChatGPT for malicious purposes, with real examples they can immediately use,” the researchers wrote.
Another example of ChatGPT-produced crimeware was designed to create an automated online marketplace for buying or trading compromised account credentials, payment card data, malware, and other illegal goods or services. The code used a third-party programming interface to retrieve current cryptocurrency prices, including monero, bitcoin, and ethereum, to help users set prices when making purchases.
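The price-lookup step amounts to parsing an id-to-price JSON payload of the kind public price APIs return (the response shape below mirrors CoinGecko's `/simple/price` endpoint, chosen here as an illustrative assumption — the forum post didn't identify which API was used):

```python
import json

# Sample payload in the {"coin_id": {"usd": price}} shape assumed above.
SAMPLE_RESPONSE = json.dumps({
    "bitcoin": {"usd": 16850.12},
    "monero": {"usd": 158.44},
    "ethereum": {"usd": 1250.75},
})


def current_prices(raw_json: str) -> dict:
    """Flatten an id -> {currency: price} payload into id -> USD price."""
    data = json.loads(raw_json)
    return {coin: quote["usd"] for coin, quote in data.items()}
```

In a live marketplace the JSON would come from an HTTP request rather than a hard-coded sample, but the parsing and price-mapping logic is the same.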
Friday’s post comes two months after Check Point researchers tried their hand at developing AI-produced malware with a full infection flow. Without writing a single line of code, they produced a convincing phishing email:
Researchers used ChatGPT to craft a malicious macro that could be hidden in an Excel file attached to an email. Again, they didn’t write a single line of code. At first, the output script was quite primitive:
When the researchers asked ChatGPT to iterate over the code several times, the quality of the code improved significantly:
The researchers then used a more advanced AI service called Codex to develop other types of malware, including a reverse shell and scripts for port scanning, sandbox detection, and compiling their Python code into a Windows executable.
“And just like that, the infection flow is complete,” the researchers said. “We created a phishing email with an attached Excel document containing malicious VBA code that downloads a reverse shell to the target machine. The AI did the hard work, and all that’s left for us is to carry out the attack.”
Although ChatGPT’s terms prohibit its use for illegal or malicious purposes, researchers have had no trouble modifying their queries to circumvent these restrictions. And of course, ChatGPT can also be used by defenders to write code that looks for malicious URLs inside files, or to query VirusTotal for the number of detections for a particular cryptographic hash.
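The defender-side use case — scanning a file's contents for known-bad URLs — can be sketched in a few lines of standard-library Python. The regex and blocklist here are deliberately simplified illustrations, not production detection logic or the VirusTotal API itself:

```python
import re

# Simplified URL pattern; real scanners also handle obfuscation, IDNs, etc.
URL_RE = re.compile(r"https?://[^\s\"'<>]+")


def find_flagged_urls(text: str, blocklist: set) -> list:
    """Return URLs in `text` whose hostname appears in `blocklist`."""
    flagged = []
    for url in URL_RE.findall(text):
        host = url.split("//", 1)[1].split("/", 1)[0].lower()
        if host in blocklist:
            flagged.append(url)
    return flagged
```

A real deployment would feed the extracted hostnames or file hashes to a reputation service rather than a static set, but the extract-then-check structure is the same.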
So welcome to the brave new world of AI. It’s too early to know exactly how it will shape the future of hacking and defensive remediation, but it can only be expected to intensify the arms race between defenders and threat actors.