
ChatGPT-built infostealer and other hacking tools found in the wild


OpenAI’s natural language chatbot ChatGPT is capable of writing code, producing reports on niche topics and even crafting song lyrics. Its success at essay writing has prompted schools to ban its use, and Microsoft is reportedly incorporating it into Bing. But security researchers warn it is being put to far more nefarious uses, and the problem is likely to get worse.

ChatGPT was launched in November 2022. Criminals are starting to deploy it, security researchers say. (Photo by Ascannio/Shutterstock)

Experts from Check Point Research found multiple instances of cybercriminals celebrating their use of ChatGPT in the development of malicious tools, warning that it is allowing hackers to scale existing projects and new criminals to learn the skills more quickly than previously possible.

“I assume that with time, more sophisticated (and conservative) threat actors will also start trying and using ChatGPT to improve their tools and modus operandi, or even just to reduce the required monetary investment,” Sergey Shykevich, threat intelligence group manager at Check Point, told Tech Monitor.

ChatGPT was launched at the end of November 2022 and in less than two months has become an essential part of the workflow for software developers, researchers and other professionals, attracting more than a million users in its first week.


Like all new technology, given enough time and incentive, someone will find a way to exploit it, and Check Point Research says that is exactly what it is seeing: in underground hacking forums, criminals are using the chatbot to create infostealers and encryption tools and to facilitate fraud.

The researchers found three recent cases: one recreating malware strains for an infostealer, another building a multi-layer encryption tool, and a third writing dark web marketplace scripts for trading illegal goods – all with code generated by ChatGPT.

Watermarking and moderation

Last month researchers from the security company put ChatGPT to the test to see if it would produce code that could be used maliciously, finding it would write executable code and macros to run in Excel. This new report highlights “in the wild” instances of ChatGPT-derived malicious activity.

Tech Monitor asked OpenAI to comment on the findings and how it is working to address malicious use cases, but there was no response at the time of publication. On its page promoting ChatGPT, OpenAI writes: “While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behaviour. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now.”
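The Moderation API OpenAI refers to is a documented endpoint that classifies text against categories such as hate, self-harm and violence. Below is a minimal sketch of how a developer might screen input with it before passing the text to a model, assuming the Python requests library and a key in an OPENAI_API_KEY environment variable; the helper name is_flagged is illustrative.

```python
import os
import requests

# Minimal sketch: screen text with OpenAI's Moderation API before passing
# it to a model. Assumes the requests library and a valid key in the
# OPENAI_API_KEY environment variable; is_flagged is an illustrative name.
def is_flagged(text: str) -> bool:
    response = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["results"][0]["flagged"]

if __name__ == "__main__":
    print(is_flagged("some user-submitted prompt"))
```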


Shykevich says OpenAI and other developers of large language model AI systems need to improve their engines to identify potentially malicious requests and implement authentication and authorisation tools for anyone wanting to use the OpenAI engine. “Even something similar to what online financial institutions and payment systems currently use,” he says.
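Shykevich does not specify a design, but a minimal sketch of the kind of gate he describes might pair registered keys with per-user quotas before any request reaches the model. Everything here (the REGISTERED_KEYS table, the MAX_REQUESTS_PER_HOUR limit, the authorise helper) is hypothetical, not part of any real OpenAI interface.

```python
import time
from collections import defaultdict

# Hypothetical sketch of the kind of gate Shykevich describes: verify a
# registered key and enforce a per-user quota before a request is allowed
# to reach the model API. REGISTERED_KEYS, MAX_REQUESTS_PER_HOUR and
# authorise are illustrative names, not part of any real OpenAI interface.
REGISTERED_KEYS = {"key-alice": "alice"}  # keys issued after identity checks
MAX_REQUESTS_PER_HOUR = 100

_request_log = defaultdict(list)  # user -> timestamps of recent requests

def authorise(api_key: str) -> str:
    user = REGISTERED_KEYS.get(api_key)
    if user is None:
        raise PermissionError("unknown API key")
    now = time.time()
    recent = [t for t in _request_log[user] if now - t < 3600]
    if len(recent) >= MAX_REQUESTS_PER_HOUR:
        raise PermissionError("hourly quota exceeded")
    recent.append(now)
    _request_log[user] = recent
    return user
```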


OpenAI is already working on a watermarking tool that would make it easier for security professionals, authorities and professors to identify whether text was written by ChatGPT, although it isn’t clear whether that would work for code.
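OpenAI has not published how its watermark works, and the sketch below is not its method: it illustrates the general “green list” approach from academic work on language-model watermarking (Kirchenbauer et al., 2023), in which generation is biased toward pseudorandomly selected tokens and a detector measures the statistical excess. The SECRET key and word-level granularity are toy simplifications.

```python
import hashlib

# Toy illustration of statistical watermark detection (the academic
# "green list" approach, not OpenAI's unpublished method). A generator
# biased toward pseudorandomly chosen "green" words leaves a trace that
# a detector can measure without access to the model.
SECRET = b"toy-watermark-secret"   # hypothetical shared key
GREEN_FRACTION = 0.5               # expected green rate in normal text

def is_green(word: str) -> bool:
    # Hash the word with the secret; the first byte decides the colour.
    digest = hashlib.sha256(SECRET + word.lower().encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def green_z_score(text: str) -> float:
    # z-score of the green-word count; large values suggest watermarking.
    words = text.split()
    n = len(words)
    if n == 0:
        return 0.0
    greens = sum(is_green(w) for w in words)
    expected = n * GREEN_FRACTION
    std_dev = (n * GREEN_FRACTION * (1 - GREEN_FRACTION)) ** 0.5
    return (greens - expected) / std_dev
```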

ChatGPT: infostealer and ‘training’

Check Point says it analysed several major underground hacking communities for instances referencing ChatGPT or other AI-based coding tools, finding multiple cases of cybercriminals using the OpenAI tool. “As we suspected, some of the cases clearly showed that many cybercriminals using OpenAI have no development skills at all,” the researchers wrote.

While the tools being built today are “pretty basic”, it is only a matter of time before more sophisticated hackers turn to AI-based tools to scale up their operations, including by creating niche and specific attack vectors that would be impractical to code manually.

One example of these ‘simple tools’ is an infostealer that appeared on a thread titled “ChatGPT – Benefits of Malware” on a popular hacking forum. In the post, the author revealed they had used ChatGPT to recreate malware strains described in other publications by feeding the AI tool the descriptions and write-ups. They then shared Python-based stealer code that searches for common file types, copies them to a random folder and uploads them to a hardcoded FTP server.

“This is indeed a basic stealer which searches for 12 common file types (such as Microsoft Office documents, PDFs, and images) across the system. If any files of interest are found, the malware copies the files to a temporary directory, zips them, and sends them over the web. It is worth noting that the actor didn’t bother encrypting or sending the files securely, so the files might end up in the hands of 3rd parties as well,” the researchers wrote.

The same hacker shared other ChatGPT projects, including a Java snippet that downloads a common SSH client and runs it using PowerShell. Check Point experts say the individual is likely tech-orientated and was showing less technically capable cybercriminals how to use ChatGPT for their own immediate gain.

Hackers with limited technical skills flock to ChatGPT

Another post, found shortly before Christmas, included a Python script that its creator said was the first he had ever written. The cybercriminal admitted he made it with the help of OpenAI to boost the scope of the attack. It performs cryptographic operations, made up of a “hodgepodge of different signing, encryption and decryption functions”.

Researchers say the script seems benign but implements a range of functions, including generating a cryptographic key and encrypting files on the system, and could be adapted to “encrypt someone’s machine completely without any user interaction” for use as ransomware.

“While it seems that [the user] is not a developer and has limited technical skills, he is a very active and reputable member of the underground community. [The user] is engaged in a variety of illicit activities that include selling access to compromised companies and stolen databases. A notable stolen database [the user] shared recently was allegedly the leaked InfraGard database.”

The number of these types of posts seems to be growing, researchers discovered, with hackers also discussing other ways to use AI-based tools to make money quickly, including generating art with DALL-E 2 and selling it on Etsy, or generating an e-book with ChatGPT and selling it online.

“Cybercriminals are finding ChatGPT attractive,” said Shykevich. “In recent weeks, we’re seeing evidence of hackers starting to use it to write malicious code. ChatGPT has the potential to speed up the process for hackers by giving them a good starting point. Just as ChatGPT can be used for good to assist developers in writing code, it can also be used for malicious purposes.”

Read more: OpenAI’s ChatGPT explains how it can help CIOs do their jobs

Topics in this article: AI, ChatGPT, Cybersecurity
