OpenAI Acknowledges ChatGPT's Use in Malware Development

/ OpenAI, ChatGPT, Cybersecurity, Malware, AI

OpenAI has openly addressed the issue of cybercriminals using its ChatGPT model to develop malware and conduct cyberattacks. A comprehensive report from the AI company details more than 20 incidents in 2024 in which cybercriminals used ChatGPT either to design malware or to plan cyberattacks.

Cybercriminal Exploitation

According to the report, titled "Influence and Cyber Operations: An Update," state-sponsored hacking groups from countries such as China and Iran have leveraged ChatGPT's capabilities to refine existing malware and create new malicious software. These groups primarily used the AI tool to debug malware code, generate content for phishing campaigns, and spread misinformation across social media platforms.

Vulnerability Research

A distinct threat identified in the report comes from the Iranian group known as "CyberAv3ngers," which is reportedly linked to the Islamic Revolutionary Guard Corps. Rather than using ChatGPT directly for malware development, the group deployed the AI to research vulnerabilities in industrial control systems, aiming to design scripts for potential attacks on critical infrastructure.

In other instances, the AI model was used to develop malware, distributed via phishing, that was intended to steal user data such as contacts, call logs, and location information. Despite these troubling findings, OpenAI emphasized that the misuse of ChatGPT has produced no significant breakthroughs in malware creation, nor an increase in successful malware attacks attributable specifically to the AI's involvement.

Legal Implications of AI Use

As reported by Cybersecuritynews, many security experts are concerned that the risk of misuse will grow as AI technology advances. Experts such as former U.S. federal prosecutor Edward McAndrew warn that companies deploying ChatGPT or similar chatbots could face liability if they inadvertently assist in cybercrimes.

U.S. tech companies often cite Section 230 of the Communications Decency Act of 1996 to avoid responsibility for illegal or criminal content on their platforms. The law generally holds that operators are not liable for illegal user-generated content they did not create themselves. However, McAndrew argues that this protection might not extend to OpenAI in the context of malware development, since the content is produced directly by the chatbot rather than by a user.

Ongoing Concerns and Lowered Barriers

The potential misuse of ChatGPT by cybercriminals is not a new concern. In 2023, Sergey Shykevich, who led ChatGPT-related research at the Israeli security firm Check Point, told Business Insider that cybercriminals were already repurposing the chatbot for malware development. His team's observations that year recorded cybercriminals using AI to craft ransomware attacks.

Other cybersecurity experts, such as Justin Fier, Darktrace's Director of Cyber Intelligence & Analytics, note that tools like ChatGPT can significantly lower the barrier to developing malicious code. This ease of use could enable individuals with no programming skills to create malware and phishing emails simply by entering the right prompts.

This news was initially reported by Heise Online.
