A Single Poisoned Doc Might Leak ‘Secret’ Information Through ChatGPT

This web page was created programmatically, to learn the article in its unique location you possibly can go to the hyperlink bellow:
https://www.wired.com/story/poisoned-document-could-leak-secret-data-chatgpt/
and if you wish to take away this text from our website please contact us


The newest generative AI fashions are usually not simply stand-alone text-generating chatbots—as an alternative, they’ll simply be hooked as much as your knowledge to provide customized solutions to your questions. OpenAI’s ChatGPT can be linked to your Gmail inbox, allowed to examine your GitHub code, or discover appointments in your Microsoft calendar. But these connections have the potential to be abused—and researchers have proven it could possibly take only a single “poisoned” doc to take action.

New findings from safety researchers Michael Bargury and Tamir Ishay Sharbat, revealed on the Black Hat hacker convention in Las Vegas immediately, present how a weak spot in OpenAI’s Connectors allowed delicate data to be extracted from a Google Drive account utilizing an oblique immediate injection assault. In an illustration of the assault, dubbed AgentFlayer, Bargury exhibits the way it was doable to extract developer secrets and techniques, within the type of API keys, that had been saved in an illustration Drive account.

The vulnerability highlights how connecting AI fashions to exterior techniques and sharing extra knowledge throughout them will increase the potential assault floor for malicious hackers and doubtlessly multiplies the methods the place vulnerabilities could also be launched.

“There is nothing the user needs to do to be compromised, and there is nothing the user needs to do for the data to go out,” Bargury, the CTO at safety agency Zenity, tells WIRED. “We’ve shown this is completely zero-click; we just need your email, we share the document with you, and that’s it. So yes, this is very, very bad,” Bargury says.

OpenAI didn’t instantly reply to WIRED’s request for remark concerning the vulnerability in Connectors. The firm launched Connectors for ChatGPT as a beta function earlier this 12 months, and its website lists no less than 17 completely different companies that may be linked up with its accounts. It says the system means that you can “bring your tools and data into ChatGPT” and “search files, pull live data, and reference content right in the chat.”

Bargury says he reported the findings to OpenAI earlier this 12 months and that the corporate rapidly launched mitigations to forestall the method he used to extract knowledge through Connectors. The approach the assault works means solely a restricted quantity of knowledge may very well be extracted directly—full paperwork couldn’t be eliminated as a part of the assault.

“While this issue isn’t specific to Google, it illustrates why developing robust protections against prompt injection attacks is important,” says Andy Wen, senior director of safety product administration at Google Workspace, pointing to the corporate’s recently enhanced AI security measures.


This web page was created programmatically, to learn the article in its unique location you possibly can go to the hyperlink bellow:
https://www.wired.com/story/poisoned-document-could-leak-secret-data-chatgpt/
and if you wish to take away this text from our website please contact us

Leave a Reply

Your email address will not be published. Required fields are marked *