Safety Researchers Hacked Google Calendar Utilizing AI And Hidden Textual content In Photos

Learn extra at:





AI is a formidable instrument, and corporations like Google and OpenAI proceed to enhance and increase upon what their fashions can do. On the similar time, generative AI chatbots are additionally turning into greater targets for dangerous actors, and now safety researchers have discovered a solution to hack somebody’s Google Calendar utilizing textual content hidden within high-resolution pictures.

Safety researchers from The Trail of Bits Blog declare that they have been capable of harness the picture scaling methods that AI like Gemini makes use of to course of pictures added to its prompts. This allowed the group to ship a set of hidden directions to the AI, which was then capable of retrieve info from a Google Calendar account and e-mail it to themselves — all with out alerting the person.

Picture scaling assaults like this was once extra widespread, and the researchers notice that they “have been used for mannequin backdoors, evasion, and poisoning primarily towards older pc imaginative and prescient methods that enforced a set picture measurement.” This assault has turn into much less widespread, however it appears an analogous strategy may be taken to ship hidden directions to a big language mannequin like Google’s Gemini, which raises considerations over AI security as Gemini and other AI move into our homes and AI potentially advances beyond our comprehension.

How the AI-powered assault works

An exploit corresponding to this works as a result of LLMs like GPT-5 and Gemini robotically downscale high-resolution pictures to course of them extra shortly and effectively. Nevertheless, this downscaling is how the researchers have been capable of make the most of the AI and ship hidden directions to the chatbot. Whereas the precise course of could change primarily based on the system — as every system has a special picture resampling algorithm — all of them present what the researchers describe as “aliasing artifacts” that may permit for patterns to be hidden inside a picture. These patterns then solely seem when the picture is downscaled, as they turn into extra seen due to the artifacting.

Within the instance that the researchers offered, the picture uploaded to Gemini has sections of a black background that flip pink in the course of the resampling course of. This causes hidden textual content with directions to seem when the picture is rescaled, which the chatbot will see and observe. On this case, the directions instructed the chatbot to test the person’s calendar and e-mail any upcoming occasions to the researcher’s e-mail deal with.

This may not turn into a mainstream assault vector for hackers, however contemplating they’ve already discovered methods to use infected calendar invites to take control of a smart home, any doable risk must be analyzed in an effort to discover options that shield customers from falling prey to dangerous actors. That is very true as hackers continue to use AI to break AI in terrifying new methods.



Turn leads into sales with free email marketing tools (en)

Leave a reply

Please enter your comment!
Please enter your name here