{"id":38830,"date":"2025-08-10T00:08:32","date_gmt":"2025-08-09T22:08:32","guid":{"rendered":"http:\/\/159.69.82.204\/win\/?p=38830"},"modified":"2025-08-08T00:15:08","modified_gmt":"2025-08-07T22:15:08","slug":"man-in-the-prompt-browser-attack-on-llms","status":"publish","type":"post","link":"https:\/\/borncity.com\/win\/2025\/08\/10\/man-in-the-prompt-browser-attack-on-llms\/","title":{"rendered":"'Man in the Prompt' browser attack on LLMs"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" style=\"margin: 0px 10px 0px 0px; display: inline; float: left; border-width: 0px;\" title=\"Edge\" src=\"https:\/\/www.borncity.com\/blog\/wp-content\/uploads\/2025\/04\/image-4.png\" alt=\"Copilot\" width=\"95\" height=\"91\" align=\"left\" border=\"0\" \/>[<a href=\"https:\/\/www.borncity.com\/blog\/2025\/08\/03\/man-in-the-prompt-browser-attacke-auf-llms\/\" target=\"_blank\" rel=\"noopener\">German<\/a>]Large Language Models (LLMs) can be attacked via prompts in order to elicit unauthorized data from the models. Man-in-the-prompt browser attacks could also be used to manipulate AI requests from users and use them for criminal activities.<\/p>\n<p><!--more--><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/vg09.met.vgwort.de\/na\/058723774aa34531915409392ca1fb9e\" alt=\"\" width=\"1\" height=\"1\" \/>The advent of AI systems in companies is also opening up new methods of attack, some of which are already known from other areas. Man-in-the-middle attacks to read or manipulate data have long been known. Now there is a new attack method, known as 'Man in the Prompt', to attack LLMs via the user's browser. Nic Adams, co-founder and CEO of 0rcus (security provider in the AI sector) sent me some information on this topic.<\/p>\n<h2 class=\"msg-s-event-listitem__body t-14 t-black--light t-normal\">Man-in-the-prompt attack in the browser<\/h2>\n<p class=\"msg-s-event-listitem__body t-14 t-black--light t-normal\">The 'man-in-the-prompt' attack is a novel prompt injection vector as it operates at the Document Object Model (DOM) level. The attackers use a compromised browser extension to inject malicious instructions directly into the input field of an LLM.<\/p>\n<p class=\"msg-s-event-listitem__body t-14 t-black--light t-normal\">This method bypasses traditional application-level security because the attack payload is executed on the client side via a trusted extension instead of relying on a directly user-generated prompt.<\/p>\n<p>Internal corporate hosted LLMs are particularly vulnerable as they provide a lucrative and exposed attack surface due to their reduced security measures and trusted environment where they are often trained with sensitive proprietary data.<\/p>\n<p>DOM-level injection allows an attacker to exfiltrate highly sensitive corporate information, from financial projections to intellectual property. The attack uses an organization's internal LLM as a tool for data exfiltration. This is a method that cannot be easily blocked by standard network security measures.<\/p>\n<p class=\"msg-s-event-listitem__body t-14 t-black--light t-normal\">This attack is possible without the user's knowledge, as a malicious actor can acquire a legitimate, popular browser extension and inject malicious code into it, which is then transmitted to the user's browser as a silent update. The security implications are severe, as the user continues to trust the extension while it secretly leaks data in the background by interacting with LLMs &#8211; a process that is invisible to both the user and traditional security tools.<\/p>\n<p>The most likely initial attack vector for this attack is a combination of social engineering and supply chain compromise, where users are tricked into installing malicious extensions, or where a trusted extension is sold and then used as a weapon. I anticipate that future exploitation is highly likely as this attack is both scalable and difficult to detect with traditional security controls, making it a lucrative low-effort opportunity for attackers.<\/p>\n<p>The defense requires a layered approach, starting with strict, granular permission models for all browser extensions and implementing webhooks to monitor DOM interactions in real-time. For internal LLMs, I would recommend isolating the LLM environment, sandboxing its processes from the main DOM, and implementing behavioral analytics to detect anomalous LLM queries and exfiltration patterns.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>[German]Large Language Models (LLMs) can be attacked via prompts in order to elicit unauthorized data from the models. Man-in-the-prompt browser attacks could also be used to manipulate AI requests from users and use them for criminal activities.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[580,1547],"tags":[2756,69],"class_list":["post-38830","post","type-post","status-publish","format-standard","hentry","category-security","category-software","tag-ai","tag-security"],"_links":{"self":[{"href":"https:\/\/borncity.com\/win\/wp-json\/wp\/v2\/posts\/38830","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/borncity.com\/win\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/borncity.com\/win\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/borncity.com\/win\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/borncity.com\/win\/wp-json\/wp\/v2\/comments?post=38830"}],"version-history":[{"count":3,"href":"https:\/\/borncity.com\/win\/wp-json\/wp\/v2\/posts\/38830\/revisions"}],"predecessor-version":[{"id":38833,"href":"https:\/\/borncity.com\/win\/wp-json\/wp\/v2\/posts\/38830\/revisions\/38833"}],"wp:attachment":[{"href":"https:\/\/borncity.com\/win\/wp-json\/wp\/v2\/media?parent=38830"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/borncity.com\/win\/wp-json\/wp\/v2\/categories?post=38830"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/borncity.com\/win\/wp-json\/wp\/v2\/tags?post=38830"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}