On May 20, 2024, Microsoft presented its "Copilot+PC" concept and opened the next big can of worms (hardware with AI support and Copilot). A feature called Recall is supposed to constantly take screenshots (snapshots) of everything that happens on the system and have them analyzed by AI. Security researchers are up in arms against the "Recall" function, which Microsoft is selling as a "convenience feature" where everything supposedly remains "private" and on the client. Let me pick up on the insights that security researcher Kevin Beaumont has gained with this "private AI function". Pandora's box has been opened, and the logical consequence is: break up Microsoft and put the remnants under official supervision, because with Recall, Redmond has finally proven that it is an inherent threat to cyber security and simply cannot get security right.
Copilot+AI and Recall
In my blog post Microsoft's AI PC with Copilot – some thoughts – Part 1, I discussed the "Copilot+PC" concept presented by Microsoft. The so-called Recall function was also mentioned there in passing. It enables Windows to constantly take screenshots of the user's screen and to process the data with a generative AI model so that it becomes searchable. The user should be able to ask "what did I look at recently as a travel destination?" and then be shown the relevant documents, websites, emails etc. by Recall.
Microsoft's technical documentation about Recall talks about "privacy". But Recall takes and saves a screenshot (snapshot) every five seconds, and after the initial setup of Windows, Recall is switched on by default. This is the ultimate surveillance tool built into every PC – which prompted Elon Musk to recommend that users switch to Linux.
Satya Nadella, CEO of Microsoft, said during an interview to introduce the Copilot+PC concept, when asked about people's concerns about this Recall function: "You have to bring two things together. This is my computer, this is my recall, and it's all done locally. So that's the promise. That's one of the reasons why Recall works like magic, because I can trust that it's on my computer." So it's all fine and safe?
Recall, a security disaster
vx-underground, who deal with ransomware and security, put it in a nutshell in a tweet: "Microsoft introduces a 24/7 surveillance feature for the NSA and/or CIA, but markets it as a feature that people will like". Nadella's interview is linked in the tweet.
Microsoft's move is causing extreme controversy. German user Hans pointed out in this comment on my German blog that everyone is testing a system that is not even for sale yet, and that Recall works locally and can also be used without Internet access. This point was already clear to me – and normally I would even have gone along with it.
Recall is a paradigm shift and a No-Go
But with Recall, Microsoft is initiating a paradigm shift: a personal computer is no longer private – anyone with access can see what the user has done, at a level of detail never seen before.
Copilot+PCs effectively can no longer be used in the corporate environment, and especially not in doctors' surgeries, lawyers' offices or other sensitive areas. Many articles have been published on the topic, and The Register asked: Was there no one at Microsoft who looked at Recall and said: This really, really sucks.
Recall can be tested
Recall is primarily limited to the special Copilot+ PCs with ARM processors and dedicated AI hardware. However, the relevant code is included in the Windows 11 Insider Preview (build 26100.712) and can also be activated on any ARM computer. There is also the AmperageKit app on GitHub, which can be used to activate the functionality.
If you manage to emulate the ARM version of the Windows 11 Insider Preview on an AMD/Intel system, you could also test Recall. And Albacore has described another way to unlock Recall on Tom's Hardware in the article How to try Windows 11's Recall AI feature right now, even on unsupported hardware.
Blocking Recall
There is also a group policy to disable Recall, described in this tweet. And here are the corresponding registry entries for blocking it:
Disable Recall – User
[HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\WindowsAI]
"DisableAIDataAnalysis"=dword:00000001
Disable Recall – Machine (not yet official)
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsAI]
"DisableAIDataAnalysis"=dword:00000001
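For admins who want to script this, the same policy values can be set programmatically. Here is a minimal sketch in Python using the standard winreg module, assuming it runs with sufficient rights (administrator for the machine-wide key); the key path and value name are taken from the entries above:

import winreg

# Policy key and value taken from the registry entries above
KEY_PATH = r"Software\Policies\Microsoft\Windows\WindowsAI"
VALUE_NAME = "DisableAIDataAnalysis"

def disable_recall(root_key):
    # Create the WindowsAI policy key if it does not exist and set the DWORD to 1
    with winreg.CreateKeyEx(root_key, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 1)

disable_recall(winreg.HKEY_CURRENT_USER)   # per-user block
disable_recall(winreg.HKEY_LOCAL_MACHINE)  # machine-wide block (not yet official)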
Recall, a license to steal data
Security researcher Kevin Beaumont has now taken a closer look at Recall, especially with regard to Microsoft's attempts to reassure users that everything runs only locally, is secure, and remains on the client. Beaumont, who briefly worked for Microsoft years ago, had already expressed his concerns after the function was introduced (see my blog post Microsofts Copilot+PC, a privacy and security nightmare – Part 2).
In a series of tweets on Thursday, May 30, 2024, he laid out the internals and implications of his own test of Recall (I'm only getting around to covering this today). The statement that the data is processed only locally on the client is true – there is a whole subsystem of Azure AI code running on the device that processes the data. But Recall stores the records in a SQLite database, and Microsoft's whole approach is a license for AI-powered data theft. Beaumont's tweet shows a screenshot of the database of records in question. Beaumont writes:
It [the Recall data store] is just a SQLite database that will be available in a few weeks – I've already modified an infostealer hosted on Microsoft's GitHub into a Recall stealer (a few lines of code).
Microsoft is intentionally setting cybersecurity back a decade and putting its customers at risk by giving petty criminals a chance.
Beaumont writes that he can access the SQLite database without SYSTEM rights and read the snapshots Recall has stored there. He has compiled his findings in his article Stealing everything you've ever typed or viewed on your own Windows PC is now possible with two lines of code — inside the Copilot+ Recall disaster. Thanks to Recall, he can read everything the user has ever typed or entered on their PC.
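To illustrate how little "two lines of code" exaggerates, here is a minimal, hypothetical Python sketch of the kind of access Beaumont describes. The database path and table name are assumptions for illustration (Beaumont has not published his tool, and the preview schema may change); the point is that plain sqlite3 is all it takes:

import sqlite3

# Hypothetical path to the Recall store in a user profile (assumption);
# the table holding the OCR'd text is likewise an assumed name.
db = sqlite3.connect(r"C:\Users\alice\AppData\Local\CoreAIPlatform.00\ukg.db")
for row in db.execute("SELECT * FROM WindowCaptureTextIndex_content"):
    print(row)  # everything the user ever viewed, as plain text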
He tested it with messaging apps such as WhatsApp, Signal and Teams. Someone sends you messages that disappear? They are still recorded by Recall. The user writes a message that is supposed to delete itself automatically? Recorded by Recall. The user tries to delete a message? Too late, it is already recorded by Recall – a drama he also describes in a series of posts on Mastodon. Here are excerpts of Beaumont's statements from this FAQ:
- Recall stores everything a user has ever viewed, organized by application. Every piece of text the user has viewed is recorded, with a few minor exceptions (e.g. Microsoft Edge's InPrivate mode is excluded, but Google Chrome's incognito mode is not). Even if the user deletes something (e.g. emails or messenger messages), Recall keeps a copy.
- Every user interaction is recorded, e.g. minimizing a window. There is an API for user activities, and third-party applications can hook into it to enrich the data and also display Recall data.
- The logged-in user's data is always decrypted and can also be extracted remotely. Infostealer trojans, which automatically steal user names and passwords, have been a major problem for over ten years, writes Beaumont. They can now easily be modified to support Recall.
- According to Microsoft, only the user can access their data. This is not true: the files are located in AppData, in the new CoreAIPlatform folder, and Beaumont can show that another user account on the same device can access the database (a minimal lookup sketch follows this list).
- How does the whole thing work? Screenshots are taken every few seconds. They are automatically processed by Azure AI code running on the device – OCR, i.e. looking at an image and extracting the letters or text – and written to a SQLite database in the user's folder. Everything users have ever viewed on their system is stored in this database file in plain text.
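To make the cross-user point concrete, here is a minimal sketch of what a second account on the same machine could do, assuming the CoreAIPlatform folder name from Beaumont's FAQ (the exact subfolder layout is an assumption):

import glob, os

# Scan all user profiles for Recall stores under the new CoreAIPlatform folder
pattern = r"C:\Users\*\AppData\Local\CoreAIPlatform*\**\*.db"
for path in glob.glob(pattern, recursive=True):
    # Per Beaumont, these files are readable without SYSTEM rights
    print(path, os.path.getsize(path), "bytes")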
Recall thus enables threat actors to automatically read everything the user has ever viewed within seconds. Beaumont describes testing the whole thing with an off-the-shelf infostealer while Microsoft Defender for Endpoint was running on Windows. Defender did detect the infostealer – but by the time it eliminated the threat, which took over ten minutes, the Recall data was long gone, writes Beaumont.
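The timing gap is easy to picture: the entire Recall store is a single file, and copying it is a sub-second operation. A sketch with placeholder paths:

import shutil, time

# Placeholder paths; the point is that the copy finishes in well under a second,
# while Defender needed over ten minutes to eliminate Beaumont's infostealer.
start = time.time()
shutil.copy(r"C:\Users\alice\AppData\Local\CoreAIPlatform.00\ukg.db",
            r"D:\exfil\ukg.db")
print(f"database copied in {time.time() - start:.3f} seconds")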
Asked "Did Microsoft mislead customers about the security of Copilot?", Beaumont's answer is simply: yes. For example, Copilot was described "as an optional experience" – which is a lie: it is activated by default and can merely be deactivated as an option. Microsoft's talk about this is just a bunch of wordplay. Here are some of his statements from various tweets:
- Microsoft should not tell outright lies to the media. And customers and the press should hold Microsoft directly accountable in this matter. Recall is a feature that will be included in Windows 11 very soon, and the Microsoft website says it will support Intel and AMD systems in the future. And it's enabled by default.
- It's a flagship AI product from Microsoft that is clearly being delivered in a way that jeopardizes users' security without anyone being held accountable – all of it accompanied by blatantly false information about how it works.
At this point at the latest, Recall is dead in the corporate environment in Germany because it constitutes employee monitoring. This may be the wet dream of US employers or the NSA and its counterparts, but it is a no-go in the EU. The hope that "everything is private" was literally pulverized by Beaumont in the points quoted above. Anyone with read access to the database gains access to everything the user does on the system – a jackpot for any cybercriminal: once the database is accessed remotely, all "secrets" are open to the attacker. Ransomware is kindergarten by comparison.
Beaumont dismisses the rumors that Microsoft wants to build in NSA spying capabilities or create an evil empire to enslave users, suggesting "they [Microsoft executives] just haven't thought about the impact on the Internet or don't care about AI security and their customers."
But that brings us back to The Register article quoted above, Was there no one at Microsoft who looked at Recall and said: This really, really sucks, and my articles Microsoft's AI PC with Copilot – some thoughts – Part 1 and Microsofts Copilot+PC, a privacy and security nightmare – Part 2, where I pointed out the contradiction: Microsoft's CEO Nadella assured us just a few weeks ago that "security will come first for his company in the future". With a half-life of less than a month, plans are being announced that throw all cyber security measures overboard – and the snoozers from Redmond are celebrating along with the claqueurs from the media.
These features collide massively with legal requirements in the European Union. What about employee monitoring in companies, if anyone can see at the click of a mouse what a person did a few days ago (this would be subject to co-determination, usually a no-go)? Or what about the GDPR, when data has supposedly been deleted but can then be recalled by anyone at any time? This will be a feast for hackers, and possibly for lawyers.
Beaumont is stunned by how bad Recall is. He is convinced that the feature will directly endanger people. Simpler-minded contemporaries will now come around the corner with "no problem, I'll just deactivate the feature and that's that" (I don't want to repeat the statement from Hans quoted above).
Instead, I'll take a different tack: the people at Microsoft, caught up in their way of thinking, will naturally activate these AI functions and Recall – after all, they are very proud to use their own products. Customer data is diligently processed on these systems, and confidential information may be shared there. In recent months we have had major cyber incidents such as Storm-0558 and Midnight Blizzard (see Microsoft confirms: Russian spies (Midnight Blizzard) stole source code while accessing systems), and it is also suspected that customer systems were compromised via Microsoft emails containing sensitive access data. Now read through the above implications again: Pandora's box has just been opened wide by Microsoft.
My recommendation to Microsoft is: pull the feature, redevelop it, or drop it entirely. And if Microsoft is not in a position to do this, the authorities should intervene rigorously. Personally, I would go even further: Microsoft should be broken up and placed under state supervision, because the company and its executives have just proven that they are not capable of implementing even the most minimal cyber security measures, let alone developing truly secure products.
Similar articles:
Microsoft's AI PC with Copilot – some thoughts – Part 1
Microsofts Copilot+PC, a privacy and security nightmare – Part 2
Windows 11: Available as a release preview; Copilot coming as a store app, IoT LTSC announced