[German]German security researchers from Security Reasearch Labs were able to show how smart speakers such as Amazon Echo or Google Home can be used for monitoring and phising (determining passwords, etc.).
Smart speakers from Amazon and Google provide easy access to information about voice commands. It has become know, that devices such as Amazon Echo or Google Home transmit more or less everything that is spoken to the services of the providers – and that employees there may have to evaluate these conversations.
An app as a smart spy
But these smart speakers can be used as real spy using apps. The following tweet drew my attention to this topic.
— Ars Technica (@arstechnica) October 20, 2019
The article Smart Spies: Alexa and Google Home expose users to vishing and eavesdropping by the Berlin security researchers of SRLabs (Security Reasearch Labs) is publicly available.
The background: The performance of the intelligent loudspeakers can be enhanced by small apps (Skills for Alexa and Actions on Google Home) from third-party providers. The apps currently pose real privacy issues because they can be misused to listen (or spy) on users.
The approach of security researchers
Both Alexa Skills and Google Home Actions are activated by the user selecting the call name chosen by the application developer. For their demonstration of security issues, researchers Luise Frerichs and Fabian Bräunlein from SRLabs have each developed a horoscope app. In the Alexa Skills Horoscope app, the security researchers have chosen the phrase “Alexa, turn on my horoscopes.” to turn on the skills app. Users can then call up features (called intents) within the app using certain voice commands (e.g. “Tell me my horoscope for today”).
These voice commands can contain variable arguments, which are given by the user as slot values. The input slots are converted to text and sent to the application backend, which is often operated outside the control of Amazon or Google. Of course, there is also a command to stop the evaluation by skills or actions. In this case, speech analysis should no longer be possible.
Amazon and Google check the apps before releasing them (skill or action) and probably also test whether the stop statement works. The security researchers at SRLabs, however, were able to change the code after the check without it being checked again by Amazon or Google. This gave them control over the intelligent speakers because they controlled the apps used by the users. The command to stop the analysis of the voice input simply had no effect – the app could continue to listen in.
In the video above, one of the security researchers involved demonstrates the approach with the faked Horoscope Skills app. The video shows the speech information recognized by the app. It can be seen how the security researcher gives the stop command, but then continues speaking. The app continues to evaluate the voice commands. Using the apps, the security researchers can retrieve information such as passwords or other information spoken by the user.
A spoken password appears in the demo. This way, however, other sensitive information can be collected. Arstechnica has embedded an entire video series of researchers on the subject in this article. More information can be found in the SRLabs article.