Voice assistants could be fooled by commands you can't even hear
- by Jake Bell
- in Research
- — May 11, 2018
The commands, undetectable to the human ear, could prove troublesome in the wrong hands, according to The New York Times.
The New York Times reports that researchers in China and the US have discovered a way to surreptitiously activate and command voice assistants by broadcasting instructions that are inaudible to the human ear.
This attack required the perpetrator to be within whispering distance of the device, but the researchers warned that attacks like it could be amplified and carried out from much farther away.
That warning was borne out in April, when researchers at the University of Illinois at Urbana-Champaign demonstrated ultrasound attacks from 25 feet away.
Researchers from the University of California, Berkeley, want you to know that these devices might also be vulnerable to attacks you'll never hear coming.
They were even able to hide the command, "O.K. Google, browse to evil.com" in a recording of the spoken phrase, "Without the data set, the article is useless".
This year, another group of researchers in China and the US, from the Chinese Academy of Sciences and other institutions, demonstrated that they could control voice-activated devices with commands embedded in songs that can be broadcast over the radio or played on services like YouTube.
"We wanted to see if we could make it even more stealthy", said UC Berkeley fifth-year computer security Ph.D. student Nicholas Carlini, one of the authors of the research that has been published online.
Someone else, in other words, might be secretly talking to your devices too. So what are the makers of these voice platforms doing to protect people? None would say, for instance, whether their voice platforms can distinguish between different audio frequencies and block ultrasonic commands above 20 kHz.
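That 20 kHz question has a concrete flavor: a platform that samples its microphone fast enough could measure how much of a captured clip's energy sits above the audible band and either discard the clip or low-pass filter it before handing it to the recognizer. The sketch below is one illustration of that kind of check, not anything the vendors have confirmed they do; the sample rate, function names, and threshold are assumptions made for the example.

```python
# Hypothetical ultrasonic-content check, assuming a 96 kHz capture pipeline
# (anything below ~40 kHz sampling cannot even represent content above 20 kHz).
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 96_000   # assumed capture rate
CUTOFF_HZ = 20_000     # top of the human audible band

def ultrasonic_energy_ratio(wav: np.ndarray, sr: int = SAMPLE_RATE) -> float:
    """Fraction of the clip's spectral energy above the audible cutoff."""
    spectrum = np.abs(np.fft.rfft(wav)) ** 2
    freqs = np.fft.rfftfreq(len(wav), d=1.0 / sr)
    return spectrum[freqs > CUTOFF_HZ].sum() / (spectrum.sum() + 1e-12)

def strip_ultrasonics(wav: np.ndarray, sr: int = SAMPLE_RATE) -> np.ndarray:
    """Low-pass filter so nothing above 20 kHz reaches the speech recognizer."""
    sos = butter(8, CUTOFF_HZ, btype="low", fs=sr, output="sos")
    return sosfilt(sos, wav)

# Example: a 25 kHz tone (inaudible to people) riding on quiet noise looks suspicious.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
clip = 0.05 * np.random.randn(SAMPLE_RATE) + 0.5 * np.sin(2 * np.pi * 25_000 * t)
if ultrasonic_energy_ratio(clip) > 0.1:   # hypothetical policy threshold
    clip = strip_ultrasonics(clip)        # or simply refuse the command
```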
The idea that voice assistants can be exploited or fooled is by no means new, and many stories have surfaced revealing potential and hypothetical exploits involving a typical at-home assistant device. Despite a heavy focus on artificial intelligence and voice at its yearly I/O developer conference this week, Google's keynote presentation hardly mentioned security at all.
The company has pointed to Voice Match as one safeguard: "For example, users can enable Voice Match, which is created to prevent the Assistant from responding to requests relating to actions such as shopping, accessing personal information, and similarly sensitive actions unless the device recognizes the user's voice." And what about users who don't have Voice Match enabled in the first place?
By demonstrating what's possible with this method, Carlini hopes to encourage companies to secure their products and services so users are protected from inaudible attacks.