Alexa, Cortana, Google, Siri user? Watch out for these inaudible command attacks


The attack works against a variety of hardware running the Alexa, Cortana, Google, and Siri assistants by transforming human voice commands into an ultrasonic frequency above 20kHz.

Chris Monroe/CNET

Researchers have devised a method to give potentially harmful instructions to the most popular voice assistants using voice commands that can't be heard by humans.

The researchers from Zhejiang University have validated that their so-called DolphinAttack works against a variety of hardware running the Siri, Google, Cortana, and Alexa assistants.

They have also developed proof-of-concept attacks to illustrate how an attacker could exploit inaudible voice commands, including silently instructing Siri to make a FaceTime call on an iPhone, telling Google to switch the phone into airplane mode, and even manipulating the navigation system in an Audi.

A portable attack method they devised is also extremely cheap, requiring only an amplifier, an ultrasonic transducer, and a battery that together cost just $3. Potentially even worse is a remote attack they describe that uses the target's own phone to attack a Google Home or an Echo.

"An adversary can upload an audio or video clip in which the voice commands are embedded in a website, eg, YouTube. When the audio or video is played by the victims' devices, the surrounding voice-controllable systems such as Google Home assistant, Alexa, and mobile phones may be triggered unconsciously," they write.

The attack works by shifting human voice commands up to ultrasonic frequencies above 20kHz, roughly the upper limit of human hearing, and then relaying the inaudible instructions to the targeted voice-controlled system.
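The underlying paper describes amplitude-modulating the recorded command onto an ultrasonic carrier; the non-linear response of the target's microphone then recovers the audible command before speech recognition runs. The sketch below illustrates only the modulation step, assuming NumPy/SciPy; the carrier frequency, output sample rate, and file names are hypothetical, not the researchers' actual tooling.

```python
import numpy as np
from scipy.io import wavfile

# Illustrative sketch only: shift a recorded voice command above 20 kHz by
# amplitude-modulating it onto an ultrasonic carrier. Carrier frequency,
# output sample rate, and file names are hypothetical.
OUTPUT_RATE = 192_000   # high enough to represent the ultrasonic sidebands
CARRIER_HZ = 30_000     # well above the limit of human hearing

def modulate_command(command_wav: str, output_wav: str) -> None:
    rate, voice = wavfile.read(command_wav)
    if voice.ndim > 1:                        # keep a single channel
        voice = voice[:, 0]
    voice = voice.astype(np.float64)
    peak = np.max(np.abs(voice))
    voice = voice / peak if peak else voice   # normalise to [-1, 1]

    # Resample the baseband command to the ultrasonic output rate.
    duration = len(voice) / rate
    t = np.arange(int(duration * OUTPUT_RATE)) / OUTPUT_RATE
    voice_up = np.interp(t, np.arange(len(voice)) / rate, voice)

    # Classic amplitude modulation: the command rides on the carrier envelope.
    modulated = (1.0 + voice_up) * np.cos(2 * np.pi * CARRIER_HZ * t)
    modulated /= np.max(np.abs(modulated))    # keep samples in [-1, 1]

    wavfile.write(output_wav, OUTPUT_RATE, (modulated * 32767).astype(np.int16))

if __name__ == "__main__":
    modulate_command("airplane_mode_command.wav", "ultrasonic_payload.wav")
```

Playing such a file back still requires a speaker or transducer capable of reproducing ultrasonic frequencies, which is where the cheap amplifier-and-transducer rig described above comes in.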

Researchers say they've been able to inject covert voice commands into seven speech-recognition systems.

Source: Zhejiang University/YouTube

Other devious things an attacker could do with inaudible voice commands include forcing a device to open a malicious website, activating the video camera to spy on its owner, and instructing the device to spam contacts.

The DolphinAttack doesn't exploit a vulnerability in any particular voice-controlled system, but rather the audio hardware each system relies on. The attack worked against an iPhone, iPad, MacBook, Apple Watch, Amazon Echo, Samsung Galaxy S6 Edge, and a Lenovo ThinkPad running Cortana. The only device that was resistant to the attack was the iPhone 6 Plus.

One limitation of the equipment they experimented with is that an attacker would need to be within two meters (6.5ft) of the target voice-controlled hardware. However, they argue the distance could be extended with better equipment.

They also found it was more difficult to instruct a system to call a specific phone number or open a specific website than to issue more general commands such as "Turn on airplane mode".

They also devised separate recognition and activation attacks to account for each system's unique wake-up command, such as "Hey Siri".

To defend against this attack, the researchers suggest microphones shouldn't be allowed to sense sounds at frequencies higher than 20kHz. They note the microphone in the iPhone 6 Plus handled this well.
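As a rough illustration of the cut-off they have in mind, the snippet below low-pass filters captured audio at 20kHz, assuming SciPy; the filter order and function names are arbitrary choices for the example. In practice the suppression would need to happen at the microphone itself, since, per the paper, the command is recovered from the ultrasonic carrier by the microphone circuitry before any software sees it.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Illustrative defence sketch: attenuate anything above the range of human
# hearing before audio reaches the speech-recognition pipeline.
CUTOFF_HZ = 20_000

def suppress_ultrasonic(audio: np.ndarray, sample_rate: int) -> np.ndarray:
    """Low-pass filter a mono signal so ultrasonic content is attenuated."""
    if sample_rate <= 2 * CUTOFF_HZ:
        return audio  # nothing above 20 kHz can be represented at this rate
    sos = butter(8, CUTOFF_HZ, btype="low", fs=sample_rate, output="sos")
    return sosfiltfilt(sos, audio)
```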



Source: ZDNet http://ift.tt/2wJPkx7