MACHINE LEARNING – Adversarial audio can be used to attack voice assistants with hidden commands

In yesterday’s New York Times, “Alexa and Siri Can Hear This Hidden Command. You Can’t”:

Over the last two years, researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assistant. Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to unlock doors, wire money or buy stuff online — simply with music playing over the radio.

A group of students from the University of California, Berkeley, and Georgetown University showed in 2016 that they could hide commands in white noise played over loudspeakers and through YouTube videos to get smart devices to turn on airplane mode or open a website.

This month, some of those Berkeley researchers published a research paper that went further, saying they could embed commands directly into recordings of music or spoken text. So while a human listener hears someone talking or an orchestra playing, Amazon’s Echo speaker might hear an instruction to add something to your shopping list.

You can also hear samples of adversarial audio and read the actual paper.
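At a high level, attacks like this add a small, carefully optimized perturbation to an audio clip so that a speech-to-text model decodes a chosen phrase while a human listener still hears only the original music or speech. The snippet below is a minimal, generic sketch of that idea using PyTorch and a CTC loss; the `asr_model`, the target-phrase encoding, and the hyperparameters are placeholders for illustration, not the actual setup from the Berkeley paper.

```python
# Minimal sketch of a white-box adversarial-audio attack.
# Assumes a differentiable ASR model `asr_model` that maps a raw waveform to
# per-frame character log-probabilities of shape (time, batch=1, num_chars).
# Model, alphabet, and hyperparameters are placeholders, not the paper's setup.
import torch

def craft_adversarial_audio(asr_model, waveform, target_ids, target_len,
                            epsilon=0.01, steps=1000, lr=1e-3):
    """Optimize a perturbation `delta` so that asr_model(waveform + delta)
    decodes to the hidden target phrase, while keeping |delta| <= epsilon."""
    delta = torch.zeros_like(waveform, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    ctc_loss = torch.nn.CTCLoss(blank=0)

    for _ in range(steps):
        optimizer.zero_grad()
        # Log-probabilities over characters for the perturbed audio.
        log_probs = asr_model(waveform + delta)
        input_len = torch.tensor([log_probs.shape[0]])
        # The CTC loss pushes the transcription toward the target phrase.
        loss = ctc_loss(log_probs, target_ids, input_len, target_len)
        loss.backward()
        optimizer.step()
        # Keep the perturbation small so a human listener does not notice it.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)

    return (waveform + delta).detach()
```

The published attack goes well beyond this: it targets a complete speech-to-text system and adds refinements to keep the perturbation inaudible and robust. The sketch only shows the core loop, which is differentiating the recognizer’s loss for a hidden target phrase with respect to the waveform and bounding the change so it stays below a listener’s notice.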

Of course, the more ubiquitous and capable networked computation becomes, the harder it gets for people to reason about attack vectors like this one. While these are vulnerabilities in the affected systems rather than designed features, the technique is reminiscent of the news a while back about smartphone apps picking up “audio beacons” embedded in TV output to track users’ exposure to advertising.


