Digital assistants have come a long way in a short time. Since Apple's clunky early iteration of Siri debuted in 2011, it has taken only a few years for multiple competitors (including Cortana and Google Now) to enter common use, with many more on the way.
Their growing sophistication makes our lives easier, going beyond simple search to help us with everyday tasks, from working on files to managing financial accounts.
With that kind of reach, anyone who can control your device's digital assistant could potentially gain access to your life. Thankfully, most devices are equipped with a locking mechanism to prevent this, but what if a hacker could tap into your device and its digital assistant remotely?
The Latest Report
According to a recent report by researchers José Lopes Esteves and Chaouki Kasmi, this kind of remote digital-assistant access is a real possibility. Using simple open-source software in combination with a basic antenna and amplifier, the pair was able to issue a series of basic commands to both Android and iOS devices (running Google Now and Siri, respectively) and have them executed. The software in question, GNU Radio, is free, making all the tools necessary for such an attack both cheap and easy to obtain.
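The researchers' actual GNU Radio configuration isn't reproduced here, but the underlying idea, modulating a voice-command waveform onto a radio carrier so a nearby cable can pick it up, can be sketched in a few lines of plain Python. Every number and name below is an illustrative assumption, not a value from the report:

```python
import math

# Illustrative sketch only: amplitude-modulating an audio-rate signal
# (standing in for a recorded voice command) onto a radio-frequency
# carrier. The published attack used GNU Radio with real radio
# hardware; the frequencies and modulation depth here are arbitrary.

SAMPLE_RATE = 1_000_000  # samples per second (hypothetical)
CARRIER_HZ = 103_000     # carrier frequency (hypothetical)
AUDIO_HZ = 440           # simple tone standing in for speech
DURATION_S = 0.01        # length of the generated burst, in seconds

def am_modulate(sample_rate, carrier_hz, audio_hz, duration_s, depth=0.5):
    """Return AM samples: carrier scaled by (1 + depth * audio)."""
    n = int(sample_rate * duration_s)
    samples = []
    for i in range(n):
        t = i / sample_rate
        audio = math.sin(2 * math.pi * audio_hz * t)
        carrier = math.sin(2 * math.pi * carrier_hz * t)
        samples.append((1 + depth * audio) * carrier)
    return samples

signal = am_modulate(SAMPLE_RATE, CARRIER_HZ, AUDIO_HZ, DURATION_S)
```

In a real attack chain, samples like these would be fed to a transmitter and amplifier rather than kept in memory; the sketch only shows why commodity software is enough to generate the waveform.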
Gaining access to a device’s digital assistant could prove fruitful in a number of ways—access to personal information, important accounts, and important files or emails are all possibilities. Take into consideration the fact that digital assistants are being featured in a more diverse range of devices—including upcoming self-driving cars—and nobody would blame you for being a bit worried.
The Limits of the Attack
Fortunately, there are a handful of limitations preventing this kind of attack from becoming commonplace. Despite the ease of obtaining and utilizing the materials necessary for execution, they can only be harnessed in very specific situations.
The device being attacked must have headphones with a microphone plugged in; the headphone cord acts as the antenna that picks up the attacker's signal, so without it, hackers can't reach the device remotely. Additionally, voice controls must be enabled at the login or lock screen for the attack to succeed, meaning one simple change in your settings menu could theoretically fend it off.
Other limitations include the attack's range: the researchers needed to be within 6.5 feet of the target device, though they admitted a small boost in battery power was all that stood between them and a greater distance, possibly up to 16 feet. Newer security features, such as the user-specific voice recognition introduced with the iPhone 6s, could also feasibly prevent an attack from occurring.
In short, while the vulnerability is real and potentially destructive, it's unlikely that hackers would try to take advantage of it. The conditions for a successful attack are too specific and too rare to make it a scalable, practical means of turning a profit.
Why We’re Not Completely Out of the Woods
Still, these researchers didn’t fully exhaust their options; their goal was to prove that a vulnerability was present, not to find the most practical way of exploiting it. Exceptional hackers with a motivation to do actual damage could theoretically find new ways around some of the limitations of the method, such as using bigger amplifiers to extend the range of the attack or finding a way to bypass the necessity of a plugged-in microphone. With bigger and more sophisticated devices on the way, it’s important that device producers take this vulnerability seriously and include more ways of preventing a possible attack, such as multiple layers of personal identification.
In the meantime, don't worry too much about your mobile device being controlled from afar. Unplug headphones with a built-in microphone when you aren't using them, disable voice controls at the lock screen, and you'll be protected from this generation of the attack. Also, remember that this was merely a research experiment; this type of attack has yet to be seen in the wild.