Voice-activated digital assistants, like Amazon’s Echo and Google Home, are becoming a more prevalent, and accepted, part of our daily routine. And although today’s functionality is a bit limited, with the rise of artificial intelligence software, some think these conversational systems are bound to take over the world.
Despite the “creepiness” factor, especially when you introduce a digital mirror, more and more people seem to be adopting these types of conversational systems. While we can debate amongst ourselves whether or not this is good for our society as a whole, the question at the forefront of my mind is: Can you truly trust Alexa?
If You’re Using, They Can Listen
As you can likely glean from the header, I don’t think Alexa can really be trusted—at least not 100 percent—but it’s likely that most digital assistant users already know that. The majority of us probably assume our credit card numbers are floating around out there somewhere anyway, and banks aren’t exactly impenetrable these days.
And while there has undeniably been a cultural shift in what we deem “private,” the prospect of a device listening to, recording and potentially relaying your conversations still feels incredibly invasive. Call me a traditionalist, but what you say in the privacy of your own home should stay in the privacy of your own home.
Alexa Helps the Law, but Is That Okay?
The question of privacy within the confines of your own home is interesting when you consider an event like the well-covered murder case in Arkansas.
If you’re unfamiliar with it, Amazon ended up handing over recordings from an Echo found at a murder scene in Bentonville, Arkansas, to aid an investigation into the death of a man strangled in a hot tub.
This not only raises plenty of First Amendment concerns, many brought up by Amazon, but should have us pondering the role of digital assistants in general. Even though in this case the information provided by an Amazon Echo was intended to help solve a crime, think about the other side of the coin.
Not to sound pessimistic, but people do not always have the best intentions. And if the police start conducting, or even basing, investigations on what is recorded by digital assistants, things could get complicated.
For example, say someone intends to commit a crime on a particular evening. If they know roughly what time they will commit the crime, it’s not out of the realm of possibility that they could automate a recording of their voice to ask Alexa (or the equivalent) to perform commands while they’re in the midst of committing said crime. This would provide them with a built-in alibi if they were ever to end up in hot water.
Unsafe to Say, Unsafe to Listen?
The last example may never actually occur, but it doesn’t seem that far-fetched anymore. There is a lot of talk about watching what you say in front of your digital assistant, but we should also be careful about what these devices hear.
As the internet of things (IoT) grows at an incredible rate, security concerns for devices like digital assistants will grow with it. Whether we’re talking to a human or machine, we need to be mindful of what we say, and who we say it to.
All that being said, I leave you with a parting question: Do you feel safe talking to your Amazon Echo?