
The Terrible Truth About Amazon Alexa and Privacy



Illustration: Gizmodo / Amazon

This week, I read through a history of everything I said to Alexa, and it felt a little bit like reading an old diary. Until I remembered that everything I told Alexa had been saved on an Amazon server and had possibly been read by an Amazon employee. This is all to make Alexa better, the company keeps saying. However, many people don't realize that humans may be reviewing their seemingly private voice commands. Alexa, these people say, is a spy hiding in a wiretapping device.

The debate over whether Alexa or any other voice assistant is spying on us is years old at this point, and it's not going away. Privacy groups have filed a complaint with the Federal Trade Commission (FTC) alleging violations of the Federal Wiretap Act. Journalists have investigated the dangers of always-on microphones and artificially intelligent voice assistants. Skeptical tech bloggers like me have argued that these devices are more powerful than people realize and loaded with privacy pitfalls. Recent news stories about how Amazon employees review certain Alexa commands suggest the situation is worse than we thought.

It's starting to feel like Alexa and other voice assistants are destined to spy on us because that's how the system was designed to work. These systems rely on machine learning and artificial intelligence to improve themselves over time. The technology is also currently prone to error, and even if it were perfect, the data-hungry companies that built it are constantly thinking of new ways to exploit users for profit. And where imperfect technology and powerful companies collide, the government tends to struggle so much with understanding what is going on that regulation seems like an impossible solution.

The situation is not completely dire. This technology could be really cool, if we pay closer attention to what's happening.

Never-ending Errors

One fundamental problem with Alexa and other voice assistants is that the technology is prone to fail. Devices like the Echo come equipped with always-on microphones that are only supposed to record when you want them to listen. While some devices require the push of a physical button to activate Alexa, many are designed to start recording after you say the wake word. Anyone who has spent any time using Alexa knows that it doesn't always work like this. Sometimes the software mistakes random noise for the wake word and starts recording.

The extent to which false positives are a problem became glaringly evident the moment I started reading through my history of Alexa commands on Amazon's website. The entries are dull: “Hey Alexa;” “Show me an omelet recipe;” “What's up?” But sprinkled amongst the mundane drivel was also a series of messages that said, “Audio was not intended for Alexa.” Every time I saw one, I read it twice and said it aloud in my head: “Audio was not intended for Alexa.” These are the things Alexa heard that it should not have heard, commands that were sent to Amazon's servers because the device mistakenly thought it heard the wake word and recorded audio when the user was not giving a command.

At face value, voice assistants picking up stray audio is an inevitable defect in the technology. Behind the very sophisticated computer program that can understand almost anything you say is a very simple one that has been trained to recognize a wake word and then send whatever commands come after it to the smarter computer. The problem is that the simple computer often does not work right, and people do not always know that there is a recording device in the room. That's how we get Echo-based nightmares like the Oregon couple who inadvertently sent a recording of an entire conversation to an acquaintance. Amazon itself has been working on improvements to lower the error rate with wake words, but it's hard to imagine that the system will ever be flawless.

“That's the scary thing: there is a microphone in your house, and you do not have final control over when it gets activated,” Dr. Jeremy Gillula, tech projects director at the Electronic Frontier Foundation (EFF), told me. “From my perspective, that's problematic from a privacy point of view.”

This sort of glitch might be chalked up to bad luck, although it's more common than most people would like. What's worse than the glitches is the very intentional behind-the-scenes workflow that reveals users' interactions with voice assistants to strangers. Bloomberg recently reported that a team of Amazon employees has access to Alexa users' geographic coordinates and that this data was collected to improve the voice assistant's abilities. This revelation came just a couple of weeks after Bloomberg also reported that thousands of people employed by Amazon around the world analyze users' Alexa commands to train the software. These employees sometimes hear compromising recordings, and in some cases, they make fun of what people say.

Amazon pushed back hard against these reports. A company spokesperson told me that Amazon only annotates “an extremely small number of interactions from a random set of customers in order to improve the customer experience.” These recordings are kept in a protected system that uses multi-factor authentication so that only “a limited number” of carefully monitored employees can gain access. (Bloomberg suggests that the team numbers in the thousands.)

But for Alexa and other artificially intelligent voice assistants to work, some human review is necessary. This training could prevent future errors and lead to better features. Amazon isn't the only company using humans to review voice commands, either. Google and Apple also employ teams of people to review what users say to their voice assistants, both to train the software to understand people better and to develop new features. Sure, the human element of these apparently computer-based services is creepy, but it is also an essential part of how these technologies are developed.

“In the end, for really hard cases, you have to have a human tell you what was going on,” Dr. Alex Rudnicky, a computer scientist at Carnegie Mellon University, said in an interview. Rudnicky has been developing speech recognition software since the 1980s and has led teams competing in the Alexa Prize, an Amazon-sponsored contest for conversational artificial intelligence. While he says humans are necessary for improving natural language processing, he believes it is incredibly unlikely for a voice command to get traced back to one individual.

“Once you're one out of 10 million,” Rudnicky said, “it's kind of hard to argue that someone's going to find it and trace it back to you and find out something about you that you don't want them to know.”

This doesn't make the idea of a stranger reading your daily thoughts or knowing your location history any less creepy, however. It might be uncommon for a voice assistant to record me accidentally, but the systems still don't seem smart enough to recognize the wake word with 100 percent accuracy. The fact that Amazon catalogs all of my Alexa recordings, accidental or otherwise, makes me feel terrible.

A Problem Nobody Wants to Fix

In recent conversations, half a dozen technology and privacy experts told me that we need stronger privacy laws to deal with these problems with Alexa. The handling of your personal data is based on terms that Amazon itself sets, and the United States lacks strong federal privacy legislation, like Europe's General Data Protection Regulation (GDPR). In other words, the companies that are building voice assistants are more or less making the rules.

So I find myself circling back to a few questions. Who's looking after the users? Why can't I opt in to letting Amazon record my commands, instead of wading through privacy settings looking for ways to stop my data from being sent to Amazon? And why are my options for opting out so limited?

In the Alexa privacy settings, you can opt out of letting Amazon use your recordings to develop new features and improve transcriptions.

Despite the toggles being off, Amazon is still retaining my Alexa recordings. Screenshot: Gizmodo

Settings like these put the onus on the user to protect their own privacy. If that's how it has to be, why can't these companies at least make my interactions with voice assistants completely anonymous?

Apple seems to be trying to do this. Whenever you talk to Siri, those commands are encrypted before they are sent to the company with a random Siri identifier attached. Your Siri identifier is not associated with your Apple ID, which is why you can't open your privacy settings on an iPhone and see what you've been saying to Siri. Not all Siri functionality requires your device to send information to Apple's servers, either, so that cuts down on exposure. Apple does use recordings of Siri commands to train the software, because you have to train artificially intelligent software to make it better. The fact that Apple doesn't associate this data with individual users might explain why so many people think Siri is terrible. Then again, Siri might be your best bet for some semblance of privacy in a voice assistant.

This is the point in the debate when Tim Cook would like to remind you that Apple is not a data company. Companies like Google and Amazon, he'd say, turn your data into products that they can sell to advertisers or use to sell you more stuff. This is the same argument we saw from the Apple CEO when he wrote a Time Magazine column earlier this year and announced plans to push for federal privacy legislation.

The idea is starting to get some traction. In January, the Government Accountability Office released a report calling for Congress to pass comprehensive internet privacy legislation. This report joined a chorus of privacy advocates who have long argued that the United States needs its own version of GDPR. In March, the Senate Judiciary Committee heard testimony from several people who pushed for federal privacy legislation. It's far from clear whether or not Congress will act on this idea, however.

“Speech technology has gotten so good that it is important to be thinking about privacy,” said Dr. Mari Ostendorf, an electrical engineering professor and speech technology expert at the University of Washington. “And I think that probably companies are thinking more about it than the U.S. government is.”

One would hope that Amazon is at least rethinking its approach to privacy and voice assistants. Because right now, it seems like the general public is just beginning to unravel the myriad ways that devices like the Echo are recording our lives without our permission and sharing our personal data with strangers. The most recent controversy about Alexa merely scratches the surface of how a world full of always-on microphones is an utter privacy nightmare.

The problem is that companies like these with data-driven business models have every incentive to collect as much information about their users as possible. Every time you use Alexa, for example, Amazon gets a sharper view of your interests and behavior.

“If a customer uses Alexa to make a purchase or interact with other Amazon services, such as Amazon Music,” an Amazon spokesperson said, “we could use the fact that the customer took that action in the same way we would if the customer took that action through our website or one of our apps – for instance, to provide product recommendations.”

There's evidence these kinds of recommendations could become more sophisticated in the future. Amazon has patented technology that can interpret your emotions based on the tone and volume of your voice. According to the patent, this hypothetical version of an Alexa-like technology could detect whether you're happy or sad and deliver highly targeted audio content, such as audio advertisements or promotions. One could argue that the only thing holding Amazon back from releasing an ad-supported Alexa is the potential blowback from the Echo-owning public. The government probably isn't going to stop it.

The Frightening Future

A future without more oversight could get very much Philip K. Dickian, very quickly. I recently spoke with Dr. Norman Sadeh, a computer science professor at Carnegie Mellon, who painted a grim picture of what a future without better privacy regulation could look like.

“At the end of the day, all of these speakers connect back to a single entity,” Sadeh explained. “So Amazon could use voice recognition to identify you, and as a result, it could potentially build extremely extensive information about who you are, what you do, what your habits are, all kinds of other attributes that you would not necessarily want to disclose to them.”

He suggests Amazon could make a business out of this, knowing who you are and what you like by the mere sound of your voice. And unlike the most dystopian notions of what facial recognition could enable, voice recognition could work without ever seeing you. It could work over phone lines. In a future where internet-connected microphones are present in an increasing number of rooms, a system like this could always be listening. The experts I talked to about this dystopian idea lamented its seemingly imminent arrival.

Such a system is still hypothetical, but if you think about it, all of the pieces are in place. There are millions of devices full of always-on microphones all over the country, in homes as well as public places. They're allowed to listen to and record what we say at certain times. These artificially intelligent machines are also prone to errors and will only get better at listening to us, sometimes with humans correcting their behavior. Without any government oversight, who knows how the system will evolve from here.

We didn't want a future like this, did we? Talking to your computer seemed like a really cool thing in the 90s, and it was definitely a significant part of the Jetsons' lifestyle. But so far, it seems like an unavoidable truth that Alexa and other voice assistants are bound to spy on us, whether we like it or not. In a way, the technology is designed so that this can't be avoided, and in a future without oversight, it will probably get worse.

Maybe it's foolish to think that Amazon and the other companies building voice assistants actually care about privacy. Maybe they're working on fixing the problems caused by error-prone tech, and maybe they're working on ways to alert people when devices like these are recording them, sometimes without users realizing it. Heck, maybe Congress is working on laws that would hold these companies accountable.

Ultimately, the future of voice-powered computers doesn't have to be so dystopian. Talking to our gadgets could change the way we interact with technology in the most profound ways, if everyone were on board with how it was being done. Right now, that doesn't seem to be the case. And, ironically, the fewer people who are willing to help develop tech like Alexa, the worse Alexa will be.
