Amazon’s Alexa told a child to touch a penny to the exposed prongs of a phone charger plugged partway into a wall outlet, according to a parent who posted screenshots of the interaction from her Alexa activity history (via BleepingComputer). The device apparently pulled the idea for the challenge from an article that described it as dangerous, referencing news reports about a challenge allegedly trending on TikTok.
According to Kristin Livdahl’s screenshot, the Echo responded to the prompt “tell me a challenge to do” with: “Here’s something I found on the web. According to ourcommunitynow.com: The challenge is simple: plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.” In a statement to the BBC, Amazon said: “As soon as we became aware of this error, we took swift action to fix it.” Livdahl tweeted yesterday that asking for a challenge no longer produced the response.
Amazon is not the only company that has run into trouble trying to parse the web for content. In October, a user reported that Google showed potentially dangerous advice in one of its highlighted snippets for the search “had a seizure now what” — the information it surfaced came from a section of a website describing what not to do when someone is having a seizure. At the time, The Verge confirmed the user’s report, but the issue appears to have been fixed based on tests we ran today (no snippet appears when Googling “had a seizure now what”).
However, users have reported other similar issues, including one who said Google returned results for orthostatic hypotension when they searched for orthostatic hypertension, and another who shared a screenshot of Google surfacing terrible advice on how to comfort someone who is grieving.
We have also seen warnings about dangerous behavior amplified until the problem became bigger than it originally was. Earlier this month, some US school districts closed after viral reports of shooting threats on TikTok; it turned out the social media firestorm was overwhelmingly driven by people talking about the threats, far more than by any threats that may have actually existed. In Alexa’s case, an algorithm picked out the descriptive part of a warning and repeated it without the original context. While the parent was there to intervene immediately, it’s easy to imagine a situation where no one is, or where the answer Alexa shares is not so obviously dangerous.
Livdahl tweeted that she used the opportunity to go over “internet safety and not trusting things you read without research and verification” with her child.
Amazon did not immediately respond to The Verge’s request for comment.