British regulators on Sunday unveiled a landmark proposal to punish Facebook, Google and other tech giants that fail to stop the spread of malicious content on the web, marking a major new regulatory threat to an industry that has long dodged responsibility for what users say or share.
The aggressive new plan – prepared by the UK's leading consumer protection authorities and blessed by Prime Minister Theresa May – targets a wide range of web content, including the exploitation of children, fake news, terrorist activity and extreme violence. If approved by Parliament, the plan would give UK watchdogs unmatched powers to issue fines and other penalties if social media companies do not quickly remove the most egregious posts, images and videos from public view.
Top British officials said their plan would amount to "world-leading laws to make Britain the safest place in the world to be online." The document would allow top executives at large technology companies to be held personally responsible for failing to police their platforms. It also asks lawmakers to consider whether regulators should be able to order Internet service providers and others to restrict access to some of the most harmful content on the web.
Experts said the idea could potentially limit the reach of websites such as 8chan, an anonymous message board where graphic, violent content often thrives and which played an important role in spreading footage of last month's mosque attack in New Zealand.
"The Internet can be brilliant for connecting people all over the world – but for too long these companies have not done enough to protect users, especially children and young people, from harmful content," May said.
For Silicon Valley, the rules could amount to the most serious regulatory consequence the tech industry has faced anywhere in the world for failing to clean up a spate of disturbing content on the web. The industry's continued struggles were thrown into sharp relief last month, after videos of the deadly shooting in Christchurch, New Zealand, spread online despite increased investments by Facebook, Google and Twitter in human reviewers – and more powerful technological tools – to stop such posts from going viral.
The March shooting prompted Australia to adopt its own tough legislation, and it has emboldened others across Europe to consider similar new rules targeting the technology industry. The wave of global activity stands in stark contrast to the United States, where a decades-old federal law shields social media companies from being held accountable for content posted by their users. US lawmakers have also been reluctant to regulate online speech out of concern that doing so would run afoul of the First Amendment.
"The era of self-regulation for online companies is over," the UK's digital secretary, Jeremy Wright, said in a statement Sunday.
In response, Facebook highlighted its recent investments to better spot and remove malicious content, and added that the UK's proposal "should protect society from harm while also supporting innovation, the digital economy and freedom of speech." Twitter said it would work with the government to "strike an appropriate balance between keeping users safe and preserving the open, free nature of the internet." Google declined to comment.
The UK's new regulatory push reflects a deepening skepticism of Silicon Valley in the wake of a series of recent controversies, including Facebook's role in the country's 2016 referendum to leave the EU. British lawmakers learned after the vote that an organization created by Brexit supporters appeared to have links to Cambridge Analytica, a political consultancy that improperly accessed the Facebook data of 87 million users to help its clients better refine their political messages.
The revelation triggered a sweeping inquiry in Parliament, where lawmakers unsuccessfully sought testimony from Facebook CEO Mark Zuckerberg. Since then, many have called for strict new regulation of the social networking giant and its peers.
"There is an urgent need for this new oversight body to be established as soon as possible," said Damian Collins, the chairman of the Digital, Culture, Media and Sport Committee of the House of Commons. He said the panel would hold hearings on the government's proposals in the coming weeks.
For now, the UK's plan comes in the form of a white paper that will ultimately inform new legislation. Initial details shared on Sunday suggest lawmakers would set up a new, independent regulator tasked with ensuring that companies "take responsibility for the safety of their users." That watchdog – either a new agency or part of an existing one – would be funded by tech companies, potentially through a new tax.
The agency's mandate would be enormous, spanning the policing of large social media platforms such as Facebook as well as smaller websites' forums or comment sections. Much of its work would focus on content that can be harmful to children or that poses a risk to national security. But regulators could ultimately play a role in investigating a wider array of online harms, the UK said, including content "that may not be illegal but are nonetheless highly damaging to individuals or threaten our way of life in the UK." The document offers a litany of potential areas of concern, including hate speech, coercive behavior and underage exposure to legal content, such as dating apps intended for people over the age of 18.
Many details, such as how the regulator would define malicious content and how long companies would have to take it down, have yet to be hammered out. UK regulators also said they would require tech companies to be more transparent with users about the content they take down and why.
"Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online," said Sajid Javid, the UK home secretary. "That is why we are forcing these companies to clean up their act once and for all."