When Chairman Ajit Pai got the Federal Communications Commission to repeal network neutrality in 2017, he overturned rules adopted in 2015 through a process that elicited a record number of public comments – over 4 million, more than any other regulatory proceeding in US history. The overwhelming majority of them supported net neutrality rules to prevent ISPs from blocking, slowing down, or speeding up access to websites, or charging websites to reach users at faster speeds. But despite public support for the relatively new rules, Pai's 2017 bid to undo net neutrality ultimately succeeded. And, remarkably, it broke the record for public participation in a rulemaking once again – but this time the process appeared to be marred by fraud. A new BuzzFeed News report makes it look even sketchier. It underlines how vulnerable the federal government's comment process is – and what is at risk if it is not fixed.
When a federal regulatory agency wants to change its rules or create a new policy, it usually has to go through a "notice and comment" process in which the public is invited to weigh in on the impact of the proposed change. Thousands of rules are proposed each year, and they typically receive anywhere from a few dozen to a few thousand comments. It is very, very rare for a notice-and-comment process to attract millions of responses – much less the 22 million comments that the 2017 effort to undo the net neutrality rules drew.
As the comments poured in through 2017, it quickly became clear that something was wrong. Just over a week after the comment period opened, John Oliver dedicated a 20-minute segment of his HBO show to the issue, asking viewers to make their voices heard to try to prevent, as he put it, "cable company fuckery." Comments flooded the FCC, and the agency's electronic filing system went down – as an investigation by the FCC's inspector general later determined when it looked into the incident. Yet when the system initially crashed, Pai incorrectly told Congress that it was due to a malicious cyberattack. In late May, Vice found that comments in favor of repeal were posted under the names of dead people. Further investigations found that comments in favor of abolishing net neutrality also came from stolen identities – including those of lawmakers, such as Oregon Sen. Jeff Merkley and Arizona Rep. Ruben Gallego, who had fake anti-net-neutrality comments filed in their names even though both support net neutrality. Bots submitted comments. Hundreds of thousands of comments came in from Russian email addresses. Despite these irregularities, an analysis found that more than 99 percent of the organic comments – those the evidence suggests came from actual people rather than bots – favored maintaining net neutrality.
Now, according to the new BuzzFeed News investigation, it appears that more than a million of the suspicious comments submitted to the FCC were the product of shady companies working on political campaigns, using people's information stolen in a data breach.
With this many snafus, it is clear that the online comment system at the FCC, and very likely at other government agencies, is easy to exploit and possibly broken to the point that it does more harm than good. While this may seem like a wonky procedural issue, it is a major problem. When it comes to crafting new federal policies, the notice-and-comment process can be the only direct way a member of the public can have a voice in federal decisions. Regulators are legally required to consider the opinions Americans share. And although policymakers cannot read every comment when millions are posted, comments can be aggregated to help reshape policy proposals.

Take what happened in 2014, when the FCC first proposed new rules on net neutrality. At the time, under the Obama-era FCC, the original proposal would have allowed Internet service providers to charge websites to reach users at faster speeds, while preventing any site blocking. This would have created a two-tier Internet. But the public pushed back through the comment process. After enormous pressure, the FCC rewrote the rules to prevent any form of paid prioritization – and that version of the rules was finally adopted at the end of the year. In 2004, a nonprofit I used to work for, the Prometheus Radio Project, sued the FCC for failing to assess the public opinion demonstrated through its comment process as it made new media ownership rules – and won. The agency was ultimately required to go back and hold six public hearings across the country to better understand the impact of its rules on different communities.
It is not surprising that the FCC comment process has become a mess. There is currently no CAPTCHA asking you to prove you are a person when you post a comment. It is incredibly easy to write a web application that submits comments automatically. Pai even refused to delete fraudulent net neutrality comments when identity theft victims asked him to. And despite reports from more than a year ago that the agency planned to overhaul its comment system, it is not clear that anything has actually been done. On Thursday, FCC Commissioner Jessica Rosenworcel called the FCC's "continued silence" on the broken comment system "disgraceful."
This is not just a problem at the FCC. The Department of Labor has received fake comments, as have the Consumer Financial Protection Bureau, the Federal Energy Regulatory Commission, and others. A Wall Street Journal investigation found thousands of fraudulent comments on agency websites. The problem is endemic, and it is not being addressed.
The answer to this mess is not to end the comment process. We need ways to weigh in on the policies that affect our lives beyond Election Day, especially when it comes to decisions made by unelected officials at regulatory agencies. The answer is to fix the broken system – quickly. That requires understanding how fake comments are submitted, and working with technologists, consumer advocates, and other stakeholders to tease out the ways the system can be abused and to build it better. Perhaps a new system might require commenters to use two-factor authentication. Or maybe agencies should build detection systems to weed out duplicates. Whenever the public is invited to participate online, there will always be bad actors trying to muck it up. But democracy is messy. And it takes continuous work to protect it.
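To make the duplicate-detection idea concrete, here is a minimal sketch – my own illustration, not any agency's actual system – of how comment submissions could be fingerprinted so that mass-produced copies of the same form letter get flagged for review. The normalization rules and the `threshold` value are assumptions chosen for the example.

```python
from collections import Counter
import hashlib
import re

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so that
    trivially tweaked copies of a form letter hash to the same value."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", "", text)
    return re.sub(r"\s+", " ", text).strip()

def fingerprint(text: str) -> str:
    """Hash the normalized text to a stable identifier."""
    return hashlib.sha256(normalize(text).encode()).hexdigest()

def flag_duplicates(comments, threshold=3):
    """Return the fingerprints that appear at least `threshold` times."""
    counts = Counter(fingerprint(c) for c in comments)
    return {fp for fp, n in counts.items() if n >= threshold}

comments = [
    "I support net neutrality.",
    "I SUPPORT net neutrality!!",
    "i support   net neutrality",
    "Please repeal Title II.",
]
dupes = flag_duplicates(comments, threshold=3)
print(len(dupes))  # → 1: the three near-identical comments share one fingerprint
```

A real system would need fuzzier matching (reworded form letters defeat exact hashing), and flagged comments should be reviewed rather than silently discarded, since genuinely popular form letters are a legitimate part of public campaigns.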