Drive to Regulate Speech On the Internet Is Gaining Steam

Last week the House of Commons' Digital, Culture, Media and Sport Committee released an interim report on fake news and disinformation. While couched in the usual do-good language, it illustrates the seriousness of the threat to free speech, most of which nowadays takes place on the internet. The report is worth reading in its entirety. Here are some selections, along with my comments.

There are many potential threats to our democracy and our values. One such threat arises from what has been coined ‘fake news’, created for profit or other gain, disseminated through state-sponsored programmes, or spread through the deliberate distortion of facts, by groups with a particular agenda, including the desire to affect political elections.

Such has been the impact of this agenda, the focus of our inquiry moved from understanding the phenomenon of ‘fake news’, distributed largely through social media, to issues concerning the very future of democracy.

It is common for liberals to assert that the future of our democracy (or, as here, Britain’s) is somehow threatened by the fact that misinformation can be posted to Facebook or Twitter, or elsewhere on the internet. I have yet to see any coherent explanation of why that is the case. Misinformation has been a common feature of our democracy for a long time.

Arguably, more invasive than obviously false information is the relentless targeting of hyper-partisan views, which play to the fears and prejudices of people, in order to influence their voting plans and their behaviour.

It is interesting to see how quickly the committee moves from false information to “hyper-partisan,” or “targeted” information. More:

In August 2014 Dr Kogan worked with SCL to provide data on individual voters to support US candidates being promoted by the John Bolton Super Pac in the mid-term elections in November of that year. Psychographic profiling was used to micro-target adverts at voters across five distinct personality groups. After the campaign, according to an SCL presentation document seen by the Committee, the company claimed that there was a 39% increase in awareness of the issues featured in the campaign’s advertising amongst those who had received targeted messages. In September 2014, SCL also signed a contract to work with the American Conservative advocacy organisation, ‘For America’. Again, they used behavioural micro-targeting to support their campaign messages ahead of the mid-term elections that year. SCL would later claim that the 1.5 million advertising impressions they generated through their campaign led to a 30% uplift in voter turnout, against the predicted turnout, for the targeted groups.

The instances of pernicious micro-targeting are of course on the right, but this is done by all sides and has been for a long time, e.g. through direct mail, which is now commonly targeted on a household-by-household basis. Why is such micro-targeting effective? Because it delivers information on the particular issues a given voter cares most about. Why, exactly, does that call for new regulations?

The freedom that exists on a platform like Facebook is seen as problematic, since Facebook doesn’t prioritize “good journalism” over other material:

What appears on individuals’ newsfeeds is there either by an algorithm, based on their behaviour and profile, or it is targeted at their demographic by paid promotion. Indeed, it is common for publishers to pay for their content to be posted so that they can reach a wider audience, due to the fact that Facebook, for example, does not recognise or seek to categorise good journalism or news over other material.

The Parliamentary committee has various ideas about how government can promote “good journalism”–which, of course, is journalism that promotes government.

We recommend that the Government uses the rules given to Ofcom under the Communications Act 2003 to set and enforce content standards for television and radio broadcasters, including rules relating to accuracy and impartiality, as a basis for setting standards for online content. We look forward to hearing Ofcom’s plans for greater regulation of social media this autumn. We plan to comment on these in our further Report.

Yes, the same rules that guarantee the “accuracy and impartiality” of the BBC should be applied to all online information sources! More:

Algorithms are being used to help address the challenges of misinformation. We heard evidence from Professor Kalina Bontcheva, who conceived and led the Pheme research project, which aims to create a system to automatically verify online rumours and thereby allow journalists, governments and others to check the veracity of stories on social media. Algorithms are also being developed to help to identify fake news. The fact-checking organisation, Full Fact, received funding from Google to develop an automated fact-checking tool for journalists. Facebook and Google have also altered their algorithms so that content identified as misinformation ranks lower. Many organisations are exploring ways in which content on the internet can be verified, kite-marked, and graded according to agreed definitions.

This wouldn’t sound so sinister if we believed that governments and various liberal groups and companies are trying to stamp out online reports of Martian landings in Ohio. But of course that isn’t their agenda. As I wrote here, when Facebook executives gave a press conference for members of the Television Critics Association, they were jeered and hissed because Facebook allows Fox News, the most-watched cable news network, to use its platform.

The Parliamentary committee recommends that the British government collaborate with “experts” to devise a system for rating web sites according to their “level of verification.”

The Government should support research into the methods by which misinformation and disinformation are created and spread across the internet: a core part of this is fact-checking. We recommend that the Government initiate a working group of experts to create a credible annotation of standards, so that people can see, at a glance, the level of verification of a site. This would help people to decide on the level of importance that they put on those sites.

“Hate speech” naturally rears its head. The committee speaks positively about the draconian German system:

In Germany, tech companies were asked to remove hate speech within 24 hours. This self-regulation did not work, so the German Government passed the Network Enforcement Act, commonly known as NetzDG, which became law in January 2018. This legislation forces tech companies to remove hate speech from their sites within 24 hours, and fines them 20 million Euros if it is not removed.

20 million Euros! It would be helpful if anyone had any idea what “hate speech” is.

Some say that the NetzDG regulation is a blunt instrument, which could be seen to tamper with free speech, and is specific to one country, when the extent of the content spans many countries. Monika Bickert, from Facebook, told us that “sometimes regulations can take us to a place—you have probably seen some of the commentary about the NetzDG law in Germany—where there will be broader societal concerns about content that we are removing and whether that line is in the right place”. The then Secretary of State was also wary of the German legislation because “when a regulator gets to the position where they are policing the publication of politicians then you are into tricky territory”. However, as a result of this law, one in six of Facebook’s moderators now works in Germany, which is practical evidence that legislation can work.

It is evidence of something, anyway. Theoretically, that can’t happen in the U.S. because “hate speech” is constitutionally protected here. But don’t think the liberals won’t try.

This Parliamentary report is an interim version, with more to come later this year. But it is easy to see which way the wind is blowing. “Mainstream” media outlets are deemed to be mainstream because of their support for the governing class and its favored policies, which broadly can be described as liberal. The internet–not just Facebook and Twitter, although they have come to play a huge role–is “unregulated.” Worse, it is home to lots of dissenting voices. (I say dissenting, even though such voices–ours, for example–probably represent the views of a majority of Americans on most issues.) This freedom poses a serious threat to the power of the governing class and its media toadies, and they aren’t taking it lying down.

The good news is that here in the U.S., we have robust legal protections for free speech. The bad news is that the companies that control discourse on the internet–Google, Facebook, Twitter and Apple–are all run by liberals who may prove happy to accede to pressure to stifle views that run contrary to the liberal orthodoxy of Silicon Valley.