As things stand, what you can and cannot say on the internet is largely a matter for national law, decided by national parliaments. This means that every nation in Europe currently has different laws and practices.
But the EU has quietly been moving to change this. Take last year’s Copyright Directive, which more or less demands the introduction of automated content filters on social-media platforms. And last month, it became clear that an impatient Brussels wants to turbocharge this process by bringing internet regulation to the EU level, where it can pull the necessary strings.
The EU Digital Services Act sounds innocent on the surface. It is ostensibly aimed (in Euro-speak) at enhancing the so-called Digital Single Market by harmonising national laws and removing competitive barriers. Member states have not yet been consulted or made aware of any specific proposals in the Act. But thanks to the leak of an internal briefing to the Digital Single Market steering group, obtained by the German digital-rights activists at Netzpolitik, we can see what Brussels has planned.
One of the EU’s key concerns, as the briefing makes clear, is the lack of EU-wide rules and regulations covering what people can see and say online. The fight against online hate speech, for example, is said to be ‘expensive and inefficient across the Single Market’. There are also no EU-wide rules on online advertising, nor does the EU have oversight of online services as a whole.
The prescription? EU regulation of the internet. EU law should cover the ‘entire stack of digital services’, from internet service providers (ISPs) and social media to search engines and cloud services. ‘Uniform rules for the removal of illegal content such as illegal hate speech’ need to be made binding across the EU, says the briefing. Online advertising, including political advertising, should come under EU control, too. And there must be a ‘dedicated regulatory structure to ensure oversight and enforcement of the rules’.
Currently, EU law has explicit safeguards against ‘general monitoring obligations’ – meaning member states are prohibited from asking ISPs and social-media platforms to automatically filter and monitor content for undesirable material. In a beautiful piece of EU doublespeak, this state of affairs should continue, but ‘specific provisions governing algorithms for automated filtering technologies – where these are used – should be considered, to provide the necessary transparency and accountability’. Put another way, automatic filtering should continue to be banned, but filtering of an automatic nature should be both required and extended. Clear?
These proposals are worrying for several reasons. For one thing, you can’t have rules for the compulsory removal of illegal hate speech unless you have rules defining hate speech. At present, there is healthy political argument about what hate speech is, how to balance free speech and offence, and indeed if there should be any prohibition on hate speech at all. Yet the logic of the EU proposal is to take this vital debate out of the national democratic process entirely, and instead entrust it to unelected EU technocrats.
Platforms will be issued with take-down notices for hosting hate speech. The EU also plans to regulate what it calls ‘harmful content’. It suggests that, due to the ever-changing nature of ‘harms’, EU-approved codes of conduct for ISPs might be more appropriate. While it is too early to predict the strictness of the codes or the heavy-handedness of the regulator, ominously, the briefing cites the UK’s extremely censorious Online Harms White Paper and France’s fake-news law with apparent approval.
Then there are the calls for EU-wide rules about political advertising. Although this is supposedly aimed at ‘micro-targeted disinformation campaigns’, this should not deceive anyone. Essentially, this is a demand for EU oversight of political speech. The omens aren’t hard to spot. Highlighting EU corruption or waste, backing Brexit or supporting a populist politician could easily be labelled as ‘misinformation’.
There is no doubt that new rules and regulations will have a chilling effect on online speech. ISPs, social-media sites and other platforms have businesses to run. Few will want to risk intervention from regulators. Still fewer will choose to defend the free-speech rights of individual users when an EU regulatory body, armed with possibly draconian sanctions, makes a takedown request. Companies will find it far simpler and safer to take down any material that is likely to draw complaints.
The Digital Services Act will allow the EU to set the acceptable parameters of ‘free speech’ online. ISPs and European websites will fall over themselves to avoid publishing anything that makes Brussels uncomfortable. Internet freedom is in serious danger.
Andrew Tettenborn is a professor of commercial law and a former Cambridge admissions officer.