By Anjum Shabbir*
The rise of disinformation in the digital age
There is no definition of ‘disinformation’ under EU law. It is variously and interchangeably described as ‘disinformation’, ‘online disinformation’, ‘misinformation’, ‘misleading information’, ‘false or fake news’, or even ‘propaganda’ in documents of the EU Institutions. Commissioner Věra Jourová, who is Vice-President for ‘Values and Transparency’, referred to this global and European problem yesterday as ‘disinformation, misinformation and digital hoaxes’.* Member States and private companies also have their own ways of referring to it, and the human and automated means of spreading disinformation are equally wide-ranging: advertising, algorithms, bots, trolls, deep fakes, re-sharing of information, social profiling, and more.
Emergence on the EU’s agenda
Although disinformation is a rather old problem, it only fully landed on the EU legislative agenda in 2015, owing to the explosive shape it took in digital form around that time. The EU predominantly saw it as a real threat to fair and democratic elections in the European Union, which is unsurprising when the triggers for it becoming a policy objective are considered: Russian disinformation campaigns targeting foreign elections, the Salisbury chemical attack, and Cambridge Analytica’s role in the UK’s Brexit referendum on leaving the EU. As a result it was taken very seriously by the EU, in particular before the 2019 European Parliament elections, but has not yet resulted in any binding legislative measure.
A brief rundown of the EU’s actions in this field in the years leading up to now reflects that response. In 2015, the European External Action Service established the East Strategic Communication Task Force under an Action Plan on Strategic Communication; a Joint Communication on Countering Hybrid Threats was published in April 2016; and a Communication on fake news and online misinformation followed in November 2017. Although a Recommendation was issued in February 2018, this was in the context of the European Parliament elections and made only brief mention of disinformation.
The precursors of the newer (but also non-legislative) measures that are emerging today are a Communication on tackling online disinformation of April 2018, a Code of Practice on Disinformation published in September 2018, and a Commission (and EEAS-supported) Action Plan against Disinformation of December 2018. 2019 saw the first ever European Media Literacy Week, the launch of the Rapid Alert System (an information-exchange mechanism), and the monitoring of reports to evaluate progress and track the extent to which commitments made under the Disinformation Code had been respected.
Disinformation in 2020
Disinformation has taken a different shape in 2020. Recent events include the exponential rise of disinformation narratives connected to, and spreading in parallel with, the COVID-19 pandemic, as quickly as if not more quickly than the virus itself, the impact of which serves as a stark reminder of the need for rules in this field. These narratives include, for example, that the coronavirus was man-made, that it was a plot hatched by big pharma, that the pandemic was linked to the rollout of 5G, or that it is a pretext for installing microchips in humans. Another recent event is the unceremonious lashing out at social media giant Twitter for flagging two potentially misleading tweets by the President of the United States on mail-in voting and voter fraud on 26 May, and for restricting another, on 29 May, for glorifying violence in violation of its policy.
The EU’s approach in 2020 so far
The EU is clearly following a different approach to that of the US (President). This is signalled by Commissioner Jourová’s expression of support for Twitter’s moderation policy in response to the uproar of the Twitter-Trump saga, advocating more responsibility for such platforms. A selection of the EU’s actions shows a continued theme in the choice of measures (as of yet, a glaring lack of Decisions, Regulations or Directives), which are found couched under the headings of ‘Digital’, ‘Human Rights and Democracy’, or ‘COVID-19’ rather than being given a dedicated category of their own as ‘Disinformation’:
- A Commission Communication of 19 February, Shaping Europe’s digital future refers to targeted and coordinated ‘disinformation campaigns’, the threat posed to democracy, and the need for ‘greater transparency’, and ‘quality media’.
- On 20 February the Council’s Conclusions on COVID-19 mentioned the need to counter ‘misinformation and disinformation’ in the midst of the pandemic, highlighting that it can ‘lead to discrimination’.
- On 3 March 2020, the EU held meetings with social media and online platforms, securing the removal of millions of misleading ads and of listings for goods at inflated prices, and securing the cooperation of those actors in adding links to authoritative sources.
- The EU Action Plan on Human Rights and Democracy 2020-2024 of 25 March 2020 refers to disinformation and the need for ‘accurate and verified information’, to ‘disinformation and hate speech’ and their impact on ‘democracy and human rights’, and to ‘freedom of expression’.
- Disinformation is mentioned in a Commission Communication on the Global EU response to COVID-19 of 8 April 2020, a European Parliament Resolution of 17 April, and the Commission’s Communication of 27 May 2020, Europe’s moment: Repair and Prepare for the Next Generation, which uses the word ‘infodemic’, and recognises the threat to ‘democracy and freedom of expression’. On that same date, disinformation was also listed as an adjusted Commission priority under the heading ‘democracy’ in the Adjusted Commission Work Programme 2020.
- The Commission, European Parliament and EEAS (through the Task Force referred to above) each have dedicated websites on how to fight disinformation.
- The most recent measure is the funding of a European Digital Media Observatory (EDMO), a European hub to fight online disinformation. The EDMO commenced operations on Monday 1 June 2020. Based at the European University Institute, it aims to bring together fact-checkers and academic researchers with expertise in the field of online disinformation, collaborate with media organisations and media literacy practitioners, advance the development of EU fact-checking services and support media literacy programmes, and promote scientific knowledge on online disinformation.
What to expect
Next week the Commission will be publishing a Communication on Disinformation in the COVID-19 context, and by the end of the year, the Commission will produce a Democracy Action Plan.
Commissioner Jourová referred to objectives going forward (largely restating those identified in the 2018 Action Plan)* as being to: take stock of the actions taken so far; propose complementary actions; protect and strengthen the flow of reliable information; produce more strategic communication to ensure information about EU actions is distributed; improve support for fact-checkers and scientists through the EDMO; encourage online platforms to be more transparent; stop or minimise the support given by online platforms to the spread of disinformation; engage the participation of citizens; promote the EU’s ability to detect, analyse and expose misinformation; work on becoming more ‘media savvy, such as by investing in education and awareness’; protect media freedom so journalists can do their job freely; make the voices of experts heard; and impose a digital tax.
From the above, it cannot be concluded that the EU has idled in its approach to preventing and fighting disinformation, but it is clear from the past, present, and expected measures that there is no legally binding regulatory framework on public bodies or private companies. There is a long way to go before regulation can or will emerge, many matters still need to be debated and take shape, and there are continuous advances in the manner in which disinformation spreads, and the tools used to do so.
If it opted for that road, the EU would first and fundamentally have to settle on a fixed definition (or definitions) of ‘disinformation’, decide whom to protect and how (should there be an appeal process against the removal of content as disinformation?), define the relevant actors (are social media and online platforms editors or publishers, for example?), and apportion liability to them. So far, the EU’s proposals and actions spread the burden: private companies and social media giants must voluntarily follow the Disinformation Code, citizens must be educated and made aware, and the EU must continue to fund research initiatives such as the EDMO. Any legal framework, for which a legal basis would have to be determined, would have to protect a number of European law rights and principles: the rule of law, as regards governments that engage in disinformation; freedom of expression (restrictions of which must be lawful, legitimate, proportionate, and necessary in a democratic society); media pluralism; health and safety (against disinformation encouraging people to inject themselves with bleach, take hydroxychloroquine, and so on); protection from public harm; data protection (under the GDPR); non-discrimination (under the Charter); and cybersecurity. Distinguishing lines would also need to be drawn between disinformation and illegal content, disinformation and hate speech, and disinformation and misrepresentation in the consumer context. Whether the EU’s current approach is an adequate one remains to be seen.
*Anjum Shabbir is an Assistant Editor at EU Law Live