ARE THERE COMPETITION ISSUES ASSOCIATED WITH FAKE NEWS DURING COVID-19?

Fake news is getting more attention because of the Internet and the rise of online platforms and social networks, particularly in the age of COVID-19. Its sudden popularization raises important questions about how this phenomenon affects society and democracy, as well as consumers, competition and the market. The question is what happens when fake news is spread (online) and misused during a pandemic: should European Union competition law be applied in such cases? The author considers that the European Commission should not deal with fake news challenges in the context of potentially anticompetitive conduct. It is pointed out that the fake news problem is not a competition problem, because the struggle against fake news is about content, not competition and market power.


Introduction
It is not fake news. The American Dialect Society and the Collins English Dictionary elected "fake news" as word of the year for 2017 (American Dialect Society, 2017; Collins English Dictionary, 2017), and the term had already been named word of the year for 2016 by the Macquarie Dictionary (Macquarie Dictionary, 2016). This can be justified by the circumstances: fake news was one of the big issues around the world; the phrase was constantly in the headlines; and it emerged as the most-used new expression, popularised during the 2016 US presidential election. According to the Collins Dictionary, usage of the term has climbed since 2015, with a 365% rise in usage since 2016 (Independent, 2017; The Guardian, 2017).
It is pointed out that the 2016 US presidential election was the key event in the history of this expression (Cunha et al., 2018, p. 164). But "fake news" is not a new social phenomenon linked to misinformation and manipulation. It has been known since the appearance of the earliest writing systems, from the invention of the Gutenberg printing press in the fifteenth century, and from the 1830s, when the "penny press" appeared. The War of the Worlds radio drama in the USA in 1938 was a classic example of widespread misinformation in the context of broadcasting.
Nowadays, fake news is getting more attention because of the Internet and the rise of online platforms and social media. The sudden popularization of this term consequently raises important questions about how this phenomenon affects society and democracy, as well as consumers, competition and the market. In the context of digital markets and the practice of the European Commission, the important question is what happens when fake news is spread (online) and misused by (Internet) companies, such as social media and search engines (Google, Yahoo, Viber, WhatsApp, Facebook, Twitter, TikTok, etc.): whether and how should European Union (EU) competition law be applied in these cases?
In the time of the current COVID-19 crisis, fake news has been put in the context of an 'infodemic', an overabundance of information (accurate or not) that makes it hard for people to find trustworthy sources and reliable guidance when they need it (WHO, 2020, p. 2). The fast-growing circulation of information on COVID-19 generated false information spreading from questionable sources, including the dissemination of misleading information and myths, which can be as dangerous as the coronavirus itself. In order to combat fake news about COVID-19 alongside the pandemic itself, the World Health Organization (WHO) launched a website dedicated to myth-busting and false information and is publishing daily situation reports to provide the population with reliable data. In addition, the WHO is working with the big tech companies to tackle the 'infodemic' of misinformation during the pandemic (WHO, 2020, p. 2).
The author analyses whether the European Commission should deal with fake news challenges to society and the market in the context of competition issues and the potentially anticompetitive behaviour of big tech corporations associated with fake news. These challenges for EU competition policy in the digital economy reflect a widespread and persistent problem of fake news. The author points out that fake news should not be assessed in EU competition proceedings, despite the recent initiatives by the European Commission that address fake news (setting up the High Level Expert Group on fake news and online disinformation; the public consultation on fake news and online disinformation; and the Communication on "Tackling online disinformation") (European Commission, 2018a), and initiatives by many countries worldwide that have regulated issues of harmful content on online platforms, i.e. the problem of online disinformation.

Definition of fake news and legal framework
Although the concept and the constituent elements of fake news have not yet been defined in a uniform way, it is clear that no single definition of fake news will be adequate. Fake news has become a buzzword and is currently used to describe (intentionally) false stories spreading on social media (Tandoc, Wei Lim, Ling, 2018, p. 138).
In describing fake news, current references seem to define it differently, from journalistic, psychological, computer science or political science perspectives. The phrase is generally accepted and usually refers to false, often sensational, information disseminated under the guise of news reporting, according to the Collins Dictionary (Collins Dictionary). According to the Cambridge Dictionary, fake news means false stories that appear to be news, spread on the internet or using other media, usually created to influence political views or as a joke (Cambridge Dictionary), while according to a Merriam-Webster article, "fake news is, quite simply, news ('material reported in a newspaper or news periodical or on a newscast') that is fake ('false, counterfeit')" (Merriam-Webster). Fake news can also be defined as news articles that are intentionally and verifiably false and could mislead readers, including intentionally fabricated news articles widely shared via the internet (Allcott, Gentzkow, 2017, p. 213).
Definitions of fake news rule out unintentional reporting mistakes, rumours (speculative news), conspiracy theories, editorials, satire or propaganda, false statements by politicians, and reports that are slanted or misleading but not outright false (Allcott, Gentzkow, 2017, p. 214; Sacher, Yun, 2017, p. 30). Hence, fake news is not news that is, in and of itself, inconvenient or biased (Sacher, Yun, 2017, p. 30).
However, the EU Commission's High Level Expert Group on fake news and online disinformation (HLEG), in its Report on fake news and online disinformation, deliberately avoids the term "fake news", regarding it as inadequate and misleading: inadequate to capture the complex problem of disinformation, and misleading because it has been appropriated and misused by some politicians and their supporters (European Commission, 2018a, p. 10). The HLEG rather focuses specifically on problems associated with disinformation (including misinformation), defining it as false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit (European Commission, 2018a, p. 10). Similarly, the European Commission defines disinformation as verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm (European Commission, 2018b, pp. 3-4). Such public harm includes threats to democratic political and policymaking processes as well as to public goods such as the protection of EU citizens' health, science, education, finance, the environment or security (European Commission, 2018b, p. 4; European Commission, 2018a, p. 10).
The key constraint on policy measures to tackle the issue of disinformation is the balance between freedom of expression and the right of the public to be properly informed (the right to receive and impart information). Freedom of expression and information is stipulated in the EU Charter of Fundamental Rights (Article 11(1)) and the European Convention on Human Rights (Article 10), and is upheld by the European Court of Justice and the European Court of Human Rights. Therefore, the complexity of online disinformation and its spreading requires a multi-dimensional approach to this multifaceted problem, in a manner that protects and promotes freedom of expression, media freedom and media pluralism. Issues of disinformation are hence connected with wider political, social, civic and media issues in Europe (European Commission, 2018a, p. 11).
As a result, many countries around the world, including EU Member States, are taking action against disinformation or fake news. In the USA, the Honest Ads Act, a bipartisan bill, would require online platforms to disclose details on political advertisements placed on such platforms: how advertisements were targeted and how much the ads cost, i.e. who has paid for the advertisement. In Germany, the Network Enforcement Act (also known as the Facebook Act) regulates the combating of fake news on social networks.
The Act imposes fines on social networks if, for instance, they fail to remove or block access to content that is manifestly unlawful within 24 hours of receiving a complaint. In France, the law against the "manipulation of information" (the law against 'fake news') allows courts to order the immediate removal of "fake news" during election campaigns. Additionally, it imposes transparency obligations on digital platforms, which must report any sponsored content by publishing the name of the author and the amount paid. In the UK, the Digital, Culture, Media and Sport Committee (appointed by the House of Commons) undertook an inquiry into disinformation and 'fake news' and published its final report. This report calls for clear legal liabilities to be established for tech companies to act against harmful or illegal content on their sites (House of Commons, 2019).
Some countries implement non-legislative measures such as fact-checking, counter-fake-news websites or media literacy initiatives (e.g. Malaysia, Qatar, Canada, Italy and Taiwan), or regulatory measures such as identity management in the registration of online domains (Italy). There are also other important initiatives in the Philippines, Indonesia and the Czech Republic (Renda, 2018, pp. 19-20).

EU actions to tackle fake news during COVID-19 infodemic
On 10 June 2020, the European Commission published a "Joint Communication to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions" with the purpose of strengthening action to tackle a massive wave of COVID-19 disinformation.
The EU acknowledges that numerous gaps in knowledge about the coronavirus induced the spreading of an excessive amount of COVID-19 disinformation. Aware that spreading false or misleading information can impede the effectiveness of the public health response and harm people through the confusion and distrust it creates, the EU designated clear and accessible communication and accurate information as one of its main goals in order to protect citizens' health (European Commission, 2020). The EU and its Member States provided a rapid response to the COVID-19 infodemic by agreeing to act immediately to jointly counter disinformation with transparent, timely and fact-based communication on their actions and to reinforce the resilience of their societies (European Commission, 2020).
Forced to stay in their homes, people inform themselves mostly via the internet and social media, which contributes to the increasing spread of COVID-19 fake news. Such disinformation and misinformation have serious repercussions on people and their health because they play on people's most basic anxieties. The COVID-19 infodemic can lead people to ignore official health advice and engage in risky behaviour, including the risk of being exploited or becoming victims of criminal practices, in addition to its possible negative impact on EU democratic institutions, societies, and the economic and financial situation (European Commission, 2020, p. 2).
The key EU challenges of the COVID-19 infodemic are the following: the information circulating includes dangerous hoaxes and misleading healthcare information, conspiracy theories, illegal hate speech, consumer fraud, cybercrime, and influence operations by foreign actors and certain third countries (European Commission, 2020, p. 4). Therefore, the EU is focusing on concrete actions to be taken: strengthening strategic communication within and outside the EU; cooperating better within the EU; cooperation with third countries and international partners; greater transparency of online platforms about disinformation and influence operations (including an intensified role for online platforms in the crisis and support for fact-checkers and researchers); ensuring freedom of expression and pluralistic democratic debate; empowering and raising citizen awareness; and protecting public health and consumers' rights (European Commission, 2020, pp. 5-15).

The competitive significance of fake news in EU competition law
EU competition law applies equally to all undertakings spreading news: Articles 101 and 102 of the Treaty on the Functioning of the European Union (TFEU) prohibit restrictive agreements and abuse of a dominant position, while Council Regulation No 139/2004 contains the main rules and procedures for merger control.
The importance of properly assessing the behaviour of those undertakings that are primarily aggregators and distributors of (fake) news (particularly Facebook, Google and Twitter) is obvious in the context of the development of digital markets. The challenges these companies create for EU competition policy are connected with the idea that EU competition policy should do something to address the concerns raised by market power in digital markets (e.g. two-sided markets, intermediation platforms, the "sharing economy", mobile operating systems, big data and business models based on the offering of "free" services) (Crémer, de Montjoye, Schweitzer, 2019). While over-extensive application of EU competition law is generally cautioned against, the European Commission has become increasingly concerned about the anticompetitive harm created by certain conduct pursued by digital platforms.
When it comes to the concrete issue of spreading fake news online and the question of how these activities could harm competition, there has been a recent debate in the USA about whether antitrust agencies should really worry about fake news as a distinct issue from the antitrust policy point of view (Sacher, Yun, 2017, pp. 28-35; Grunes, 2017, pp. 8-15; Hurwitz, 2017, pp. 36-40; Chaiehloudj, 2018, pp. 17-40; Hubbard, 2017, pp. 16-22). In the EU, there are no such alerts on this matter and no announced reasons for special treatment of fake news under EU competition law. Although misinformation and disinformation are old phenomena, this is the first time that very effective technology, such as digital platforms, can be used to disseminate them promptly and effectively. Due to the increased role of the Internet as an important means of communication in modern societies, fake news is a subject of public debate and interest. Traditionally, it is considered an issue of content, to be solved by identifying and removing misinformation and disinformation via algorithms or moderators. The prevailing view in the literature is hence that fake news is not appropriately framed as an issue of protection of competition (Sacher, Yun, 2017, pp. 28-35; Grunes, 2017, pp. 8-15; Hurwitz, 2017, pp. 36-40; Dolmans, Turner, Zimbron, 2017, pp. 44-57). According to this view, the only objective of competition policy should be the promotion of competition, not solving the problem of online disinformation. Since the two objectives fall within the fields of different policies, current legislative actions against disinformation are sufficient to tackle these concerns. Therefore, fake news is not a competition issue in the EU.
Some authors think that fake news could infringe competition rules in the future if it provided a basis for collusive behaviour of undertakings or for abuse of their dominant position in digital markets (Chaiehloudj, 2018, pp. 14-40). In this article, it is pointed out that involving EU competition rules in what seem to be matters of speech and expression would push competition enforcement in particularly questionable directions (Sacher, Yun, 2017, p. 35). EU competition law tools are not appropriate to oppose undertakings' practices in the context of fake news.
One of the main motivations for spreading fake news is pecuniary: news articles go viral on social media while drawing significant advertising revenue when users click through to the original site (Allcott, Gentzkow, 2017, p. 217). It has therefore been claimed that the fake news problem arose after Facebook implemented product changes that deterred its users from clicking on external news links, encouraging them to rely instead on its Instant Articles (Stucke, Ezrachi, 2018, p. 1272; Hubbard, 2017, p. 19). But Facebook and other Internet companies did not create fake news, although they can manipulate what their users can easily see.
Powerful tech corporations, such as Facebook, Google and Twitter, compete for users' attention in order to win more resources on the Internet. Their business models are based on users' private data, used by advertisers for targeted advertising, foremost by owning digital platforms that offer free access to services. But if consumers do not necessarily pay for the services they receive, how should the issue of consumer detriment on digital platforms be addressed?
Consumer detriment is harder to quantify in digital markets, because competition between firms in these environments occurs along several dimensions; it is not solely about low prices. Price is the most evident and measurable parameter in other markets, but non-price parameters may be of equal and sometimes greater importance in the digital sphere. These non-price effects contribute to consumer detriment and may arise at the same time as price effects, or may exist in their absence. To conclude, strengthened market power in digital markets may also be manifested in non-price terms and conditions that adversely affect customers (OECD, 2018).
The illustrated role of non-price parameters (such as quality, variety and innovation) can be applied in merger control cases and taken into account by competition authorities when deciding on mergers. When it comes to measuring platforms' market power, competition in digital markets is not primarily driven by price. Market power here is not about price but about non-price effects, which means that market power can arise in dimensions such as quality (Grunes, 2017, p. 13).
The above is supported by EU competition rules, which explain that the Commission prevents mergers that would be likely to deprive customers of these benefits and impede effective competition by significantly increasing the market power of firms, whereby "increased market power" means the ability of one or more firms to profitably increase prices, "reduce output, choice or quality of goods and services, diminish innovation", or otherwise influence parameters of competition [emphasis added] (Guidelines on the assessment of horizontal mergers under the Council Regulation on the control of concentrations between undertakings, 2004, OJ C 31/3, para. 8; Guidelines on the assessment of non-horizontal mergers under the Council Regulation on the control of concentrations between undertakings, 2008, OJ C 265/7, para. 10). It appears that EU merger control rules are adapted to the circumstances of digital markets and to the potential challenges arising from the conduct of big tech corporations.
In the context of Articles 101 and 102 TFEU, there is also no appropriate theory of harm to prove allegedly anticompetitive conduct due to fake news. Where undertakings automate business processes and distribute fake news via search engines and news feed algorithms, it has been suggested that this could constitute a form of collusive agreement between the undertaking that distributes fake news and the undertaking operating the news feed (Chaiehloudj, 2018, pp. 14-40; similarly Hubbard, 2017, pp. 16-22). But even if an agreement were proved, this is not an agreement between competitors that facilitates or supports a restrictive agreement within the meaning of Article 101, particularly not one concerning price increases or the facilitation of price monitoring. Even in the context of abuse of dominance under Article 102, there is no theory of harm, because algorithms can determine the (fake) news that online users read, but not the price. In principle, competition concerns could arise where such use of algorithms results in differentiated prices set at the highest price that a given consumer is likely and willing to pay. But in the case of fake news, consumers do not necessarily pay for the news they read.
When it comes to abuse of dominance cases, it has also been suggested that, as in the Google Shopping case (AT.39740, Google Search (Shopping), 27 June 2017), Facebook could give priority to its own information content (i.e. fake news) by placing it in a good position in the platform's news feed, to the detriment of competing information websites ('self-favouring') (Chaiehloudj, 2018, pp. 14-40; similarly Hubbard, 2017, pp. 16-22). Three categories of possible abuse are mentioned in that context: discriminatory practice, leverage, and refusal to deal or supply (including the essential facilities doctrine) (Chaiehloudj, 2018, pp. 14-40). A thorough analysis of the relevant case law indicates that the fake news facts cannot be fitted into any of these suggested types of abuse. In any case, it would also be necessary to tackle additional challenges: whether the fake news is correctly identified, whether the relevant market is properly defined, and whether the digital platform holds a dominant position.
First, a legal assessment under the essential facilities doctrine, which is relevant where a refusal to deal is indicated, shows that this requirement for abuse is not fulfilled. The conduct of applying an algorithm that selects fake news cannot be considered a refusal to give access to the digital platform, because that platform (such as a social network) cannot be described as an "essential facility", bearing in mind established case law and the fact that competitors in the downstream market have actual or potential substitute products available.
Secondly, in accordance with Article 102(c) TFEU, there is no discrimination that puts "other trading parties" at a "competitive disadvantage" through the application of dissimilar conditions to equivalent transactions, because there is no transactional relationship between the undertakings (platforms) and their competitors as trading parties that would bring the conduct in question within that provision. Consequently, there is no differential treatment applied to competitors, considering that competitors may apply a similar method to disseminate fake news. This means that competitors do not suffer a competitive disadvantage or harm.
Finally, in the context of the leveraging of market power, it is possible to consider the case in which a dominant undertaking (digital platform) uses its market power in social networking to develop positions in other markets (e.g. online advertising). But there is no exclusion of competitors from the intermediation platform, i.e. from the market, and competitors are not harmed, because they are not constrained to behave in the same way as the digital platform does. This means that fake news algorithms are not used to raise barriers to entry.
To conclude, it is inappropriate to use competition law tools to tackle fake news, because such tools are inapplicable in this context. The fake news issue arises regardless of the use of the internet and algorithms in the media and advertising sectors. There is no fundamental difference between spreading false information via online media or platforms and via printed media; the consequences are the same, and competition law does not intervene in the latter situations.

Undertakings' efforts to combat fake news
The final HLEG Report recommended a "multi-dimensional" approach to the fake news problem in the EU. This approach includes appropriate competition instruments, which the European Commission should consider, in a second step, in order to ensure that the actions recommended in the Report are effectively implemented (European Commission, 2018a).
Additionally, it is proposed to adopt self-regulatory measures with the aim of strengthening media and information literacy, a problem mostly related to social media. Therefore, online platforms and leading social networks (Google, Mozilla, Facebook and Twitter), as well as the advertising industry and advertisers worldwide, agreed on a voluntary basis to self-regulate the spread of online disinformation and fake news. They signed the Code of Practice in October 2018, while Microsoft became a signatory of the Code in May 2019 (News article, 2018).
In their annual self-assessment reports submitted to the European Commission, they detailed the policies, processes and actions undertaken to fulfil their relevant commitments under the Code during the first year of its implementation. These reports indicate thorough efforts to implement the stipulated commitments, as well as progress over the situation prevailing before the Code's entry into force, including improved transparency (Statement, 2019).
For instance, Facebook's report on the implementation of the Code of Practice on Disinformation declared fighting false news and misinformation on Facebook to be one of the most important things it does (Facebook, 2019). Facebook made substantial improvements and is constantly looking for new ways to keep false news and other types of misinformation off its platform, including blocking and removing fake accounts, finding and removing bad actors, limiting the spread of false news and misinformation, and bringing unprecedented transparency to political advertising (Facebook, 2019). Its approach to fake news is based on efforts to provide people with accurate information while balancing free expression, which is why Facebook's strategy to combat misinformation has three parts: removing, reducing and informing (Facebook, 2019).
The annual self-assessment reports also showed that the joint efforts of the digital platforms and other stakeholders, including fact-checkers, researchers, civil society and national authorities, have intensified, while further efforts are needed to raise awareness of the negative societal effects of fake news (News article, 2019). Considering these results, in particular the progress in the fight against disinformation reported by the European Commission and the High Representative (Press release, 2019), there is no need to take additional measures against fake news from the competition policy point of view. Apart from the unsuitability of competition law measures, the European Commission has already taken a comprehensive approach to fake news, including the self-regulatory actions of digital platforms in combating fake news and its actions during the COVID-19 infodemic. Rather than treating fake news as an antitrust issue, the European Commission should focus on fake news detection methods.

Conclusion
In the era of digital platforms and COVID-19, competition law stands at the crossroads of the market power of dominant companies, the protection of competition, digitisation and the COVID-19 infodemic. Fake news as a potential anticompetitive concern is a relatively new challenge whose scope has yet to be understood, as has the effectiveness of the regulatory measures already put in place to prevent the spread of disinformation via the internet. However, it may be concluded that fake news is not a competition problem in the EU, because the struggle against disinformation or fake news is about content and regulation, not about competition and market power. Still, the explosion of fake news suggests that the possibility of competition authorities in the EU initiating antitrust proceedings at some point cannot be ruled out. Such investigations would not be easy to conduct, given the allegations that would need to be proved, and consequently the findings of such investigations would not be clear.
The European Commission should therefore not approach this problem by using competition policy tools to resolve it. EU competition proceedings in digital markets should follow the normal pattern of competition analysis, taking account of sectoral specificities. This means that EU competition law should not be instrumentalised to address concerns over content, media pluralism or data protection. Fake news is not an antitrust issue but a major regulatory issue, important for society and democracy.
New competition rules are not needed to deal with fake news, because the European Commission already has the appropriate tools to handle anticompetitive conduct relating to fake news. It also seems that the Commission does not need a new theory of harm to consider the anticompetitive behaviour of tech corporations spreading fake news, but rather needs to understand the new types of non-price effects in digital markets.
Digital platforms are not publishing companies; they do not create content or fake news, and they do not spread false information about competitors, but only attract users with their social networking systems and digital platforms. Hence, it is likely that the fake news problem will self-correct, bearing in mind all the above-mentioned EU actions and self-regulatory measures, in combination with the proper enforcement of media plurality rules and media ethics. This is a better approach, because the use of competition law tools cannot resolve the problem of fake news (Dolmans, Turner, Zimbron, 2017, p. 49).