Political parties must ACT now to assure voters of clean online campaigning


Academic Views, GE2020 / Monday, June 29th, 2020

Voters should require parties and candidates to pledge themselves to Authenticity, Civility and Transparency in the use of online tools. By Damien Chng, Chong Ja Ian, Cherian George, Howard Lee and Netina Tan.

This General Election will be the most internet-reliant in the republic’s history. It is also Singapore’s first since it became clear to the world that online tricks for manipulating public opinion have outstripped societies’ traditional defences against disinformation. 

Many voters are by now on guard against “fake news”. Vigilant netizens are active in pushing back against abuses of online freedoms. Nevertheless, we believe Singapore remains vulnerable to parties’ sophisticated and often invisible computational propaganda and cybertroopers [1] — tactics traditionally associated with authoritarian regimes such as China and Turkey.

Candidates and political parties involved in GE2020 should publicly commit to clean and fair online campaigning. Voters should hold to account those trying to benefit from the cynical and underhanded use of manipulative technologies. Two risks are of concern. First, there is the abuse of online tools to deceive voters by, for example, using bots to give a misleading picture of the state of public opinion. Second, although polarising rhetoric has always featured in elections, digital techniques such as micro-targeting [2] and profiling take this tendency to new extremes, with divisive effects that may long outlast the elections. Both risks were highlighted by government ministers in recent years [3], so we hope that the incumbent party and its challengers will disavow these campaign methods unequivocally.

Though produced for short-term electoral advantage, dirty tricks by combatants in positions of authority and influence will pollute the online and offline environment, risking irreversible damage to the polity. We already see this in a number of established democracies. As a country trying to strengthen democratic institutions and processes, Singapore can ill afford such irresponsible politicking.

• Also see our fun guide to the not-so-fun menu of nasty tricks you may be exposed to in this GE.

Most of what we know about practices in Singapore is anecdotal. However, systematic research in other countries alerts us to some of these threats. A global audit in 2018 found organised social media manipulation campaigns in 70 countries (Singapore was not studied) [4]. The toolbox of digital techniques to manipulate public opinion and suppress dissent has been expanding rapidly. Although international headlines have focused on foreign interference in elections in developed democracies, researchers have found bots, trolls or cybertroopers being used by both ruling and opposition parties in Indonesia, Malaysia, Thailand and the Philippines [5] — countries whose internet capacity and reach are lower than Singapore’s.

Singapore has several statutes and regulations governing online content [6]. However, these can give voters a false sense of security about fair and honest online campaigning. For example, correction and take-down orders under POFMA can only be triggered by senior civil servants designated by ministers, and not by opposition politicians [7]. The anti-harassment law, POHA, criminalises doxxing and cyber-bullying, but much of the online abuse taking place during elections does not meet this legal threshold [8]. Moreover, Singapore’s short campaign period makes it unlikely that police and courts can stop the dissemination of abusive content until after Polling Day, by which time the damage would have been done [9].

Singapore’s discussions about social media manipulation campaigns lag behind the times in key areas, such as data transparency, surveillance by cybertroopers, and the toolkits used by the digital consultancies that political parties hire [10]. We cannot discount the possibility that consultants or cybertroopers here will behave as unscrupulously as Cambridge Analytica, snooping on users, micro-targeting them, and inserting disinformation and malinformation on social media platforms to influence voting behaviour [11].

Under current election rules, parties only need to present invoices to account for the amounts they spend on digital advertising [12]. They are not required to disclose what their digital campaign teams and digital consultancy firms actually do. What kinds of social media data are used? Are tracking cookies used to profile and segment voters? How is such data used to micro-target voters? When we go online as citizens, how do we know if we are participating in bona fide public opinion formation, or if we are being targeted by hidden influence operations and inauthentic behaviour? The alarming truth is that we do not know. In the case of the ruling party, it is also difficult to ascertain whether negative campaigning against the opposition could be facilitated by ethical lapses in how data are collected or shared between state institutions and digital platforms [13].

In this internet-dependent election, voters need trustworthy online content if they are to make informed choices. Responding to this global problem, internet platforms have in recent years tried to deal with inauthentic behaviour. In Singapore, Facebook has started taking down problematic accounts ahead of Polling Day [14]. Twitter has taken steps globally to remove and disclose accounts linked to state-linked information operations, and to introduce labels to mark online falsehoods [15]. In principle, these initiatives are welcome, but they tend to lack transparency. There have been many reported cases globally of good-faith accounts being penalised even as extreme content continues to flourish [16].

Any attempt to tighten Singapore’s laws on online campaigning for future elections should take into account the need for credible and independent monitoring and enforcement. Given the highly partisan and potentially divisive quality of elections, demonstrably independent electoral commissions are particularly useful [17]. They can provide prompt but fair corrective action against behaviour that may undermine the political process. 

Even then, other countries’ experiences show that laws alone cannot ensure clean online campaigning. Whether or not it is technically possible, surveilling and policing all political communication is not desirable. Messaging apps such as WhatsApp need to remain private, even though we know they have become campaign battlegrounds [18].

Factchecking initiatives [19] and media and information literacy programmes [20] can help make up for the limitations of law. These, however, are not sufficiently developed in Singapore to deal with the threats we foresee to the integrity of the July 10 election. Rather than relying on state regulation, therefore, we hope that all political parties will voluntarily and immediately pledge themselves to responsible online behaviour in this election and beyond. Such pledges are not alien to democracies, even highly competitive ones [21].

Here are three commitments that political parties acting in good faith should not hesitate to adopt:

  1. Authenticity: Reject the use of cybertroopers, fake accounts, social bots and micro-targeting designed to create a false impression of public opinion and tilt debates.
  2. Civility: Abstain from abusive and manipulative communications, including messaging that pits people against one another; and repudiate errant followers [22].
  3. Transparency: Disclose sponsor identity, service provider/vendor identity, amounts spent, and targeting criteria of all forms of political advertising on digital platforms [23].

We encourage parties to build on these basic commitments and brand themselves by their values, instead of racing to the gutter. Independent expert observers, social media watchdogs and other vigilant netizens are needed to monitor compliance. Their scrutiny can help discourage the covert use of online dirty tricks that the parties say they oppose.

Even if these promises can be violated with impunity, such public sign-posting of norms and values is an important part of a holistic response to online harms. As George Yeo, one of the architects of Singapore’s original “light touch” internet regulation framework, said of undesirable online content in 1999: “You can’t stop it. But do you condone it? Do you sanction it? … In the end, you must still have a sense of what is right and wrong.” [24]

The authors are researchers in the fields of media/internet studies, political science, and law.


NOTES

[1] Computational propaganda refers to “the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks”. See Woolley, Samuel C. and Philip N. Howard. 2017. “Computational Propaganda Worldwide: Executive Summary.” Oxford Internet Institute Working Paper No. 2017.1.

Cybertroopers are government or political party actors tasked with manipulating public opinion online. See Bradshaw, Samantha and Philip Howard. 2019. “The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation.” Computational Propaganda Research Project. University of Oxford.

[2] On microtargeting: See IDEA. 2018. “Digital Microtargeting: Political Party Innovation Primer 1.” Strömsborg: International Institute for Democracy and Electoral Assistance.

[3] On online opinion manipulation, Law Minister K. Shanmugam said in the POFMA debate that trolls, bots and inauthentic accounts were being used in Singapore. “Such activity creates alternate realities. It manipulates perception, creates the impression that there are many voices, shouts down other viewpoints through fake accounts, shifts public opinion, erodes trust and undermines institutions.” He added: “Fake social media accounts manufactured to manipulate. Some of them cultivate persuasive online personas, gain followers, both real and fake, and used as fictitious leaders of public opinion, using falsehoods to sway minds, create impressions of public sentiment.” See Singapore Parliamentary Debates, Vol 94 (7 May 2019) (K Shanmugam, Minister for Home Affairs and Law).

On the dangers of political polarisation, Deputy Prime Minister Heng Swee Keat said in this year’s Budget Debate, “Technology has exacerbated these divisions, by enabling echo chambers, silos and fake news. Political polarisation is damaging because it pits people against one another and ultimately undermines the cohesion of a country.” Minister Grace Fu said, “When we drop the mindset of ‘us versus them’, and see ourselves as a collective ‘we’, Singaporeans with different points of view can sit together and work on issues important to our shared future, such as jobs and economy, the environment, the vulnerable and the disadvantaged.” See Singapore Parliamentary Debates, Vol 94 (28 February 2020) (Heng Swee Keat, Deputy Prime Minister and Minister for Finance); and (6 March 2020) (Grace Fu, Minister for Culture, Community, and Youth). 

[4] Bradshaw and Howard (see note 1).

[5] Tan, Netina. 2020. “Electoral Management of Digital Campaigns and Disinformation in East and Southeast Asia.” Election Law Journal: Rules, Politics, and Policy 19 (2): 1–26. https://doi.org/10.1089/elj.2019.0599; Ong, Jonathan C., and Jason Vincent A. Cabañes. 2019. “When Disinformation Studies Meets Production Studies: Social Identities and Moral Justifications in the Political Trolling Industry.” International Journal of Communication 13. https://ijoc.org/index.php/ijoc/article/view/11417.

[6] See: Parliamentary Elections Act and Campaigning regulations; Protection from Online Falsehoods and Manipulation Act; Protection from Harassment Act; Personal Data Protection Act; Defamation Act; Sedition Act; Section 298 of the Penal Code; Broadcasting Act and Licensing Framework for Online News Sites.

[7] See International Commission of Jurists. 2019. “Legal Briefing: Protection from Online Falsehoods and Manipulation Bill No. 10/2019”; George, Cherian. 2020. “The Dogma behind POFMA” in Air-Conditioned Nation Revisited (Singapore: Ethos Books); POFMA, s 52.

[8] POHA, like POFMA, is designed to deal with discrete instances of abuse, such as when an identifiable individual or account engages in cyber-bullying. In most political campaigns, however, the communication work is distributed across many diverse actors, from party ideologues who decide on talking points and leaders who engage in “dog whistles”, to multiple anonymous accounts that circulate extreme opinions. These networks are designed to appear spontaneous and autonomous. It is usually not possible to prove that they are centrally coordinated. Yet their cumulative effect can be more damaging than the discrete harms the law addresses.

[9] See Kaiser, Shana. 2014. “Social Media: A Practical Guide for Electoral Management Bodies.” Stockholm: International Institute for Democracy and Electoral Assistance.

[10] On surveillance: “[W]hile no data breach or nefarious acts have been reported, the lack of transparency in the use of analytical tools to monitor online trends and “IB” raises surveillance and privacy issues.” Tan, 2020: 11 (see note 5 for full reference).

[11] On Cambridge Analytica, see Cadwalladr, Carole, and Emma Graham-Harrison. “Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach.” The Guardian. March 17, 2018. See also “Cambridge Analytica – The Singapore Connections.” The News Lens (Apr 24, 2018).

[12] See “Campaigning”, Elections Department website.

[13] See Tan, Netina. 2020. “Digital Learning and Extending Electoral Authoritarianism in Singapore.” Democratization. https://doi.org/10.1080/13510347.2020.1770731 (published online 2 June 2020).

[14] On 28 June 2020, Facebook announced that it had taken down the pro-PAP page “Fabrications about PAP” for inauthentic behaviour, while declaring that the page was not the only one it had removed in Singapore. The social media giant stopped short of disclosing which other pages and accounts had been removed, although it regularly publishes such information about other countries.

[15] See Twitter statement on removal of state-linked operations.

[16] See “With Fact-Checks, Twitter Takes on a New Kind of Task.” The Straits Times. May 31, 2020; “Facebook Removes Trump Ad Over ‘Nazi Hate Symbol.’” BBC News, June 18, 2020. For an example of questionable relations between internet giants and governments, see: Dwoskin, Elizabeth, Craig Timberg, and Tony Romm. “Zuckerberg once wanted to sanction Trump. Then Facebook wrote rules that accommodated him.” Washington Post. June 29, 2020.

[17] On Singapore’s lack of an independent election commission, see ASEAN Parliamentarians for Human Rights. 2020. “In Singapore, an already unfair vote undermined by Covid-19”; Tan, Netina. 2015. “Pre-Electoral Malpractice, Gerrymandering and its Effects on Singapore’s 2015 GE” in Terence Lee & Kevin YL Tan (eds.) Change in Voting: Singapore’s 2015 General Election (Singapore: Ethos Books).

[18] On private messaging: Such forms of communication are influential because they are fueled by personal trust, in that people tend to be more willing to believe in and prioritise information conveyed to them by individuals whose views they value. See Wardle, Claire. 2019. “Closed groups, messaging apps, and online ads: The new battlegrounds of disinformation.” FirstDraft (November 8); Woolley, Samuel. 2020. Encrypted Messaging Apps are the Future of Propaganda. Brookings Institution Tech Stream (May 1); Evangelista, Rafael, and Fernanda Bruno. 2019. “WhatsApp and political instability in Brazil: targeted messages and political radicalisation”. Internet Policy Review 8 (4). DOI: 10.14763/2019.4.1434.

[19] On factchecking: Independent, non-partisan factchecking services can examine statements, speeches, videos, images, memes, and even live debates by politicians and parties, openly listing whether they are false, true, partially true, or out of context, and provide corroborating evidence for these evaluations for the public to access. Members of the public should also be able to forward queries from their private messaging apps to these factchecking services and receive responses that can in turn be shared on those apps. Such methods proved effective in Taiwan’s recently concluded presidential and legislative elections, which faced sustained disinformation campaigns from China. See Han, Kirsten. 2018. “Taiwanese Cofacts Fact-Checks Information on LINE.” International Journalists’ Network (August 29); Huang, Hungyu. 2020. “Verifying the 2020 Presidential Elections: An Interview with the Taiwan Fact-Check Center.” Global Voices [translated by Oiwan Lam] (February 25). Note that these independent factchecking resources have also proven useful in addressing disinformation and misinformation during the COVID-19 pandemic. That said, factchecking alone is insufficient; it needs public media literacy and other institutional changes in place to be fully effective. See Barrera, Oscar, Sergei Guriev, and Ekaterina Zhuravskaya. 2019. “Facts, Alternative Facts, and Fact Checking in Times of Post-Truth Politics.” Journal of Public Economics 182 (February); Brandt, Jessica and Torrey Taussig. 2020. “The Kremlin’s Disinformation Playbook Goes to Beijing.” Brookings Institution (May 19); Fairman, Connor. 2020. “When Election Interference Fails.” Council on Foreign Relations (January 29); Repnikova, Maria. 2020. “The Subtle Muckrakers of the Coronavirus Epidemic.” New York Times (February 5); West, Darrell M. 2017. “How to Combat Fake News and Disinformation.” Brookings Institution (December 18).

[20] On media and information literacy: Such education has proved particularly useful for Finland in addressing Russian disinformation. See Jankowicz, Nina. 2018. “The Disinformation Vaccination.” The Wilson Quarterly 42(1); Standish, Reid. 2017. “Why Is Finland Able to Fend Off Putin’s Information War?” Foreign Policy (March 1). Awareness about the need to treat information critically helps inoculate people against misinformation and disinformation. See Eberhart, George M. 2019. “Media Literacy in an Age of Fake News.” American Libraries (November 1).

[21] Precedents for voluntary codes: Britain’s Labour Party has a social media code of conduct that states, “Anonymous accounts or otherwise hiding one’s identity for the purpose of abusing others is never permissible.” It adds: “Trolling, or otherwise disrupting the ability of others to debate is not acceptable.” Germany’s Green Party has pledged not to use deceptive social bots, data-driven microtargeting and other non-transparent influence operations. In her campaign for the Democratic nomination for the United States presidential election, Elizabeth Warren pledged to fight disinformation targeting her opponents: “I’m sending a clear message to anyone associated with the Warren campaign: I will not tolerate the use of false information or false accounts to attack my opponents, promote my campaign, or undermine our elections. And I urge my fellow candidates to do the same.” 

[22] On abuse: “Abuse” is of course subjective. When judging whether your own conduct crosses a line, media ethicists recommend applying the age-old Golden Rule: to treat others the way you would like to be treated. 

[23] On transparency: This principle is in line with concerns raised by the 2018 Select Committee on Deliberate Online Falsehoods, which have not been fully addressed by POFMA.

The Code of Practice for Transparency of Online Political Advertisements, issued under Section 48 of POFMA, requires digital advertising intermediaries and internet platforms to report information about political ads and other paid content to the POFMA Office. This approach is doubly problematic. First, it is too limited in scope: it does not cover the full range of ways in which parties could enter into commercial arrangements with platforms. Second, the Code requires disclosure of data that could reveal microtargeting practices (such as information about intended target audiences) only to the government-controlled POFMA Office, not to the wider public. Far from providing transparency, therefore, the Code can be used as a one-way mirror for the ruling party to gather intelligence on opponents’ online activities. 

The Select Committee’s recommendations, in contrast, called for public accounting. It made the following recommendation (para 468): “Given how digital advertising has facilitated the creation and spread of online falsehoods, there should also be full disclosure on the sponsor identity, amounts spent, and targeting criteria of all forms of digital advertising on the platforms of technology companies. In this regard, the UK Committee Interim Report has recommended that ‘paid-for political advertising data … [should be made] publicly accessible, [to] identify the source, explaining who uploaded it, who sponsored it, and its country of origin’. There is also a need to ensure the ‘full disclosure of targeting used as part of advert transparency’. The [Select Committee] agrees with these recommendations, to enable users to critically assess the information they access online. The [Select Committee] also agrees with Dr Wardle that these requirements should apply equally to all forms of digital advertising, as false information with serious consequences can and have been peddled by advertisements which are not targeted at political candidates, or a particular election.”

[24] Singapore’s internet regulation has always recognised the importance of symbolic statements of societal values: this is why local ISPs are required to block a small number of websites containing pornography and other content that the “community regards as offensive or harmful”. What Singapore’s first Minister for Information and the Arts, George Yeo, said of online pornography could apply equally to dirty politics: “You can’t stop it. But do you condone it? Do you sanction it? There is always a certain amount of hypocrisy involved in all of this. But then, as they say, hypocrisy is the compliment which vice pays to virtue. … We will always fall short of those standards, but we must have them. … In this way, the norms of a culture are established, in the tribe or the community — which is necessary. In the end, you must still have a sense of what is right and wrong.” (Quoted in the 2010 Censorship Review Committee Report, p. 25)
