In light of Singapore’s burgeoning data collection systems, Monamie Bhadra Haines (Assistant Professor of Sociology at NTU) and Hallam Stevens (Associate Professor of History at NTU and Associate Director (Academic) of the NTU Institute of Science and Technology for Humanity) argue that citizen technology assessments should be integrated into building the Smart Nation.
On 25 September, the BBC announced that Singapore would soon add yet another “first” to its armoury of technology to build and fortify the Smart Nation: “facial verification”. Linked to the government’s digital identity scheme (SingPass), facial verification will allow users to “authenticate” financial and legal transactions using their faces. Private companies, such as banks, will also be able to utilise the system.
Facial recognition has recently gotten a bad rap, largely due to its potential for bias and use in surveillance. In the wake of the killing of George Floyd in the United States, IBM pulled back from developing the technology. Facial verification, its proponents are quick to note, is supposed to be quite different. For one thing, while facial recognition can be directed at you as you pass a camera in a public street, facial verification requires your consent. You need, in fact, to ask for it.
But perhaps more importantly, an article from the BBC stresses, facial verification is different because “the user gets something in return”. Namely, you get access to your bank account or to an online or offline service. You get convenience. Speed. Efficiency. The user surrenders their data (and potentially their privacy), but this is “balanced out” or “traded off” by the benefits flowing back to them.
These notions of technologically-produced efficiency and convenience fit squarely within Singapore’s wider vision of its future: an economy and society driven by technology. The government justifies this “imperative” in terms of economic urgency, with the Smart Nation enhancing Singapore’s “magnetic pull” to entice foreigners to live and work in a place at the forefront of cosmopolitan capitals, not one that is “falling behind”. According to this plan, “businesses can be more productive and seize new opportunities” (p. 1) and “Singaporeans are empowered to maximize the opportunities and leverage conveniences of a digital society to lead meaningful lives” (p. 15). Digital technologies, like facial verification, are imagined as enablers of both national progress (the “next phase nation building”) and personal fulfillment.
No doubt facial verification and other digital technologies will provide higher levels of convenience and efficiency for Singaporeans. But these benefits should cause us to evaluate them more carefully, not less. The fact that most of us willingly give over troves of our personal data to Facebook or other social media sites in exchange for entertainment and keeping in touch with friends and loved ones does not make these transactions benign. The trade-off we are making demands careful scrutiny.
And that scrutiny of social media has revealed how our personal data has flowed in all sorts of ways and directions that users were not aware of and did not anticipate. How our data has been aggregated, sold, brokered, re-sold, reused and repurposed has caused the greatest concern. A plethora of companies are now involved in not only collecting our data, but also mixing and recombining it in various ways to generate novel “insights” about us as consumers, citizens, healthy (or not) bodies, lovers, migrants or criminals. But data is not neutral. Its production is shaped by existing institutions and the social and political dynamics within them. Data is often implicated in reproducing existing societal inequalities. In the United States, for example, many kinds of data are being fed into sentencing algorithms that have been shown to reinforce existing racial stereotypes.
Under the aegis of Smart Nationhood, and accelerated by the pandemic, Singapore is now in the process of implementing a wide range of data collection systems. Robot dogs, drones and CCTV capture images from public parks. Apps such as SafeEntry collect information about when we enter buildings and businesses. TraceTogether collects information about who we have spent time with or near. Traffic cameras, cars, handphones, streetlamps and cashless payments are all mobilised into this data economy. Most recently, this extends even to tracking pathogens in wastewater from homes and businesses, so that engineers imagine linking diarrhoea outbreaks to data from food delivery apps! Was it last Tuesday’s pizza or yesterday’s dumplings that had you running to the bathroom? With the circulation of these intimate details, we now have parallel data selves, “data shadows” some have called them, haunting our existence.
All this suggests that we should not think about new technologies like facial verification in isolation. They are not individual technologies, but part of a growing constellation of data-collection methods. When we see a new technology like this, it is not enough to simply ask, “What can be done with my facial data?” Rather, the lessons of Facebook suggest that we need to think about facial verification within the context of the increasing volumes and variety of data that are being collected by both private and public institutions. What can, for example, be done with my face plus my credit card transactions plus my geo-location data plus Smart Meter data on energy usage? What can be done with data from smart coffee makers, my average driving speeds and medical reports? And how can all of this disparate data be protected from being hacked or hijacked for unsanctioned ends?
We need to know much more about how data will be collected, used and protected in Singapore. This is not just about the government sharing its data, but also about understanding where data comes from, where it goes, whether it is sufficiently secure, and what risks of data leakage Singaporeans are willing to tolerate.
The rule of experts
Who is asking and answering these questions in Singapore? For the most part, it seems that such assessments are the preserve of engineers and other technical experts. To take a recent example, in June, as the government was planning its roll-out of new TraceTogether tokens—wearable tracking devices for the purposes of contact tracing—GovTech organised a “technology teardown” of the device, billed as a form of “community engagement”. Four carefully selected locally-based engineers—self-proclaimed tech geeks—were invited to the GovTech offices to learn about the token and to examine its workings in some detail (although they weren’t allowed to completely take it apart). With GovTech encouragement, several then blogged about their experiences and findings.
At the conclusion, “the four tech experts were unanimous in their observations that the token would only perform what it was set out to do”. For the most part, the teardowns offered technical analysis of the inner workings of the token—its power source, its protocols, its components and its security. But their conclusions went far beyond this narrow technical focus. “Is the use of the token ok?” Roland Turner, a security expert, asked rhetorically. “Yes. It fills really important gaps in the ability of MoH to keep the virus contained and therefore to facilitate a return to sustainable economic activity while the epidemic continues…” But this is hardly a question that can be answered solely by the kind of technical analysis that Turner offered. Just because the token appears to be secure or because it would be difficult to use it for tracking individuals does not mean that using it is “ok”. Answering such a question requires a far broader kind of analysis of the social and political milieu within which it is deployed. Just because a device works as it should, doesn’t mean it should work as it does.
Bunnie Huang, a hardware hacker, also made a case for the hardware token on the grounds that it offers more privacy than a smartphone and that “the government is, perhaps inadvertently, empowering citizens to rebel against the TraceTogether system: one can always crush their token and ‘opt-out’ of the system”. Huang also argues that it would not be in the government’s long-term interest to use tokens as location-tracking devices:
If the government gets caught scattering BLE [Bluetooth low-energy] receivers around the island, or an errant token is found containing suspicious circuitry, the government stands to lose not just the trust of the people, but also access to full-graph contact tracing as citizens and residents dispose of tokens en masse. This restores a certain balance of power, where the government can and will be held accountable to its social contract, even as we amass contact tracing data together as a whole.
But in invoking notions of trust and the balance of power between the state and citizens, such assessments go far beyond the technical. They require knowledge (or assumptions) about how citizens will act, about how laws are enforced and obeyed, about how the government operates, and about how far citizens trust the government. Would people really “crush their tokens” and opt out of the system? Tokens remain the property of the government and so “crushing” them would technically be destroying government property—and likely a punishable offence. Would the government make the same cost-benefit calculation as Huang does? If they did “get caught” tracking people, would individuals then respond by “holding them accountable”? Understanding Singapore’s broader political culture—within which the mechanisms for holding state agencies or politicians to account are thin—is crucial here.
What would we need to know to answer all these questions? Certainly, tech expertise is not enough. Tech geeks can and should have opinions about these matters. However, they have no special claim to knowledge in these non-tech-related domains, and their opinions should not hold any special weight: technology, the data it produces, and the ways it is recombined and consumed are suffused with moral, social and political values. Technologies such as the TraceTogether token demand assessment not just in terms of whether they “work” or not but also in terms of whether they are worth the money, whether they adequately protect our privacy, whether they are safe, whether they are equally accessible to all, whether they facilitate equity, and whether they are compatible with values such as fairness, dignity and respect. This requires all sorts of knowledge beyond the technical.
Citizen technology assessments
Singapore’s 1965 written constitution articulates governing philosophies, rules of conduct and the legal architecture that together form the “social contract” between the state and its citizens and denizens. Increasingly, the technologies that surround us resemble a supplementary “constitution” that governs and shapes our behaviour. Frequenting brick-and-mortar stores requires SafeEntry and temperature checks; going to the park means accepting various forms of monitoring; using the toilet means providing data on one’s internal viral ecologies. All these place stricter and stricter limits on what we can do, with whom, and how we do it. This technological constitution is binding the state, citizens and other residents together in legal, cultural, even social-psychological relationships. While such technologies are not yet mandatory per se, not using them requires dramatic abstinence from participation in cultural and economic life, and disconnecting from all the Smart Nation offers. This makes it even more critical that such technologies are subjected to critical scrutiny.
Rather than relying on technical experts, those best-placed to perform such assessment are likely to be users themselves—that is, those most directly affected by the technologies when they are rolled out. Several countries, especially in Europe, have now deployed “citizen technology assessments” or “participatory technology assessments”, in which groups of citizens (often in focus-group-like settings) provide feedback on technologies. These opinions are then fed back to designers and policy-makers to improve designs and implementations. This stands in direct contrast to the top-down and tech-centric assessment performed by GovTech.
Such assessments must be performed with respect and sensitivity to the specific social, cultural, and economic context of Singapore. The state is already undertaking some kinds of public engagement, often with the goal of experts educating the public, or of building a coalition of individuals and industries invested in developing particular technologies, such as artificial intelligence or autonomous vehicles. Beyond these organised forms of engagement, recent social media interactions between government ministers and citizens regarding the privacy implications of TraceTogether, and the earlier dismay of food delivery riders at the ban on personal mobility devices, demonstrated the need for participation in discussions around technologies that will shape our lives.
Yet such interactions occurred after the technology had already been released; at this downstream point, the government felt the need to “explain” and “justify” the tech rather than engage in dialogue. Future events must go beyond mere performances of public participation, where a heavily-vetted group of people are selected to accept prevailing wisdom, to more substantive forms of deliberation that bring together a broader range of voices and diverse perspectives on technologies.
When thoughtfully orchestrated, such forms of deliberation strengthen relationships of trust and legitimacy, even if participants end up disagreeing with experts about the proper role of technology in their lives. Beyond contributing knowledge of users’ experiences with technology (would people really crush tokens?), citizen technology assessments bring to bear individuals’ and publics’ fears, hopes and imaginations of the future. They can knit together shared stories across widely divergent experiences of hawker stall owners, mothers, the elderly, taxi uncles or Grab delivery riders. But they are also places to encounter divergent experiences of digital technologies, experiences that may be shaped by people’s socioeconomic positions in society, as well as gender and ethnicity. They conjure up people’s worries about surveillance, Big Brother or Big Corporations. They allow individuals to articulate dreams of happier, wealthier or healthier living. Of course, such storytelling does not happen in a vacuum, but takes shape in dialogue with others and with experts. These stories should be part of our understanding and assessment of technologies, too.
We need this plurality of stories because they speak to lived experiences and contexts that experts simply do not have, and that are often overlooked in expert-driven tech or engineering assessment. Take genetically modified foods, or GMOs, for example. These were developed from the 1970s onwards to produce pest- and drought-resistant crops with higher nutritional content that, proponents claimed, could “end world hunger” for a growing population (a claim that is highly debatable). Publics around the world have been assured by experts that GMOs are “safe”. By technical standards, they seem to be. Yet publics outside the United States continue to be wary. The high-profile controversies of recent decades, in which the property rights over GMOs were linked to biopiracy, farmer suicides in India, the encroachment of GMO corn into areas seeded with indigenous varieties, and possible biodiversity loss and safety issues, prompted a reassessment of the societal uses and implications of GMOs. It wasn’t that experts were wrong and the public was right, but rather that the dangers GMOs posed were not part of the narrow, technical safety testing regimes. Taking stock of whether or not GMOs “worked” meant investigating their effects in the larger world, from the nameless farmer to consumption at the dinner table, and from funding to commercialisation and patenting.
As the building blocks of the Smart Nation are rapidly put into place, citizen assessments of technology—and the narratives they produce—are more necessary than ever. Technology is already a site for contestation and debate in Singapore, particularly when it breaks or malfunctions. During the MRT breakdowns of 2011 and 2016, Singaporeans were quick to express their discontent and to demand that the state uphold its end of the social contract: the promise of world-class public infrastructure. As Faith Ng, a Singaporean playwright, explained in a Facebook comment: “We have been made to believe that there are small but daily sacrifices we each have to make, in order to maintain the smooth and taut fabric of society… So when our trains don’t arrive on time, … we go quite mad really—we kept our end of the bargain, why haven’t those in power kept theirs?” But technology should not need to break down in order to be called to public scrutiny. Even when technology is working perfectly well, perhaps especially when it is working perfectly well, we should ask who it is working well for.
The outcomes of the 2020 General Election and its aftermath have released tremendous civic energies in Singapore, with citizens openly discussing usually hush-hush issues of race, the fairness of the legal system, the death penalty and the minimum wage. Yet in the midst of such civic engagement to reform entrenched systems, another system is being built, often with tacit acceptance. Created and idealised through the fiction of omniscience, apoliticism and neutrality, the Smart Nation seeks to fully know and understand us through algorithmic probabilities that anticipate and predict our behaviours. But the social and technical worlds of this new digital rationality of the Smart Nation are mostly hidden from public view, emerging beyond the perceptions of most citizens.
Citizen technology assessments would go a long way towards bringing these technologies into the open, demystifying the esoteric codes that govern more and more aspects of our lives. Of course, such demystification does not happen with one event, but rather the gradual strengthening of the capacities that publics have for critical thought. Unlike the educational activities of the state, public education should be focused on generating dialogue between diverse sources of expertise and the public, to educate future and existing citizens about digital technology and privacy, and its other social, ethical and political dimensions.
The recent misappropriation of footage from hacked in-home surveillance cameras for pornography by a group on Discord has once again called attention to some Singaporeans’ lack of cyber-security savvy, demonstrating the importance of a publicly-oriented educational programme designed to vet the value of technology and its implications. After many years of advocacy, environmental protection and stewardship have become a taken-for-granted part of the primary school curriculum. Similarly, integrating social and political analysis needs to become commonplace in any science and technology-oriented discussion.
With recent calls for openness, transparency and policy co-creation, there is a growing governmental desire for greater feedback from citizens. The June 2020 petition against TraceTogether shows that people are keen to have their voices heard on technology. The Smart Nation needs to have room for not only coders and geeks but also drivers, mothers, retirees, teachers, cooks, artists, cleaners, and gardeners. To achieve that, new technologies like facial verification need to be subjected to far wider public scrutiny and debate.