India’s digital response to COVID-19 risks inefficacy, exclusion and discrimination

Aarogya Setu is one among many uses of digital-surveillance technologies that the central and state governments are pervasively deploying in their efforts to stop the spread of COVID-19. The dangers it poses go beyond concerns of privacy and data security. Utkarsh for The Caravan
19 April, 2020

When Prime Minister Narendra Modi addressed the nation on 14 April, he urged every Indian citizen to download the Aarogya Setu mobile application as one of seven steps that he identified to fight the novel coronavirus. By the next day, just 13 days after its launch, fifty million Indians had reportedly downloaded Aarogya Setu, a record for the fastest any app has reached that number. They downloaded it, presumably, with the understanding that it is helping, or in any event could do no harm to, the efforts to improve public health. Modi had not explained why or how it would work. In fact, for the estimated two-thirds of the country that still lacks access to smartphones, there might still be confusion about how, if at all, they are expected to participate in, or benefit from, this fight against COVID-19.

The government of India, and the prime minister in particular, have actively promoted and advocated the use of digital technologies such as Aarogya Setu to aid the national response to the COVID-19 pandemic. Aarogya Setu implements a form of digital contact tracing, based on an individual's health status, which is determined by information that users have to enter in the app, and on the individual's "social graph," which identifies whether a user may have interacted with someone who could test positive, by tracking their movements through location data and Bluetooth. Through this, the app determines whether a user risks infection from having been in contact with a carrier of the virus.
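The app's underlying logic has not been made public in detail. As a rough illustration only, the sketch below shows, in Python, how Bluetooth-based proximity logging and risk flagging of this general kind tends to work; every name, data structure and threshold in it is hypothetical, not a description of Aarogya Setu itself.

```python
# A minimal, hypothetical sketch of Bluetooth-based contact tracing.
# An illustration of the general technique, not Aarogya Setu's code.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Set

@dataclass
class Encounter:
    other_device_id: str  # anonymised ID broadcast by a nearby phone
    timestamp: datetime   # when the Bluetooth handshake was logged

def at_risk(encounters: List[Encounter],
            infected_ids: Set[str],
            window: timedelta = timedelta(days=14)) -> bool:
    """Flag a user if any encounter in the look-back window involves a
    device whose owner has since been reported as testing positive."""
    cutoff = datetime.now() - window
    return any(e.other_device_id in infected_ids and e.timestamp >= cutoff
               for e in encounters)
```

The crucial point, visible even in this toy version, is that the output is only as good as its inputs: the risk flag depends entirely on which encounters were actually logged, and on whether infected users were ever tested and reported at all.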

This is one among many uses of digital-surveillance technologies that the central and state governments are pervasively deploying in their efforts to stop the spread of COVID-19. There are already more than a dozen government applications that use a combination of features, such as GPS surveillance, facial recognition and thermal imaging, to identify potential carriers of the virus and enforce quarantines and lockdowns. These apps are also used to assist public authorities in making more detailed policy decisions, such as allocating additional healthcare resources to virus hotspots.

Both in India and across the world, these technologies are acknowledged to be experimental and untested. In the midst of a public-health crisis, where expediency is the priority, these interventions do not benefit from the scrutiny of rigorous public consultation before they are introduced. But as the government rushes to expand and scale these tools, it is important to question whether they will even work, and who they might work against. When concerns of data privacy and security have been raised, the developers of these apps have made public assurances that the data is securely transmitted and minimally retained. Google and Apple, too, recently jumped into the fray to offer "privacy-preserving contact tracing" mechanisms. But the question of whether a privacy-preserving version of these tools can exist might distract from more fundamental questions of their efficacy, of exclusion, and of their discriminatory use as a punitive and policing mechanism.
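To make concrete what a "privacy-preserving" design involves, the sketch below illustrates, in simplified form, the rotating-identifier approach at the heart of the Google and Apple proposal: phones broadcast short-lived identifiers derived from a key that never leaves the device unless its owner tests positive. The actual protocol's cryptography and key schedule are more elaborate; the names and parameters here are illustrative assumptions.

```python
# A simplified, illustrative sketch of rotating-identifier contact tracing.
import hashlib
import hmac
import os

def new_daily_key() -> bytes:
    # Generated on the device each day; shared only if the user tests positive.
    return os.urandom(16)

def rolling_id(daily_key: bytes, interval: int) -> bytes:
    # The broadcast ID changes every interval (say, every ten minutes),
    # so a passive observer cannot link successive broadcasts to one person.
    return hmac.new(daily_key, interval.to_bytes(4, "big"),
                    hashlib.sha256).digest()[:16]

def exposed(heard_ids: set, published_keys: list,
            intervals_per_day: int = 144) -> bool:
    # Each phone re-derives the identifiers of confirmed cases from their
    # published daily keys and checks for matches against IDs it heard
    # locally, so no central server ever learns who met whom.
    return any(rolling_id(key, i) in heard_ids
               for key in published_keys
               for i in range(intervals_per_day))
```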

These apps demand scrutiny of the urgent gaps in the medical and logistical infrastructure that they ultimately rely on. For contact-tracing technologies to work, at least two assumptions must hold: first, that there is widespread or universal testing, and second, that the use of smartphone technologies such as Bluetooth or GPS gives a reliable indication of how the disease may spread. Without this assurance, any efforts at contact tracing will be limited by the available information and unable to reflect an accurate picture of the spread of the virus. There is reason to be skeptical about how far these assumptions will hold in the Indian context, given that India has shown some of the lowest testing rates in the world, and that simple location tracing does not provide important contextual information about how the virus can spread.

Most smartphone applications will only trigger an alarm based on evidence of a positive test. If, on the other hand, these apps move to a self-assessment model, where users fill out a questionnaire to determine their risk of infection, there is the possibility of both under-reporting and a large number of false positives, which could create panic and drive many to testing facilities. This could, in turn, potentially overwhelm healthcare systems and create its own risks. Even in countries such as Singapore and South Korea, where contact-tracing applications have been effective in detecting and slowing the spread of the virus, they have only been useful to the extent that they have supplemented, and not supplanted, on-ground contact-tracing efforts.

As India seeks both inspiration and legitimacy from these global counterparts, there are sobering reminders of how an over-reliance on data can lead to disastrous consequences if it is not supported by a robust digital health-information infrastructure. In 2014, as West Africa struggled to combat Ebola, headlines were quick to announce that "big data" could help stop the spread of the disease, championing research that would use mobile-phone data to track and eventually mitigate its spread. But the over-reliance on this data proved costly, and eventually led to error-prone and unhelpful policy recommendations, because the models rested on faulty assumptions about mobile-phone ownership and usage. This is also a powerful reminder of the opportunity costs of the hype that surrounds technological interventions in a moment of crisis. Given India's low testing numbers and infrastructural limitations, contact-tracing apps will likely be limited in their efficacy, and will make for unstable and inaccurate foundations on which to frame health policy.

These assumptions of widespread testing infrastructure and high smartphone penetration are precisely why lessons from such countries cannot be transplanted to justify a reliance on these tools in India. According to the India Internet 2019 report, by the telecom body Internet and Mobile Association of India and the global market-research firm Nielsen, only around 36 percent of the population has access to the internet. The report also revealed a wide disparity between urban areas, where internet access stands at 51 percent, and rural areas, where it is only 27 percent. It also reflected disparities between states, with Delhi's internet penetration as high as 69 percent, but Odisha's at just 25 percent.

While there are no official figures on smartphone usage in India, it is likely to be lower than the numbers for internet access. As a result, even if every Indian with access to a smartphone installed the app, as many as two-thirds of the population could still be left out. Indian users are also typically extremely battery-conscious in their use of their phones, especially where access to electricity is limited. With Bluetooth and GPS severely draining device batteries, this is likely to be another major limitation on people's ability and willingness to use the applications as required.

The government has also begun monitoring the data collected as a by-product of apps such as Aarogya Setu to see how it could inform policy decisions. For instance, Aarogya Setu's privacy policy allows the use of personal data generated from the app "in anonymised, aggregated datasets for the purpose of generating reports, heat maps, and other statistical visualisations for the purpose of the management of COVID-19 in the country." However, these technology-based responses to the pandemic obscure the fact that the country still lacks the foundational infrastructure for analysing digital health information, from the digitisation of health records to making health information interoperable and usable across government systems.
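As an illustration of what such "anonymised, aggregated datasets" might involve, the sketch below bins individual location reports into a coarse grid of per-cell counts, the kind of structure a heat map is drawn from. The grid resolution and the shape of the data are assumptions for the purpose of illustration, not a description of Aarogya Setu's actual pipeline.

```python
# A hypothetical sketch of location aggregation for a heat map.
from collections import Counter
from typing import List, Tuple

def heat_map(points: List[Tuple[float, float]],
             cell_deg: float = 0.01) -> Counter:
    """Count reports per latitude/longitude grid cell; 0.01 degrees is
    roughly a kilometre, coarse enough to hide individual trails."""
    grid = Counter()
    for lat, lon in points:
        grid[(round(lat / cell_deg), round(lon / cell_deg))] += 1
    return grid
```

Even aggregates of this kind, of course, are only as useful as the underlying records, which is where the missing health-data infrastructure bites.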

While access to digitised information about the public-health system, such as the number of hospital beds, disease incidence and death tolls, would have been invaluable for government agencies making decisions about how to ration hospital resources and testing facilities, most of this data is not available to policy and planning authorities. Ambitious plans for improving digital health-infrastructure for the purpose of data analytics, from the National e-Health Authority to the most recent National Digital Health Blueprint, have not materialised. Additionally, the legal and institutional frameworks for these systems, such as the draft Digital Information Security in Healthcare Act, which had been in the works since 2018 and could have gone some way towards ensuring that government initiatives for sharing health data respect principles of data protection, have also fallen through, owing to a lack of political will to implement privacy legislation promptly. In the absence of robust, high-quality health data, the utility of data gathered through apps such as Aarogya Setu will be severely limited.

The concerns that emerge from the central and state governments' uncritical reliance on digital-surveillance technologies are compounded by the lack of institutional transparency and accountability in how these technologies have been developed and deployed, and what interests they may ultimately serve. In stark contrast to the technology-development efforts in other countries, such as the United Kingdom and Singapore, which were largely spearheaded by public-health agencies, in India the ministry of information technology, through its National Informatics Centre, has been at the forefront, alongside influential actors from the industry. There has been a conspicuous absence of public-health experts, or even of the ministry of health. For instance, the Aarogya Setu app is widely reported to have been developed through the NIC's collaboration with NITI Aayog and a team of volunteers that included members of iSPIRT, a domestic industry association that has pushed for the use of Aadhaar and other government technologies for private commercial interests. But reports of the app's development do not mention any public-health expert or health-ministry official as having been involved in the process.

In the wake of the pandemic, and incidents of individuals failing to isolate themselves, many police agencies and private companies have also launched quarantine-monitoring applications. Vijna Labs and Pixxon AI are two Bangalore-based companies that boast of law-enforcement tools that aid in quarantine enforcement and contact tracing. Similarly, Innefu Labs is one of the vendors involved in building the Delhi Police's Automated Facial Recognition System, which was used to monitor crowds during the recent violence in northeast Delhi. Since the outbreak of COVID-19, Innefu has announced a new contact-tracing app, and claimed that three police agencies have expressed eagerness to use it.

Narratives of technological efficiency also mask the discriminatory and punitive uses to which these tools can be put. The limitations of India's digital health infrastructure, coupled with the legal and institutional context in which this digital surveillance is taking place, greatly increase the risk of discriminatory targeting. For instance, surveillance can be used to disproportionately focus on specific communities and create a false impression of the nature and reasons for the spread of the virus.

In India, the use of these technologies has been rooted within the security and policing apparatus, rather than the public-health system. In part, this arises from the historical context of epidemic responses in India, particularly the reliance on the Epidemic Diseases Act, 1897, a pre-Constitution legislation that grants extraordinary powers to the executive to prevent the spread of disease. The EDA was hastily enacted by the British colonial administration to control the plague that swept Bombay in 1896, with the aim of identifying and classifying individuals in order to discipline and contain the movement of Indians presumed to be "diseased." More than a century later, we still do not have rights-based legislation that relies on public-health authorities, instead of police forces, to respond to a pandemic.

The practices of identification and surveillance that facilitated discriminatory enforcement of restrictions on movement find clear parallels in current practice. This is most evident in the disproportionate focus on surveilling and policing members of the Tablighi Jamaat, an Islamic revivalist organisation whose conference in the national capital led to a cluster outbreak of COVID-19. In the notable absence of widespread testing, the infections associated with the Tablighi Jamaat resulted in additional, motivated and biased surveillance of anyone potentially associated with the organisation. Surveillance based on such discriminatory profiling, instead of on accurate information gained from universal testing, only serves to reinforce these biases, while hiding the true extent of the spread of the virus.

With the same vendors developing both policing and health-surveillance technology for the state, there is also a real possibility of the data collected for responding to COVID-19 being cross-utilised in police-surveillance systems more generally. This also implies that individuals and communities subjected to biased surveillance practices will be disproportionately exposed, not only to forceful quarantining efforts, but also to police violence. It will likely be the most economically and socially vulnerable, such as migrant workers and Muslims, who are at greater risk of being surveilled and accused of violating the lockdown.

The overlap between the public-health system and the security apparatus, combined with the lack of adequate privacy laws, also engenders distrust among the very people who are expected to volunteer information and cooperate with government efforts to contain the pandemic. Even in the context of epidemics such as HIV/AIDS, human-rights and constitutional courts have recognised that such discriminatory and punitive practices violate the right to privacy, and are counterproductive to public-health efforts. These patterns of "policing" the pandemic are not unique to India. In Canada, multiple new laws have resulted in six-figure fines and prison time for those who violate physical-distancing rules. In the United States and the United Kingdom, too, two countries with deep historical legacies of discriminatory and arbitrary policing, there are concerns that police discretion in the enforcement of COVID-19 restrictions will hit communities of colour the hardest.

As India, and the world, move towards easing lockdown measures, discretionary decision-making by police agencies will only increase. Whether it is determining hotspots or containment zones, or deciding who will be allowed to travel across city and state borders, we are likely to see data-driven tools aiding and informing these decisions. On 8 April, the Economic Times quoted an anonymous member of a government panel that analysed the use of drones during the lockdown, who said that on the basis of the data collected from such technology, the "population can be divided into various categories" for the purpose of enforcing differential social-distancing rules. Modi has stated that Aarogya Setu could also be used as an "e-pass" to facilitate mobility once the lockdown measures are relaxed.

There are already warning signs from China, which is using health-code apps to determine mobility across borders and permission to enter workplaces. Reports suggest multiple complaints of the apps randomly flipping a user's status from green, which permits free travel, to red, which restricts movement, leaving users unable to comprehend or question these outcomes. In Dubai, the police are now using an app powered by machine-learning algorithms to determine whether someone's travel should be considered essential, with little information about the logic underlying these decisions.

Problems of non-transparency and discrimination are compounded by the use of such automated systems, which may rely on biased data or opaque algorithmic logic to make decisions, and are often difficult to comprehend or challenge. Laws such as Europe's General Data Protection Regulation, and legislative proposals such as the Algorithmic Accountability Act in the United States, seek to establish standards of due process and transparency whenever automated systems are used by government and law-enforcement agencies to make decisions that impact people's rights and entitlements. The need for these kinds of policy safeguards in India will become clearer and more urgent as we see the proliferation of these tools in the context of the pandemic and beyond.

Evidently, it would be false comfort to believe that these technologies can do no harm. The manner in which digital technologies are used may undermine and detract from the aim of ensuring universal, public-health-oriented responses to this grave crisis. The social impacts of these surveillance measures go beyond concerns of data security and the misuse of the information gathered from these apps. They are being introduced and tested within a highly fragmented socio-economic and political landscape, with the potential to exacerbate discrimination and state violence against marginalised groups, as well as to obscure and deepen failures within our public-health and governance systems. Ignoring this complexity and context not only risks violating individual and community rights, but also hampers the ability of these tools to do the job they promise.