India’s digital response to COVID-19 risks inefficacy, exclusion and discrimination

19 April 2020
Aarogya Setu is one among many uses of digital-surveillance technologies that the central and state governments are pervasively deploying in their efforts to stop the spread of COVID-19. The dangers it poses go beyond concerns of privacy and data security.
Utkarsh for The Caravan

When Prime Minister Narendra Modi addressed the nation on 14 April, he urged every Indian citizen to download the Aarogya Setu mobile application as one of seven steps he identified to fight the novel coronavirus. By the next day, thirteen days after its launch, fifty million Indians had reportedly downloaded Aarogya Setu, making it the fastest app to reach that figure. They downloaded it, presumably, with the understanding that it would help, or at any rate do no harm to, efforts to improve public health. Modi did not explain why or how it would work. In fact, for the estimated two-thirds of the country that still lacks access to smartphones, there may be confusion about how, if at all, they are expected to participate in, or benefit from, this part of the fight against COVID-19.

The government of India, and the prime minister in particular, have actively promoted the use of digital technologies such as Aarogya Setu to aid the national response to the COVID-19 pandemic. Aarogya Setu implements a form of digital contact tracing based on two inputs: an individual's health status, determined by information that users enter into the app, and the individual's "social graph," which records whether a user may have interacted with someone who has tested positive, or could test positive, by tracking their movements through GPS location and Bluetooth proximity. From this, the app determines whether a user is at risk of infection from having been in contact with a carrier of the virus.
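To make the mechanism concrete, the sketch below shows, in highly simplified Python, how a contact-tracing app of this kind might combine self-reported health status with Bluetooth and location records to flag a user as at risk. The field names, signal-strength cutoff and fourteen-day window are illustrative assumptions, not Aarogya Setu's actual code or thresholds.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative assumptions only; not drawn from Aarogya Setu's protocol.
PROXIMITY_WINDOW = timedelta(days=14)   # assumed look-back period
CLOSE_CONTACT_RSSI = -65                # assumed Bluetooth signal-strength cutoff

@dataclass
class Encounter:
    other_user_id: str      # anonymised ID heard over Bluetooth
    timestamp: datetime
    rssi: int               # received signal strength, a rough proxy for distance
    latitude: float         # GPS fix recorded alongside the encounter
    longitude: float        # (not used in this simplified risk check)

@dataclass
class User:
    user_id: str
    self_reported_status: str            # answers to the in-app symptom questionnaire
    encounters: list[Encounter] = field(default_factory=list)

def assess_risk(user: User, confirmed_positive_ids: set[str], now: datetime) -> str:
    """Flag a user as 'at risk' if any recent close-range encounter involved
    someone later confirmed (or self-reported) positive."""
    for enc in user.encounters:
        recent = now - enc.timestamp <= PROXIMITY_WINDOW
        close = enc.rssi >= CLOSE_CONTACT_RSSI
        if recent and close and enc.other_user_id in confirmed_positive_ids:
            return "at risk"
    if user.self_reported_status == "symptomatic":
        return "self-assess further"
    return "low risk"
```

As the sketch suggests, the quality of the output depends entirely on the quality of the inputs: who has been tested, who reports symptoms honestly, and how reliably Bluetooth signal strength stands in for physical proximity.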

This is one among many uses of digital-surveillance technologies that the central and state governments are pervasively deploying in their efforts to stop the spread of COVID-19. There are already more than a dozen government applications that use a combination of features, such as GPS surveillance, facial recognition and thermal imaging, to identify potential carriers of the virus and enforce quarantines and lockdowns. These apps are also used to assist public authorities in making more detailed policy decisions, such as allocating additional healthcare resources to virus hotspots.

Both in India and across the world, these technologies are acknowledged to be experimental and untested. In the midst of a public-health crisis, where expediency is the priority, these interventions do not benefit from the scrutiny of rigorous public consultation before they are introduced. But as the government rushes to expand and scale these tools, it is important to question whether they will even work, and who they might work against. When concerns of data privacy and security have been raised, the developers of these apps have made public assurances that the data is securely transmitted and minimally retained. Google and Apple, too, recently jumped into the fray to offer "privacy-preserving contact tracing" mechanisms. But the question of whether a privacy-preserving version of these tools can exist might distract from more fundamental questions of their efficacy, their potential for exclusion, and their discriminatory use as a punitive and policing mechanism.
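For contrast, the following sketch illustrates the general idea behind such "privacy-preserving" designs: phones broadcast short-lived pseudonymous identifiers derived from a secret daily key, and matching against reported cases happens on the device rather than on a central server. The key sizes, rotation interval and derivation steps here are illustrative assumptions, not Google and Apple's actual specification.

```python
import hmac
import hashlib
import os

def new_daily_key() -> bytes:
    """Each device generates a fresh random key per day; it never leaves the
    phone unless the user later reports a positive diagnosis."""
    return os.urandom(16)

def ephemeral_id(daily_key: bytes, interval: int) -> bytes:
    """Derive a rotating identifier for a given time interval (for example,
    every ten to fifteen minutes), so that observers cannot link broadcasts
    back to one person or device."""
    return hmac.new(daily_key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

def on_device_match(heard_ids: set[bytes], published_keys: list[bytes],
                    intervals_per_day: int = 144) -> bool:
    """When a diagnosed user uploads their daily keys, other devices re-derive
    the corresponding ephemeral IDs locally and check for overlap with the
    identifiers they actually heard over Bluetooth."""
    for key in published_keys:
        for interval in range(intervals_per_day):
            if ephemeral_id(key, interval) in heard_ids:
                return True
    return False
```

Even in this decentralised form, the system can only ever confirm proximity between phones; it says nothing about whether testing, diagnosis and honest reporting exist to feed it meaningful data in the first place.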

These apps also demand scrutiny of urgent gaps in the medical and logistical infrastructure they ultimately rely on. For contact-tracing technologies to work, at least two assumptions must hold: first, that there is widespread or universal testing, and second, that smartphone technologies such as Bluetooth or GPS give a reliable indication of how the disease may spread. Without these assurances, any effort at contact tracing will be limited by the available information and unable to reflect an accurate picture of the spread of the virus. There is reason to be skeptical about how far these assumptions hold in the Indian context, given that India has had some of the lowest testing rates in the world, and that simple location tracking does not capture important contextual information about how the virus spreads.

Divij Joshi is a lawyer and researcher, working on technology policy in India as a Mozilla Tech Policy Fellow.

Amba Kak is the director of Global Programs at AI Now Institute, New York University and a fellow at the Engelberg Center, NYU School of Law.
