The dangers of facial-recognition technology in Indian policing

CCTV cameras and facial recognition systems are surveillance tools that carry on the legacy of analogue technologies in stereotyping and targeting specific communities.

{Photo: Chander Mohan, a deputy commissioner of police, from the Gurugram Metropolitan Development Authority, 10 April 2020. Yogendra Kumar / Hindustan Times}
26 May 2023

“When we don’t trust the police, how can we trust their cameras? Even the British would have behaved better than the Delhi Police at some point of time.” Sitting on the first floor of an apartment in north-east Delhi, a young Muslim man said this while recounting to me how Hindu mobs perpetrated communal violence over three days in February 2020. “I had to throw children, one- or two-month-old babies, down from the second story, had to make women jump down two floors,” he said. “Somehow, we escaped.”

Delhi Police personnel were also accused of joining Hindu mobs in attacking Muslims. The media reported that the police did not register first-information reports based on complaints by Muslims that incriminated members of the Bharatiya Janata Party for leading the violence. Forty of the 53 dead were Muslim. The police charged Muslim men even in cases where the victims were from their own community.

On 12 March 2020, the union home minister, Amit Shah, told the Rajya Sabha that the Delhi Police had used facial-recognition technology, or FRT, to identify nearly two thousand individuals as instigators of the violence. Over the next year, FRT systems led to 137 arrests. Even as there was no legal framework regulating the use of the tool in the country, the infrastructure was already in place. A quarter of a million state-sponsored CCTV cameras had been installed in Delhi by 2019, and another three hundred thousand were slated to be added. Governments had begun automating the recognition and identification of individuals from CCTV footage using FRT. When the Internet Freedom Foundation, a digital-rights advocacy group, filed a right-to-information application asking about the legality of the Delhi Police’s use of the technology, the force cited a 2018 high-court judgment that had directed it to use the tool to track missing children. The IFF called this a worrying instance of “function creep.”

An August 2021 working paper by the think tank Vidhi Centre for Legal Policy states that, “given the fact that Muslims are represented more than the city average in the over-policed areas, and recognising historical systemic biases in policing Muslim communities in India in general and in Delhi in particular, we can reasonably state that any technological intervention that intensifies policing in Delhi will also aggravate this bias.” The use of FRT by the Delhi Police, it adds, “will almost inevitably disproportionately affect Muslims.” These findings are cause for immense concern, especially given that 126 FRT systems are in use across the country.


Nikhil Dharmaraj is a recent graduate of Harvard University who is interested in situating contemporary AI systems within transnational structures of violence and disparity. For his senior thesis project on digital surveillance in India, which aimed to trace the ethical complicities of AI, Dharmaraj connected with and conducted research alongside the NGOs Karwan-e-Mohabbat, ASEEM India and the Internet Freedom Foundation.