“When we don’t trust the police, how can we trust their cameras? Even the British would have behaved better than the Delhi Police at some point of time.” Sitting on the first floor of an apartment in north-east Delhi, a young Muslim man said this while recounting to me how Hindu mobs perpetrated communal violence over three days in February 2020. “I had to throw children, one- or two-month-old babies, down from the second story, had to make women jump down two floors,” he said. “Somehow, we escaped.”
Along with the Hindu mobs, Delhi Police personnel were also accused of taking part in attacks on Muslims. The media reported that the police refused to register first-information reports based on complaints from Muslims that incriminated members of the Bharatiya Janata Party for leading the violence. Forty of the 53 dead were Muslim. The police charged Muslim men even in cases where the victims were from the same community.
On 12 March 2020, the union home minister, Amit Shah, told the Rajya Sabha that the Delhi Police had used facial-recognition technology, or FRT, to identify nearly two thousand individuals as instigators of the violence. Over the next year, FRT systems led to 137 arrests. Even though there was no legal framework regulating the use of the tool in the country, the infrastructure was already in place. A quarter of a million state-sponsored CCTV cameras had been installed in Delhi by 2019, and another three hundred thousand were slated to be added. Governments had begun automating the recognition and identification of individuals from CCTV footage via FRT. When the Internet Freedom Foundation, a digital-rights advocacy group, inquired, in a right-to-information application, about the legality of the Delhi Police’s use of the technology, the force cited a 2018 high-court judgment that had directed it to use the tool for tracking missing children. The IFF called this a worrying “function creep.”