India has around 140 police personnel per 100,000 citizens, one of the lowest ratios in the world, making its police force among the most understaffed. Beyond personnel, the Indian police also suffer from severe shortages of weaponry, vehicles, and other infrastructure. While these gaps might make a strong case for predictive policing in India, worrying instances of institutional and social bias, along with political influence, pose a serious threat.
Indian police have been criticised for being both casteist and communal. In a 2019 survey of 11,834 police personnel across 21 states, around 50 per cent of respondents stated that Muslims and migrant workers were more likely to commit crimes, and 44 per cent supported punishment by the police over legal trials. There is documented evidence of the use of facial recognition technology and predictive policing during the recent protests against the CAA and NRC in Delhi. In 2018, a video of an IPS officer in Maharashtra went viral, showing her talking about filing multiple false cases against Dalits. There are numerous instances of police brutality against specific sections of Indian society, such as the Pardhi community in Madhya Pradesh.
First information reports (FIRs) on habitual offenders (HOs), including fabricated and biased ones, were once stored manually across states; they have now been digitised and geotagged in the Crime and Criminal Tracking Network & Systems (CCTNS), a centralised database of FIRs and HO records. Many states are now building applications on top of the CCTNS for their own predictive policing programmes.
Early predictive policing solutions in India, such as the Crime Mapping Analysis and Prediction System in Delhi, are built on historical crime data from FIR records in the CCTNS. However, in their paper, “Data in New Delhi’s Predictive Policing System”, Vidushi Marda and Shivangi Narayan illustrate how these systems are riddled with biases. Data from the CCTNS comprise historical records that are prejudiced against migrant workers and minority groups living in cities. This is further complicated by arbitrary and non-standardised data-collection methods and by geospatial maps that are incomplete for a variety of reasons. Because a large share of distress calls comes from temporary settlement areas and vulnerable communities, these areas are overpoliced and reported crimes are mapped inaccurately.
The most crucial step towards safer, more ethical, and effective predictive policing systems is to build institutional mechanisms that bring transparency to these opaque systems. Unlike the US, the UK, Denmark, and Germany, India has negligible evidence on any of its predictive policing programmes. Hyderabad has the dubious distinction of being one of the most surveilled cities in the world, with over 35 CCTV cameras for every 1,000 citizens. The data from these cameras are intended to feed multiple predictive policing initiatives, raising serious questions about digital privacy. The overarching concern, therefore, is the dizzying pace at which public technology solutions are being adopted without any evidence that they are efficient or ethical. Petitions under the Right to Information Act pertaining to predictive policing solutions are also declined under the pretext of national security. The recently introduced Criminal Procedure (Identification) Bill amplifies this concern by giving more teeth to predictive policing systems without any understanding of their efficacy.
Greater transparency in these systems will enable researchers, scientists, lawmakers, and administrators to strengthen future predictive policing solutions.
(Sarthak Satapathy is a graduate student at the Fletcher School of Law and Diplomacy at Tufts University. Shuchi Purohit is a graduate student of International Law at The Fletcher School of Law and Diplomacy at Tufts University)