POTENTIAL AND DANGERS: A BRIEF EXPLORATION INTO AI USE IN PREDICTIVE POLICING AND CRIMINAL JUSTICE

As Artificial Intelligence (AI) technology advances, it is difficult to imagine a future where it does not reshape many institutions, including the legal field. Younger generations will likely witness these changes unfold in various ways, for better or worse. One key area of the legal system that may experience a significant impact from AI is criminal law. This discussion explores the role of AI in policing and post-arrest criminal justice decisions. 


As the gateway to the criminal justice system, police forces play a crucial role in society. Officers have traditionally relied on judgment, risk assessment, and intuition, each inevitably shaped by personal bias, when deciding whether to issue a warning or enter an individual into the system. Predictive algorithms powered by AI could fundamentally alter these decisions, dramatically shifting how and when people enter the legal system.


Predictive policing seeks to prevent crime by identifying likely perpetrators based on factors such as past arrests and criminal records (Lau, 2020). This is one area where AI adoption is increasing, with proponents arguing that AI can predict crime more precisely, efficiently, and objectively than humans (Lau, 2020). AI-driven predictive policing has the potential to enhance efficiency, reduce costs, and identify high-risk areas without the need for extensive human labor (Lau, 2020). Following arrests, AI could assist in assessing evidence, solving cases, and ideally offering objective insights. 


However, AI systems are only as good as the data they are trained on, and that data is deeply influenced by systemic biases. Human biases are prevalent in all fields, particularly in the legal sphere, making it unlikely that AI software will be free from them. Historical arrest data used to train AI often reflects entrenched systemic biases. For example, predictive policing algorithms identify “high-risk” neighborhoods based on past arrests, many of which occur in over-policed communities—often marginalized and racialized groups. As a result, AI may reinforce the false perception that crime is inherently more likely in these areas, perpetuating cycles of racial bias and over-policing (Christian, 2024). Additionally, because predictive policing relies on data where marginalized groups are disproportionately represented in arrest statistics, AI may disproportionately label individuals from these groups as high-risk, exacerbating existing disparities in the criminal justice system (Khursheed, 2023). This over-surveillance can lead to increased racial profiling, wrongful arrests, police violence, and further distrust in law enforcement. 
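

This feedback loop can be made concrete with a toy simulation. The sketch below is purely illustrative: the neighborhoods, rates, and patrol rules are invented assumptions, not a model of any deployed system. Both areas have an identical true crime rate; the only difference is a small initial disparity in recorded arrests.

```python
# Toy simulation of the predictive-policing feedback loop described above.
# Every number here is hypothetical and chosen purely for illustration.
import random

random.seed(0)

TRUE_CRIME_RATE = 0.05          # identical in both neighborhoods
POPULATION = 10_000             # residents per neighborhood
arrests = {"A": 60, "B": 40}    # area A starts out slightly over-policed

for year in range(10):
    # "Predictive" step: the area with more recorded arrests gets most patrols.
    hot = max(arrests, key=arrests.get)
    patrol_share = {area: 0.8 if area == hot else 0.2 for area in arrests}

    # Recorded arrests depend on patrol presence, not just on crime:
    # officers can only record what they are present to observe.
    for area in arrests:
        p_recorded = TRUE_CRIME_RATE * patrol_share[area]
        arrests[area] += sum(random.random() < p_recorded
                             for _ in range(POPULATION))

print(arrests)  # area A ends up with roughly 4x area B's recorded arrests
```

The point is the shape of the loop rather than the specific numbers: biased arrest data drives biased patrol deployment, which generates more biased arrest data, and the system's own output appears to confirm the assumption it started with.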


Facial recognition technology is another AI tool that raises significant ethical concerns. Bongiorno (2024) defines it as “a form of biometric technology that uses AI to identify people by comparing images or video of their faces—often captured by security cameras—with existing images in databases.” Concerns about its accuracy, reliability, and bias persist. According to University of Calgary law professor Dr. Gideon Christian, some facial recognition systems achieve over 99% accuracy in identifying white men but only 35% accuracy when assessing people of color, particularly Black women (Hassanin, 2023). The implications for policing are profound: misidentifications have already led to wrongful arrests, particularly of Black individuals (Amnesty International, 2023).
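

The scale of that gap becomes clearer with back-of-the-envelope arithmetic. The sketch below simply projects the two cited accuracy figures onto a hypothetical volume of 1,000 searches, treating “accuracy” naively as a flat probability of a correct match; the error behavior of any real system is more complicated.

```python
# Projecting the accuracy figures cited above (Hassanin, 2023) onto a
# hypothetical 1,000 searches. The volume and the flat-probability model
# are assumptions for illustration only.
SEARCHES = 1_000

for group, accuracy in [("white men", 0.99), ("Black women", 0.35)]:
    errors = round(SEARCHES * (1 - accuracy))
    print(f"{group}: ~{errors} misidentifications per {SEARCHES:,} searches")

# white men: ~10 misidentifications per 1,000 searches
# Black women: ~650 misidentifications per 1,000 searches
```

At real-world search volumes, an error rate of this size does not merely inconvenience individuals; it systematically concentrates false matches, and therefore wrongful-arrest risk, on one group.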


Faulty facial recognition technology contributes to wrongful arrests and could lead to wrongful convictions, exacerbating existing issues within the criminal justice system. For example, African Americans are seven times more likely to be wrongfully convicted of murder than white Americans, despite making up only 13.6% of the U.S. population (The Innocence Project, 2025). If AI policing tools disproportionately misidentify racialized individuals, they may further entrench discriminatory practices, fueling cycles of over-policing and legal injustice. 


The racial biases in AI extend beyond policing and into bail decisions. Bail determinations consider flight risk and recidivism, but AI-based risk assessment tools have historically over-predicted the likelihood of recidivism for Black individuals (Christian, 2024). If integrated into judicial decision-making, these biases could lead to more wrongful denials of bail for marginalized groups. 


Basic human dignity is a pillar of human rights. Allowing AI-driven predictions to shape a person’s criminal record, reputation, or legal fate without proper ethical oversight poses a direct threat to that dignity. While AI has the potential to improve policing efficiency, it is ultimately shaped by the data and guidelines humans provide. To mitigate its risks, responsible and transparent training of AI algorithms is essential. Limiting AI use to ethically justifiable areas, ensuring human oversight, and establishing robust appeal mechanisms for AI-driven decisions can help minimize harm. Including racialized voices in AI policy discussions is critical, as is raising public awareness of the risks AI poses in criminal justice. Education on AI’s potential and dangers will be key to shaping just policies in an era where AI is increasingly embedded in public life. 


WORKS CITED  

 

Bongiorno, J. “Facial recognition technology gains popularity with police, intensifying calls for regulation.” CBC News, 30 June 2024, https://www.cbc.ca/news/politics/facial-recognition-ai-police-canada-1.7251065.

 

Christian, G. “Racial bias in AI should be the immediate concern.” Policy Options, 10 December 2024, https://policyoptions.irpp.org/magazines/december-2024/ai-racial-bias/.

 

Hassanin, N. “Law professor explores racial bias implications in facial recognition technology.” University of Calgary, 23 August 2023, https://ucalgary.ca/news/law-professor-explores-racial-bias-implications-facial-recognition-technology.

 

Khursheed, R. “Use of AI in Crime Prediction Algorithms: Ethical and Legal Implications.” University of Waterloo, 2023, https://uwaterloo.ca/defence-security-foresight-group/sites/default/files/uploads/documents/khursheed_use-of-ai-crime-prediction-algorithms.pdf.

 

The Innocence Project. “Race and Wrongful Conviction.” Accessed 21 March 2025, https://innocenceproject.org/race-and-wrongful-conviction/.
