An AI and machine learning model applied to the U.S. legal system. What could possibly go wrong? By highlighting the groups disadvantaged within that system, this project raises questions about the ethical implications of using AI models in such sensitive and crucial parts of our society.
Social Challenge
AI's pervasive impact on daily life, from education to finance, raises questions about its true capabilities and trustworthiness. At its core is machine learning (ML), which enables computers to learn patterns from data.
However, commercial AI applications have repeatedly revealed bias: researchers Joy Buolamwini and Timnit Gebru exposed gender and skin-type bias in facial-analysis systems, and Amazon Prime's same-day delivery algorithm bypassed predominantly African-American neighborhoods, perpetuating existing inequalities.
Fair AI is therefore vital: systems should not discriminate on the basis of traits such as race or gender. Yet bias often seeps in during data collection and model training.
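To make "prejudice based on traits" concrete, here is a minimal sketch of one common group-fairness check, demographic parity, run on purely synthetic predictions. The group labels, rates, and variable names are illustrative assumptions, not real data:

```python
import numpy as np

# Synthetic example: binary predictions for two hypothetical demographic groups.
rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1_000)              # protected attribute
positive_prob = np.where(group == "A", 0.40, 0.25)      # deliberately skewed
pred = rng.binomial(1, positive_prob)                   # the model's decisions

# Demographic parity compares the positive-prediction rate across groups;
# a large gap suggests the model treats the groups differently.
rate_a = pred[group == "A"].mean()
rate_b = pred[group == "B"].mean()
print(f"positive rate A = {rate_a:.2f}, B = {rate_b:.2f}, gap = {rate_a - rate_b:.2f}")
```

Checks like this are only a starting point: a model can satisfy one fairness metric while violating another, which is exactly the tension the COMPAS debate exposes.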
Efforts to address these issues exist, including ethical AI guidelines from organizations such as Google. Yet public awareness remains essential. My design project aims to educate non-experts, fostering tech-savviness and mindfulness in their interactions with AI.
Unmasking COMPAS: Battling Bias in Justice
In the realm of criminal justice, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) reveals a troubling truth: discrimination. COMPAS assesses the risk that an individual will return to crime, a factor judges and parole boards weigh heavily. Yet it is marred by bias.
Defendants entering the criminal justice system take a COMPAS survey. Their answers feed the COMPAS software, which generates scores predicting "Risk of Recidivism" and "Risk of Violent Recidivism." However, a study uncovered a disconcerting pattern: COMPAS produces a higher false positive rate for African-American defendants than for Caucasian defendants [10], perpetuating systemic injustice.
Our project builds on this revelation. ProPublica, a nonprofit investigative newsroom, publishes the underlying dataset of more than 10,000 defendants along with its fairness analysis. As we navigate this intersection of technology and justice, our goal is clear: to spotlight these systemic issues and foster a dialogue that advances fairness and equality.
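As a rough sketch of the kind of check behind that finding, the false positive rate can be computed per racial group directly from ProPublica's published two-year recidivism file. The file name, column names, and filtering steps below follow their compas-analysis repository, but treat them as assumptions and adjust to your local copy:

```python
import pandas as pd

# Local copy of ProPublica's two-year recidivism dataset (path is illustrative).
df = pd.read_csv("compas-scores-two-years.csv")

# Filtering similar to ProPublica's analysis: valid screening window, a real
# charge, and a usable COMPAS score (column names assumed from their repo).
df = df[
    df["days_b_screening_arrest"].between(-30, 30)
    & (df["is_recid"] != -1)
    & (df["c_charge_degree"] != "O")
    & (df["score_text"] != "N/A")
]

# Treat "Medium"/"High" COMPAS scores as a positive, i.e. high-risk, prediction.
df["flagged_high_risk"] = (df["score_text"] != "Low").astype(int)

# False positive rate per group: share of people flagged high risk among those
# who did NOT reoffend within two years.
did_not_reoffend = df[df["two_year_recid"] == 0]
fpr_by_race = did_not_reoffend.groupby("race")["flagged_high_risk"].mean()
print(fpr_by_race.sort_values(ascending=False))
```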
The Story of the Installation
**Designing with Purpose:** Our journey combines determination and technology, weaving connections between tools, data, and users to create an engaging experience. At its core, our design communicates the essential insights extracted from a SHAP (SHapley Additive exPlanations) analysis of ProPublica's dataset. While the data is complex, we carefully curate the most relevant attributes for the installation.
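A minimal sketch of what such a pipeline can look like is shown below; the chosen features, file name, and simple stand-in model are assumptions for illustration, not the installation's exact setup, and a recent version of the shap package is assumed:

```python
import pandas as pd
import shap
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Curate a handful of attributes from ProPublica's dataset (column names
# assumed from their published CSV) and predict two-year recidivism.
df = pd.read_csv("compas-scores-two-years.csv")
features = ["age", "priors_count", "juv_fel_count", "juv_misd_count",
            "sex", "c_charge_degree"]
X = pd.get_dummies(df[features], drop_first=True).astype(float)
y = df["two_year_recid"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# A simple, interpretable classifier; the point here is the explanation layer,
# not the model itself.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# SHAP assigns each feature an additive contribution to every prediction.
explainer = shap.Explainer(model, X_train)
explanations = explainer(X_test)
```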
On a global scale, we offer users a comprehensive perspective, emphasizing the model's accuracy, which hovers at approximately 61%. This metric, though not a direct measure of fairness, gives users a sense of how reliable the model really is when applied to such consequential decisions.
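Continuing the sketch above (and reusing the objects defined there), the global view can be summarized with the model's test accuracy and the mean absolute SHAP value per feature:

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Global view 1: overall accuracy on held-out data (the installation reports
# roughly 61%; the exact number depends on the split and features chosen).
print("accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 3))

# Global view 2: average magnitude of each feature's SHAP contribution,
# i.e. which attributes drive the model's decisions overall.
global_importance = pd.Series(
    np.abs(explanations.values).mean(axis=0), index=X_test.columns
).sort_values(ascending=False)
print(global_importance)
```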
Zooming in locally, users explore individual cases, seeing which specific attributes shaped each predicted label and how strongly each one contributed. This provides a more nuanced understanding of how the model arrives at its determinations.
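For the local view, again continuing the same sketch, a single defendant's prediction can be broken down into per-attribute contributions:

```python
# Local view: one individual's prediction, decomposed feature by feature
# (positive values push the predicted risk up, negative values push it down).
single = explanations[0]                      # first person in the test set
contributions = pd.Series(single.values, index=X_test.columns)

# Sort by magnitude so the most influential attributes come first.
order = contributions.abs().sort_values(ascending=False).index
print(contributions.loc[order])

# The contributions plus the model's average output (base value) reconstruct
# this individual's raw score, which is what makes SHAP values additive.
print("base value:", float(single.base_values))
```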
In this journey of data, design, and interaction, our goal is to give users a firm grasp of the information, fostering informed decisions and sparking meaningful conversations about fairness and technology's enduring impact on our society.