Weapons of Math Destruction
by Cathy O'Neil
Key Concepts
WMD Criteria
A model becomes a Weapon of Math Destruction when it is opaque, operates at scale, and damages the people it scores.
Algorithmic Bias
Algorithms reflect and amplify human biases present in their training data, leading to discriminatory outcomes.
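The book makes this argument in prose; as a minimal illustration, the Python sketch below (entirely synthetic data, hypothetical feature names) trains a standard classifier on approval labels that encode a historical bias against one group, and the model reproduces that bias in its predictions.

```python
# Minimal sketch: a model trained on biased labels learns the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B (synthetic)
skill = rng.normal(0, 1, n)          # true merit, identical across groups
# Historical labels: biased reviewers approved group B applicants less often.
label = (skill + np.where(group == 1, -0.8, 0.0) + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([group, skill]), label)
# The trained model penalizes group membership itself, not just skill.
for g in (0, 1):
    rate = model.predict(np.column_stack([np.full(n, g), skill])).mean()
    print(f"group {g}: predicted approval rate = {rate:.2f}")
```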
Feedback Loops
Models can create self-fulfilling prophecies: a prediction drives decisions that generate data confirming the prediction, trapping individuals in negative cycles.
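A toy illustration of such a loop (a hypothetical predictive-policing score, all numbers invented): patrols are allocated by the score, arrests follow the patrols, and the arrests feed back into the next score, so the initial disparity sustains itself even though the underlying crime rates are identical.

```python
# Sketch of a feedback loop: the score "confirms" itself via the data it shapes.
import numpy as np

rng = np.random.default_rng(2)
true_crime = np.array([0.5, 0.5])    # identical underlying crime rates
score = np.array([0.6, 0.4])         # district A starts with a higher score

for t in range(10):
    patrols = score / score.sum()                      # patrols follow the score
    arrests = rng.binomial(100, true_crime * patrols)  # more patrols, more arrests
    score = 0.5 * score + 0.5 * arrests / arrests.sum()  # arrests feed the score
    print(f"t={t}: score = {np.round(score, 2)}")
```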
Proxy Variables
Seemingly neutral data points, such as ZIP codes, often stand in for protected characteristics like race, letting discrimination back into the model.
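A minimal sketch of the problem, using synthetic data and a hypothetical ZIP-code feature: even a model trained "blind" to group membership recovers the bias through the correlated proxy.

```python
# Sketch: dropping the protected attribute does not help when a proxy encodes it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)
# Residential segregation: ZIP code matches group membership 90% of the time.
zipcode = (group + (rng.random(n) < 0.1)) % 2
skill = rng.normal(0, 1, n)
label = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0  # biased history

# Train "blind" to group, using only the proxy and skill.
blind = LogisticRegression().fit(np.column_stack([zipcode, skill]), label)
preds = blind.predict(np.column_stack([zipcode, skill]))
for g in (0, 1):
    print(f"group {g}: approval rate = {preds[group == g].mean():.2f}")
```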
Opacity Problem
Because complex algorithmic decisions are opaque, the people they affect cannot examine, challenge, or appeal them, and the harm they cause goes unaccounted for.
Action Items
Demand transparency and explainability in all algorithmic systems.
Rigorously audit models for bias and fairness before and after deployment (see the audit sketch after this list).
Implement robust human oversight and appeal mechanisms for automated decisions.
Actively challenge the use of discriminatory proxy variables in model design.
Advocate for strong ethical guidelines and regulatory frameworks for AI.
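One concrete form such an audit can take is the "four-fifths rule" disparate-impact check used in US employment law; the sketch below applies it to invented stand-in arrays of model decisions and group labels.

```python
# Minimal bias-audit sketch: the four-fifths (80%) disparate-impact rule.
import numpy as np

def disparate_impact(preds: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest group's positive-decision rate to the highest's."""
    rates = [preds[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

preds = np.array([1, 1, 0, 1, 0, 0, 1, 0])  # hypothetical model decisions
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # hypothetical group labels
ratio = disparate_impact(preds, group)
print(f"disparate impact ratio = {ratio:.2f}"
      f" ({'fails' if ratio < 0.8 else 'passes'} the four-fifths rule)")
```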
Core Thesis
Unchecked algorithms, fueled by big data, can become Weapons of Math Destruction that perpetuate and amplify societal inequality.
Mindset Shift
Data science is not inherently objective; it is a powerful tool that can amplify existing injustices if not ethically designed and critically scrutinized.