Syllabus
Session 1: What went wrong with this case? - Introduction
Mandatory readings
Optional watch
- Benjamin, R. (2023). Race to the future? Video. Opening keynote at the Public Spaces Conference in Amsterdam.
Session 2: Avoiding tech pitfalls - errors, choices, biases, (justice?)
Mandatory readings
- Corbett-Davies, S., Pierson, E., Feller, A., Goel, S. (2016, 17 October). A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear. The Washington Post.
- Dressel, J. and Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances.
- Leufer, D. (2020). Myth: AI can be objective or unbiased. AI Myths.
- Ofqual (2020). Executive summary, Student-level equalities analyses for GCSE and A level, Summer 2020. pp. 5-8.
- O’Neil, C. (2016). Introduction and Chapter 1: “Bomb parts: What is a model?”. In Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.
- Ziosi, M. and Pruss, D. (2024). Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool. In The 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24), June 3-6, 2024, Rio de Janeiro, Brazil. ACM, New York, NY, USA.
Optional readings
- Angwin, J., Larson, J., Mattu, S., Kirchner, L. (2016, 23 May). Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica.
- Balayn, A., Gürses, S. (2021). Beyond Debiasing: Regulating AI and its inequalities. EDRi.
- Bennett, S. H. (2020, 20 August). On A-Levels, Ofqual and Algorithms. Sophie Bennett’s blog.
- D’Ignazio, C. and Klein, L. (2020). Chapter 2: “Collect, Analyze, Imagine, Teach”. In Data Feminism. MIT Press.
- Narayanan, A. (2018). Tutorial: 21 fairness definitions and their politics. Video from the 2018 conference on Fairness, Accountability and Transparency of Machine Learning.
- Ofqual. (2020, 15 April). Equality Impact Assessment.
- Selbst, A., boyd, d., Friedler, S., Venkatasubramanian, S., Vertesi, J. (2019). Fairness and Abstraction in Sociotechnical Systems. Proceedings from the 2019 conference on Fairness, Accountability and Transparency of Machine Learning, Atlanta.
- Stoyanovich, J. and Arif Khan, F. (2021). All about that Bias. We are AI Comics, Vol 4.
- Suresh, H. and Guttag, J. (2020). A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. Proceedings from the 2020 conference on Fairness, Accountability and Transparency of Machine Learning, Barcelona.
- Wang, A., Kapoor, S., Barocas, S., Narayanan, A. (2023). Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy.
Session 3: Assessing the impacts of an algorithmic system
Examples of impact assessments, audits & oversight
- Canada’s Algorithmic Impact Assessment - the world’s first institutionalized algorithmic impact assessment in the public sector. See for instance the impact assessment for the “advanced analytics triage of overseas temporary resident visa applications”.
- Dutch Data Protection Authority - Department for the Coordination of Algorithmic Oversight (DCA). (2024). AI & Algorithmic Risks Report Netherlands - for an example of how national authorities can practice “algorithmic oversight”.
- Eticas, BID. (2021). Robot Laura Auditoría Algorítmica (algorithmic audit of the “Laura” robot). Another applied example of an audit.
- Supreme Audit Institutions of Finland, Germany, the Netherlands, Norway and the UK. (2023). Auditing machine learning algorithms: A white paper for public auditors - for an example of how Courts of Audit can tackle the issue.
- US Department of State. (2024, 25 July). Risk Management Profile for Artificial Intelligence and Human Rights - with a focus on human rights.
- New Zealand’s Algorithm Charter (for context, see this presentation page).
- Mantelero, A., Esposito, M. S. (2021, July). An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems. Computer Law & Security Review.
Mandatory readings
- Ada Lovelace Institute. (2020, 29 April). Examining the Black Box: Tools for assessing algorithmic systems.
- Ada Lovelace Institute, AI Now Institute and Open Government Partnership. (2021). Executive Summary. Algorithmic Accountability for the Public Sector.
- Costanza-Chock, S., Raji, I. D., Buolamwini, J. (2022). Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem. FAccT ’22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency.
- Groves, L., Metcalf, J., Kennedy, A., Vecchione, B., & Strait, A. (2024). Auditing work: Exploring the New York City algorithmic bias audit regime. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24). Association for Computing Machinery.
- Constantaras, E., Geiger, G., Braun, J.-C., Mehrotra, D., Aung, H. (2023, 6 March). Inside the Suspicion Machine. Wired.
Optional readings
- Braun, J.-C., Constantaras, E., Aung, H., Geiger, G., Mehrotra, D., Howden, D. (2023). Suspicion Machines Methodology: A detailed explainer on what we did and how we did it. Lighthouse Reports.
- Chowdhury, R., and Williams, J. (2021, 30 July). Introducing Twitter’s first algorithmic bias bounty challenge. Twitter Engineering Blog.
- Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.
- Gender Shades. How well do IBM, Microsoft, and Face++ AI services guess the gender of a face?
- Marda, V., and Narayan, S. (2020). Data in New Delhi’s Predictive Policing System. Proceedings of the 2020 ACM Conference on Fairness, Accountability, and Transparency.
- Tapasya, Sambhav, K., Joshi, D. (2024, 24 January). How an algorithm denied food to thousands of poor in India’s Telangana. Al Jazeera.
- Valdivia, A., Hyde-Vaamonde, C., García Marcos, J. (2024). Judging the algorithm: Algorithmic accountability on the risk assessment tool for intimate partner violence in the Basque Country. AI & Society.
- Varon, J. and Peña, P. (2022). Not My A.I.: Towards Critical Feminist Frameworks To Resist Oppressive A.I. Systems. Carr Center for Human Rights Policy, Harvard Kennedy School, Harvard University.
Optional general reads on policy
Session 4: Building Accountability: transparency, appeals, public procurement
Registers
Procurement
Mandatory readings
- Ananny, M. and Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973-989.
- Green, B., Kak, A. (2021, 15 June). The False Comfort of Human Oversight as an Antidote to A.I. Harm. Slate.
- Jansen, F., Cath, C. (2021). Just Do It: on the limits of governance through AI registers. In AI Snake Oil, Pseudoscience and Hype, edited by Frederike Kaltheuner. Meatspace Press.
- Kolkman, D. (2020, 16 August). F**ck the algorithm? What the world can learn from the UK A-level grading algorithm fiasco. LSE Impact Blog.
- Riley, S. (2024). Overriding (in)justice: Pretrial risk assessment administration on the frontlines. Association for Computing Machinery.
Optional readings
- Elish, M. C. (2020, 7 August). Sepsis Watch in Practice: The labor of disruption and repair in healthcare. Data & Society: Points.
- Feathers, T. (2023, 1 April). It takes a small miracle to learn basic facts about government algorithms. The Markup.
- Wright, L., Muenster, R. M., Vecchione, B., Qu, T., Cai, P. (S.), Smith, A., Comm 2450 Student Investigators, Metcalf, J., & Matias, J. N. (2024). Null compliance: NYC Local Law 144 and the challenges of algorithm accountability. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24). Association for Computing Machinery.
Session 5: Designing participatory algorithmic governance
Practical Guidance & Frameworks
Mandatory readings
- Blair Attard-Frost’s work on AI Countergovernance, either as a written piece or as a podcast.
- Costanza-Chock, S. (2020). Design Practices: “Nothing about Us without Us.” In Design Justice (1st ed.).
- Hu, W. and Singh, R. (2024). Enrolling Citizens: A Primer on Archetypes of Democratic Engagement with AI. Data & Society.
- Ofqual. (2020). Analysis of Consultation Responses: Exceptional arrangements for exam grading and assessment in 2020.
- Robinson, D. G. (2022). “Chapter 2: Democracy on the Drawing Board”. Voices in the Code. Russell Sage Foundation.
- Sloane, M., Moss, E., Awomolo, O., Forlano, L. (2020). Participation is not a Design Fix for Machine Learning.
Optional readings
- Carollo, M., Tanen, B. (2023, 21 March). How a Group of Health Executives Transformed the Liver Transplant System. The Markup.
- Cardullo, P., Kitchin, R. (2019). Being a ‘citizen’ in the smart city: up and down the scaffold of smart citizen participation in Dublin, Ireland. GeoJournal.
- Office for Statistics Regulation. (2021, 2 March). Ensuring statistical models command public confidence: Learning lessons from the approach to developing models for awarding grades in the UK in 2020, Executive summary.
- Singh, R. (2023, 18 August). Can We Red Team Our Way to AI Accountability? Tech Policy Press.
- Wylie, B. (2018, 13 August). Searching for the Smart City’s Democratic Future. Centre for International Governance Innovation.
Optional - Examples of campaigns
Session 6: Taking down a system and managing the aftermath - Conclusion
Examples of campaigning & redress
Mandatory readings
- Ehsan, U., Singh, R., Metcalf, J., & Riedl, M. (2022). The algorithmic imprint. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22). Association for Computing Machinery.
- Foxglove. (2020, 17 August). We put a stop to the A Level grading algorithm!
- Leufer, D. (2020). Myth: AI has agency: headline rephraser tool. AI Myths.
- Ofqual. (2021). Decisions on how GCSE, AS and A level grades will be determined in summer 2021.
- Poole, S. (2020, 3 September). Steven Poole’s word of the day: ‘Mutant algorithm’: boring B-movie or another excuse from Boris Johnson? The Guardian.
- Redden, J. (2022, 21 September). Government’s use of automated decision-making systems reflects systemic issues of injustice and inequality. The Conversation.
Optional readings