Bias, Fairness, Explainability in AI
Accenture: Building digital trust: The role of data ethics in the digital age, (2016)
Accenture: Informed Consent and Data in Motion, (2016)
​
Adadi, Amina and Mohammed Berrada. “Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI).” IEEE Access 6 (2018): 52138-52160.
​
Ada Lovelace Institute and DataKind UK: Examining the Black Box: Tools for assessing algorithmic systems, 2020
​
AI Governance in 2019, A Year in Review: Observations from 50 Global Experts
​
AI Now Institute: AI Now 2017 Report
AI Now Institute: AI Now 2018 Report
AI Now Institute: AI Now 2019 Report
AI Now Institute: Disability, Bias, and AI
​
AI for Good: AI for Good Global Summit Summary, (2017)
​
AI for Peace: AI Explained: Non-technical Guide for Decision-Makers, (2020)
​
Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera: Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI, 2019
​
AlgorithmWatch, Automating Society Report 2020
​​
Anderson, C.W. (2012). “Towards a Sociology of Computational and Algorithmic Journalism.” New Media & Society.
​
Angwin, J. (2014). “Hacked,” In: J. Angwin, Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance, pp. 1-20. New York: Henry Holt & Co.
​
Anna Lauren Hoffmann (2019) Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse, Information, Communication & Society, 22:7, 900-915
​
Barocas, Solon. Data Mining and the Discourse on Discrimination
​
Barocas, Solon; Hardt, Moritz and Narayanan, Arvind (2020) Fairness and Machine Learning: Limitations and Opportunities
​
Barocas, S., Guo, A., Kamar, E., Krones, J., Morris, M., Vaughan, J.W., Wadsworth, D., & Wallach, H. (2021). Designing Disaggregated Evaluations of AI Systems: Choices, Considerations, and Tradeoffs
​
Bathaee, Yavar: The Artificial Intelligence Black Box and the Failure of Intent and Causation, Harvard Journal of Law & Technology Volume 31, Number 2, (Spring 2018)
​
Baxter, Kathy (Architect of Ethical AI Practice at Salesforce): maintains a constantly updated list of AI ethics research papers and articles on Salesforce's Einstein.ai blog.
​
B. C. Stahl, D. Wright, Ethics and privacy in AI and big data: Implementing responsible research and innovation, IEEE Security & Privacy 16 (3) (2018)
​​
Birhane, A., Prabhu, V.U., & Kahembwe, E. Multimodal datasets: misogyny, pornography, and malignant stereotypes, (2021)
​
boyd, d. and K. Crawford. “Critical Questions for Big Data.” Information, Communication & Society 15, no. 5: 662-679.
​
Bozdag, E. Bias in algorithmic filtering and personalization. Ethics Inf Technol 15, (2013)
​
Brent Mittelstadt, Chris Russell, and Sandra Wachter. 2019. Explaining Explanations in AI. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19). Association for Computing Machinery, New York, NY, USA
​
Bruno Lepri, Jacopo Staiano, David Sangokoya, Emmanuel Letouzé, Nuria Oliver. The Tyranny of Data? The Bright and Dark Sides of Data-Driven Decision-Making for Social Good
​
Brown, Shea, Ryan Carrier, Merve Hickok, and Adam L. Smith. 2021. “Bias Mitigation in Data Sets”
​
Burrell, J.: How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, (2016)
​
Capgemini: Why Addressing Ethical Questions in AI will Benefit Organizations
Capgemini: AI & Ethical Conundrum
​​​
Calders, Toon & Zliobaite, Indre. (2013). Why Unbiased Computational Processes Can Lead to Discriminative Decision Procedures. 10.1007/978-3-642-30487-3_3.
​
Caliskan, A., Bryson, J.J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356
​
Calo, Ryan: Artificial Intelligence Policy: A Primer and Roadmap, (2017)
​​
Carusi, Annamaria. (2008). Beyond Anonymity: Data as representation in e-research ethics. International Journal of Internet Research Ethics. 1. 37-65.
​
The Center for Critical Race and Digital Studies (CR+DS): Publications & Public Works
​
Centre for Data Ethics and Innovation (UK Government): Landscape Summary: Bias in Algorithmic Decision-Making, 2020
Centre for Data Ethics and Innovation (UK Government): AI Barometer, 2020
Centre for Data Ethics and Innovation (UK Government): Bias identification and mitigation in decision-making algorithms
​
Chakraborti, Tathagata & Sreedharan, Sarath & Kambhampati, Subbarao. The Emerging Landscape of Explainable AI Planning and Decision Making, (2020)
​
Chignard, Simon and Penicaud, Soizic: “With great power comes great responsibility”: Keeping public sector algorithms accountable
​​
The Color of Surveillance: Monitoring of Poor and Working People
​
Council of Europe: Discrimination, artificial intelligence, and algorithmic decision-making, (2018)
​​
Crawford, Kate & Schultz, Jason. Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms, 55 B.C. L. Rev. 93 (2014)
​
Crawford, K., “The Hidden Biases in Big Data”
​
Christoph Molnar. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable.
​
Custers, Bart and La Fors, Karolina and Jozwiak, Magdalena and Esther, Keymolen and Bachlechner, Daniel and Friedewald, Michael and Aguzzi, Stefania, Lists of Ethical, Legal, Societal and Economic Issues of Big Data Technologies (August 31, 2017)
​
Custers, Bart, The Power of Knowledge Ethical, Legal and Technological Aspects of Data Mining and Group Profiling in Epidemiology (October 22, 2004). Custers B.H.M. (2004)
​
Custers, Bart; Calders, Toon; Schermer, Bart and Zarsky, Tal (Eds.) Discrimination and Privacy in the Information Society: Data Mining and Profiling in Large Databases. Springer. ISBN 978-3-642-30486-6
​
Danilevsky, Marina et al. “A Survey of the State of Explainable AI for Natural Language Processing.” (2020)
​
DARPA: Explainable Artificial Intelligence (XAI), (2016)
​​
David Danks and Alex John London. 2017. Algorithmic bias in autonomous systems. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI’17). AAAI Press
​
de Fine Licht, K., de Fine Licht, J. Artificial intelligence, transparency, and public decision-making. AI & Soc (2020).
​
Deloitte: Trustworthy AI, Bridging the ethics gap surrounding AI
​
D. Leslie, Understanding artificial intelligence ethics and safety (2019)
D. Leslie, Understanding bias in facial recognition technologies: an explainer. The Alan Turing Institute (2021)
​
Dwork, Cynthia; Hardt, Moritz; Pitassi, Toniann; Reingold, Omer; Zemel, Richard: Fairness Through Awareness, (2011)
​​
Edwards, Lilian and Veale, Michael, Enslaving the Algorithm: From a ‘Right to an Explanation’ to a ‘Right to Better Decisions’? (2018). IEEE Security & Privacy (2018) 16(3)
​
Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. In Conference on Fairness, Accountability, and Transparency (FAccT ’21), ACM
​
Etlinger, Susan (Altimeter): The Foundation of Responsible Artificial Intelligence
​
European Commission: Policy and Investment Recommendations for Trustworthy AI, (2019)
European Commission: JRC Report on Robustness & Explainability are the critical pillars of Trustworthy AI, (2020)
European Group on Ethics in Science & New Technologies: Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems, (2018)
​​
European Parliament: The ethics of artificial intelligence: Issues and initiatives, (2020)
European Parliament: AI in Healthcare: Applications, risks, and ethical and societal impact (2022)
​
European Union: How Normal Am I?
​
Friedman, Batya and Nissenbaum, Helen. Bias in Computer Systems
Future of Privacy Forum: Unfairness by Algorithm: Distilling the Harms of Automated Decision-Making, (2017)
​
F. Rossi, AI Ethics for Enterprise AI (2019)
​
Grgic-Hlaca, Nina et al. “The Case for Process Fairness in Learning: Feature Selection for Fair Decision Making.” (2016).
​
Hagerty, Alexa; Rubinov, Igor: Global AI Ethics: A Review of the Social Impacts and Ethical Implications of Artificial Intelligence, (2019)
​
Hannak, A., Sapiezynski, P., Kakhki, A. M., Krishnamurthy, B., Lazer, D., Mislove, A., Wilson, C. (2013). Measuring Personalization of Web Search. (WWW ’13).
​
Harini Suresh, John V. Guttag. A Framework for Understanding Unintended Consequences of Machine Learning
​
Hasselbalch, Gry. "Making sense of data ethics. The powers behind the data ethics debate in European policymaking". Internet Policy Review 8.2 (2019)
​
Hayes, P., van de Poel, I. & Steen, M. Algorithms and values in justice and security. AI & Soc (2020)
​​
Hecht, B., Wilcox, L., Bigham, J.P., Schöning, J., Hoque, E., Ernst, J., Bisk, Y., De Russis, L., Yarosh, L., Anjum, B., Contractor, D. and Wu, C. 2018. It’s Time to Do Something: Mitigating the Negative Impacts of Computing Through a Change to the Peer Review Process. ACM Future of Computing Blog
​
Hildebrandt, Mireille. Defining Profiling: A New Type of Knowledge? 10.1007/978-1-4020-6914-7_2, (2008).
​
House of Lords Select Committee on Artificial Intelligence: AI in the UK: ready, willing and able?, (2017)
​
Information Risk Research Initiative (IRRI): Atlas of Information Risk Maps
​
Introna, L. and H. Nissenbaum. (2000). “Shaping the Web: Why the Politics of Search Engines Matters.” The Information Society 16, no. 3: 1-17.
​
Israelsen, Brett W & Ahmed, Nisar R. "Dave...I can assure you...that it's going to be all right..." -- A definition, case for, and survey of algorithmic assurances in human-autonomy trust relationships, (2017)

Jacob Metcalf, Emanuel Moss, and danah boyd, “Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics,” Data & Society, (2019)
​
J. Fjeld, H. Hilligoss, N. Achten, M. L. Daniel, J. Feldman, S. Kagay, Principled artificial intelligence: A map of ethical and rights-based approaches (2019)
​​
J. Kulshrestha, M. Eslami, J. Messias, M. B. Zafar, S. Ghosh, K. Gummadi, and K. Karahalios. Quantifying Search Bias: Investigating Sources of Bias for Political Searches in Social Media (CSCW 2017)
​
Jonathan Dodge, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Casey Dugan: Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment, 2019
​
J. Zhu, A. Liapis, S. Risi, R. Bidarra, G. M. Youngblood, Explainable AI for designers: A human-centered perspective on mixed-initiative co-creation, 2018 IEEE Conference on Computational Intelligence and Games (CIG) (2018)
​
Hall, Patrick and Gill, Navdeep. An Introduction to Machine Learning Interpretability: An Applied Perspective on Fairness, Accountability, Transparency, and Explainable AI, 2nd Edition, O’Reilly Media, Inc, (2019)
​
Hanna, A., Denton, E.L., Smart, A., & Smith-Loud, J. (2020). Towards a critical race methodology in algorithmic fairness. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency.
​
KenSci, FairML Health Tool
​​
Kleinberg, Jon and Ludwig, Jens and Mullainathan, Sendhil and Sunstein, Cass R.: Discrimination in the Age of Algorithms, (2019)
Kliegr, Tomáš; Bahník, Štěpán; Fürnkranz, Johannes: A review of possible effects of cognitive biases on interpretation of rule-based machine learning models, (2019)
​
KPMG: Controlling AI: The imperative for transparency and explainability, (2019)
​
Kroll, Joshua Alexander: Accountable Algorithms, (2015)
​
Landers, R. N., & Behrend, T. S. Auditing the AI auditors: A framework for evaluating fairness and bias in high stakes AI predictive models. American Psychologist.(2022)
​
Le Chen, Alan Mislove, and Christo Wilson. An Empirical Analysis of Algorithmic Pricing on Amazon Marketplace. (WWW ’16)
​
Lee, Michelle Seng Ah, and Luciano Floridi. “Algorithmic Fairness in Mortgage Lending: from Absolute Conditions to Relational Trade-Offs.” SSRN (2020)
​​
Madden, Mary and Gilman, Michele E. and Levy, Karen and Marwick, Alice E., Privacy, Poverty and Big Data: A Matrix of Vulnerabilities for Poor Americans (March 9, 2017). 95 Washington University Law Review 53 (2017)
​
McKinsey & Company: Controlling Machine-learning Algorithms and Their Biases, (2017)
​
McKinsey Global Institute: Notes from the AI frontier: Tackling bias in AI (and in humans), (2019)
​
Megan Randall, Alena Stern, Yipeng Su: Five Ethical Risks to Consider before Filling Missing Race and Ethnicity Data Workshop Findings on the Ethics of Data Imputation and Related Methods (2021)
​​​
Mitchell, S., Eric Potash, Solon Barocas, Alexander D'Amour and K. Lum. “Prediction-Based Decisions and Fairness: A Catalogue of Choices, Assumptions, and Definitions.” arXiv: Applications (2018)
​
Mittelstadt, Brent Daniel; Allo, Patrick; Taddeo, Mariarosaria; Wachter, Sandra; Floridi, Luciano: The Ethics of Algorithms: Mapping the Debate, (2016)
​
Mohamed, S., Png, MT. & Isaac, W. Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Philos. Technol. 33, (2020)
​​
​
Müller, Vincent C.: Ethics of artificial intelligence and robotics, in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy (2020)
​
NIST: Four Principles of Explainable Artificial Intelligence
NIST: Towards a Standard for Identifying and Managing Bias in Artificial Intelligence
NIST: Building the NIST AI Risk Management Framework: Workshop #2 Recordings
​
Obermeyer, Ziad, Powers, Brian, Vogeli, Christine, Mullainathan, Sendhil: Dissecting racial bias in an algorithm used to manage the health of populations (2019)
​
Olteanu Alexandra, Castillo Carlos, Diaz Fernando, Kıcıman Emre, Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries, Frontiers in Big Data (2019)
​
Osoba, Osonde A., Benjamin Boudreaux, Jessica Saunders, J. Luke Irwin, Pam A. Mueller, and Samantha Cherney, Algorithmic Equity: A Framework for Social Applications. Santa Monica, CA: RAND Corporation, (2019)
​
Osoba, Osonde A. and William Welser IV, An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. Santa Monica, CA: RAND Corporation (2017)
​
Oxford Insights, Racial Bias in Natural Language Processing (2019)
​
Parasuraman, R., & Riley, V. (1997). Humans and Automation: Use, Misuse, Disuse, Abuse. Human Factors, 39(2)
​​
Pasquale, F. (2011). “Restoring Transparency to Automated Authority.” Journal on Telecommunications and High Technology Law 9, no. 235: 235-254.
​
Prabhu, Vinay & Birhane, Abeba. (2020). Large image datasets: A pyrrhic win for computer vision?
​
PWC: Ethical AI: Tensions and trade-offs, (2019)
​
Rahwan, I., Cebrian, M., Obradovich, N. et al: Machine behaviour, Nature 568, (2019)
RAND: The Risks of Bias and Errors in Artificial Intelligence, (2017)
​
R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, D. Pedreschi, A survey of methods for explaining black box models, ACM Computing Surveys 51 (5) (2018)
​​
Rob Kitchin, Thinking critically about and researching algorithms, Information, Communication & Society, 20:1, (2017)
​
Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1, (2019)
​
Ruha Benjamin. Assessing risk, automating racism, Science 25 Oct 2019: Vol. 366, Issue 6464
Ruha Benjamin, Race After Technology discussion guide
​
Russell, Stuart: Provably Beneficial Artificial Intelligence
​​
R. Zemel, Y. Wu, K. Swersky, T. Pitassi, C. Dwork, Learning fair representations, in: International Conference on Machine Learning, 2013
​
Sam Corbett-Davies, Sharad Goel. The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning
​
Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, Aziz Huq. Algorithmic decision making and the cost of fairness
​
Saurwein, Florian et al. “Governance of Algorithms: Options and Limitations.” (2015)
​
Schwartz, Paul: Data Processing and Government Administration: The Failure of the American Legal Response to the Computer. Hastings Law Journal, Vol. 43, (1991)
​
Selbst, Andrew D. and Boyd, Danah and Friedler, Sorelle and Venkatasubramanian, Suresh and Vertesi, Janet, Fairness and Abstraction in Sociotechnical Systems (August 23, 2018). 2019 ACM Conference on Fairness, Accountability, and Transparency (FAT*)
​
Şerife Wong and the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford University. “Fluxus Landscape: An Expansive View of AI Ethics and Governance,” Kumu, (2019)
​
Sfetcu, Nicolae. "Big Data Ethics in Research", (2019)
​
Shubham Sharma, Jette Henderson, and Joydeep Ghosh. 2020. CERTIFAI: A Common Framework to Provide Explanations and Analyse the Fairness and Robustness of Black-box Models. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES '20). Association for Computing Machinery, New York, NY, USA
​​
Shum, Harry: Removing AI Bias, (2020)
​
Singapore Computer Society, AI Ethics & Governance Body of Knowledge (BoK), 2020
​
Sorelle A. Friedler and Carlos Scheidegger and Suresh Venkatasubramanian and Sonam Choudhary and Evan P. Hamilton and Derek Roth. A comparative study of fairness-enhancing interventions in machine learning. 2018
​
Stanford, Bias in the Vision and Language of Artificial Intelligence
​
Shane T. Mueller, R. R. Hoffman, W. Clancey, G. Klein, Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications and Bibliography for Explainable AI, Tech. rep., Defense Advanced Research Projects Agency (DARPA) XAI Program (2019)
​
​​
Sweeney, L. Discrimination in Online Ad Delivery. CACM 56(5): 44-54.
​
Trewin, Shari: AI Fairness for People with Disabilities: Point of View, (2018)
​​
Tufekci, Zeynep. (2014). “Engineering the Public: Big Data, Surveillance and Computational Politics.” First Monday 19, no. 7.
​
UK Government: Interim report: Review into bias in algorithmic decision-making, (2019)
​
Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, José M. F. Moura, Peter Eckersley: Explainable Machine Learning in Deployment, (ACM FAT* 2020)
​
UNESCO: Artificial intelligence and gender equality: key findings of UNESCO’s Global Dialogue, 2020
​​
Upturn: Civil Rights, Big Data, and Our Algorithmic Future
Upturn: Data Ethics, Investing Wisely in Data at Scale
Upturn and Omidyar Network Report: Public Scrutiny of Automated Decisions: Early Lessons and Emerging Methods
​
Verma, Sahil; Rubin, Julia: Fairness Definitions Explained, published in 2018 IEEE/ACM International Workshop on Software Fairness (FairWare)
​​
Wachter, Sandra, Brent Mittelstadt, and Chris Russell. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” Harvard Journal of Law & Technology 31.2 (2018).
​
Wachter, Sandra and Mittelstadt, Brent and Russell, Chris, Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI (March 3, 2020)
​
Watson, David, and Luciano Floridi. “The Explanation Game: A Formal Framework for Interpretable Machine Learning.” (2020)
​
Williams, B., Brooks, C., & Shmargad, Y.: How Algorithms Discriminate Based on Data They Lack: Challenges, Solutions, and Policy Implications, Journal of Information Policy, 8, (2018)
​​
Zarsky, Tal Z. 2012. ‘‘Governmental Data Mining and its Alternatives.’’ Pennsylvania State Law Review 116 (2)
Zarsky, Tal Z. 2013. ‘‘Mining the Networked Self.’’ Jerusalem Review of Legal Studies 6 (1): 120-136.
Zarsky, Tal Z. 2014. ‘‘Understanding Discrimination in the Scored Society.’’ Washington Law Review 89 (4): 1375
​
Zevenbergen, Bendert and Mittelstadt, Brent and Véliz, Carissa and Detweiler, Christian and Cath, Corinne and Savulescu, Julian and Whittaker, Meredith: Philosophy Meets Internet Engineering: Ethics in Networked Systems Research. (GTC Workshop Outcomes Paper) (2015)