
The AI Syllabus

This guide provides access to the concepts and resources that make up the AI Syllabus by Anne Kingsley and Emily Moss.

Machine Learning & Algorithmic Bias

Critical Inquiry: 

How does deep learning reinforce and perpetuate biases?

How do these biases impact culture, learning, and system building?

Sources


Brown, S. (2021, April 21). Machine learning, explained. MIT Sloan School of Management.

  • This explainer article describes what qualifies as machine learning, how it works, who is using it and why, and the ethical concerns surrounding its design and implementation. #Practical

 

AssemblyAI. (2023, July 5). A complete look at large language models [Video]. YouTube.

  • This 11-minute explainer video explores the main concepts involved in building and using LLMs. “ChatGPT belongs to a class of AI systems called Large Language Models, which can perform an outstanding variety of cognitive tasks involving natural language.” #Practical

 

Amini, A. (2021, March 6). MIT 6.S191: AI bias and fairness [Video]. YouTube.

  • This 45-minute video lecture with Ava Soleimany, from the MIT course Intro to Deep Learning, defines algorithmic bias in terms of object classification, shows its connections to income and geography, and explores different manifestations of these biases as well as strategies for mitigating them. A lecture outline with timestamps is linked in the video description. #Practical

 

Guo, L.N., et al. (2021). Bias in, bias out: Underreporting and underrepresentation of diverse skin types in machine learning research for skin cancer detection—a scoping review. Journal of the American Academy of Dermatology, 87(1), 157-159.

  • This short journal article reviews the data sets used to classify malignant or premalignant skin lesions, showing how machine learning can introduce racial bias into medical diagnostic tools. #Practical

 

Broussard, M. (2023). More than a glitch: Confronting race, gender, and ability bias in tech. The MIT Press. 

  • From the publisher: "In this book, the author argues that the structural inequalities reproduced in algorithmic systems are no glitch. They are part of the system design. This book shows how everyday technologies embody racist, sexist, and ableist ideas; how they produce discriminatory and harmful outcomes, and how this can be challenged and changed." [On order for DVC PH Library.] #Practical #Philosophical

 

Doyle-Burke, D., & Smith, J. J. (Hosts). (2023, March 22). More than a glitch, technochauvinism, and algorithmic accountability with Meredith Broussard [Audio podcast episode]. In The Radical AI Podcast.

  • A 60-minute conversation with data journalist Meredith Broussard about her book and the biases prevalent in technology, including technochauvinism, a bias toward technology as the solution to every problem (i.e., solving tech problems with more tech). The episode show notes include several related sources. #Practical #Philosophical

 

Noble, S.U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. 

  • This book explores data discrimination and racial bias in the data sets behind search engines such as Google and other digital media platforms; demonstrates how misrepresentation can lead to oppression and marginalization by amplifying some voices while silencing others; and critiques the outcomes of monopolistic search engines, particularly their impact on women of color. [Available in the DVC PH Library as a hard copy for checkout and as an unlimited-use e-book.] #Practical #Philosophical

 

PdF (Personal Democracy Forum). (2016, June 15). Challenging the algorithms of oppression [Video]. YouTube.

  • This 12-minute video lecture by Safiya Noble explains how major search engines create racial bias and reinforce oppressive narratives. #Practical

 

Gebru, T., et al. (2022, October 6). A human rights-based approach to responsible AI [Conference paper]. 

  • This academic paper argues for a human rights framework for evaluating the impact of AI, focusing less on what the machines do and more on who is harmed by their design and output. #Practical #Philosophical

 

O’Neil, L. (2023, August 12). These women tried to warn us about AI. Rolling Stone.