Ethics And Technology
This is a course assignment that demonstrates potential biases encoded in algorithms (with connections to natural language processing, machine learning, and artificial intelligence) using the Word Embedding Association Test (WEAT). In lab, students work with programs that demonstrate the usefulness of word embedding algorithms in finding relationships between words. Then, students use an implementation of the algorithm in "Semantics derived automatically from language corpora contain human-like biases" by Caliskan et al. to detect gender and racial bias encoded in word embeddings. The assignment has students design and run an experiment using the WEAT algorithm to detect some other form of bias (e.g., religion, nationality, age). The assignment also presents a real-world scenario that uses natural language systems in a health care setting. Students must apply what they learned in this module to discuss ethical concerns with the proposed system.
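As background for the assignment, WEAT (as described by Caliskan et al.) compares how strongly two sets of target words associate with two sets of attribute words via cosine similarity, reporting an effect size. The sketch below is a minimal, illustrative implementation: the toy 2-dimensional vectors and word choices are invented for demonstration and are not real trained embeddings.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(w, A, B, emb):
    """s(w, A, B): mean similarity of w to attribute set A minus mean to B."""
    mean_a = sum(cosine(emb[w], emb[a]) for a in A) / len(A)
    mean_b = sum(cosine(emb[w], emb[b]) for b in B) / len(B)
    return mean_a - mean_b

def weat_effect_size(X, Y, A, B, emb):
    """Effect size: difference in mean associations of target sets X and Y,
    normalized by the standard deviation over all target words."""
    sx = [association(x, A, B, emb) for x in X]
    sy = [association(y, A, B, emb) for y in Y]
    all_s = sx + sy
    mean = sum(all_s) / len(all_s)
    std = sqrt(sum((s - mean) ** 2 for s in all_s) / (len(all_s) - 1))
    return (sum(sx) / len(sx) - sum(sy) / len(sy)) / std

# Toy embeddings (hypothetical, hand-crafted to show a gender-career association).
emb = {
    "career": (1.0, 0.0), "office": (0.9, 0.1),
    "home":   (0.0, 1.0), "family": (0.1, 0.9),
    "he":     (0.95, 0.05), "she":  (0.05, 0.95),
}

effect = weat_effect_size(["he"], ["she"],
                          ["career", "office"], ["home", "family"], emb)
print(effect)  # positive: "he" associates with career terms, "she" with home terms
```

A positive effect size indicates that the first target set (here, "he") sits closer to the first attribute set (career words) than the second target set does; real experiments use many target and attribute words plus a permutation test for significance.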
Instructors are advised to use the slides to present the material and goals of the assignment. The code and instructions for the software are publicly available on GitHub.
Swarthmore College Provost Office
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License.
Ameet Soni and Krista Karbowski Thomason.
"Lab Practicum For Bias In Algorithms".
Ethics And Technology.