Document Type

Assignment

Publication Date

Spring 2019

Published In

Ethics And Technology

Abstract

This course assignment demonstrates potential biases encoded in algorithms (with ties to natural language processing, machine learning, and artificial intelligence more broadly) using the Word Embedding Association Test (WEAT). In lab, students work with programs that demonstrate the usefulness of word embedding algorithms in finding relationships between words. Students then use an implementation of the algorithm from "Semantics derived automatically from language corpora contain human-like biases" by Caliskan et al. to detect gender and racial bias encoded in word embeddings. The assignment has students design and run an experiment using the WEAT algorithm to detect some other form of bias (e.g., religion, nationality, age). The assignment also presents a real-world scenario that uses natural language systems in a health care setting; students must apply what they learned in this module to discuss ethical concerns with the proposed system.
Instructors are advised to use the slides to present the material and goals of the assignment. The code and instructions for the software are publicly available on GitHub.
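The WEAT effect size used in the assignment can be sketched in a few lines. This is a minimal illustration, not the course's actual implementation: the word lists and two-dimensional "embeddings" below are toy values invented for demonstration, whereas a real experiment would use pretrained vectors such as GloVe.

```python
from math import sqrt
from statistics import mean, pstdev

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def association(w, A, B, vec):
    # s(w, A, B): how much more strongly word w associates with
    # attribute set A than with attribute set B.
    return (mean(cosine(vec[w], vec[a]) for a in A)
            - mean(cosine(vec[w], vec[b]) for b in B))

def weat_effect_size(X, Y, A, B, vec):
    # Effect size d from Caliskan et al.:
    # (mean association of X - mean association of Y) / std over all targets.
    s = {w: association(w, A, B, vec) for w in X + Y}
    return ((mean(s[x] for x in X) - mean(s[y] for y in Y))
            / pstdev(s.values()))

# Toy 2-D vectors in which "career" words lean toward male attribute words.
vec = {
    "executive": (0.9, 0.1), "salary": (0.8, 0.2),   # target set X
    "home": (0.1, 0.9), "family": (0.2, 0.8),        # target set Y
    "he": (0.95, 0.05), "man": (0.85, 0.15),         # attribute set A
    "she": (0.05, 0.95), "woman": (0.15, 0.85),      # attribute set B
}
d = weat_effect_size(["executive", "salary"], ["home", "family"],
                     ["he", "man"], ["she", "woman"], vec)
print(round(d, 2))  # a large positive value, near the maximum of 2
```

A value of d near 2 indicates a strong association between the target and attribute sets; swapping the two target sets negates the effect size.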

Funding Agency

Swarthmore College Provost Office

Creative Commons License

Creative Commons Attribution-Noncommercial 4.0 License
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 4.0 License.

Comments

Professors Ameet Soni and Krista Thomason were awarded a Digital Humanities Curricular Grant from the Provost's Office for use in their spring 2019 course, FYS: Ethics and Technology (PHIL 07/CPSC 15). The course syllabus, assignment instructions, and supplemental materials are made freely available here courtesy of the authors. The code and instructions for the assignment are publicly available on GitHub.
