Document Type

Conference Proceeding

Publication Date

November 13, 2024

Published In

CSCW Companion '24: Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing

Abstract

Scholars, politicians, and journalists have raised alarm over the potential for AI-generated photos, video, and audio—often referred to as deepfakes—to reduce trust in one another and in our institutions. Despite these clarion calls, little empirical work exists on how deepfakes are being used to harm individuals beyond non-consensual intimate imagery (NCII). This research provides a preliminary analysis of 50 wide-ranging incidents of deepfake harm. We find that the most common types of harm are relational, systemic, financial, and emotional. Apart from AI-generated NCII, the most prevalent uses of deepfakes to cause harm were instances of mis- and disinformation, fraud, and misrepresentation of or stereotyping about marginalized groups (e.g., women and racial minorities). We conclude with recommendations for future work and discuss potential challenges in identifying, quantifying, and preventing harm caused by deepfakes both online and off.

Published By

Association for Computing Machinery

Conference

CSCW '24: The 27th ACM Conference on Computer-Supported Cooperative Work and Social Computing

Conference Dates

November 9-13, 2024

Conference Location

San José, Costa Rica

Creative Commons License

Creative Commons Attribution-ShareAlike 4.0 International License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
