UC Law SF Communications and Entertainment Journal
Abstract
Nearly all user-generated content contains personal data. Yet Big Tech companies can exploit the user-generated content you post because, under current legal frameworks, that information is already public, no matter how many sordid details you've shared in a social media post. In the age of the great generative AI arms race among OpenAI, Google, and Meta, technology companies are collecting massive amounts of user-generated content from their platforms to train AI models. This Note argues that current data privacy practices with respect to user-generated content are anti-consumer because they fail to account for today's technological advancements, business practices, and online norms. The scale at which user-generated content can be collected and processed has changed drastically. Exploitation of user-generated content for out-of-scope uses, especially AI training, creates unique economic and individual harms that can be addressed only through data privacy rights. When platforms capitalize on user-generated content without the user's affirmative knowledge or consent, the resulting sense of nakedness, as content freely posted in one context is exploited in another, is a fundamental violation of privacy, however legal the practice may be under current jurisprudence. This forced dilemma (accept the Terms of Service and allow your content to be used for AI training, or cease using the online service altogether) is a scheme that consolidates power for platforms while shifting harm onto individuals.
Recommended Citation
Millicent Wong, Exploitation of User-Generated Content for Generative AI: Making a Case for Data Privacy Rights in Your Social Media Posts, 47 UC Law SF Comm. & Ent. L.J. 141 (2025).
Available at: https://repository.uclawsf.edu/hastings_comm_ent_law_journal/vol47/iss2/4