UC Law Science and Technology Journal

Authors

Claudia Philipp

Abstract

This paper examines the legality of model distillation in the context of large language models (LLMs), where smaller "student" models are trained by mimicking the outputs of larger, proprietary "teacher" models. As artificial intelligence continues to advance rapidly, the legal framework surrounding copyright, patent, and contract law is being tested. Specifically, the paper explores whether current U.S. copyright law offers sufficient protection for frontier LLM developers and whether practices like unauthorized distillation amount to infringement. By analyzing model architecture, training data, behavioral mimicry, and reverse engineering under prevailing legal doctrines—including fair use, terms of use enforcement, and recent litigation—the paper finds that model distillation is unlikely to constitute copyright infringement under existing law. Nevertheless, the broader implications for innovation, proprietary model protection, and the ethics of open sourcing suggest that a reevaluation of intellectual property norms in AI development is both necessary and imminent.
