UC Law Journal
Abstract
Many scholars have observed that an empirical study is only valid to the extent it is reliable. Yet assessments of the reliability of empirical legal studies are rare. The closest most scholars come is comparing the results of their studies with those of others. As a result, in many legal fields, including intellectual property law, scholars lack a grounded understanding of how valid or reliable empirical legal studies really are.
This Article examines the reliability of empirical studies of judicial decisions by closely comparing two recent studies of the patent law doctrine of nonobviousness. We find that these studies provide robust results despite differences in the cases selected for inclusion in each dataset. However, the degree of agreement varied across data fields. In particular, there was more inter-study variability for fields that examined judicial reasoning than for fields that recorded decision outcomes. This finding provides some validation for the use of macro-level studies of judicial decision-making. To the best of our knowledge, this is the first analysis to directly compare the actual coding (as opposed to just the outcomes) of two different studies examining the same patent law doctrine.
Building on the existing data, we also make an original contribution to the literature on nonobviousness by extending the period studied to the present. In contrast with studies examining the period immediately after the Supreme Court’s decision in KSR v. Teleflex, we find (1) a substantial decline in the number of 35 U.S.C. § 103 district court cases appealed to the Federal Circuit, (2) a higher rate at which courts deem patents nonobvious, and (3) a high affirmance rate for district court determinations of both “obvious” and “nonobvious.”
Recommended Citation
Jason Rantanen, Lindsay Kriz & Abigail A. Matthews, Studying Nonobviousness, 73 Hastings L.J. 667 (2022).
Available at: https://repository.uclawsf.edu/hastings_law_journal/vol73/iss3/3