UC Law Journal


While algorithmic decision-making has proven to be a challenge for traditional antidiscrimination law, there is an opportunity to regulate algorithms through the information that they are fed. But blocking information about protected categories will rarely protect these groups effectively because other information will act as proxies. To avoid disparate treatment, the protected category attributes cannot be considered; but to avoid disparate impact, they must be considered. This leads to a paradox in regulating information to prevent algorithmic discrimination. This Article addresses this problem. It suggests that, instead of ineffectively blocking or passively allowing attributes in training data, we should modify them. We should use existing pre-processing techniques to alter the data that is fed to algorithms to prevent disparate impact outcomes. This approach presents a number of doctrinal and policy benefits and can be implemented even where other legal approaches cannot.
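The pre-processing idea the abstract refers to can be illustrated with a minimal sketch. One well-known family of techniques repairs a numeric feature so that its distribution no longer differs across protected groups, removing the feature's value as a proxy while leaving the protected attribute itself out of the model's inputs. The function below is an illustrative simplification of that quantile-matching idea, not the Article's own method; all names and the `amount` parameter are assumptions for this example.

```python
import numpy as np

def repair_feature(values, groups, amount=1.0):
    """Illustrative sketch of a fairness pre-processing repair:
    map each group's values onto the pooled distribution so the
    feature's distribution matches across groups. `amount` in
    [0, 1] interpolates between no repair (0) and full repair (1)."""
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    repaired = values.copy()
    for g in np.unique(groups):
        mask = groups == g
        # Rank each value within its group as a quantile in [0, 1].
        ranks = values[mask].argsort().argsort()
        q = ranks / max(len(ranks) - 1, 1)
        # Map that within-group quantile onto the pooled distribution.
        target = np.quantile(values, q)
        repaired[mask] = (1 - amount) * values[mask] + amount * target
    return repaired
```

After a full repair (`amount=1.0`), same-quantile members of different groups receive the same feature value, so a downstream model cannot infer group membership from this feature; partial repair trades off that guarantee against predictive accuracy.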
