UC Law Science and Technology Journal

Authors

Lorin Brennan

Abstract

One response to concerns about AI systems has been to espouse “ethical AI,” that is, to elucidate ethical norms and then impose a legal requirement that AI systems comport with those norms. But will it work? More precisely, does there exist an effective procedure by which an AI system developer, or a regulator, can determine in advance whether an AI system, once put into operation, will consistently generate output that conforms to a desired ethical norm? This paper argues “no.” The Halting Problem shows that no algorithm can reliably make this determination for all AI systems running all allowed inputs. It is possible to decide compliance for some AI systems running some inputs, just not all of the time. Can the legal system “fill the gap” when computational methods fail? This paper suggests that the prospects, at least so far, are not encouraging. Open questions remain about which legal rules should assess liability for AI system use or misuse and how those rules would operate in concrete cases. Current legal proposals may themselves fail to meet the ethical norm of explicability. This ineffectiveness becomes a pressing concern when set against the prospect of an AI Superintelligence unrestrained by any ethical norms.
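The undecidability claim in the abstract follows the standard reduction argument from computability theory. The following Python sketch, offered purely as an illustration and not taken from the paper, shows why a hypothetical universal compliance checker (here called `always_conforms`; all function names are assumptions for illustration) would also decide the Halting Problem, which is known to be undecidable.

```python
# Illustrative sketch only (not from the paper). Suppose, for contradiction,
# that `always_conforms(program, norm)` were a total, always-terminating
# procedure that returns True iff `program` produces only norm-conforming
# output on every input. All names below are hypothetical.

def always_conforms(program, norm) -> bool:
    """Assumed universal compliance decider; the argument is that it cannot exist."""
    raise NotImplementedError  # placeholder for the assumed decider

def violating_output(norm):
    """Some concrete output that breaks `norm`; assumed to exist for any
    non-trivial norm (there is at least one 'unethical' output)."""
    raise NotImplementedError  # placeholder

def halts(program, x, norm) -> bool:
    """If `always_conforms` existed, it would decide whether program(x) halts."""
    def wrapper(_input):
        program(x)                      # first run the computation in question...
        return violating_output(norm)   # ...and misbehave only if it ever finishes
    # `wrapper` violates the norm exactly when program(x) halts, so:
    return not always_conforms(wrapper, norm)

# The Halting Problem is undecidable, so no total `always_conforms` can exist.
# Compliance can still be verified for particular systems on particular inputs,
# just not by one algorithm that works for all systems and all inputs.
```

The sketch mirrors the abstract's point: the impossibility is universal ("all AI systems running all allowed inputs"), not particular, so case-by-case verification remains possible even though a general-purpose checker does not.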
