Thomas Wischmeyer
Regulierung intelligenter Systeme [The Regulation of Intelligent Systems]
Published in German.
- Article PDF available
- DOI: 10.1628/aoer-2018-0002
Summary
Increasingly, intelligent machines are deployed where humans used to act. Self-learning computers master autonomous driving, predict human behavior, and constantly improve their speech recognition. The deployment of intelligent systems by public and private actors affects numerous existing laws, ranging from constitutional law through anti-discrimination and privacy law to product liability law. The question is therefore not whether artificial intelligence (AI) should be regulated, but how AI is already regulated and whether the existing regulations need to be adapted. Against this backdrop, the paper has two objectives. First, it provides an overview of the complex debate on AI regulation, examines the legal challenges raised by recent technological developments, and develops a comprehensive set of guidelines for future AI regulation. Second, the paper focuses on what is widely considered one of the most pressing problems for AI regulation: the lack of transparency of AI systems. Most intelligent systems are black boxes, because it is almost impossible to understand which factors are responsible for a specific decision. In light of this opacity, many argue for a legal "right to explanation" or even contend that the EU's General Data Protection Regulation already recognizes such a right. However, human decision-making processes are likewise opaque and almost never fully explainable. Drawing on well-established principles of constitutional and administrative law, the paper argues that a subjective right to explanation, as conceived by its proponents, is neither feasible nor desirable. Rather, it is necessary to develop objective organizational structures and procedural mechanisms in which machine decisions can be effectively reviewed by competent authorities and courts.