Criminal Liability of AI: Can an Algorithm Go to Jail?
Criminal law is built on voluntary human action. But what happens when the 'action' is performed by an autonomous artificial intelligence? If a driverless vehicle runs a red light and causes a fatal accident, if an algorithmic trading system manipulates the market, or if an AI medical diagnostic system makes a fatal error, we face a legal vacuum: we cannot imprison a line of code.
The 'Actus Reus' Problem: The Absence of a Direct Human Author
The core of the problem is that a crime requires a human act (actus reus) and a guilty mind (mens rea). An AI, however advanced, lacks consciousness and will in the human sense. It does not 'decide' to commit a crime; it simply executes extraordinarily complex code based on probabilities and training data. Direct imputation to the machine is therefore, for now, a doctrinal impossibility in our legal system.
The Principle of Culpability
There is no punishment without intent or negligence ('nulla poena sine culpa'). Attributing 'intent' to a neural network is a legal fiction that clashes head-on with the basic principles of liberal criminal law.
Cascading Liability: Finding the Human Behind the Code
Since the AI itself cannot be blamed, prosecutors look for human culprits along a 'cascade of liability':
- The Programmer/Developer: They can be charged with manslaughter or injury by gross negligence if it is proven that the harm stemmed from a foreseeable and avoidable programming error. The defense here is to prove that the code complied with the state of the art ('lex artis') and that the failure was an unforeseeable 'edge case'.
- The Manufacturer/Company: As a legal entity, it can be held criminally liable if it failed to implement a compliance program that audited the AI's risks.
- The User/Owner: If the driver of the autonomous car failed to supervise the system as required, or ignored its warnings, negligence can be imputed to them for breaching their duty of supervision.
The Future: Digital Legal Personality and Economic Sanctions?
Some theorists propose creating an 'electronic personhood' for the most advanced AIs, analogous to that of commercial companies. This would not allow them to be imprisoned, but it would allow adapted 'penalties': fines against their assets (if they had any), limitations on their capabilities, or even their definitive 'disconnection' (a 'digital death penalty'). For now, however, this path raises enormous ethical dilemmas and remains in the realm of legal science fiction.