
Brain Disorders, AI Failures, and Criminal Responsibility

Ethan Bai

Brain abnormalities are exactly what they sound like: abnormal.

          In a world where the law applies universally to millions of individuals, it is difficult to decide whether people whose brain function falls outside the societal "norm" should be judged by the same standards as everyone else. It might not seem fair to measure someone with impaired decision-making by the same yardstick used for those without such impairments, yet the process behind a decision does not automatically determine how responsible a person is for the action that follows.

          For example, consider an individual with significant frontal lobe damage. Their inability to control emotions and reason through decisions leads them to kill their neighbor over a minor argument. In court, their lawyer argues that they shouldn't be held responsible because they lacked the capacity for rational thought, and thus the decision wasn't truly theirs. Even if that is true, the act itself is undeniable: the neighbor is dead, and the individual intended to kill. Nothing can bring the victim back, and the blame cannot be shifted onto the neighbor for causing their own death. So where else can we place the responsibility, if not on the defendant?

          In terms of legal responsibility, individuals who commit crimes should not receive leniency or exemptions under the law, even when brain abnormalities are involved. The fact that a person's brain dysfunction leads them to perceive stealing as acceptable does not give them a free pass to commit theft. Ignorance or misunderstanding of the law, whether due to a brain anomaly or not, cannot serve as a valid justification. It is fundamentally unethical and unjust to deny victims the justice they would receive in any other circumstance.

          Extending beyond human actions, AI has recently emerged as a major topic in legal disputes. In cases such as a car accident where people are injured or even killed, who should bear the responsibility? Is it the car owner, the head of the company, the individual or team who wrote the software, or solely the AI that caused the error?

          At first glance, it may seem most straightforward to assign responsibility to the people who programmed the AI that caused the accident. In practice, however, this approach is nearly unworkable: hundreds or even thousands of people might contribute to bringing a single project to fruition. Even if one individual could hypothetically be identified, determining their responsibility would be difficult because they acted as part of a larger organization, which raises questions about the rights of employees within a company. Such a blame game could also deter many from contributing to these vital innovations and industries, since the fear of being held accountable for others' lives is a substantial burden.

          Given all these complicating variables, it is crucial to establish a foundational set of rules for AI before moving forward. Defining rules first provides much-needed flexibility for later development and refinement: rules that prove too stringent will be violated and may need to be relaxed, while rules that prove too lenient will need additions. Without this process of trial and error in a burgeoning field, it is nearly impossible to formulate an effective legal framework to govern AI.
