If a robot or an algorithm causes the death of a person, who is legally responsible?
When do we start jailing robots? This was one of the topics up for discussion at a recent technology conference in Auckland.
Who or what is to blame when something goes wrong?
Is it the person who created the robot? Or do we treat the AI as a legal person?
Let’s look at an imaginary robot lawyer that gives incorrect legal advice. If the technology has malfunctioned, it makes sense that the manufacturer is to blame.
If the hardware is not at fault, it becomes harder.
Is it the builder of the decision-making algorithm?
Or is it the regulators who allowed this technology to be used in this way?
Or could it be the lawyer or law firm that adopted the technology?
Or is the AI itself responsible for giving incorrect advice?
Can AI have a guilty mind? Is jailing robots even legal?
Would an AI have the same legal status as a person? Or would a new category need to be created?
Criminal liability raises another question. The current legal system is built on the principle that a physical act does not make somebody guilty unless the mind is also guilty.
As we develop our robot overlords, or robot slaves, Anandanayagam wants technologists to keep these questions in mind.