Indeed, one could argue, this is where the ethical boundary currently lies, and a key reason together with legal compliance why the use of autonomous weapon systems to date has generally been constrained to specific tasks—anti-materiel targeting of incoming projectiles, vehicles, aircraft, ships or other objects—in highly constrained scenarios and operating environments. What is clear is that—from both ethical and legal perspectives—we must place the role of the human at the centre of international policy discussions. This is in contrast to most other restrictions or prohibitions on weapons, where the focus has been on specific categories of weapons and their observed or foreseeable effects.
The major reason for this—aside from the opaque trajectories of military applications of robotics and AI in weapon systems—is that autonomy in targeting is a feature that could, in theory, be applied to any weapon system. Ultimately, it is human obligations and responsibilities in the use of force—which cannot, by definition, be transferred to machines, algorithms or weapon systems—that will determine where internationally agreed limits on autonomy in weapon systems must be placed.
Ethical considerations will have an important role to play in these policy responses, which, given the rapid pace of military technology development, are becoming increasingly urgent. The international community and the UN Security Council should give more attention to this subject. How can governments justify developing weapons that are not constantly under human control? International criminal law should provide for punishment in cases of contravention.
The requirement for human control

The risk of functionally delegating complex tasks, and the decisions associated with them, to sensors and data-driven algorithms is one of the central issues of our time, with serious implications across sectors and societies. Nowhere are these implications more acute than in relation to decisions to kill, injure and destroy.

Ethical debates

Ethical questions have often appeared something of an afterthought in these discussions.

Human agency

Foremost among these questions is the importance of retaining human agency, and intent, in decisions to kill, injure and destroy.
Who Is Responsible When Robots Kill?
Technology can be a downright scary thing, particularly when it's new, powerful, and comes with lots of question marks. But films like Terminator and shows like Westworld are more than just entertainment; they are a glimpse into the world we might inherit, or at least into how we are conceiving potential futures for ourselves.
Among the many things that must now be considered is what role and function the law will play. Expert opinions differ wildly on the likelihood and imminence of a future where sufficiently advanced robots walk among us, but we must confront the fact that autonomous technology with the capacity to cause harm is already here. So who do we blame when a robot does wrong? The question can easily be dismissed as too abstract to worry about.
While problems like this are certainly novel, history has a lot to teach us. For instance, little thought was given to who owned the sky before the Wright brothers made their first flight at Kitty Hawk. Time and time again, the law is presented with such challenges, and despite initial overreaction, it gets there in the end. Simply put: law evolves. If you get mugged, you expect the mugger to be charged with a crime and punished accordingly.
But the law also has expectations of us; we must comply with it to the fullest extent our consciences allow. As humans, we can generally do that. To varying extents, companies are endowed with legal personhood, too. This grants them certain economic and legal rights, but, more importantly, it also confers responsibilities on them. So, if Company X builds an autonomous machine, that company has a corresponding legal duty.
The problem arises when the machines themselves can make decisions of their own accord. As impressive as intelligent assistants like Alexa, Siri or Cortana are, they fall far short of the threshold for legal personhood.
But what happens when their more advanced descendants begin causing real harm? The criminal law rests on two critical concepts. First, the actus reus: liability arises when harm has been, or is likely to be, caused by a certain act or omission. Second, the mens rea: the accused must be culpable for their actions. Together, these concepts ensure that an accused both completed the act of, say, assaulting someone, and intended to harm them or knew harm was a likely consequence of their action.
So if an advanced autonomous machine commits a crime of its own accord, how should it be treated by the law? Can this be done by referring to and adapting existing legal principles?
Take driverless cars. Cars drive on roads, and regulatory frameworks are in place to ensure that, at least to some extent, there is a human behind the wheel. Once fully autonomous cars arrive, however, laws and regulations will need extensive adjustment to account for the new kinds of interaction between human and machine on the road. As AI technology evolves, it may eventually reach a level of sophistication that allows it to operate beyond direct human control.
As the bypassing of human control becomes more widespread, questions about harm, risk, fault and punishment will become more important. So can robots commit crime? In short: yes. If a robot kills someone, it has satisfied the actus reus of a crime, but technically only half a crime, as it would be far harder to establish mens rea.
When Robots Kill: Artificial Intelligence Under Criminal Law
How do we know the robot intended to do what it did? Yet even a few short hops in AI research could produce autonomous machines capable of unleashing all manner of legal mischief; financial and discriminatory algorithmic mischief already abounds.