by Israel Rafalovich, Journalist, Brussels
Defence strategists and planners are confronted with a rapidly approaching future: a new war-fighting regime in which unmanned and autonomous systems play central roles. For politicians and military strategists with tight budgets, robots are an attractive option, especially as military interventions in foreign countries become less popular. Military experts see robotics as part of asymmetric warfare, in which an opponent whose overall capabilities are regarded as technologically inferior can defeat a superior one. We should begin to prepare now for this not-so-distant future of war in the robotic age.
Man versus machine?
A warfare regime based on unmanned and autonomous systems will change our basic concepts of defence strategy. Democratic states may be constrained in their ability to use lethal autonomous weapon systems, while authoritarian peer adversaries, equipped with such weapons and willing to use them without restraint, may face no similar constraints. The military advantage might then shift to our opponents.
Furthermore, unmanned systems have already profoundly reshaped strategy and procurement priorities and are growing increasingly important in armed forces worldwide. They have been employed extensively in Iraq, Afghanistan and elsewhere. These largely remotely piloted air and ground vehicles will soon be replaced by increasingly autonomous systems across the full range of military operations.
Increasingly autonomous systems will be able to take on roles humans simply cannot, undertaking more dangerous missions or reacting with greater speed, precision and coordination than humans are capable of: autonomous cargo drones could drop off supplies to the front line, self-driving machines could clear land mines, and artificial intelligence could be used to develop precision models. These characteristics will make robots of all shapes and capabilities more and more attractive to force designers, and more central to tactics and operations.
Future lethal autonomous weapon systems will be capable of independently identifying, engaging and destroying a target without human control. We should be careful before we relinquish such moral decision-making to machines. Even if machines had the necessary sophistication, relinquishing the decision to kill to them would cross a fundamental moral line.
Internationally accepted ethical standards
Technologies such as these are no longer confined to the realm of science fiction. They have reached a point where the development of lethal autonomous weapons systems (LAWS) is feasible within years, not decades.
If the international community does not take steps to regulate the critical functions of LAWS, regulation will continue to lag behind the rapid technological advances in robotics, artificial intelligence and information technology. Countries with a vested interest in the development of LAWS, such as the US, the UK, Israel, China and Russia, have shown little interest in establishing binding regulations. Weapon development should meet internationally accepted ethical standards, limiting an individual soldier's ability to misuse a weapon for an immoral act.
Technology does not make war more clinical; it makes it more deadly. Once developed, lethal autonomous weapons will permit armed conflicts to be fought at greater scales than ever before, and at speeds faster than humans can comprehend. Nothing about technology or robots alters the fact that war is a human endeavour, with decidedly deadly consequences for troops and civilians once the forces of war are unleashed.
A war between robots is no longer an illusion in war planning. It will become a reality in the near future, and some robots are already on the battlefield. Pandora’s box is already open, and it will be hard to close, if that is even possible. A significant ethical and legal dilemma emerges as a result. The concept of roboethics (also known as machine ethics) prompts fundamental ethical reflection on practical issues and moral dilemmas.
Roboethics will become increasingly important as we enter an era in which artificial general intelligence (AGI) becomes an integral part of robots. An objective measure of ethics lies in how well an autonomous system performs a task compared with a human performing the same act. A realistic comparison between the human and the machine is therefore necessary.
Can robots be moral?
With steady advances in computing and artificial intelligence, future systems will be capable of acting with increasing autonomy and of replicating the performance of humans in many situations. So, should we consider machines as humans, animals, or inanimate objects?
Mankind has struggled to define moral values throughout history. If we cannot even agree on what makes a human moral, how could we design moral robots? Artificial intelligence researchers and ethicists need to formulate ethical values as a basis for quantifiable parameters, and engineers need to collect enough data on explicit ethical measures to appropriately train artificial intelligence algorithms. A debate has to be held on developing trusted autonomy in future systems and on how far to go in allowing fully autonomous weapons and platforms:
1) Should robots be regarded as moral machines or moral agents, with responsibility delegated to them directly rather than to their human designers or minders?
2) How would we design a robot to know the difference between what is legal and what is right? And how would we even begin to write down those rules ahead of time, without a human to interpret them on the battlefield?
3) Does international humanitarian law imply that humans must make every individual life-or-death decision?
4) Can we program robots with something similar to the rules of war in the Geneva Conventions, prohibiting, for example, the deliberate killing of civilians?
Human-machine interaction is central to the legal and ethical questions of whether fully autonomous weapons are capable of abiding by the principles of international humanitarian law. Artificial intelligence developers are representatives of future humanity.
But autonomous weapon systems create challenges beyond compliance with humanitarian law. Most importantly, their development and use could create military competition and cause strategic instability.
We should be worried about the widening gap between mankind’s knowledge and its morality. The world is past the point of considering whether robots should be used in war; the goal now is to examine how autonomous systems can be used ethically. The most likely outcome is a relationship in which man and machine live and work together collaboratively.