Trusted autonomous systems, public safety and ethical development

On moral grounds, some may contend that military operations should be exempt from the advances in automation and artificial intelligence evident in other areas of society. There have been calls in the media to ban ‘lethal autonomous weapons’, ‘killer robots’ and ‘the weaponisation of AI’. There is, however, also a case for understanding Defence and national security applications of this technology, now and into the future.

The inclusion of the term ‘trust’ in Defence’s autonomy science and technology programs reflects the primacy that Defence attaches to human control over autonomous systems. The emphasis of Defence research is on how to build trust into a system so that it is reliable, operates with integrity, and dependably executes missions and tasks in dynamic environments.

In Defence, humans and machines working together to achieve operational missions and goals means augmenting the workforce, not replacing it. In the future, AI decision-making and human decision-making will need to be highly integrated, with each assessed equally on its merits. This includes the option of allowing the machine, at times, to override the human. The introduction of ‘rules of engagement’ components (essentially legal expert systems) within weapons and weapon systems illustrates this point. The resulting ‘ethical weapons’ would be able to assess targeting requests and decline them when violations of the rules of engagement are deduced. Decisions to override these ‘ethical weapons’ could be logged for subsequent review. Research towards ‘ethical weapons’ is planned for the Defence science and technology program.
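
To make the idea concrete, the sketch below shows one way such a ‘rules of engagement’ component could be structured as a simple expert system: each rule inspects a targeting request, any deduced violation causes the request to be declined with the reasons attached, and a human decision to override the system is written to an append-only log for subsequent review. All names, rules and thresholds here are illustrative assumptions, not an actual Defence design or weapon-system interface.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable, List, Optional

# Hypothetical sketch of a rules-of-engagement (ROE) expert-system check.
# Every rule, field and threshold below is an illustrative assumption.

@dataclass
class TargetingRequest:
    target_id: str
    target_type: str          # e.g. "military", "civilian", "unknown"
    collateral_estimate: int  # estimated non-combatant casualties
    positive_identification: bool

@dataclass
class Decision:
    approved: bool
    reasons: List[str]

# A rule returns a reason string if the request violates it, else None.
Rule = Callable[[TargetingRequest], Optional[str]]

def rule_positive_id(req: TargetingRequest) -> Optional[str]:
    if not req.positive_identification:
        return "no positive identification of the target"
    return None

def rule_target_type(req: TargetingRequest) -> Optional[str]:
    if req.target_type != "military":
        return f"target type '{req.target_type}' is not a lawful military objective"
    return None

def rule_collateral(req: TargetingRequest, limit: int = 0) -> Optional[str]:
    if req.collateral_estimate > limit:
        return f"estimated collateral ({req.collateral_estimate}) exceeds limit ({limit})"
    return None

RULES: List[Rule] = [rule_positive_id, rule_target_type, rule_collateral]

audit_log: List[dict] = []  # append-only record for subsequent review

def evaluate(req: TargetingRequest) -> Decision:
    """Assess a request against every ROE rule; decline if any violation is deduced."""
    reasons = [r for rule in RULES if (r := rule(req)) is not None]
    return Decision(approved=not reasons, reasons=reasons)

def human_override(req: TargetingRequest, decision: Decision,
                   operator: str, justification: str) -> None:
    """Record an operator's decision to override the system, for later review."""
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "target_id": req.target_id,
        "system_decision": decision,
        "operator": operator,
        "justification": justification,
    })

if __name__ == "__main__":
    request = TargetingRequest("T-042", "unknown",
                               collateral_estimate=2, positive_identification=False)
    decision = evaluate(request)
    print(decision)  # declined, with the deduced ROE violations as reasons
    if not decision.approved:
        human_override(request, decision, operator="OP-7",
                       justification="time-sensitive threat")
```

The essential design point is the separation of concerns: the rule set encodes the legal constraints, the evaluation step only deduces and reports violations, and the authority to proceed despite a declined request remains with a human whose decision is logged rather than hidden.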

Defence programs seek to advance technologies and policies for trusted autonomous systems that are safe, secure, compliant with the international laws of armed conflict and rules of engagement, and in concert with the ethical standards of Australian society.