Killer Robots A Few Years Away, Says Top UNSW Researcher

Most likely the dinosaurs didn't sit around debating what would end their dominion of the Earth. Asteroid impact? Supervolcanoes? The next ice age?

One of the marvels of mankind is that we *can* have this debate. And uniquely in the history of the earth, we also have the tools, if not the will, to change the outcome.

Sadly, Australia's most recent contribution to such debates is the $4 million to be spent on one of climate science's least favourite pundits, Bjorn Lomborg. Australia could, however, play a much more constructive role in the debate about existential threats to mankind.

Many respected scientists and technologists have been warning about the threat that Artificial Intelligence (AI) poses. Warnings have come from Stephen Hawking, Elon Musk, Bill Gates and, most recently, Apple co-founder and Australian resident Steve Wozniak (aka "The Woz").

In response, hundreds of researchers recently signed an open letter acknowledging the threat that AI poses to mankind and calling for research on how to make AI systems robust and beneficial (http://futureoflife.org/misc/open_letter).

To understand what form these threats might take, we can look to Hollywood. Think HAL in 2001: A Space Odyssey. Arnold Schwarzenegger as the Terminator CSM-101. And, more recently, Johnny Depp as an AI researcher in the movie Transcendence.

Whilst AI's threat to mankind cannot be ignored, it is not something to lose sleep over tonight. As someone who has worked in the field all my adult life, I know that most of these threats are still some way off. The majority of AI researchers say that it will take 30 or 40 more years before we have truly intelligent computers.

One reaction to any sort of existential threat is simply to impose a complete ban. However, this would throw the baby out with the bathwater. If we can get AI right, it has great promise to tackle many of the deep problems facing us like poverty, disease and inequality.

One imminent threat coming out of AI is that from lethal autonomous weapons. Or, as they are more graphically called, "killer robots". We need to act now on this issue, as killer robots may be only a few years away. And if we can get this one right, we will start the larger debate on a beneficial future for AI on the right foot.

In 2014, more than 20 Nobel Peace Laureates called for a pre-emptive ban on killer robots. In January I organised a debate at the Annual Conference of the Association for the Advancement of Artificial Intelligence to consider the many legal, ethical and technical reasons for such a ban.

And last week, the UN hosted the second multilateral Meeting of Experts on Lethal Autonomous Weapons Systems. The eventual goal is to get a pre-emptive ban on killer robots to go alongside similar bans on other dangerous technologies like blinding lasers.

Australia has sent experts to contribute to this debate. But as Australia punches well above its weight in AI and robotics, we could do more.

The World Cup of robotics is RoboCup, an annual competition held since 1997 where the best robotic teams come together to play soccer and face other challenges like finding survivors in a simulated earthquake.

Australia has performed exceptionally well in past RoboCups. Forty-five countries competed in the 2014 competition, and a team from my own university, UNSW, won the blue-chip soccer competition, beating their German opponents 5-0 in the final. Take that, Fritz!

Sadly, the Australian delegation to the UN has given rather lukewarm support to discussions about a ban on killer robots. Other countries have been much more positive.

France has gone on record that it does not possess, and does not intend to acquire, robotic weapon systems with the capacity to fire independently, and that the full responsibility of military and political leaders is always required in any decision to use armed force. Such strong positions are needed to balance opposition from countries like the UK, which has gone on record that it will not support a ban.

Killer robots will lower the barriers to entry for war. They will also make weapon systems far more efficient at killing humans. For these sorts of reasons, I fully support a ban.

I urge our government to put its weight behind a ban. Australia can take a lead here and do good, building on its position as one of the leading countries for AI and robotics research.

* Professor Toby Walsh is Research Group Leader at NICTA and Professor of Artificial Intelligence at the Department of Computer Science and Engineering at the University of New South Wales. He will appear on Insight, 8.30pm on SBS ONE tonight to explore the ethical, moral and economic impacts of AI.
