Artificial intelligence and singularity could mean demise of human control
Date
November 9, 2014
Sam de Brito
Illustration michaelmucci.com
One of humanity's great conceits is thinking we are evolution's finished product.
It's an easy hubris to indulge in considering anatomically modern humans appeared 200,000 years ago and we've ruled the roost since. I doubt when we puny-skulled, slightly built types turned up with our crude jewellery and cave paintings, Neanderthals were too fussed. And look where that got them.
It makes you wonder whether the complacency we display about the technology that serves us today might be our ultimate undoing; we underestimate the challenger. We giggle at Siri's mistakes, roll our eyes when Pandora suggests a dud song and pause Call of Duty to go pee.
Rarely, however, do we pause to consider the beachhead artificial intelligence (AI) has already won in our lives.
AI has been getting attention lately thanks to Tesla founder Elon Musk and physicist Stephen Hawking, who've both warned of the dangers of this genie escaping its bottle. Director James Cameron joined the fray, saying 'Skynet' – the malevolent AI network in his 1984 film The Terminator – has won.
"Everyone is already wired to their computers," he said.
True AI, however, is still a ways off and the much-hyped "singularity", where computers become so advanced they can simulate life itself, remains the province of Cameron's movies or more recent efforts like Johnny Depp's Transcendence and the upcoming Ex Machina.
What's certain is artificial and machine intelligence will soon inhabit far more than your iPhone and XBox and this has some of our biggest brains excited and anxious.
Facebook and Google are pouring hundreds of millions into AI research, while Apple co-founder Steve Wozniak recently joined the University of Technology Sydney as an adjunct professor in robotics and AI.
A series of recent surveys shows around 90 per cent of experts in the field of AI expect "human-level machine intelligence" (HLMI) to be developed by 2100, this being defined as intelligence "that can carry out most human professions at least as well as a typical human".
Oxford University philosopher Nick Bostrom points out in his new book Superintelligence that it is imperative we "understand the challenge presented by the prospect of superintelligence, and how we might best respond."
"This is quite possibly the most important and most daunting challenge humanity has ever faced. And – whether we succeed or fail – it is probably the last challenge we will ever face.
"If some day we build machine brains that surpass human brains in general intelligence, then this new superintelligence could become very powerful. And, as the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would depend on the actions of the machine superintelligence," writes Bostrom.
It might seem strange worrying about the ethics of AI before it's been truly realised but Bostrom – as do Musk and Hawking – points out: "We will only get one chance [to control it]. Once unfriendly superintelligence exists, it would prevent us from replacing it. Our fate would be sealed."
I doubt Neanderthal man foresaw his demise, so we'd do well to remember evolution never stops, there have to be forks ahead in mankind's family tree, and Homo sapiens could well be a dead end.
According to The New York Times, "the combined level of robotic chatter on the world's wireless networks … is likely soon to exceed that generated by the sum of all human voice conversations taking place on wireless grids".
We'd be wise to stay abreast of that conversation.
Read more: http://www.theage.com.au/comment/artificial-intelligence-and-singularity-could-mean-demise-of-human-control-20141106-11fhvh.html