At the moment, based on our technological prowess at least, it would be fair to say that humans represent the summit of the evolutionary process here on Earth. But the tricky thing about evolution is that, by its nature, it isn’t a stationary process. This rule certainly applies to humans, who, as the centuries have rolled by, have grown taller and developed bigger brains. This trend is predicted to continue, with average heights heading towards seven feet over the next thousand years – no doubt complete with enormous craniums for our super-sized brains. But what if, and it’s a big if, something evolves to overtake us and become the dominant species? And what if it’s not an animal but something we have created – or, to be more precise, our computers? That’s what the idea of the technological singularity is all about.
We can only make wild guesses at how things might work out, and there are plenty of nightmare scenarios floating around. But how real is this threat, and should we be busy unplugging our computers and throwing them in the trash right now? Maybe not just yet… Part of the reason we can be reasonably relaxed, at least for now, is that computer hardware is only part of the singularity picture – the ground-breaking software required to create true AI is still a very long way off. But exactly how far?
The famous Turing test is one measure of our progress towards such an AI system. The test centres on whether a computer can fool someone talking to it from another terminal into believing they are conversing with a real, live person. Recently, headlines were made at Reading University with claims that a software system called Eugene had passed the test for the very first time. Before you get too excited, however, many experts argue that Eugene barely scraped through, if it passed at all, and point out that all its software actually does is ape a person, rather than demonstrate any true measure of intelligence, such as problem solving. So what about a system more akin to the famous HAL 9000 computer from the film 2001, which could, in theory, pass the Turing test with flying colours? The best current guess is that this is around fifty years away, so maybe note down 2064 in your diaries as the beginning of the next major technological revolution.
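Incidentally, the mechanics of the test are simple enough to sketch in a few lines of Python. This is a minimal toy of the two-party version described above, not any real benchmark; `ask_judge`, `human_reply` and `machine_reply` are hypothetical stand-ins you would have to supply yourself:

```python
import random

def turing_test(ask_judge, human_reply, machine_reply, rounds=5):
    """Toy two-party Turing test: the judge chats with a hidden
    partner over a 'terminal' and must then guess what it was.
    All three callables are hypothetical stand-ins for this sketch."""
    label, reply = random.choice([("human", human_reply),
                                  ("machine", machine_reply)])
    for _ in range(rounds):
        question = ask_judge()     # judge poses a question
        print(reply(question))     # hidden partner answers
    guess = input("Were you talking to a human or a machine? ")
    # The machine 'passes' if it was mistaken for a person
    return label == "machine" and guess.strip().lower() == "human"
```

Plug a simple pattern-matching chatbot into `machine_reply` and you can see the critics’ point: a system can win this guessing game through mimicry alone, without doing anything we would recognise as problem solving.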
But what happens when a true AI system does take its first virtual breath? That depends on whether it is given the autonomy to start designing and building even better machines. Because if it is, the speed of development could become exponential from that moment, and it would only be a matter of time before our machines overtook us intellectually. And what would that mean for humans? Technically, we’d become a redundant species at that point, as our computers became the next step in evolution. Welcome to the world of the singularity.
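To see why that runaway growth is the crux of the idea, here is a back-of-the-envelope toy model. Every number in it is an assumption picked purely for illustration: suppose each machine designs a successor that is 1.5 times more capable, and does so in 20% less time than its own design took.

```python
def generations_until(threshold, capability=1.0, gain=1.5,
                      design_time=1.0, speedup=0.8):
    """Toy model of recursive self-improvement. All parameters are
    illustrative guesses, not measurements of anything real."""
    elapsed, generation = 0.0, 0
    while capability < threshold:
        elapsed += design_time    # time spent designing this successor
        capability *= gain        # each generation is more capable...
        design_time *= speedup    # ...and designs the next one faster
        generation += 1
    return generation, elapsed

# A million-fold jump in capability arrives within a fixed time horizon:
n, t = generations_until(1_000_000)
print(f"{n} generations in {t:.2f} time units")  # 35 generations in 5.00 time units
```

The striking feature of this toy is that the total elapsed time converges (here it can never exceed 5 units) even as capability grows without bound – which is precisely the intuition behind calling it a “singularity”.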
In the worst-case scenario, these super machines decide it’s time for us to step aside while they take over the show, and we all end up exhibited in cages as curios of a bygone era… or something much worse.
So how could we avoid this less-than-ideal outcome for the human race? One solution often cited is Isaac Asimov's famous Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.[1]
So scrub out the word robot and insert AI, and maybe you have a set of programming rules that could be hard-wired into these superhuman computers as a sort of internal moral compass – something along the lines of the sketch below.
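As a purely illustrative sketch of what “hard-wiring” the laws might mean, here is one way to encode them as a priority-ordered check in Python. Everything in it – the `Action` fields, the `permitted` function – is a hypothetical toy for this post, and real AI safety would be nothing like this simple:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical action the AI is considering."""
    harms_human: bool           # would doing this injure a human?
    inaction_harms_human: bool  # would *not* doing it allow harm?
    ordered_by_human: bool      # was this ordered by a human?
    self_destructive: bool      # would doing this endanger the AI?

def permitted(action: Action) -> bool:
    """Check an action against the Three Laws, strictly in priority order."""
    # First Law: never injure a human...
    if action.harms_human:
        return False
    # ...and never allow harm through inaction (approximated here by
    # making the harm-preventing action automatically permitted).
    if action.inaction_harms_human:
        return True
    # Second Law: obey human orders (First Law already checked above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.self_destructive
```

The ordering is the whole point: a later law can never override an earlier one, which is exactly the “internal moral compass” idea.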
But Vinge foresaw this solution too, and pointed out that if these computers were so smart in the first place, wouldn’t they simply be able to bypass any rules we’d wired into them?
But maybe thinking of these super-intelligent systems as some sort of human exterminator is a fairly bleak assessment of something that’s deemed to be intelligent. After all, why should a technological singularity necessarily be a bad thing for humans?
Surely part of any measure of true intelligence is empathy, and if these systems are empathetic, wouldn’t they understand why we are the way we are as a species? With the greater intelligence of a caring child turned parent, wouldn’t they want us to overcome and grow beyond our own shortcomings, like our propensity for violence and war? If humans build the software, maybe part of our humanity, the best of us, will rub off on them. And then aren’t we more likely to have a symbiotic relationship with these new AIs – working with us in harmony rather than as our masters, and helping us to realise our true potential as a species?
Certainly today, rather than be afraid, we should embrace the incredible potential that our rapid developments in computing have brought us – from modelling the paths of tornadoes to working out a cure for cancer. And as fun as it is to gaze into the crystal ball at what a technological singularity might look like, let’s not lose sight of the fact that we live in one of the most astonishing epochs our species has ever known. Keep in mind, too, that the technological singularity is a prediction, not a certainty, and there are many who argue it will never happen. But if it does, and if we approach it with our eyes wide open, then rather than something to be feared, it could turn out to be a blessing, not a curse, for all humanity.
"Certainly today, rather than be afraid, we should embrace the incredible potential that our rapid developments in computing have brought us – from modelling the path of tornadoes, to working out a cure for cancer. And as fun as it is looking into the crystal ball at what a technological singularity might look like, let’s not lose sight that we live in the one of the most astonishing epochs our species has ever known."
Hear! Hear! Let not our computerized reality be the drug that lulls us into complacency. Let not scientific advances be the excuse for ignoring the application of the scientific method.
I'm really looking forward to July 9th. (And I hope I can, to some degree, be excused for reading my own interpretation into Nick's thoughts, should that be the case.)
Regards,
Jim McGinn
www.solvingtornadoes.com
Look forward to chatting to you then, Jim!
Great post, Nick. A good summary of the state of play with AI. Personally, I think the predictions are overblown. Turing thought his test would be passed by 2000. Apart from some interesting developments like Deep Blue's defeat of Kasparov in 1997, which was, let's face it, a case of superior number crunching, there's precious little evidence of AI developing anything remotely like human intelligence, nor any reason to believe that increased computing power is likely to generate it. Our intelligence – emotional and intellectual – isn't just a result of cerebral computing power anyway, but of millions of years of evolution, which isn't something you can replicate with a bunch of algorithms. I don't think so, anyway. More likely, in my view, than the technological singularity you speak of is a gradual merging of humans and machines. It's Ray Kurzweil stuff, I know, but I don't think it'll happen on anything like the short timescale he's speaking of. Maybe over hundreds of years, our human bodies will gradually be enhanced and augmented to make us stronger, faster, better at coping with stress and disease, and so on, gradually turning us into gods.
Oh, now you've hit on a rich topic there, Alex! I do believe the boundaries between man and machine will start to blur. We will augment ourselves at an increasingly breakneck speed. I think Iain M. Banks' Culture series explored all sorts of fascinating ideas in this area. Life spans may be greatly extended, for example, but that will throw up all sorts of resource issues. We are evolving, and our technology is part of that evolutionary process. And that includes computers.
Fascinating to read this just after watching the film Her, which explores these issues and posits a mostly benign outcome. It's also an unusual film in the way it presents an AI with highly developed emotional intelligence, in the manner you propose (though there's no discussion of how this might actually be achieved!)
Oh I must check that film out, Nick! I think the dystopian view is far too entrenched in science fiction. It's the same school that always portrays aliens turning up to steal our planet's resources and exterminate us in the process. The small flaw in this pessimistic view is that the universe is actually extremely rich in resources. Why would they bother? Seriously? I think humanity has a knack for projecting the worst of ourselves onto things... be it AIs... be it aliens. Enough with the pessimism already! ;o)
Lovely piece, Nick – and I would definitely agree with your assessment. Vinge did indeed blow my mind with the singularity idea, and its fundamental unknowability. My two cents' worth starts from the idea that such an intelligence (when it emerges, not if) should definitely be benign. In a way, it would have developed without the fight-or-flight instinct that is typical of carbon-based beings on Earth, so hopefully...
Now that's a philosophy I can totally agree with, Luca. I think what amuses me, as I mentioned in another comment, is how popular fiction likes to project the worst of our human traits onto these future AIs. Seriously? Why should they be megalomaniacs? Terminator has a lot to answer for! ;o) Surely a true intelligence will look a lot more enlightened than that?