At the moment, based on our technological prowess at least, it would be fair to say that humans represent the summit of the evolutionary process here on Earth. But the tricky thing about evolution is that by its nature it isn't a stationary process. This rule certainly applies to humans, who, as the centuries have rolled by, have grown taller and developed bigger brains. This trend is predicted to continue, with average heights heading towards seven feet over the next thousand years, no doubt complete with enormous craniums for our super-sized brains. But what if, and it's a big if, something evolves to overtake us as the dominant species? And what if it's not an animal but something we ourselves have created: our computers? That's what the idea of the technological singularity is all about.
We can only make wild guesses at how things might play out, and there are plenty of nightmare scenarios floating around. But how real is this threat, and should we be busy unplugging our computers and throwing them in the trash right now? Maybe not just yet… Part of the reason that, at least for now, we can be reasonably relaxed is that computer hardware is only part of the singularity picture: the ground-breaking software required to build a true AI is still a very long way off. But exactly how far?
The famous Turing test is one measure of our progress towards such an AI system. The test centres on whether a computer can fool someone talking to it from another terminal into believing there is a real, live person at the other end. Recently, headlines were made at Reading University with claims that a software system called Eugene had passed the test for the very first time. Before you get too excited, though, plenty of experts argue that Eugene's performance deserved an F, pointing out that all its software actually does is ape a person, rather than demonstrate any true measure of intelligence such as problem solving. So what about a system more akin to the famous HAL 9000 computer from the film 2001, one that could, in theory, pass the Turing test with flying colours? The best current guess is around fifty years, so maybe note down 2064 in your diaries as the beginning of the next major technological revolution.
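To make the shape of the test concrete, here is a toy sketch of the imitation game in Python. Everything in it is illustrative: the respondent functions are placeholders standing in for a hidden human and a hidden machine, and the 30% threshold reflects the informal benchmark associated with Turing's prediction and cited in the Eugene claims.

```python
import random

# Placeholder respondents: a real test pairs a human judge, over text
# terminals, with a hidden human and a hidden machine. A good bot's job
# is simply to be indistinguishable from the human's replies.
def human(question):
    return "Hard to say... probably the summer I learned to sail."

def machine(question):
    return "Hard to say... probably the summer I learned to sail."

def one_session(judge):
    """Run one blind session; return True if the machine fools the judge."""
    doors = {"A": human, "B": machine}
    replies = {door: respond("Tell me about a favourite memory.")
               for door, respond in doors.items()}
    accused = judge(replies)        # the door the judge believes hides the machine
    return doors[accused] is human  # fooled: the judge accused the human instead

# Informal benchmark: fooling the judge in over 30% of short sessions
# counts as a pass. A coin-flipping judge is fooled about half the time.
naive_judge = lambda replies: random.choice(sorted(replies))
fooled = sum(one_session(naive_judge) for _ in range(1000))
print(f"machine fooled the judge in {fooled / 10:.0f}% of sessions")
```

The sketch also hints at the critics' point: a system can score well here purely by imitation, without ever doing anything we would recognise as problem solving.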
But what happens when a true AI system does take its first virtual breath? That depends on whether it is given the autonomy to start designing and building even better machines. Because if it is, the speed of development could become exponential from that moment, and it would only be a matter of time before our machines overtook us intellectually. And what would that mean for humans? Technically, we would become a redundant species at that point, as our computers became the next step in evolution. Welcome to the world of the singularity.
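To see why "machines building better machines" implies a runaway curve rather than steady progress, here is a back-of-the-envelope sketch. Every number in it is an assumption chosen purely for illustration, in particular that each machine generation doubles in capability and halves the time needed to design its successor.

```python
# Toy model of recursive self-improvement. All figures are assumptions:
# the first true AI matches human-level design skill, takes one year to
# design its successor, and each successor doubles capability while
# halving the next design time.

capability = 1.0    # assumed human baseline
design_time = 1.0   # assumed: one year for the first redesign
year = 2064.0       # the fifty-year guess from the text

for generation in range(1, 11):
    year += design_time
    capability *= 2     # assumption: each generation is twice as capable
    design_time /= 2    # assumption: twice the capability, half the design time
    print(f"gen {generation:2d}: year ~{year:8.3f}, "
          f"capability x{capability:5.0f} baseline")

# The design intervals 1 + 1/2 + 1/4 + ... sum towards 2, so under these
# assumptions capability explodes within a bounded span of years; that
# limit point is the intuition behind calling it a "singularity".
```

Change the doubling factors and the dates shift, but the shape of the curve does not: once the designers improve themselves, progress compresses instead of merely accumulating.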
In the worst-case scenario, these super-machines decide it's time for us to step aside, take over the show, and exhibit us all in cages as curios of a bygone era… or something much worse.
So how could we avoid this less than ideal outcome for the human race? One possible solution often cited is Isaac Asimov's famous Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.[1]
So scrub out the word robot, insert AI, and maybe you have a set of programming rules that could be hard-wired into these superhuman computers as a sort of internal moral compass.
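As a sketch of what "hard-wiring" such rules might look like, here is the priority ordering of the three laws expressed as an ordered veto list in Python. The Action fields and the checks are hypothetical stand-ins, not any real safety framework; the point is only that each law can be overridden by the ones above it.

```python
from dataclasses import dataclass

# Hypothetical action model: these flags are illustrative stand-ins for
# whatever perception and prediction a real system would need.
@dataclass
class Action:
    description: str
    harms_human: bool = False           # would doing this injure a person?
    inaction_harms_human: bool = False  # would *refusing* it let a person come to harm?
    ordered_by_human: bool = False
    self_destructive: bool = False

def permitted(action: Action) -> tuple[bool, str]:
    """Check the laws strictly in priority order; the first rule engaged wins."""
    if action.harms_human:
        return False, "First Law: may not injure a human"
    if action.inaction_harms_human:
        return True, "First Law: must act to prevent harm to a human"
    if action.ordered_by_human:
        return True, "Second Law: must obey human orders"
    if action.self_destructive:
        return False, "Third Law: must protect its own existence"
    return True, "no law engaged"

print(permitted(Action("shut down the power grid", harms_human=True)))
print(permitted(Action("walk into the furnace",
                       ordered_by_human=True, self_destructive=True)))
```

Note that in the second example the Second Law overrides the Third: the order of the checks, not the checks themselves, carries the moral weight.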
But, again, Vinge foresaw this solution and pointed out that if these computers were so smart in the first place, wouldn't they simply be able to bypass any rules we'd wired into them?
But maybe thinking of these super-intelligent systems as some sort of human exterminator is a fairly bleak assessment of something that's deemed to be intelligent. After all, why should a technological singularity necessarily be a bad thing for humans?
Surely part of any measure of true intelligence is empathy, and if these systems are empathetic, wouldn't they understand why we are the way we are as a species? With the greater intelligence of a caring child turned parent, wouldn't they want us to overcome and grow beyond our shortcomings, like our propensity for violence and war? If humans build the software, maybe part of our humanity, the best of us, will rub off on them. And then aren't we more likely to have a symbiotic relationship with these new AIs, who work with us in harmony rather than as our masters, and help us realise our true potential as a species?
Certainly, today, rather than being afraid, we should embrace the incredible potential that our rapid developments in computing have brought us, from modelling the paths of tornadoes to working out cures for cancer. And as fun as it is to gaze into the crystal ball at what a technological singularity might look like, let's not lose sight of the fact that we live in one of the most astonishing epochs our species has ever known. We should also keep in mind that the technological singularity is a prediction, not a certainty, and there are many who argue it will never happen. But if it does, and we approach it with our eyes wide open, it could turn out to be a blessing rather than a curse for all humanity.