Picture a world where machines have long since passed the Turing Test, where people can’t tell the difference between artificial and human intelligence. In this world, will humans become obsolete, like the floppy disk? Or will artificial and human intelligence merge to usher in the next step of our evolution? That’s the technological singularity: the theoretical point in the future when technological development outpaces our ability to control it. For better or worse, it will undoubtedly bring unprecedented changes to our world.
When will we reach technological singularity?
Join any discussion about business technology, and it won’t be long before someone mentions artificial intelligence (AI). The amount of digital data in the world has already surpassed our ability to manage it, a development that has made clear the need for algorithms to do the job for us.
But algorithms and AI aren’t the same thing. They’re not even close. An algorithm is a defined set of instructions a computer follows, such as a program that parses data sets too large for human interpretation, whereas true AI would be capable of thinking for itself, making its own decisions, and exercising free will.
True artificial intelligence doesn’t exist. Yet.
Once computers can learn for themselves without being taught and trained with data that we have collected, we can expect profound societal changes on a scale never before seen.
The explosion of AI has long been a popular trope of science fiction, but the reality is edging ever closer. Ray Kurzweil, world-renowned futurist and Google’s Director of Engineering, believes the technological singularity will happen before 2045. Take note: so far, an impressive 86 percent of the predictions he has made since the ’90s have become a reality. Whether that sounds optimistic or pessimistic depends on how you look at it, but the singularity is the logical next step in the evolution of technology.
How can we keep up with technology?
The emergence of an artificial superintelligence would bring us greater inventive and problem-solving skills than humans are capable of. This might lead to the creation of a new ‘species,’ one which might not necessarily have human interests at its heart. But so far, we’re nowhere near developing machines with human-level intelligence and the ability to make decisions independently.
Even now, we can enhance human intelligence and capabilities with tools and technology. Indeed, that’s precisely what we’ve been doing for hundreds of thousands of years, ever since our ancestors figured out how to light fires. But today, the stakes are much higher, with projects like brain-to-computer interfaces as active areas of research. These may, in turn, lead to radical developments like mind uploading, in which your entire consciousness could be uploaded to and stored on a machine. If nothing else, that could be useful for millennia-long interstellar voyages.
Such developments might sound dystopian, since they have the potential to ultimately transform what it means to be human. At the same time, the convergence of technology and humanity, an area of study known as transhumanism, appears to be the only way we can keep up with the relentless pace of change. It’s either that, or the machines eventually outpace our natural evolution and take over.
The survival instinct
If we can’t keep up, and technological evolution ends up outpacing our ability to control it, then we could face profound security risks. Each of the seven characteristics that define every biological organism comes together to form a singular goal: survival. The technological singularity occurs when AI matches our desire and capability to survive as a species, and there’s little reason to think that AI would have any evolutionary motivation to be friendly to humans. Today, machines are programmed by us to create the outcomes we want. But what happens when they’re capable of programming themselves?
Time for Humanity 2.0?
Imagine you’re an expert on machine learning, working on an artificial intelligence algorithm that will be able to create other AIs by itself. Are you merely training your replacement, or have you become the architect of your own obsolescence?
Welcome to Humanity 2.0, where the human condition is no longer about our biological form. It’s a time when machines become an integral part not just of our societies, but of ourselves. However, this could also lead to newer and deeper social class divisions than we’ve ever seen, as people are separated into those who are augmented by technology and those who are not. Ironically, there’s one safety benefit to remaining unaugmented: no one can hack into your mind.
How humans could remain our biggest threat
Perhaps it isn’t machines that we should be worrying about at all. Assuming we’ll always be able to control artificial intelligence, we could end up transforming it into a deadly weapon. It’s already happening. The proliferation of social media, for example, is one of the most significant security and privacy concerns of this century. That’s not so much due to vulnerabilities in the technology itself as to its effectiveness as a medium for social engineering attacks. Coupled with the rise of AI, cybercriminals will be better equipped to analyze social media conversations, emulate users’ writing styles, and craft more convincing messages for their victims.
Even before we reach the era of true AI, the ability of algorithms to parse enormous amounts of data is already being misused. On the one hand, algorithms can perform many useful tasks to improve our world, such as climate modeling and disaster prediction, while things like neural interfaces can help victims of spinal cord injuries. On the other, the human capacity to interfere with information systems makes us a more significant threat than the technology itself. Perhaps it would be better if we just let the machines take over, in the hope they’d nurture us like the irresponsible children we are.
To fear or not to fear
Ray Kurzweil has a more positive outlook than most about the singularity. He claims it’s an opportunity for humankind to improve, making us smarter and better at all the things we value. Other science and technology leaders, including Elon Musk, Bill Gates and the late Stephen Hawking, are not so sure. But one thing is certain: there will come a time when our augmented descendants look back on us as rather quaint and exotic creatures.
While we’re still decades away from the singularity, it presents itself as the next big step in the evolution of technology. Emerging technologies like autonomous vehicles and facial recognition are already paving the way.
True artificial intelligence is coming sooner or later, which is why it’s time for the business world to start taking AI ethics and security issues seriously. It’s no longer the concern of generations far in the future.
This article reflects the opinion of its author.