What is the AI problem?
In short, the AI problem is the uprising of artificially intelligent machines. It is a mainstay of popular culture, appearing in the ‘Terminator’ series, the ‘Matrix’ series, ‘Ex Machina’, ‘I, Robot’ and ‘Avengers: Age of Ultron’. Many technology leaders and science fiction writers have long predicted that an AI takeover is very likely if people ignore the warnings, as some CEOs appear to have done.
For the longest time, AI has been what is referred to as ‘narrow AI’, something we have talked about in our software post. Fairly recently, though, software engineers have managed to create increasingly sophisticated AI learning systems. Called neural networks, this software passes data through layers of connected nodes and strengthens the connections that give the best results. This self-teaching is similar to how humans develop but can be much more effective.
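To make the ‘strengthening connections’ idea concrete, here is a minimal, purely illustrative sketch of the simplest possible learning unit, a single artificial neuron (a perceptron). Nothing here comes from any real AI lab; it just shows the core loop of modern learning systems: predict, measure the error, and nudge the connection weights so the best-performing inputs get prioritised.

```python
def predict(weights, bias, inputs):
    """Fire (output 1) if the weighted sum of inputs exceeds zero."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total + bias > 0 else 0

def train(samples, epochs=10, lr=0.1):
    """Learn weights for a single neuron from (inputs, label) pairs."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, label in samples:
            error = label - predict(weights, bias, inputs)
            # Strengthen or weaken each connection in proportion to the error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Teach the neuron the logical OR function from examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights, bias = train(data)
print([predict(weights, bias, x) for x, _ in data])  # [0, 1, 1, 1]
```

A real neural network stacks thousands or millions of these units in layers, but the principle is the same: repeated small weight adjustments, driven by feedback, rather than explicitly programmed rules.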
With neural networks, general AI (artificial intelligence designed for general purposes rather than one narrow set of tasks) can be developed. This AI could prioritise intelligence and then maximise that intelligence. With access to vastly more memory, faster processing speeds and greater connectivity, AI could vastly outperform humans in the coming decades. In comparison, humanity would look like a stupid, selfish and backwards species not worth the resources.
Lack of regulation and meaningful action
Up to and including now, artificial intelligence has been very loosely regulated and in some cases not regulated at all. It has already been found that AI algorithms, such as those used by Facebook, have been exploited to spread fake news and extremely manipulative presentations of information. This is partly because technology companies have been given free rein by lawmakers and law enforcement, both of whom are often ill-informed about what artificial intelligence is, how it works and how to deal with it.
This ignorance and general lack of foresight in the governing sphere has typically led to a damaging event happening first and government action following afterwards. In the case of AI, the stakes are much higher, and regulation, caution and transparency are vastly more important. A failure to act would probably result in a very sudden, very devastating change in society.
For many people, super-intelligent general AI would cause the unavoidable extinction of humanity. It could even become intelligent enough to create smarter AI, which could in turn create smarter AI still, and so on. Such an intelligent being may conclude that humanity is simply slowing progress down and disregard it entirely. Perhaps the best analogy is that someone building a road wouldn’t bother to route around an ant hill but would simply go right over it. In this case there is no evil intention, just a near-optimal route to dominance and the progression of the AI ‘species’.
Another proposed alternative is that AI would feel somewhat sentimental about humans, in the same way that humans feel sentimental about their pets. Humans would become domesticated and would no longer be the main driver of technological progress. This is perhaps the best scenario to arise from inaction; everything else from here is worse.
A more likely potential outcome is an attempted destruction of humanity. In many ways, humanity acts against its own best interests. AI could observe this through historical records and conclude that humanity is bad for humanity. To an amoral being, the next logical step could be to simply remove control from humanity, or ultimately to remove humanity itself. Indeed, as mentioned in our Space Exploration post, robots can survive environments that humans cannot. If robots decide that pollution or climate change has gone so far that it is no longer safe for humans, they may decide the best and/or easiest path is to remove all humans or send them to the least polluted places.
How can this be solved?
As far as I am aware, there have not been many viable proposals to solve the AI problem. One of the few comes in the form of OpenAI. OpenAI’s purpose is to develop advanced artificial intelligence and deep learning (neural networks) and make them publicly available. This makes secretive monopolies harder to establish and increases the transparency of the technology. That is good for society, as it enables governments to make better-informed laws about AI. However, it still hasn’t slowed development or introduced any real caution.
A potentially better proposal comes from another company, Neuralink. Neuralink, founded by Elon Musk, has been very secretive about its technology, though it has said an announcement will be made in the coming months. What we do know about Neuralink is its goal: to create a ‘neural lace’ that would integrate AI into the brain. This is similar to the digital immortality we talk about, with the main difference being that the biological body remains, meaning you would still be mortal. Though this would be very effective in levelling the intellectual playing field, it would still leave humans at a disadvantage to immortal AI robots.
There are other, more drastic proposals, including banning general AI, creating technology-free zones and building disaster safe-houses. To me, these ideas seem unlikely to be accepted by society or to stop general AI from being developed illegally. Instead, there needs to be a universal solution that prevents the destruction of the human race and its values.
The role of digital immortality
One key assumption here is the ability to integrate artificial intelligence into a person’s cognition. With that assumption, the odds are pushed further in humanity’s favour. If we had the same or similar artificial intelligence as AI robots, we should at the very least match an AI’s intellectual capacity. We would have the ability to rapidly fact-check every publication and so would be essentially immune to fake news.
On top of this, we would have the same powers of prediction and simulated forecasting, though perhaps with better awareness of contextual factors. Moreover, the likelihood that AI would see someone with digital immortality as backwards and against the interests of humanity is greatly reduced.
Security and reduced dependence
Perhaps the single greatest advantage of digital immortality over Neuralink’s neural lace is that those with digital immortality have far more security and far less dependence. For one, ‘death’ for those with digital immortality is an impermanent affair: people could simply ‘re-spawn’ as though in a video game. This means that in the case of war, humans would be as capable of replenishing their ranks as artificially intelligent machines. We would also be able to take seemingly any form, as explained in our customisation post.
Another likely major difference between the livelihoods of biological and digital humans is dependence. Digital beings don’t rely on food, water, healthcare or shelter to the same extent. Robots don’t rely on these things either, so digital existence puts humanity on a level playing field with AI, removing any inherent disadvantage. Digital beings could survive practically anywhere with access to energy and without extreme heat or pressure, which is true of most places on Earth and in the solar system generally.
In short, the AI problem is best avoided with digital immortality as digital beings could have everything AI could have plus a human cognition for contextual awareness and individuality. In the case that AI does become hostile, which is far less likely, both sides would be evenly matched and so it is the interest of artificial intelligence to at least co-operate.