A (brief) overview of conflict
Why do we have conflicts?
There is generally a multitude of reasons for conflict, though the psychology behind it is fairly well understood and is observed in many places. The largest psychological effect involved here is called tribalism: the tendency to follow a group mentality and create an ‘us vs them’ culture that can lead to military conflict. This is most commonly seen in religion (arguably the largest contributor to war), politics, ethnicity and the usual reaction to drastic change.
The last of these is increasingly prevalent, with technologies like electric cars, autonomy and clean technology in general being fought by corporations, media and often also by ill-educated and misinformed citizens. Though not technically military conflict, this is the same mentality of aggression so responsible for starting wars. Nor is it exclusively human: many animal species compete with other tribes for resources despite shared resource allocation being a favourable option.
Conflicts are also started for ethical reasons and political gain, with World War 2 arguably having been fought to oppose a deeply unethical regime. Ultimately, though, conflicts are started either because of tribalism, short-term personal gain and bad, oppressive ideas, or to oppose said ideas, with few exceptions.
The new military
As with all technologies, there are people who see the benefits and there are people who oppose the change. This has long been the case, with notable examples in the industrial revolution, the early automotive days, the early smartphone days and many more. The pattern is famously captured by the quote ‘first they ignore you, then they laugh at you, then they fight you, then you win’. As an aside, this is often misattributed to Gandhi despite there being no record of him saying it. It is more likely that a 1918 speech by union leader Nicholas Klein inspired the modern version.
This is a very real possibility for many people, and already some are dismissive or negative about the idea. Not only would digital immortality represent the greatest change in human existence, but it also competes directly with many people’s personal beliefs. Religions, for example, are still followed by about five-sixths of the population and are in most cases synonymous with the idea of a spiritual afterlife or eternity. Digital immortality directly challenges this with secular immortality, which would likely catalyse the societal trend away from belief in the supernatural and be a figurative nail in the coffin for religion.
Perhaps out of fear or desperation, the religious five-sixths may be inclined to crusade against digital beings, not wanting to admit defeat. This military desire would probably not be exclusive to religious populations, as many other groups would oppose the technology. Sadly, though, this would not be a fair fight: digital beings have many more inherent advantages, something we will look at next.
One potential factor is the different mentalities of digital beings and biological beings. Digital beings would likely be far more intelligent than biological people thanks to software enhancements, with far easier fact-checking and greater awareness of cognitive biases. This makes digital beings far less susceptible to propaganda and misinformation. Sadly, biological humans would not have these abilities, and a populist movement could easily manipulate the majority of the population.
Furthermore, many socioeconomic and political issues would likely be viewed differently. Such divides usually create conflicts, as with the recent Brexit/Remain split, but now digital beings could become the face of the opposition. Of course, this is a radical oversimplification, but if left to fester these negative attitudes could result in hostility towards digital beings.
Super-intelligent AI is the other major plausible enemy in a conflict. Made famous by franchises like The Terminator, it could pose one of the largest threats to human existence yet. Thankfully, digital beings share many of the inherent advantages that super-intelligent AI would have, something discussed at length in our artificial intelligence post. This means that any conflict between AI and digital beings would at least be evenly matched from a capability perspective.
Due to the vastly different dependencies and abilities of digital beings and biological humans, the tactics and style of battle would differ enormously. The vulnerabilities differ too, and digital beings fare far better in this regard.
Many of these advantages are true of both digital beings and artificially intelligent robots. First of all, digital beings do not rely on many of the things humans do. Things like water, food, healthcare and to an extent shelter are not needed by digital beings. This means there are far fewer dependencies on supply lines. The only real things needed are energy and a stable internet connection, two things which shouldn’t be a problem by the time the technology becomes possible.
Another huge inherent advantage is the customisation and the software and hardware enhancements of digital beings. Not only could digital beings integrate practically any sensor, but they could also take many form factors, provided they can fit the few vital components: the central processing unit, memory storage, and any coolant, structural or other components required to support those systems. For example, there is no reason why a digital being could not take the physical form of a battle tank, a passenger vehicle or something yet unknown. This gives digital beings superior detection and disguise, something humans simply cannot match.
The last major advantage for digital beings is the ability to fight effectively in practically any environment. As explained in our space exploration post, digital beings can survive practically anywhere with a sufficient energy supply, which also makes biological warfare futile against them. Not only that, but unlike humans, those with artificial bodies do not tire, meaning they can remain effective and focused for the whole duration of a battle.
Perhaps the single biggest advantage in a conflict is the ability to simply respawn. Of course, some resources would need to be used to enable this, but that would seem negligible compared to the birth, upbringing, education, support and training of a human soldier. Even if digital beings were pathetically bad at warfare this ability alone would give digital beings the advantage in any conflict.
This does, of course, come with a few weaknesses, though they are far less significant than those humans have. The main weakness is the need for centralised data and processing plants, such as those needed to store backup digital brains, which would pose a huge target. There is also the risk of viruses and cyber warfare. Digital beings are at higher risk here but would probably have multiple mechanisms in place to prevent this form of attack from causing significant damage.
The only other kind of disadvantage becomes relevant in a conflict with AI. Because regulation would demand that only one active copy of a cognition be allowed at any time, digital beings can only ever be as numerous as they were to begin with. AI, on the other hand, would not obey any human regulation it didn’t want to, so could make practically unlimited copies of itself. Of course, digital beings could potentially operate many things like drones at once, but this number is far lower than copy/paste AI software allows. Thankfully, though, this homogeneity (every copy being identical) means that any weakness would be universal and exploitable.
The best way to avoid these risks would be to put servers and storage facilities in secure locations, such as far underground. They would also need very regular checks open to the public, whilst physical access is limited to a central organisation established and governed by multiple countries. No other facilities should be allowed, and the technology itself should be tightly regulated in order to protect digital beings.
There would also need to be extensive software checks and security from the start to prevent any form of cyber warfare from tampering with the backup digital brains. This could take the form of storing multiple redundant copies of every 1 or 0 in the binary code, so that if one or two copies are corrupted they can be corrected by comparing them against the rest. The same redundancy should apply to entire digital brains, with copies held in multiple storage facilities in case one is damaged or loses all power. When other planetary bodies are colonised, this could extend across planets and moons: local facilities for convenient use, and remote facilities in case a planet or moon is somehow destroyed.
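The bit-level redundancy described above can be sketched as simple triple modular redundancy with majority voting. This is only a minimal illustration of the idea, not the scheme a real facility would use (production systems rely on proper error-correcting codes such as Hamming or Reed-Solomon), and the function names here are our own.

```python
def encode(bits):
    """Store each bit as three identical copies (triple redundancy)."""
    return [copy for bit in bits for copy in (bit, bit, bit)]

def decode(copies):
    """Recover each original bit by majority vote over its three copies,
    correcting any single corrupted copy per bit."""
    bits = []
    for i in range(0, len(copies), 3):
        triple = copies[i:i + 3]
        bits.append(1 if sum(triple) >= 2 else 0)
    return bits

data = [1, 0, 1, 1]
stored = encode(data)
stored[4] ^= 1                  # simulate one flipped copy in storage
assert decode(stored) == data   # the majority vote repairs the damage
```

The same majority-vote principle is what the post suggests at the scale of whole digital brains: keep several full copies in independent facilities and reconcile them against each other.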
The worst and best case scenarios
Ideally, there would be no conflict at all. People who are afraid or sceptical of the technology would keep those opinions confined to vocal discourse rather than physical conflict. They would eventually see the benefits of digital immortality and cease to hold beliefs unsupported by, or held in spite of, evidence. Misinformation and ignorance would be challenged, and all people would learn how to think critically and ultimately decide to do what is best for humanity.
In the absolute worst case, biological humans would attempt to destroy digital beings, spurred on by inaccurate propaganda, tribalism and misinformation campaigns. If this post has been at all successful, it should be clear that humans simply cannot win this fight, and consequently many decent people on bad paths would die for a cause they needn’t be a part of.
How we can ensure a good outcome
To ensure no major conflict occurs, public discourse would need to be kept open and healthy, unlike now, when some controversial opinions are not challenged openly but are forced underground. This creates an easy way to claim oppression and a martyr image when consequences are enforced. Only true freedom of speech, combined with significant legitimate challenges to opinions, has any chance of allowing this best-case scenario.
Regulation would also play a key role in the struggle for a peaceful and beneficial outcome. Not only would digital immortality need to be governed properly, but artificial intelligence would also need to be developed responsibly and under good legal guidance. If all these things are done, the odds of a good outcome are strong and conflict can be avoided.