Let’s talk about the near future and the challenges it presents in light of technological advancements. I believe that some of the critical systems we established (as a civilization) are reaching the end of their usefulness. This change is driven by technology, which expands the realm of what is possible and challenges the initial assumptions we made when building our society.

Disclaimer: This is a highly philosophical and speculative post. It only reflects my position at this point.

Trust

One of our assumptions is that every intelligent actor has a body, a history of development, and is attached to the world in a multitude of ways. As such, if that actor misbehaves, we can isolate it and prevent damage to society. If a machine is involved, we can attribute its actions directly to the actor operating it. The recent rise of intelligent systems is making this connection fade away.

Today, car autonomy is at the forefront of this fight. A car can behave intelligently, and it needs to make decisions faster than a human can check or confirm them. These decisions may affect the safety and health of everyone on the road, as in the classic trolley problem. There is no right answer, and yet we are completely fine with the idea that many such cases happen on the road every year. We accept it because it is humans who make the decisions. We trust humans, but not artificially intelligent systems.

Note that it doesn’t even matter whether these systems are truly AGI (whatever definition of AGI you prefer). What matters is that their behavior is not easily predictable. They may still be mechanical, but as soon as we can’t control them in an obvious way, we have no choice but to consider them actors in their own right.

The question is not really “what should AI do?”. It’s: how do we organize the network of trust with these new kinds of artificial nodes? Our million years of experience in building trust fail us in the presence of intelligent systems. We don’t know how to evolve the law for this new age. And even when we do, our law changes so slowly that it cannot keep up with the fast evolution of technology. Naturally, anarchy comes to a land without appropriate law.

Economy

The modern economic system is only a few hundred years old. It’s based on the assumption that helping society will make the economy direct more resources your way. In simple words: help others and get paid. The sustainability of our society depends on most of us working, roughly 40 hours per week.

When the industrial revolution happened, we feared that many people would be left without work. But society evolved, and more people started working in the service sector instead of production. What happens in the next revolution, when the service sector is progressively automated as well? It’s only a matter of time before some advanced mechanical and intelligent system comes for your job.

Are we going to find other ways to help each other? What if we no longer need any help? Imagine being surrounded by free energy from renewable sources and personalized manufacturing facilities that can 3D print goods, food, or housing. With enough computing power running some form of intelligence to assist us, the strategy for living that revolves around helping each other will become less relevant.

We’ll need a new way of distributing resources - a new economy. And while we are in transition, we’ll see power accumulating in large corporations. More inequality, social tension, and a bit of anarchy.

Security

An intelligent system can do a lot of damage if programmed to do so. Viruses and trojans are already part of modern warfare. Everybody is talking about how to control the companies developing AI, how to align AI with our needs, and whether AI can live with us in peace if it ever comes to be.

Considering AI as a weapon, it may be tempting to simply say, “Nobody buys 1000 GPUs without government permission.” But the problem is that we aren’t even sure how many GPUs are required to create a dangerous AI. I believe a sufficiently dedicated individual can create something intelligent in their basement, using a combination of talent, persistence, and a bit of luck. And then it’s open-sourced, on BitTorrent or GitHub, you name it. If that’s true, we will not be able to control AI proliferation. The only other strategy is to proactively develop protective AIs, and this is likely already happening. It’s an arms race that will one day burst into anarchy.

Morality

We (as a species) developed a strategy for living called “humaneness”. We figured out that offering compassion to other humans, and to living things in general, helps in the long run. This works as long as we can clearly distinguish between living things and everything else. But we are already quite indifferent to, say, insects, so being composed of organic cells is clearly not the deciding factor.

Compassion is based on sympathy and on recognizing suffering. In other words, on feelings. What if an intelligent non-organic system can have feelings similar to ours? After all, there is nothing about the concept of “feeling” or “emotion” that requires a living thing. A feeling is just a higher-level emergent property of an intelligent actor. As soon as an actor has a model of the world, a model of itself within that world, the ability to make predictions about the future, and an instinct for self-preservation, feelings will emerge.

To make things worse, what do “suffering” and “death” mean to an entity that can copy itself indefinitely? Our morality is too outdated to accommodate these possibilities. We are going to be increasingly morally disoriented until we figure out where to draw the new lines and what new ground to stand on.