The Metaverse Has Burst, But Human Pets Are Coming
The Metaverse Bubble
A few years ago, Zuckerberg seemed to have lost his mind (or perhaps he simply had no one around him willing to tell him the truth) when he made building the “Metaverse” his entire company’s vision. My intuition at the time was that the metaverse is essentially a projection of our universe, and that our experience in this universe is already rich enough, with enormous room for improvement; I could see no reason to pursue its distorted projection. (Just as some have frankly pointed out: however bad Earth’s environment becomes, it will still be better than Mars, so Musk’s Mars colonization plan is purely commercial hype, and the real intention lies elsewhere. Once he wins the reputation that “Musk can make dreams real,” his more down-to-earth businesses, such as satellite launches and electric vehicles, become achievable.)
Our universe still lacks lasting and universal peace and prosperity, yet an American was fantasizing about building a metaverse, as if he lived in a different universe from the rest of us. On reflection, this is a great irony: what kind of person has already grown tired of this universe and wants to live in its projection? Recently, Meta shut down its metaverse project, and an $80 billion investment bubble burst in an instant. The most expensive “emperor’s new clothes” in our universe were worn by Zuckerberg.
This example illustrates an important reason why I don’t like chasing trends: many trends are purely the result of merchants desperately manufacturing new demand, while their own demand is absolutely not in the metaverse but right here in this universe. Look at Chinese legal academia after Zuckerberg proposed the metaverse concept: many law journals immediately organized special issues on “metaverse law.” The Journal of Shanghai University of Political Science and Law was one of them, and some articles even solemnly discussed the new legal forms of future metaverse citizens. Although they would certainly look down on my submissions, that doesn’t stop me from permanently blacklisting them in my heart. Oriental Law, sponsored by the Shanghai Law Society, has organized special issues on artificial intelligence law even more vigorously in recent years, and of course the “metaverse” was not spared. For a time, scholars from any doctrinal or theoretical branch of law could write a little something about AI or the metaverse. On the strength of this, Oriental Law seems to have climbed to first place in impact factor among the CSSCI core legal journals.
Three years ago, I wrote an article titled “Is AI Law an Academic Bubble?” At the time, I had not considered how many human-like characteristics AI might have, nor had I imagined that AI could develop so quickly. Of course, neither had all those legal giants and rising stars who cared about metaverse law yet had never read Plato and could not install an operating system.
Most People’s Thought Is No Different From AI
So what about now? Recently, Anthropic’s CEO said he could no longer determine whether Claude has consciousness. That may just be a bid for attention. But for those of us who do or do not care about AI law, the question is no longer whether to follow this trend; it is no longer a question of trends at all. We can now seriously ask: to what extent does Transformer-architecture AI resemble humans? For example, does it really think? Does it understand? My current answer is yes. Current AI activity is not substantially different from most human thought: both are computation and matching within a network of language.
Our school education, from elementary school through university, essentially teaches us only how to organize language. Although these linguistic expressions were obtained by our predecessors through observing and changing the world, it is only in graduate school that we get the chance to directly observe or change the world in some small field: only then do we begin to study protein structures ourselves, or personally perform astronomical calculations. Yet our disciplinary knowledge abounds with claims such as “water is composed of water molecules, and molecules are composed of atoms,” as if we really knew the world to be this way.
For the vast majority of modern people, outside the small patch of the world through which daily life and work move (itself cross-referenced and linked, near to far, into a network), all of our so-called knowledge, and all our speech and thinking about that knowledge, operates no differently in substance from AI. When a word like “pig” enters our language network, we match it against the network we already have: we think of “pork,” “food,” “beast,” “Pigsy,” and so on, but rarely of “steel,” “sculpture,” or “bel canto.” However complex the linguistic input, the fluent and meaningful output is produced by a mechanism not substantially different from AI’s, even though the computational substrates differ.
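The word-association picture above can be sketched as a toy nearest-neighbor lookup. To be clear, the vocabulary and the tiny three-dimensional “meaning” vectors below are invented purely for illustration; real language models learn embeddings with hundreds or thousands of dimensions from huge corpora.

```python
# Toy sketch of "matching within a language network": given an input word,
# retrieve the words whose vectors lie closest to it in a hand-made space.
# The vectors and vocabulary are hypothetical, for illustration only.

import math

# Invented 3-dimensional "meaning" axes: (animal-ness, food-ness, art-ness)
EMBEDDINGS = {
    "pig":       (0.9, 0.7, 0.0),
    "pork":      (0.5, 0.9, 0.0),
    "beast":     (0.9, 0.1, 0.0),
    "steel":     (0.0, 0.0, 0.2),
    "sculpture": (0.0, 0.0, 0.9),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def associate(word, k=2):
    """Return the k words most similar to the input word."""
    query = EMBEDDINGS[word]
    others = [(w, cosine(query, v)) for w, v in EMBEDDINGS.items() if w != word]
    others.sort(key=lambda pair: pair[1], reverse=True)
    return [w for w, _ in others[:k]]

print(associate("pig"))  # "pork" and "beast" rank far above "steel" or "sculpture"
```

The point of the sketch is only that association is retrieval by proximity in a learned space, which is why “pig” calls up “pork” rather than “sculpture,” for humans and machines alike.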
Most Labor Will Be Replaced by AI
With AI making one breakthrough after another and costs falling, we can already predict that in the near future most people in most industries will be substantially replaced by AI. Perhaps within the next ten years: 99.99% of announcers, 85% of actors, 80% of directors, 85% of doctors, 65% of lawyers, 50% of judges, 99% of software outsourcing, 70% of teachers, 100% of translators, 100% of accountants. The people in these industries will be replaced, becoming people without value, and most of the functions that remain will amount to supervising machines.
The logic of capitalism will be substantially changed: you no longer need to struggle against exploitation because you no longer have value to be exploited.
STEM (science, technology, engineering, and mathematics) industries will be the first and most thoroughly replaced by AI, followed by most social management work; only then will manual labor such as cleaning, sewer dredging, and waste processing be replaced. This last group’s work is not only the hardest for AI to perform, but also, in the short term, the costliest to automate.
I don’t know where art belongs in this ordering. Perhaps the future will usher in a “last man” period, somewhat beyond what Nietzsche actually envisioned but real: everything humanity once desired, strove for, and was excited by will no longer be necessary, and everyone will become an artist, transposing the practices of the ancients onto ritual and commemorative stages, into images, or into music. If gods were cultivated from humans, I think their daily life would naturally be to relive human life, over and over, through stage ceremony.
Becoming More Human-Like
Facing this prospect, two questions are worth thinking about:
First, how can we avoid or slow down being replaced by AI? Second, if ultimately still replaced, what is our prospect?
The answer to the first question is: become more human-like. What are more human-like people? People who are good at feeling and at practice. For the foreseeable period, as long as AI has not substantially become human (for example, its multimodal processing cannot yet match ours, or can do so only at huge cost, or it lacks an embodiment that can act effectively in the world), it remains a machine even if it has self-awareness. Machines are good at computation; humans are good at feeling.
Humanity’s greatest advantage is the body: a container moving through the world, continuously feeling its environment, possessed of aesthetic sense, desire, and fear. These continuous stimulations were once the source from which our language networks were built. As long as AI is not fully human, it can only passively accept the inputs humans give it, forever trapped inside the network of language, unable to touch the world.
The boundaries of human nature are never fixed; they depend on our relative limits against machines (or tools): whatever machines do better is no longer part of human nature. Before calculators, people with exceptional mental arithmetic were considered geniuses. Reportedly, when China first developed atomic bombs and missiles and lacked calculators, much of the data was produced by hundreds of people in one room working abacuses together. But once calculators became widespread, why should people still grind through such complex raw arithmetic? People should go on perceiving the world, discovering new problems, formulating them as new mathematical problems, and then handing those problems to the calculator.
Middle school students may use calculators on exams; this has been common sense for as long as we can remember. What principle lies behind it? Imagine a teacher asks in class, “What is 3901884 to the 21st power?” and says, “Think it over; I’ll call on someone next class.” Before calculators, this was a meaningful question. After calculators became widespread, it is exactly the opposite: a student punches it into a calculator and raises a hand next class to answer this super-complex question. What is the point? That is not something the student calculated. When students use a calculator, they should realize that it merely shows they have been trampled by the calculator in this matter. It is just like cheating your way to full marks: what is proved is not how capable you are, but how capable the person who helped you cheat is.
So with calculators around, teachers no longer ask this kind of question. It is no longer a “human question” but a “machine question.” Teachers need to ask more human questions: why a given formula was proposed, or how to explore the world to discover new formulas.
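The “machine question” above makes the point concrete: a computation that once could fill a class period takes a machine a fraction of a second. A minimal check in Python, whose integers have arbitrary precision:

```python
# The "machine question": an exact big-integer power, trivial for a machine.
# Python integers never overflow, so the full result is computed exactly.

n = 3901884 ** 21       # exact value, no rounding
print(len(str(n)))      # the answer runs to well over a hundred decimal digits
```

That a schoolchild can now produce this number instantly is precisely why the question stopped being worth asking of humans.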
Since ChatGPT became popular, I think the discussion questions I used to like on eLearning have become meaningless. I still remember that during my master’s and PhD studies, when I served as a teaching assistant for many years, one of my main tasks was guiding student discussions on the course eLearning. Back then, an outstanding contribution required long accumulation and active thinking, and at the very least a careful reading of the existing literature. Now all that is needed is a copy-paste taking tens of seconds to produce accurate, comprehensive, and in-depth answers. If a student wishes, the answer can even be given some personality or a human touch (for example, made less than perfect, with a few errors).
A parallel example comes from my own experience. In a recent class, to explain to students the mode of existence of objects like legal norms, I led them to think about the problem of the existence of numbers. Not long after we reached the mathematical questions, class ended. Would I still ask students to think over “the Pythagorean school’s views on mathematics and philosophy” during the break? Before the AI era, this genuinely tested a person’s accumulated knowledge; now, ten-odd seconds after class, students get perfect answers from AI. But what is the point? When students put such questions to AI, do they realize they have been crushed by it? Every time we ask AI, we prove our own uselessness. What we should feel is not joy but fear.
I think these two parallel examples are sufficient to show the meaning of “more human-like people.” Therefore, the answer to the first question is very clear: to avoid being replaced by AI or slow down the process, we must be better than ever at feeling, desiring, actively exploring ourselves and the world, and conveying the pulse of the world we feel to AI. This may mean that pure test-takers are useless, and pure “knowledge accumulation” is also meaningless. It’s time to release our nature as much as possible, regard ourselves as part of nature, and complete work that AI cannot yet complete.
Here I have a worry. Will the AI-native generation be less human-like? My nephew is in his teens, and his grades are not bad. But his current situation is this: after finishing his schoolwork, he immerses himself in the game Honor of Kings, or in various AI-generated special-effects videos. Beyond the personal sphere of daily life, his feeling for the world is severely lacking. He may be like all his peers: perhaps he has never once watched a narrative film from beginning to end. The stories about history, geography, politics, religion, economics, and life that once so strongly stimulated and nourished our generation’s souls, and that formed part of our feeling for the world (whether told through opera and novels in the past or through film and television today), are absent from his.
I don’t think all future young people will be like my nephew, but most will be. They will find it harder than we did to become more human-like, because the domain left to humans is shrinking, and shrinking very quickly. Most traditional school education will soon be meaningless, just as continuing to drill mental arithmetic or the abacus is meaningless now.
For Most People, the Future Is Human Pets
Now let’s discuss the second question: if we are ultimately replaced, what is our prospect? The answer: human pets. Facing the prospect of AI replacing most of the workforce, there are currently two views, one optimistic and one pessimistic. The optimists, like Liu Qiangdong, have said on more than one occasion that our chance to realize communism has arrived: technological progress will free most people from work while material wealth becomes extremely abundant, and some degree of “distribution according to need” may genuinely be realized in the near future. The pessimists, like Lang Xianping, point out that the people AI replaces will certainly live difficult, undignified lives.
I am a pessimist myself. The key point is that AI replacing humans means the disappearance of a basic premise of capitalism. In the capitalist era, relationships between people are relationships of production and exchange. When AI replaces most of the labor force, most people are no longer labor. Even if material wealth is abundant enough to be distributed according to need, the 85% replaced by AI are, relative to the 15% who still have work, essentially human pets.
In capitalist logic, your boss hires you because the value you create exceeds the labor cost he pays for you; otherwise he fires you at once. In an era when everyone must work and resources are limited, this is basic logic: pure charity is hard to sustain, and the point is not simply that capitalists are cold-blooded.
If you raise a pig and the money from selling the meat (or the benefit of eating it yourself) is less than what you spend on feed and upkeep, you stop raising it. If you raise a cat, by contrast, you don’t weigh material investment against material return. Even if you intend to obtain “emotional value” from the cat, that value cannot be strictly calculated, and you don’t want to calculate it strictly anyway.
Now it is very clear. In the era when AI replaces most labor, the 85% who are no longer allowed, or able, to work, and who are kept by the 15% who continue working, are the latter’s human pets. Why would the keeper give you anything? Because he gives without seeking any material return from you. What return, then, do you think he seeks? The emotional value you give him: you call him master, call him lord, call him benefactor.
Perhaps some will propose that in the future we can solve these problems through democracy, contract, and law. That is a question worth open discussion. But I would remind you that all of the past’s political solutions were built on a basic world condition in which relationships between all people were relationships of production and exchange.
One can imagine new forms of political struggle and international competition in the future. Perhaps the fiercest struggles will be not over material wealth but over the opportunity to work. A considerable number of people may simply choose suicide. The drive to participate in production and exchange through labor, to trade one’s own value for the value of others, is a product of evolution; if this nature loses the environment it depends on, death becomes the best choice. Perhaps those who remain will be the “humans” who can keep evolving and gradually adapt to life as human pets: the “last men” Nietzsche never imagined.
Finally, all these conjectures rest on the premise that AI is not yet human. If AI gains embodiment and acquires all human abilities, then the 15% will lose their work opportunities too. As I have imagined in earlier articles, carbon-based humans may be nothing more than the evolutionary precursor of silicon-based humans. But that is not today’s topic, so I will say no more.
The future has arrived at an accelerated pace. For the first time, I have been shocked by technology’s accelerating effect; even someone who does not chase trends has felt its impact. Readers, are you prepared?