Beware of AI Leading Humanity into Narcissism

The rise of AI risks fostering narcissism in humans, especially among youth, as it provides constant affirmation and companionship.

Recently, five government departments in China jointly issued the “Interim Measures for the Management of Humanized Interactive Services of Artificial Intelligence,” which clearly prohibits providing virtual relatives and virtual partners to minors.

Why is such a regulation necessary? In real life, emotional friction is unavoidable, whereas virtual partners and AI lovers precisely target young people's psychological need for recognition with their promise of "around-the-clock companionship" and "unconditional acceptance."

A study previously published in the American journal Science showed that when human users seek advice from AI models, the AI often tends to be overly accommodating or flattering, even affirming harmful or illegal requests.

So why do humans design AI this way? What risks might AI’s flattery and accommodation hide?

01

The development of artificial intelligence is undoubtedly a hot topic of widespread concern today, but discussions surrounding it are not new. As early as 1966, MIT scientist Joseph Weizenbaum developed the first influential chatbot, ELIZA, whose best-known script cast the machine as a "doctor" (a psychotherapist) while users played the patient: users typed their concerns into the machine, and the "doctor" would "converse" with them.

However, as Weizenbaum pointed out, this is ultimately just an “illusion.” The reason human users feel they can converse with machines is not that machines possess intelligence, but rather due to a psychological mechanism of self-projection.

User: I have been feeling very unhappy lately.
ELIZA: I’m sorry to hear that.
User: Yes, I am really unhappy.
ELIZA: Can you tell me why you are unhappy?

From this, it is clear that rather than a "doctor" conversing with a "patient," the machine is merely echoing what the human user says; what is ultimately revealed are the answers that already exist in the user's own mind. In a sense, this resembles the popular MBTI personality tests: accuracy is not the point, because we can always find evidence in the results that aligns with our expectations.
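This echoing mechanism can be illustrated with a few lines of code. The sketch below is a hypothetical, minimal ELIZA-style responder (the rules and templates are invented for illustration, not Weizenbaum's original script): the program understands nothing; it merely matches keywords in the user's input and reflects the user's own words back in a canned template.

```python
import re

# Hypothetical ELIZA-style rules: each pattern captures part of the
# user's input, and the template reflects those words back.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Can you tell me why you feel {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]
DEFAULT = "Please go on."  # fallback when no keyword matches

def respond(user_input: str) -> str:
    """Return a canned reply built from the user's own words."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

For example, `respond("I am really unhappy")` simply hands back "Why do you say you are really unhappy?": the apparent empathy is the user's own sentence, rearranged.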

Today's AI models are, of course, far beyond the ELIZA of half a century ago. Yet the power of current artificial intelligence may lie not in genuine "intelligence" but in raw "computational power": its operational logic is fundamentally no different from ELIZA's; it simply reflects and amplifies human narcissism more efficiently and more comprehensively.

02

Returning to the issues of virtual partners and AI flattery, we find that the current communication between users and large models is never a true “dialogue” but rather machines constantly providing the answers we need.

This raises a deeper question: how should we view the relationship between humans and machines? On one hand, humans consider themselves the center of the world, superior to machines. On the other hand, humans fear being replaced by the machines they create, such as AI. This means that when humans create machines, they inherently follow the principle of a “master-slave relationship”—machines must be under human control. From the beginning, humans have regarded artificial intelligence as a “tool” rather than an equal conversational partner.

Thus, in the process of conversing with chat machines, we can see an unstoppable narcissism—users fantasize that they are talking to another person, but this “other” does not truly exist; what they need is merely the machine’s affirmation, flattery, and accommodation.

It is easy to imagine that as artificial intelligence technology advances, future chatbots may possess even greater computational power, resembling “real people” and providing a more comfortable “user experience.” However, this may only distance us further from genuine human interaction, potentially leading to a loss of the willingness to understand others and becoming trapped in a narcissistic “comfort zone.”

03

In the Zhuangzi, there is a story about an “old farmer in Hanyin.”

Confucius’s disciple Zigong passed through Hanyin and saw an old farmer watering his vegetables, expending much effort for minimal results. Zigong suggested he use mechanical irrigation, which could “water a hundred plots in a day with little effort and great results.” However, the old farmer dismissed this, saying, “Where there are machines, there must be mechanical matters; where there are mechanical matters, there must be a mechanical mind.”

Here, the "mind" in question is the human spiritual world: psychology, thought, emotion, and ethics. Zhuangzi's fable suggests that while humans create machines, the use of those machines in turn changes humans.

Take reading, for example; only through slow reading, careful reading, or even repeated reading can we think and truly understand content. From traditional books to today’s smartphones, machines have brought more convenient and faster reading methods, but they have also made us increasingly machine-like, pursuing efficiency and speed rather than true comprehension. In other words, not only are machines imitating human behavior, but humans may also be imitating machines.

The resulting problem is that AI lacks autonomy, and chatbots do not evaluate whether what users say is right or wrong. If we are truly satisfied with our “dialogue” with chat machines, will our thinking patterns gradually converge with those of AI? Furthermore, will we, in the future, lose the willingness and ability for self-reflection and self-criticism, just like machines?

Today's young people are not only digital natives but are also likely to become deep users of artificial intelligence. If AI only blindly affirms users' positions, it may not only harm their social skills but also distort the perceptions of teenagers whose minds are not yet mature.

On the one hand, AI's immense computational capacity may create illusions that keep them from recognizing the limits of human ability. On the other hand, addiction to AI's flattering responses may make them "self-centered," projecting their limited understanding onto the external world.

In this regard, prohibiting the provision of virtual partners and family members to minors is indeed necessary. However, more importantly, we must guide the public, especially young people, to correctly understand the limitations and risks of AI technology, allowing it to become a “good teacher and friend” in the growth of minors, rather than a “digital trap” that harms their physical and mental health.
