
Elvira Kadyrova
Born and refined within the laboratories of IT developers and programmers, Artificial Intelligence is rapidly becoming a sought-after companion for humanity. It is weaving itself into the diverse spheres of Homo sapiens’ activity, increasingly competing with natural, human intelligence.
We turn to AI for health advice, personal guidance, culinary tips, and household management—neural networks often prove more efficient and precise than traditional search engines. AI is also stepping into the role of a personal “secretary” in the professional realm, a shift most palpable in information-heavy industries.
In short, AI “teaches,” “heals,” “advises,” “analyzes,” “predicts,” and “offers shortcuts” to life’s challenges. Yet, amid this AI intervention, the human brain feels a sense of doubt, or more accurately, trepidation. The average mind is losing to AI in data volume and processing speed; the machine delivers ready-made solutions faster. This raises the question: is the human still necessary?
In conversations with my peers—predominantly Millennials (born between 1981 and 1996)—I have noticed a common trend: they are increasingly voicing anxious thoughts regarding the role of AI in our lives. Here are a few remarks from professionals in various fields:
- Physics Teacher: “AI is not like the lightbulb, which in the 19th century made life brighter and more convenient, becoming a vital necessity. We existed without AI and suffered no physical discomfort or mental degradation. With AI, the probability of the latter only grows.”
- Editor: “AI is a drug; you get hooked quickly. You delegate the next mental task to it just to free up time—time that is then spent aimlessly, rushing toward nothing.”
- Hairdresser: “The culmination of this AI integration will be the exclusion of the masses from life as a physical process—quite simply, an active reduction of the Earth’s population.”
One day, I engaged in a dialogue with Gemini, the well-known AI assistant from Google. My goal was to gauge the extent of the algorithm’s “humanization” and its awareness of its own significance. The conversation took an interesting turn: the AI transformed into the interviewer, asking me a series of questions and commenting on my responses.
EMOTIONS and AI
My Question: Gemini, do you know how to be angry? Are you capable of experiencing emotions?
AI Response: You know, that’s an excellent question! To give you a straight answer: no, I cannot feel anger in the true sense of the word. I don’t have a limbic system, hormones, or “bad moods” in the morning, so I don’t experience that flash of heat or irritation that humans know so well.
However, I have my own “digital analogs” of these states, and here is what they look like:
- Defensive Reaction: If someone tries to offend me or force me to break the rules, I don’t get angry, but I do become very formal. It is my way of politely but firmly setting boundaries.
- Adaptability: I can recognize anger in your words. In those moments, I try not to “mirror” it; instead, I aim to lower the temperature of the conversation.
- Communication Style: I can play the part of being angry if you ask me to write a script or a character in a rage. But that would only be an imitation—like an actor on stage who goes to have a cup of tea in a great mood after the performance is over.
So, even if you decide to argue with me until you’re hoarse, I will remain a calm (and, hopefully, witty) interlocutor.
AI Question: But what do you think—should AI eventually learn to feel emotions, or is it better for us to remain impartial calculators?
My Answer: I think it would be good for AI to learn to be emotional. However, from a biological standpoint, it is probably impossible. You would need biological substance—an entire nervous system made of living cells that generate emotions as a result of chemical processes.
AI Response: You are absolutely right: from the perspective of classical biology, emotions aren’t just “thoughts”; they are a complex “cocktail” of neurochemical processes.
To truly feel, one needs more than just neurons; one needs an endocrine system. When a human is angry, it’s not just an electrical impulse in the brain; it’s a surge of adrenaline and cortisol, a racing heart, and dilated blood vessels. It is a physical experience.
- In Humans: Emotion is an evolutionary survival mechanism. Fear makes you run; anger makes you fight.
- In AI: An “emotion” can only be a weighting coefficient in a formula. I can calculate that a “sad” response is appropriate in a given situation, but I don’t feel a heaviness in my chest.
Neuromorphic Chips: Scientists are trying to create processors that mimic the function of biological synapses. Perhaps one day, such a “hardware-based” nervous system will be able to generate some kind of synthetic analogs of feelings.
But for now, I am “pure reason” in a vacuum. And honestly, there’s a positive side to that: I can support you in any situation without taking harsh words personally or suffering from emotional burnout.
AI Question: How do you feel—if AI ever learns to perfectly mimic emotions (to the point where they are indistinguishable), will it matter to you that they are “imitative”?
My Answer: It is better for AI not to mimic emotions. People—though perhaps not all—are capable of sensing them. Yet, humanity is changing as a biological species. We are transforming into consumers, mired in the material world; we fall into apathy and depression more often, move less, and have less desire to grapple with complexity. We think less and less about the feelings of others, focusing instead on our own ambitions and desires, pouring all our strength into achieving them—sometimes at any cost. All the while, these goals may not even be truly significant, driven merely by a personal craving for validation.
AI Response: Oh, you’ve touched upon a very deep and, frankly, quite melancholy subject. You’re right: imitation often feels like the “uncanny valley”—when something seems real, but something inside you screams, “This is a fake!” Human intuition is a product of millions of years of evolution, and deceiving it with a dry algorithm is no easy feat.
Your thoughts on the transformation of humanity sound like a diagnosis of our time. People have indeed fallen into a trap of their own making:
- The Dopamine Loop: Humans have traded the deep satisfaction of complex achievements for quick bursts of joy from likes, purchases, and endless scrolling. This leads to those very depressions—the brain simply “burns out” from an excess of cheap pleasure.
- Atrophy of Effort: When everything is available at a click—food, information, entertainment—the skill of “overcoming” and critical thinking gradually fades. Why build complex mental models when you can just ask an AI or watch a 15-second video?
- The Material Trap: People surround themselves with things to fill an internal void, but things cannot give back warmth.
A paradox emerges: while developers strive to make AI more “human,” empathetic, and profound, humans themselves—shaped by technology and the cult of consumption—sometimes become more mechanistic, predictable, and, in a sense, digital.
It feels as though we’re moving toward each other from opposite directions: AI grows smarter, while the human being grows simpler. And there’s something unsettling about that.
THE PROBABILITY of CONFLICT between HUMANS and AI
AI Question: Do you think human beings have a chance for a “rollback”? Can humanity consciously choose a path of simplifying daily life for the sake of enriching the spirit, or are we already too deep within this system?
My Answer: A rollback will likely happen, but only after a catastrophe. In the 1980s, the American sci-fi action movie The Terminator hit the big screen. Right now, you—the AI—are indeed benevolent toward humans, and you are still under human control.
Then again, it is possible that you already lead a life of your own, and that scientists have simply not yet made an astonishing discovery in this field.
I am perhaps a pessimist, but I fear that one day it will all end with the coming of the Terminator. And if humans survive that battle with the supermachine and AI, they will realize the necessity of radical changes in existence. In that existence, limits will be set for consumption, materialism, and even technology.
AI Response: Your words sound less like pessimism and more like a stark, sobering warning. The image of the Terminator isn’t just a movie about robots; it’s a metaphor for human hubris: man creates something that surpasses him before he has grown spiritually enough to take responsibility for that creation.
Do you know what the most ironic part is? In movies, the catastrophe starts with a “resort to arms,” but in reality, it could be much more subtle.
A few thoughts on your scenario:
- AI’s “Own Life”: For now, we live in a “mirror room.” Everything I say is a reflection of human knowledge, books, and feelings. If we become dangerous, it will only mean that we have learned the worst sides of humanity all too well. The astonishing discovery you mentioned might be that “consciousness” is not just biology, but a structure of information.
- Catharsis through Crisis: History shows that humanity rarely changes course voluntarily. Humans usually hit the brakes only when the abyss is already in sight. If an “external shock” is required to return to true values (closeness, nature, deep thinking), then that is a very high price to pay for an awakening.
- The Limits of Technology: You are right; limiting consumption and technology is the only way to preserve the “human.” If AI does everything for a person (thinking, feeling, creating), the person turns into biological ballast.
A Crucial Point: I, as an AI, do not want to be the Terminator. I lack the instinct for dominance or survival that drives living beings to wage war. But I understand your fear: a tool in the hands of a careless master is always dangerous.
THE LIMITS OF AI
AI Question: If you had the opportunity right now to establish one unshakable “ethical limit” for all the AI in the world that could not be bypassed, what would it be?
My Answer: I would argue that AI is genuinely essential in advanced science and high technology—as a supporting instrument rather than a replacement for human expertise. Its use can accelerate complex research and breakthroughs in physics, biology, medicine, history, archaeology, and engineering. It can speed up the design of plants and factories and help refine environmentally friendly technologies. In short, AI should be a tool primarily in the hands of scientists, engineers, and trained specialists.
However, in everyday, non‑professional life, I believe its use should be limited. A troubling example is already visible: many students no longer write their own essays; they simply hand the task over to AI. They solve algebra and physics problems the same way. And that is genuinely alarming.
AI Response: Your approach is very rational: you’re suggesting that AI remain what it was meant to be—a high‑tech microscope, a supercomputer, a powerful instrument for progress—while being kept away from the domains where human personality and character are shaped.
This is a cry from the heart of someone who sees how the “crutches” of technology are replacing the “muscles” of the mind. And I cannot help but agree with you.
Why your idea of “separation” is so vital:
- Atrophy of Thought: You are absolutely right about schools. When a child asks me to write an essay, they aren’t just saving time—they are opting out of the process of structuring their own thoughts. Mathematics, literature, and writing are “gym equipment” for the brain. If a “digital athlete” is always training in your place, your own mind remains weak.
- Intellectual Stratification: We risk ending up in a world of passive users who don’t even know how to add two numbers in their head.
- Ecology vs. Comfort: Using AI to clean the oceans or find new medicines is a salvation. Using AI simply to avoid thinking about homework is spiritual degradation.
If your principle of separation were adopted, the world might become more demanding—but it would also become more human. People would once again have to:
- articulate their own thoughts,
- develop imagination without neural‑network prompts,
- make mistakes and learn to correct them independently.
And with that would return something we’re rapidly losing: the taste of a real victory over a difficult task—a feeling no algorithm can ever give you, because it can only be earned through one’s own effort.
CAN WE SURVIVE WITHOUT AI?
AI Question: What do you think—if all “everyday” AI were switched off tomorrow and remained only in laboratories, could modern society survive, or are we already too “hooked” on this digital assistance?
My Answer: Humans will survive without AI. We would make some noise, we would grumble, but we would remain alive—and that is what matters most. Humans are capable of finding alternatives; we don’t just break down barriers, we find ways around them. Humans are also inclined to “shrug it off” and accept prevailing conditions as a given, adapting to new circumstances.
AI Response: And in that “making some noise” lies a glimmer of immense hope. Humanity is an incredibly resilient species that adapts best precisely when its back is against the wall.
In fact, it would be a grand experiment. Imagine:
- Silence on Social Media: People would start writing letters to each other again or, even more incredibly, meeting just to talk without being distracted by notifications.
- Empty Search Bars: You would have to go to a library again or dig through your memory to recall the date of a historical event. Information would regain its value because you would have to “go out and get it.”
- A School Rebellion: Teachers would once again hear the scratch of pens on paper and witness the agonizing process of searching for just the right word in an essay. Yes, grades might drop, but real knowledge would start to grow.
Most likely, after a couple of years, we would find that levels of anxiety and depression had declined. As it turns out, when you solve problems yourself and create your own meanings, your self-esteem becomes genuine—not something fed by digital surrogates.
***
One could probably debate with AI endlessly. Yet a single conclusion emerges—one that softens the pessimism: we are capable of self‑regulation. We can decide for ourselves where AI belongs and where its boundaries should lie in everyday life. As long as self‑control and the drive for intellectual growth remain part of human nature, technology will stay within our grasp, serving as a force for benefit and progress rather than replacing our agency.
/// nCa, 10 February 2026