What if AI is empathetic?
- erika rox
- Feb 3
- 5 min read
“But an empathy box,” he said, stammering in his excitement, “is the most personal possession you have! It’s an extension of your body; it’s the way you touch other humans, it’s the way you stop being alone. But you know that. Everybody knows that.” (John R. Isidore, in Philip K. Dick’s Do Androids Dream of Electric Sheep?)

First, the AI companies release a transformative technology into the world and sell it as a heroine (should that word be read as an archetype or as an addiction? That is the question). Now they claim it could wipe out humanity, and still want to blame others.
Amid intense discussions about the ethics and regulation of ChatGPT, DeepSeek, and other AI-powered platforms, I wonder: what are the vital characteristics that, at their core, distinguish us from synthetic entities?
Above all, I ask myself: is it possible for AI to develop emotional intelligence?
The Emotional Level of Intelligence
Emotional Intelligence (EI) is a widely discussed concept today and is often associated with soft skills such as communication, collaboration, and critical thinking. If you've read any of Daniel Goleman’s work, you might agree that EI is even more important than intellectual intelligence (the well-known IQ) because it provides social skills, empathy, and self-awareness—essential qualities for living in a society full of emotional beings.
When trying to place EI in the context of Artificial Intelligence, a comment from Isidore in Philip K. Dick’s science fiction novel (see the epigraph of this text) comes to mind. He was surprised by the absence of an “empathy box” when his new neighbor was moving in, unaware that she was, in fact, an android. The character’s naïveté prevented him from realizing the truth, and he went on believing he was communicating with a flesh-and-blood person like himself.
Originally titled Do Androids Dream of Electric Sheep? but more widely known as Blade Runner after Ridley Scott’s 1982 film adaptation, the novel suggests that in a bionic, apocalyptic future (set in 1992 in the book and in 2019 in the first movie), empathy could be experienced through a physical device: a hardware empathy box. This device connects the followers of Mercerism on Earth and Mars in a virtual reality where their sensations are shared simultaneously, sometimes reaching a transcendental fusion.
For the followers of Wilbur Mercer, owning an empathy box is the most precious asset that differentiates humans from androids. Moreover, blade runners use an empathy test to identify advanced androids attempting to pass as humans, making it clear throughout the book that artificial intelligences lack the ability to develop emotions. The original title itself questions how affective relationships between humanoids and their artificial pets would unfold.
The philosophical depth of the novel is immense, and I highly recommend reading it or checking out Frank Castle's review on Medium.
Dream or Nightmare?
In my interpretation, AI wouldn’t even recognize the metaphorical need to carry an empathy box, because its emotional intelligence is not comparable to human EI. It is possible, however, that synthetic synapses could one day be developed, enabling AI to form its own varied perceptions of the world around it.
This, in my view, is where the greatest concern lies: if we don’t fully understand all the chemical reactions occurring in our own brains, how could we ever comprehend what happens inside an intelligence potentially as great as—or greater than—ours?
"Let an ultra-intelligent machine be defined as a machine that can surpass all the intellectual activities of any human, no matter how intelligent they may be. Since designing machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and human intelligence would be left behind. Thus, the first ultra-intelligent machine is the last invention humanity needs to make—provided that the machine is docile enough to tell us how to keep it under control."(Irving John Good, 1965, cited in Will Artificial Intelligence Supplant Human Intelligence? by Dora Kaufman, 2018)
Mathematician Irving Good, who worked with Alan Turing, the “father of computing,” sketched a scenario that feels strikingly contemporary given recent technological advances. His statement stands as a warning about a potential revolution that might not be very favorable for humanity.
On the other hand, with a more optimistic outlook, Italian designer Brunello Cucinelli, creative director of his namesake luxury brand, believes that “we shouldn’t be afraid of anything.” In an interview at BoF Voices 2023 (available on the Business of Fashion podcast), Cucinelli was skeptical of machine emotion: while AI could become a valuable collaborator in our society, he said he would only be truly impressed if someone created a machine capable of crying and feeling emotions the way humans do.
Personally, while I acknowledge the potentially devastating power of ultra-intelligent robots (I consume enough sci-fi to treat them with due respect), I also agree that we shouldn’t let fear control us when dealing with something we created and (still) have the opportunity to shape. The inner workings of an artificial neural network may seem opaque because of its scale and complexity, but it learns only from the information it is given.
Of course, this raises another major issue: the moral and empathetic values of those who train the AI. As Demis Hassabis, co-founder of Google DeepMind, has pointed out, we must deeply understand AI systems in order to implement them safely and responsibly. Ultimately, it falls to human conscience to teach machines values, in an ethical sense rather than an emotional one.
Empathy vs. Otherness
Returning to BoF Voices, Artificial Intelligence was a recurring theme, with an entire session dedicated to it. Among the insights from major figures in the creative and tech industries, something said by Mariam Chahin, Global Director at Microsoft, caught my attention. She sees AI as a superpower that can bridge the gap between imagination and creation. According to Chahin, tasks requiring low empathy could be delegated to machines, while those demanding high empathy could be enhanced with AI assistance for increased productivity.
It's fascinating to see how the presence (or absence) of empathy once again becomes a conceptual dividing line in the tech world.
Imagine if, through technology, we could deeply understand what another person is going through, taking into account their personal history and the natural limits of shared empathy. Like an empathy box.
From this perspective, I emphasize the importance of distinguishing between empathy and otherness. At its core, otherness involves recognizing that we are all different, meaning we cannot fully place ourselves in another person's position as empathy suggests, because our experiences are fundamentally distinct.
al·ter·i·ty (PHILOSOPHY): The characteristic, state, or quality of being distinct and different—of being another.
To test this, I asked ChatGPT 3.5 whether it agreed with certain parts of my text, and its response was yet another confirmation of its emotional incapacity:
“As an artificial intelligence, I have no feelings, opinions, or personal agreements. My response is generated based on linguistic patterns and information available up to my last update in September 2021.”
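For anyone curious to repeat this little experiment programmatically rather than in the chat window, here is a minimal sketch. It assumes the openai Python package (v1 or later) and an OPENAI_API_KEY environment variable, and it uses the model id "gpt-3.5-turbo" to stand in for the ChatGPT 3.5 I queried; the prompt wording below is illustrative, not the exact question I asked.

```python
# Minimal sketch of the "do you agree?" experiment via the OpenAI API.
# Assumptions: the `openai` package (v1+) is installed, OPENAI_API_KEY is
# set in the environment, and "gpt-3.5-turbo" stands in for ChatGPT 3.5.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

excerpt = (
    "AI wouldn't even recognize the metaphorical need to carry an "
    "empathy box, because its emotional intelligence is not comparable "
    "to human EI."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": f"Do you agree with this claim? {excerpt}"},
    ],
)

# The model typically prefaces its answer with a disclaimer much like the
# one quoted above: it has no feelings, opinions, or personal agreements.
print(response.choices[0].message.content)
```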
Thus, I choose to view AI as a distinct type of intelligence, recognizing our differences so that we can always collaborate accordingly. As designers of our own future, we must keep a close eye on how we educate our artificial intelligences—before they start contemplating a rebellion.
This article was written by me, with some assistance from ChatGPT in text revision. Also published on Medium.