The problem with AI and ‘empathy’ - FT中文网

If technology redefines what our language means it could also change our perceptions of ourselves
Research suggests that LLMs read or predict people’s emotions, and write in a way which gives us the impression of empathy

One after another, the “uniquely human” traits we once thought would remain untouched by the rise of the machines have started to look vulnerable after all. First it was creativity. Is empathy next?

If you have been reading the research of late, you could be forgiven for thinking so. In one study, a team of licensed healthcare professionals compared the responses of chatbots and real doctors to patient questions posed in an online forum. The chatbot responses were rated significantly higher not just for quality, but for empathy.

In another piece of research, the large language models ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Copilot 365, Claude 3.5 Haiku and DeepSeek V3 outperformed humans on five standard emotional intelligence tests, achieving an average accuracy of 81 per cent, compared with the 56 per cent human average reported in the original validation studies. This, the authors argued, added to “the growing body of evidence that LLMs like ChatGPT are proficient — at least on par with, or even superior to, many humans — in socio-emotional tasks traditionally considered accessible only to humans”.

But before we conclude that AI is more empathic than humans, can I suggest that we stop for a moment and give ourselves a shake?

To be “empathic”, after all, means to be able to put oneself in someone else’s shoes. The Cambridge Dictionary defines empathy as “the ability to share someone else’s feelings or experiences by imagining what it would be like to be in that person’s situation”. But LLMs do not, and cannot, feel. What the research suggests they can do, rather well, is to read or predict other people’s emotions (at least in test conditions), and to write in a way which gives people the impression of empathy. It would be a dangerous mistake to allow the definition of the word “empathy” to quietly morph into something which need only meet this description.

Am I splitting hairs? One could take the utilitarian view that what really matters is not whether machines can feel, but whether their expressions of empathy can have a positive impact on human patients or customers. In an article titled “In praise of empathic AI”, a group of psychologists argue that “perceived expressions of empathy can leave beneficiaries feeling that someone is concerned for them, that they are validated and understood. If more people feel heard and cared for with the assistance of AI, this could increase human flourishing”.

There is indeed evidence to suggest that some therapeutic conversations with chatbots, with sufficient guardrails, can have positive effects on people’s mental health. They can also, of course, have very dangerous effects on some vulnerable people, as recent instances of “AI psychosis” make clear.

Either way, we must find a different word, or set of words, to describe what LLMs are doing in these interactions. Because if we call it “empathy”, one risk is that it might change our perceptions of ourselves, and not necessarily for the better. As the psychologists say in their paper, AI’s expressions of empathy “do not seem to suffer from typical human limitations” such as growing weary over time.

But these are not limitations of human empathy — they are features of it. And if we grow frustrated with real human empathy, compared with the indefatigable simulation of it we can receive on-demand from LLMs, that might drive us apart. We might grow to prefer our chatbot companions and forget what we are missing from one another.

The other problem with calling machines “empathic” is that it provides cover for actions which would otherwise feel morally uncomfortable, such as leaving lonely elderly people alone with chatbots to converse with, in lieu of making sure they have regular human company. If a machine was described as “more empathic” than a human care worker, that would conceal from view what had really been lost along the way.

It is not unusual for new technologies to quietly change the meaning of words. As the late cultural critic Neil Postman wrote, the invention of writing changed what we once meant by “wisdom”. The telegraph changed what we once meant by “information”. The television changed what we once meant by “news”.

“The old words still look the same, are still used in the same kinds of sentences,” Postman wrote in his book Technopoly in 1992. “But they do not have the same meanings; in some cases, they have the opposite meanings.” What is really dangerous, he added, is that when technology redefines words with deep roots, “it does not pause to tell us. And we do not pause to ask”.

