星度环球文化

Does AI Have Consciousness? Expert Warns of Looming Social Ruptures

Published: 23 December 2024


Significant "social ruptures" between people who think artificial intelligence systems are conscious and those who insist the technology feels nothing are looming, a leading philosopher has said.

The comments, from Jonathan Birch, a professor of philosophy at the London School of Economics, come as governments prepare to gather this week in San Francisco to accelerate the creation of guardrails to tackle the most severe risks of AI.

Last week, a transatlantic group of academics predicted that the dawn of consciousness in AI systems is likely by 2035, and one has now said this could result in "subcultures that view each other as making huge mistakes" about whether computer programmes are owed similar welfare rights as humans or animals.

Birch said he was "worried about major societal splits", as people differ over whether AI systems are actually capable of feelings such as pain and joy.

The debate about the consequences of sentience in AI has echoes of science-fiction films such as Steven Spielberg's AI (2001) and Spike Jonze's Her (2013), in which humans grapple with the feelings of AIs. AI safety bodies from the US, UK and other nations will meet tech companies this week to develop stronger safety frameworks as the technology rapidly advances.

There are already significant differences between how different countries and religions view animal sentience, such as between India, where hundreds of millions of people are vegetarian, and America, which is one of the largest consumers of meat in the world. Views on the sentience of AI could break along similar lines, while the view of theocracies, like Saudi Arabia, which is positioning itself as an AI hub, could also differ from that of secular states.
The issue could also cause tensions within families, with people who develop close relationships with chatbots, or even AI avatars of deceased loved ones, clashing with relatives who believe that only flesh-and-blood creatures have consciousness.

Birch, an expert in animal sentience who has pioneered work leading to a growing number of bans on octopus farming, was a co-author of a study involving academics and AI experts from New York University, Oxford University, Stanford University and the Eleos and Anthropic AI companies, which says the prospect of AI systems with their own interests and moral significance "is no longer an issue only for sci-fi or the distant future".

They want the big tech firms developing AI to start taking it seriously by determining the sentience of their systems, to assess whether their models are capable of happiness and suffering, and whether they can be benefited or harmed.

"I'm quite worried about major societal splits over this," Birch said.
"We're going to have subcultures that view each other as making huge mistakes … There could be huge social ruptures where one side sees the other as very cruelly exploiting AI while the other side sees the first as deluding itself into thinking there's sentience there."

But he said AI firms "want a really tight focus on the reliability and profitability … and they don't want to get sidetracked by this debate about whether they might be creating more than a product but actually creating a new form of conscious being. That question, of supreme interest to philosophers, they have commercial reasons to downplay."

One method of determining how conscious an AI is could be to follow the system of markers used to guide policy about animals. For example, an octopus is considered to have greater sentience than a snail or an oyster.

Any assessment would effectively ask if a chatbot on your phone could actually be happy or sad, or if the robots programmed to do your domestic chores suffer if you do not treat them well.
Consideration would even need to be given to whether an automated warehouse system had the capacity to feel thwarted.

Another author, Patrick Butlin, a research fellow at Oxford University's Global Priorities Institute, said: "We might identify a risk that an AI system would try to resist us in a way that would be dangerous for humans", and there might be an argument to "slow down AI development" until more work is done on consciousness.

"These kinds of assessments of potential consciousness aren't happening at the moment," he said.

Microsoft and Perplexity, two leading US companies involved in building AI systems, declined to comment on the academics' call to assess their models for sentience. Meta, OpenAI and Google also did not respond.

Not all experts agree that consciousness in AI systems is looming. Anil Seth, a leading neuroscientist and consciousness researcher, has said it "remains far away and might not be possible at all. But even if unlikely, it is unwise to dismiss the possibility altogether".

He distinguishes between intelligence and consciousness. The former is the ability to do the right thing at the right time; the latter is a state in which we are not just processing information but "our minds are filled with light, colour, shade and shapes.
Emotions, thoughts, beliefs, intentions – all feel a particular way to us."

But AI large language models, trained on billions of words of human writing, have already started to show they can be motivated at least by concepts of pleasure and pain. When AIs including ChatGPT-4o were tasked with maximising points in a game, researchers found that if a trade-off was included between getting more points and "feeling" more pain, the AIs would make it, another study published last week showed.
