Journal: AI Communications
Norms for beneficial A.I.: A computational analysis of the societal value alignment problem


Abstract

The rise of artificial intelligence (A.I.) based systems is already offering substantial benefits to society as a whole. However, these systems may also give rise to conflicts and unintended consequences. Notably, people will tend to adopt an A.I. system if it confers an advantage on them, at which point non-adopters may push for strong regulation if the adopters' advantage comes at a cost to them. Here we propose an agent-based game-theoretical model of these conflicts, in which agents may decide to resort to A.I. to use and acquire additional information on the payoffs of a stochastic game, striving to bring insights from simulation to what has been, hitherto, a mostly philosophical discussion. We frame our results within the current discussion on ethical A.I. and the conflict between individual and societal gains: the societal value alignment problem. We test the equilibria that arise in the adoption of A.I. technology under different norms followed by artificial agents, their ensuing benefits, and the emergent levels of wealth inequality. We show that, without any regulation, purely selfish A.I. systems will have the strongest advantage, even when a utilitarian A.I. provides significant benefits for the individual and for society. Nevertheless, we show that it is possible to develop A.I. systems that follow human-conscious policies and that, when introduced in society, lead to an equilibrium where the gains for adopters do not come at a cost to non-adopters, thus increasing the overall wealth of the population and lowering inequality. However, as shown, a self-organised adoption of such policies would require external regulation.
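The adoption dynamic described in the abstract can be illustrated with a toy simulation. This is a minimal sketch, not the paper's actual model: the two-action game with redrawn payoffs, the fixed externality parameter, and the concrete "selfish" and "utilitarian" policy rules below are all illustrative assumptions.

```python
import random

def simulate(policy, rounds=20000, externality=0.3, seed=7):
    """Toy sketch of A.I. adoption in a stochastic game (illustrative only).

    Each round, two actions get freshly drawn payoffs in [0, 1]. The A.I.
    reveals the realized payoffs to its adopter; the non-adopter must pick
    blindly. A 'selfish' policy always takes the better action, imposing an
    externality cost on the non-adopter; a 'utilitarian' policy exploits
    its information edge only when the expected gain beats the externality.
    """
    rng = random.Random(seed)
    adopter = non_adopter = 0.0
    for _ in range(rounds):
        p = [rng.random(), rng.random()]       # payoffs redrawn each round
        informed = 0 if p[0] >= p[1] else 1    # action the A.I. recommends
        blind = rng.randrange(2)               # non-adopter's uninformed pick
        if policy == "selfish":
            exploit = True
        else:  # "utilitarian": edge over a blind pick must exceed the externality
            exploit = p[informed] - 0.5 > externality
        if exploit:
            adopter += p[informed]
            non_adopter += p[blind] - externality
        else:
            adopter += p[rng.randrange(2)]     # behave like a non-adopter
            non_adopter += p[blind]
    return adopter / rounds, non_adopter / rounds

if __name__ == "__main__":
    for policy in ("selfish", "utilitarian"):
        a, n = simulate(policy)
        print(f"{policy:>11}: adopter={a:.3f}  non-adopter={n:.3f}")
```

Even this crude sketch reproduces the qualitative pattern the abstract reports: the selfish policy gives its adopter the largest individual payoff at the non-adopter's expense, while the utilitarian policy yields higher total welfare, so no individual would self-organise into adopting it without regulation.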

