AI Communications

Modelling deception using theory of mind in multi-agent systems


Abstract

Agreement, cooperation and trust would be straightforward if deception never occurred in communicative interactions. Humans have deceived one another since the species began. Do machines deceive one another, or indeed humans? If they do, how may we detect this? Detecting machine deception arguably requires a model of how machines may deceive, and of how such deception may be identified. Theory of Mind (ToM) provides the opportunity to create intelligent machines that are able to model the minds of other agents. The future implications of a machine that can understand other minds (human or artificial) and that also has reasons and intentions to deceive others are dark from an ethical perspective. Being able to understand the dishonest and unethical behaviour of such machines is crucial to current research in AI. In this paper, we present a high-level approach for modelling machine deception using ToM under factors of uncertainty, and we propose an implementation of this model in an Agent-Oriented Programming Language (AOPL). We show that the Multi-Agent Systems (MAS) paradigm can be used to integrate concepts from two major theories of deception, namely Information Manipulation Theory 2 (IMT2) and Interpersonal Deception Theory (IDT), and we show how to apply these concepts to build a model of computational deception that takes ToM into account. To show how agents use ToM to deceive, we define an epistemic agent mechanism using BDI-like architectures to analyse deceptive interactions between deceivers and their potential targets, and we explain the steps by which the model can be implemented in an AOPL. To the best of our knowledge, this work is one of the first attempts in AI that (i) uses ToM together with components of IMT2 and IDT to analyse deceptive interactions and (ii) implements such a model.
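The abstract's core idea, a BDI-like epistemic agent that keeps a first-order ToM model of its target and uses it to decide whether a false assertion is worth making, can be illustrated with a minimal sketch. The Python example below is a hypothetical simplification, not the paper's AOPL implementation: the classes `Agent` and `Deceiver`, the `tom` belief store, and the decision rule in `deceptive_assertion` are all illustrative assumptions introduced here, standing in for the richer IMT2/IDT machinery the paper describes.

```python
# Hypothetical sketch of a BDI-like deceiver with first-order Theory of Mind.
# Not the paper's AOPL implementation; all names and structure are assumptions.

from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    beliefs: dict = field(default_factory=dict)  # proposition -> truth value held

    def update(self, proposition, value):
        self.beliefs[proposition] = value


@dataclass
class Deceiver(Agent):
    # First-order ToM: the deceiver's model of what each target believes.
    tom: dict = field(default_factory=dict)  # target name -> modelled belief base

    def model_target(self, target_name, proposition, value):
        self.tom.setdefault(target_name, {})[proposition] = value

    def deceptive_assertion(self, target_name, proposition):
        """Decide what to claim to the target about `proposition`.

        Lie only if (i) the deceiver itself believes the proposition is true
        and (ii) its ToM model says the target does not already believe it;
        otherwise the lie is pointless or easily detected, so be truthful.
        """
        own = self.beliefs.get(proposition)
        target_view = self.tom.get(target_name, {}).get(proposition)
        if own is True and target_view is not True:
            return (proposition, False)   # assert the opposite of own belief
        return (proposition, own)         # otherwise report the honest belief

    def tell(self, target: Agent, proposition):
        prop, claimed = self.deceptive_assertion(target.name, proposition)
        target.update(prop, claimed)                   # target adopts the claim
        self.model_target(target.name, prop, claimed)  # ToM tracks the effect


if __name__ == "__main__":
    alice = Deceiver(name="alice")
    bob = Agent(name="bob")

    alice.update("rival_bid_is_high", True)               # alice's private belief
    alice.model_target("bob", "rival_bid_is_high", None)  # alice thinks bob is unsure

    alice.tell(bob, "rival_bid_is_high")
    print(bob.beliefs)  # {'rival_bid_is_high': False} -- bob holds the false claim
```

In an actual AOPL such as Jason/AgentSpeak, the same pattern would be expressed as plans over belief bases rather than Python methods; the sketch only shows the deceiver separating its own beliefs from its model of the target's, which is the ToM distinction the paper builds on.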
