
Abductive Metareasoning for Truth-Seeking Agents

Abstract

My research addresses the question of how an agent tasked with making sense of its world, by finding explanations for evidence (e.g., sensor reports) using domain-general strategies, can accurately and efficiently handle incomplete evidence, noisy evidence, and an incomplete knowledge base. I propose the following answer. The agent should employ an optimal abductive reasoning algorithm (developed piece-wise and shown to be best in a class of similar algorithms) that allows it to reason from evidence to causes. For efficiency and operational reasons, the agent should commit to beliefs periodically rather than waiting until it has obtained all the evidence it will ever be able to obtain. Beliefs committed on the basis of incomplete or noisy evidence, or an incomplete knowledge base, may be incorrect, and future evidence may then produce failed predictions or anomalies. The agent must then decide whether to retain its beliefs and therefore discount the newly obtained evidence, revise its prior beliefs, or expand its knowledge base (what can be described as anomaly-driven or explanation-based learning). When the agent considers whether its failed predictions or anomalies result from false beliefs or limitations in its knowledge, or instead from incomplete or noisy sensor reports, it is performing a kind of metareasoning, or reasoning about its own reasoning (Schmill et al. 2011).
My approach treats this metareasoning procedure as itself abductive: faced with failed predictions or anomalies, the agent attempts to explain its potential failure of reasoning. One possible explanation is that the agent committed to incorrect beliefs on the basis of earlier misleading evidence. Another is that the newly obtained evidence is misleading and the agent's beliefs are not incorrect. A third is that the agent's knowledge base is incomplete, so the anomaly resulted from the agent lacking the proper facts about what kinds of events are possible in the world. The abductive metareasoning procedure (which uses the same abductive inference algorithm as the first-level reasoning procedure) produces its best explanation. On the basis of this explanation, the agent may attempt to repair its beliefs, ignore the newly obtained evidence, or expand its knowledge base. These "fixes," such as expanding the knowledge base, may themselves be reverted if the agent encounters further failed predictions or anomalies in the near future.
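The metareasoning step described in the abstract can be pictured as a small decision loop: enumerate candidate explanations of the reasoning failure, score them, pick the best, and apply the matching "fix." The sketch below is purely illustrative and is not the author's algorithm; the hypothesis names, plausibility scores, and fix labels are hypothetical placeholders standing in for the abductive inference procedure the abstract refers to.

```python
# Illustrative sketch of abductive metareasoning over an anomaly.
# All names and scores here are hypothetical, not the author's system.

def best_explanation(hypotheses, score):
    """Abductive step: choose the highest-scoring candidate explanation.
    The same selection is assumed at both the first level (evidence -> causes)
    and the meta level (anomaly -> failure of reasoning)."""
    return max(hypotheses, key=score)

def metareason(anomaly):
    """Explain a reasoning failure and select the corresponding 'fix'."""
    hypotheses = [
        "incorrect-beliefs",     # prior evidence was misleading
        "misleading-evidence",   # the newly obtained evidence is noise
        "incomplete-knowledge",  # the KB lacks a relevant event type
    ]
    # Toy scoring function standing in for the abductive algorithm's
    # plausibility estimate; here it simply reads precomputed scores.
    score = lambda h: anomaly["plausibility"][h]
    explanation = best_explanation(hypotheses, score)
    fixes = {
        "incorrect-beliefs": "revise prior beliefs",
        "misleading-evidence": "discount the new evidence",
        "incomplete-knowledge": "expand the knowledge base",
    }
    return explanation, fixes[explanation]

# Example: the metareasoner judges the new evidence most likely misleading.
anomaly = {"plausibility": {"incorrect-beliefs": 0.2,
                            "misleading-evidence": 0.7,
                            "incomplete-knowledge": 0.1}}
print(metareason(anomaly))  # ('misleading-evidence', 'discount the new evidence')
```

Note that any fix chosen here is itself tentative: as the abstract states, a fix may be reverted if further anomalies follow it.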
