Journal: Foresight

The intelligence explosion revisited



Abstract

Purpose - The claim that superintelligent machines constitute a major existential risk was recently defended in Nick Bostrom's book Superintelligence and forms the basis of the sub-discipline of AI risk. The purpose of this paper is to critically assess the philosophical assumptions underpinning the argument that AI could pose an existential risk and, if so, the character of that risk.

Design/methodology/approach - This paper distinguishes between "intelligence", the cognitive capacity of an individual, and "techne", a more general ability to solve problems using, for example, technological artifacts. While human intelligence has not changed much over historical time, human techne has improved considerably. Moreover, the fact that human techne varies more across individuals than human intelligence does suggests that if machine techne were to surpass human techne, the transition would likely be prolonged rather than explosive.

Findings - Some constraints on the intelligence explosion scenario are presented which imply that AI could be controlled by human organizations.

Originality/value - If sound, this argument suggests that efforts should focus on devising strategies to control AI rather than on strategies that assume such control is impossible.


