
Machine learning for real-time strategy computer games.



Abstract

A commercial Real-Time Strategy (RTS) game requires an artificial intelligence component, simply called an "AI", capable of providing a human player with a challenging opponent. These AIs must simulate the play style of a competent human player. The AIs of current RTS games can play at multiple levels of skill. However, their simulation of a competent human player is incomplete because they cannot change their tactics to adapt to a human player.

In this thesis, neural networks provide learning mechanisms for RTS AIs. Two AI prototypes that each learn a binary decision function are described. Both prototypes use a Stochastic Back-Propagation (SBP) algorithm to train a neural network; however, the data set for training the neural network is created in two different ways. The Moving Window technique records <input state, output state> tuples for each AI-controlled game agent whenever it makes a decision during a game. When learning is performed, these tuples are evaluated for their contribution to the overall performance of the AI-controlled agents. The Simulated Memory technique extracts a set of <input state, output state> tuples from the neural network, updates this set based on the effectiveness of recent commands, and retrains the neural network with the updated set.

The two prototypes were tested against several scripted AI opponents, and various input parameter settings were tried in order to tune their performance. Both prototypes successfully adapted their strategies until they were victorious over simulated opponents with simple deterministic strategies. For the parameter settings used in the experiments, the Moving Window prototype outperformed the Simulated Memory technique in both accuracy and speed.
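To make the pipeline described in the abstract concrete, the following is a minimal Python sketch of how a Moving Window of <input state, output state> tuples might feed stochastic (per-sample) back-propagation in a small feed-forward network that learns a binary decision. All identifiers, dimensions, and update-rule details are illustrative assumptions and are not taken from the thesis itself.

```python
# Minimal illustrative sketch (not from the thesis): a moving window of
# <input state, output state> tuples trains a tiny feed-forward network
# with stochastic back-propagation (weights updated after every tuple).
from collections import deque

import numpy as np

WINDOW_SIZE = 500   # assumed: number of recent decisions kept in the window
STATE_DIM = 8       # assumed: size of the encoded game-state vector

window = deque(maxlen=WINDOW_SIZE)  # holds (input state, output state) pairs

def record_decision(state_vec, decision):
    """Record the state an AI-controlled agent saw and the binary decision it made."""
    window.append((np.asarray(state_vec, dtype=float), float(decision)))

# One hidden layer with tanh units and a sigmoid output for the binary decision.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(STATE_DIM, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=16)
b2 = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(x):
    """Return the probability of the positive decision for game state x."""
    h = np.tanh(np.asarray(x, dtype=float) @ W1 + b1)
    return sigmoid(h @ W2 + b2)

def train_on_window(epochs=5, lr=0.05):
    """Stochastic back-propagation: update the weights after each individual tuple."""
    global b2
    for _ in range(epochs):
        for x, target in window:
            h = np.tanh(x @ W1 + b1)
            y = sigmoid(h @ W2 + b2)
            # Gradient of the squared error through the sigmoid output unit.
            delta_out = (y - target) * y * (1.0 - y)
            delta_hid = delta_out * W2 * (1.0 - h ** 2)
            W2[:] -= lr * delta_out * h
            b2 -= lr * delta_out
            W1[:] -= lr * np.outer(x, delta_hid)
            b1[:] -= lr * delta_hid

# Example usage with a hypothetical state encoding:
# record_decision([0.2, 0.5, 1, 0, 0, 0.8, 0.1, 0], decision=1)
# train_on_window()
# predict([0.2, 0.5, 1, 0, 0, 0.8, 0.1, 0])
```

In the thesis, the recorded tuples are additionally evaluated for their contribution to the agents' overall performance before retraining; a comparable step in this sketch would relabel or weight entries in `window` by some effectiveness score before calling `train_on_window`. The Simulated Memory variant would instead rebuild the tuple set from the trained network and update it according to the effectiveness of recent commands before retraining.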

Bibliographic details

  • Author

    Marusiak, Warren.

  • Author affiliation

    The University of Regina (Canada).

  • Degree-granting institution: The University of Regina (Canada).
  • Subject: Artificial Intelligence; Computer Science.
  • Degree: M.Sc.
  • Year: 2008
  • Pagination: 144 p.
  • Total pages: 144
  • Original format: PDF
  • Language of text: English
  • Chinese Library Classification: Artificial intelligence theory; Automation technology, computer technology
  • Keywords
