Adversarial Attack against Modeling Attack on PUFs

2019 56th ACM/IEEE Design Automation Conference (DAC)

Abstract

The Physical Unclonable Function (PUF) has been proposed for device identification and authentication and for cryptographic key generation. A strong PUF provides an extremely large number of device-specific challenge-response pairs (CRPs) that can be used for identification. Unfortunately, the CRP mechanism is vulnerable to modeling attacks, in which machine learning (ML) algorithms predict PUF responses with high accuracy. Many methods have been developed to strengthen strong PUFs with complicated hardware; however, recent studies show that they remain vulnerable to GPU-accelerated ML algorithms. In this paper, we take a different approach. With a slightly modified CRP mechanism, a PUF can provide poisoned data such that ML algorithms cannot build an accurate model of the PUF under attack. Experimental results show that the proposed method is an effective countermeasure against modeling attacks on PUFs. In addition, the proposed method is compatible with hardware-strengthening schemes, providing even better protection for PUFs.
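
The abstract describes the scheme only at a high level. As a concrete illustration, below is a minimal sketch of the setting, assuming a simulated arbiter PUF under the standard additive delay model and a logistic-regression modeling attack of the kind used in the literature; the poisoning rule (XORing each emitted response with a keyed parity of the challenge), the parameter values, and all names are illustrative assumptions, not the paper's actual construction.

    # A minimal sketch, not the paper's construction: simulate an n-stage
    # arbiter PUF (standard additive delay model), attack it with logistic
    # regression, then repeat the attack against a hypothetical poisoned
    # CRP interface that XORs each response with a keyed challenge parity.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_stages, n_crps = 64, 12000

    # Device-specific delay parameters: the secret a modeling attack learns.
    w = rng.normal(size=n_stages + 1)

    def feature_map(challenges):
        """Parity features Phi of the linear arbiter-PUF delay model."""
        signs = 1 - 2 * challenges                         # 0/1 -> +1/-1
        phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]  # suffix products
        return np.hstack([phi, np.ones((len(challenges), 1))])

    challenges = rng.integers(0, 2, size=(n_crps, n_stages))
    phi = feature_map(challenges)
    true_resp = (phi @ w > 0).astype(int)

    split = n_crps // 2  # attacker trains on the first half of the CRPs

    # Modeling attack on the unprotected CRP interface.
    clean = LogisticRegression(max_iter=5000).fit(phi[:split], true_resp[:split])
    print("attack accuracy, clean interface   :",
          clean.score(phi[split:], true_resp[split:]))

    # Hypothetical poisoned interface: emit r XOR g(c), where g is the
    # parity of a secret subset of challenge bits. A verifier that knows
    # the subset can undo the flips; the attacker's linear model in Phi
    # cannot represent the XORed function.
    key_bits = rng.choice(n_stages, size=8, replace=False)
    g = challenges[:, key_bits].sum(axis=1) % 2
    emitted = true_resp ^ g

    poisoned = LogisticRegression(max_iter=5000).fit(phi[:split], emitted[:split])
    print("attack accuracy, poisoned interface:",
          poisoned.score(phi[split:], true_resp[split:]))

One design note on the sketch: flipping responses with a keyed parity, rather than with symmetric random noise, is what degrades the attack here. Random flips leave the Bayes-optimal classifier unchanged, so logistic regression can still recover the delay model to high accuracy, whereas the keyed parity moves the emitted function outside the span of the linear delay features.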
