International Conference on Cyber Conflict

Explainable AI for Classifying Devices on the Internet



Abstract

Devices reachable on the Internet pose varying levels of risk to their owners and to the wider public, depending on their role and functionality, which can be considered their class. Discussing the security implications of these devices without knowing their classes is impractical. Several AI methods exist for classifying devices. Because an existing word-embedding neural network found only a small number of significant features in device HTTP responses, we elected to use an alternative method, Naive Bayes classification. The Naive Bayes method demonstrated high accuracy, but we recognise the need to explain classification results in order to improve accuracy further. The black-box nature of artificial neural networks has been a serious concern when evaluating classification results in most fields. Devices on the Internet have historically been classified manually, or by trivial fingerprinting that matches major vendors, but these approaches are no longer feasible given the ever-increasing variety of devices on the Internet. In the last few years, device classification using neural networks has emerged as a new research direction. Such studies often claim high accuracy based on the validation employed, but random sampling invariably surfaces devices that cannot be easily classified and that an expert would intuitively classify differently. Addressing this issue is critical for establishing trust in classification results and can be achieved by employing explainable AI. To better understand models for classifying devices reachable on the Internet, and to improve classification accuracy, we developed a novel explainable AI method that returns the features most significant to each classification decision. We applied the Local Interpretable Model-Agnostic Explanations (LIME) framework to explain Naive Bayes classification results and, with the better understanding this provided, were able to further improve accuracy.
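The pipeline the abstract describes, a Naive Bayes classifier over tokens from device HTTP responses followed by per-feature explanations, can be sketched as follows. The training banners, class labels, and tokenizer below are invented for illustration, and `explain()` uses a simple leave-one-token-out score as a crude stand-in for the actual LIME procedure, which instead fits a local surrogate model over many random perturbations of the input.

```python
import math
from collections import Counter, defaultdict

# Hypothetical training banners: real device HTTP responses are far
# richer; these strings and labels are invented for illustration only.
TRAIN = [
    ("Server: Boa/0.94 camera stream rtsp", "camera"),
    ("Server: lighttpd webcam mjpeg snapshot", "camera"),
    ("Server: RouterOS router admin login", "router"),
    ("Server: mini_httpd router firmware upgrade", "router"),
]

def tokenize(text):
    # Minimal tokenizer for HTTP banner text.
    return text.lower().replace(":", " ").replace("/", " ").split()

class NaiveBayes:
    """Multinomial Naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, samples):
        self.class_counts = Counter()
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for text, label in samples:
            self.class_counts[label] += 1
            for tok in tokenize(text):
                self.word_counts[label][tok] += 1
                self.vocab.add(tok)
        return self

    def log_prob(self, text, label):
        # log P(label) + sum of log P(token | label), smoothed.
        lp = math.log(self.class_counts[label] / sum(self.class_counts.values()))
        total = sum(self.word_counts[label].values())
        for tok in tokenize(text):
            lp += math.log((self.word_counts[label][tok] + 1)
                           / (total + len(self.vocab)))
        return lp

    def predict(self, text):
        return max(self.class_counts, key=lambda c: self.log_prob(text, c))

def explain(model, text, top_k=3):
    """Leave-one-token-out importance: how much classification margin is
    lost when a token is removed. A simplified stand-in for LIME's
    perturbation-plus-local-surrogate procedure, not the LIME library."""
    pred = model.predict(text)
    others = [c for c in model.class_counts if c != pred]

    def margin(t):
        return model.log_prob(t, pred) - max(model.log_prob(t, o) for o in others)

    base = margin(text)
    toks = tokenize(text)
    scores = {}
    for i, tok in enumerate(toks):
        scores[tok] = base - margin(" ".join(toks[:i] + toks[i + 1:]))
    return pred, sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

model = NaiveBayes().fit(TRAIN)
pred, importance = explain(model, "Server: Boa camera snapshot")
```

Tokens with positive importance scores support the predicted class; a generic token such as "server", which appears under every class, scores near zero, which is the kind of per-feature insight the paper uses to understand and refine its classifier.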
