IEEE Transactions on Dependable and Secure Computing

Robust Transparency Against Model Inversion Attacks


Abstract

Transparency has become a critical need in machine learning (ML) applications. Designing transparent ML models helps increase trust, ensure accountability, and scrutinize fairness. Some organizations may opt out of transparency to protect individuals' privacy. Therefore, there is a great demand for transparency models that account for both privacy and security risks. Such transparency models can motivate organizations to improve their credibility by making the ML-based decision-making process comprehensible to end users. Differential privacy (DP) provides an important technique for disclosing information while protecting individual privacy. However, it has been shown that DP alone cannot prevent certain types of privacy attacks against disclosed ML models. DP with low ε values can provide strong privacy guarantees, but may result in significantly weaker ML models in terms of accuracy. On the other hand, setting the ε value too high may lead to successful privacy attacks. This raises the question of whether we can disclose accurate transparent ML models while preserving privacy. In this article we introduce a novel technique that complements DP to ensure model transparency and accuracy while being robust against model inversion attacks. We show that combining the proposed technique with DP provides highly transparent and accurate ML models while preserving privacy against model inversion attacks.
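The abstract refers to the trade-off governed by the privacy budget ε. As a rough, illustrative sketch only (not the authors' technique), the following Python snippet uses the standard Laplace mechanism to show why a smaller ε forces a larger noise scale and hence lower utility of the released value; the function name and the count query are hypothetical examples.

    import numpy as np

    def laplace_mechanism(value, sensitivity, epsilon, rng=None):
        # Release `value` with epsilon-differential privacy via Laplace noise.
        # Smaller epsilon -> larger noise scale -> stronger privacy, lower accuracy.
        rng = rng or np.random.default_rng()
        scale = sensitivity / epsilon
        return value + rng.laplace(loc=0.0, scale=scale)

    # Illustration: the same query released under different privacy budgets.
    true_count = 120.0  # e.g., a count query with sensitivity 1
    for eps in (0.1, 1.0, 10.0):
        noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=eps)
        print(f"epsilon={eps:>4}: noisy release = {noisy:.1f}")

Running this, the releases under ε = 0.1 scatter far more widely around the true count than those under ε = 10, mirroring the accuracy-versus-privacy tension the abstract describes for disclosed ML models.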
