Journal of Visual Communication & Image Representation

Robust multimodal discrete hashing for cross-modal similarity search

Abstract

Hashing technology improves search efficiency and reduces the storage space required for data. However, building an effective model for unsupervised cross-modal retrieval and generating efficient binary codes remains a challenging task, since several issues in unsupervised multimodal hashing still need further study. Most existing methods ignore the discrete constraint on the codes and determine the weight of each modality manually or empirically. These limitations can significantly reduce the retrieval accuracy of unsupervised cross-modal hashing methods. To address these problems, we propose a robust hashing model that efficiently learns binary codes by employing a flexible, noise-resistant l(2,1)-loss with nonlinear kernel embedding. In addition, we introduce an intermediate-state mapping that facilitates subsequent model optimization by measuring the loss between the hash codes and the intermediate states. Experiments on several public multimedia retrieval datasets validate the superiority of the proposed method from various aspects.
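Two ingredients named in the abstract, the row-wise l(2,1)-loss (which down-weights outlier samples compared with a squared Frobenius loss) and a nonlinear kernel embedding of the features, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the RBF-anchor form of the embedding, and the `gamma` parameter are all assumptions for the sketch.

```python
import numpy as np

def l21_norm(M):
    """l(2,1)-norm: sum of the l2 norms of the rows of M.

    Used as a robust loss on a residual matrix: each sample (row)
    contributes its l2 norm, not its squared norm, so a single
    noisy sample cannot dominate the objective.
    """
    return float(np.sum(np.linalg.norm(M, axis=1)))

def rbf_kernel_features(X, anchors, gamma=1.0):
    """Nonlinear kernel embedding (assumed RBF-anchor form).

    Maps each sample in X (n x d) to its RBF similarities against a
    set of anchor points (m x d), yielding an n x m nonlinear feature.
    """
    sq_dists = np.sum((X[:, None, :] - anchors[None, :, :]) ** 2, axis=2)
    return np.exp(-gamma * sq_dists)

# Toy usage: embed samples nonlinearly, then binarize a linear
# projection of the embedding into hash codes via the sign function.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))        # 8 samples, 4-dim features
anchors = X[:3]                     # 3 anchor points (assumption)
phi = rbf_kernel_features(X, anchors, gamma=0.5)
W = rng.normal(size=(3, 2))         # projection to 2-bit codes
codes = np.sign(phi @ W)            # entries in {-1, +1}
residual_loss = l21_norm(phi @ W - codes)
```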
