Home > Foreign Journals > International Journal of Multimedia Information Retrieval > aMM: Towards adaptive ranking of multi-modal documents

aMM: Towards adaptive ranking of multi-modal documents


Abstract

Information reranking aims to recover the true relevance order of the initial search results. Traditional reranking approaches have achieved great success in uni-modal document retrieval. They, however, suffer from the following limitations when reranking multi-modal documents: (1) they are unable to capture and model the relations among the multiple modalities within the same document; (2) they usually concatenate the diverse features extracted from different modalities into a single vector, rather than adaptively fusing them according to their discriminative capabilities with respect to the given query; and (3) most of them consider only the pairwise relations among documents and discard their higher-order grouping relations, which leads to information loss. Towards this end, we propose an adaptive multi-modal multi-view (\(\mathbf{aMM}\)) reranking model. This model jointly regularizes the relatedness among modalities, the effects of the feature views extracted from different modalities, and the complex relations among multi-modal documents. Extensive experiments on three datasets validate the effectiveness and robustness of the proposed model.
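The abstract's second limitation — concatenating modality features into one vector instead of weighting them by how discriminative they are for the current query — can be illustrated with a minimal sketch. The code below is not the paper's aMM formulation; it is a hypothetical query-adaptive fusion, where each modality's contribution to a document's score is weighted by a softmax over that modality's query–document similarity, so modalities that match the query well for this particular query dominate the fused score.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def adaptive_fuse(query_feats, doc_feats):
    """Score a document against a query with query-adaptive modality weights.

    query_feats, doc_feats: dicts mapping modality name (e.g. "text",
    "image") to a feature vector for that modality.

    Illustrative only: weights come from a softmax over the per-modality
    query-document similarities, rather than the fixed, query-independent
    weighting implied by plain feature concatenation.
    """
    mods = sorted(query_feats)
    sims = np.array([cosine(query_feats[m], doc_feats[m]) for m in mods])
    exp = np.exp(sims - sims.max())          # numerically stable softmax
    weights = exp / exp.sum()
    return float(weights @ sims), dict(zip(mods, weights))

# Usage: rerank documents by their fused multi-modal score.
query = {"text": np.array([1.0, 0.0]), "image": np.array([0.0, 1.0])}
docs = {
    "d1": {"text": np.array([1.0, 0.0]), "image": np.array([0.0, 1.0])},
    "d2": {"text": np.array([0.0, 1.0]), "image": np.array([1.0, 0.0])},
}
ranked = sorted(docs, key=lambda d: adaptive_fuse(query, docs[d])[0],
                reverse=True)
```

A fixed concatenation would score both documents with the same modality weights regardless of the query; here the weights are recomputed per query, which is the behavior the abstract contrasts against.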
