IEEE International Conference on Image Processing (ICIP) 2012

Fast online incremental approach of unseen place classification using disjoint-text attribute prediction



Abstract

A new approach to unseen-place classification in a commercial district is presented. It classifies input scenes into the correct place classes without needing sample images of the places for training. The number of place classes and their definitions are supervised by humans using text information only. The description of each place class is obtained from humans as a set of words, regarded as the disjoint-text attributes of the unseen place. During classification, our approach determines the number of text attributes found in an image. It runs in an online incremental manner, in the sense that the description of a place class can be updated and a new place class can be added at any time. The approach requires no training dataset and works in multiple languages. Evaluation is performed on a set of Google Street View images of a shopping area in Japan, where both Japanese and English text are available. The results show that the proposed method outperforms state-of-the-art methods for scene text recognition and standard pattern recognition. The computation is fast enough for real-time application.
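The classification scheme the abstract describes can be sketched roughly as follows: each place class is a human-supplied set of attribute words, classes can be added or updated at any time, and an image is assigned to the class whose attribute words appear most often among the scene-text words detected in it. This is a minimal illustrative sketch, not the paper's implementation; the `PlaceClassifier` class, its method names, and the word-matching score are assumptions, and scene-text extraction (OCR) is assumed to have happened upstream.

```python
# Hypothetical sketch of disjoint-text-attribute place classification.
# Assumes scene-text words were already extracted from the image by OCR.
class PlaceClassifier:
    def __init__(self):
        # place-class name -> set of human-supplied attribute words
        self.classes = {}

    def update_class(self, name, words):
        """Add a new place class or extend its description at any time
        (the online incremental behavior described in the abstract)."""
        self.classes.setdefault(name, set()).update(w.lower() for w in words)

    def classify(self, detected_words):
        """Score each class by how many of its attribute words occur in
        the image's detected text; return the best-scoring class, or
        None when no attribute word is found at all."""
        found = {w.lower() for w in detected_words}
        scores = {name: len(attrs & found)
                  for name, attrs in self.classes.items()}
        best = max(scores, key=scores.get, default=None)
        return best if best is not None and scores[best] > 0 else None


clf = PlaceClassifier()
clf.update_class("restaurant", ["menu", "lunch", "sushi"])
clf.update_class("pharmacy", ["drug", "medicine"])
print(clf.classify(["Lunch", "sushi", "open"]))  # restaurant
```

Because the class descriptions are plain word sets, supporting another language (e.g. Japanese signage) only requires supplying attribute words in that language; no retraining is involved.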
