GMS Medizinische Informatik, Biometrie und Epidemiologie

Modality prediction of biomedical literature images using multimodal feature representation



Abstract

This paper presents the modelling approaches used to automatically predict the modality of images found in biomedical literature. Several state-of-the-art visual features were used for image representation: Bag-of-Keypoints computed with dense SIFT descriptors, texture features, and Joint Composite Descriptors. The text representation was obtained by vector quantisation over a Bag-of-Words dictionary built from attribute importance derived from a χ²-test. By computing the principal components separately for each feature, both dimensionality reduction and a reduction in computational load were achieved. Several feature-fusion strategies were adopted to supplement the visual image information with the corresponding text information, and the improvement obtained with multimodal features over visual-only or text-only features was analysed and evaluated. For modality prediction, Random Forest models with 100 to 500 deep trees grown by resampling, a multi-class linear-kernel SVM with C=0.05, and a late fusion of the two classifiers were used. The Random Forest classifier achieved the higher accuracy, and Bag-of-Keypoints computed with dense SIFT descriptors proved to be a better approach than with Lowe's SIFT.
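The χ²-based term selection described in the abstract can be sketched as follows. This is a minimal illustration using scikit-learn's `SelectKBest` with the `chi2` score, not the authors' implementation; the captions, modality labels, and the number of retained terms `k` are illustrative assumptions.

```python
# Sketch: rank Bag-of-Words terms by chi-squared attribute importance
# against modality labels, then keep only the top-scoring terms.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

# Hypothetical figure captions and modality class labels.
captions = [
    "axial ct scan of the thorax",
    "fluorescence microscopy of stained cells",
    "bar chart comparing accuracy of classifiers",
    "mri brain scan with contrast",
]
labels = [0, 1, 2, 0]

# Build the Bag-of-Words count matrix over the caption vocabulary.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(captions)

# Score each term with the chi-squared statistic and retain the top k.
selector = SelectKBest(chi2, k=5)
X_reduced = selector.fit_transform(X, labels)
print(X_reduced.shape)  # (4, 5)
```

The reduced matrix would then feed the vector-quantisation step; in practice `k` is tuned on held-out data rather than fixed in advance.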
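The late-fusion step of the two classifiers can be sketched as averaging their per-class probability outputs and taking the argmax. The synthetic features below stand in for the multimodal descriptors, and the forest size and C=0.05 follow the abstract; everything else (sample counts, class count, fusion weights) is an illustrative assumption.

```python
# Sketch: late fusion of a Random Forest and a linear-kernel SVM by
# averaging their predicted class probabilities.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the fused multimodal feature vectors.
X, y = make_classification(n_samples=300, n_features=50, n_informative=20,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="linear", C=0.05, probability=True,
          random_state=0).fit(X_tr, y_tr)

# Late fusion: average the two probability distributions, then decide.
fused = (rf.predict_proba(X_te) + svm.predict_proba(X_te)) / 2
y_pred = fused.argmax(axis=1)
print(y_pred.shape)  # (75,)
```

An unweighted average is the simplest choice; weighting each classifier by its validation accuracy is a common refinement when one model is clearly stronger.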
