Journal: Information Processing & Management

Hybrid context enriched deep learning model for fine-grained sentiment analysis in textual and visual semiotic modality social data

Abstract

Detecting sentiments in natural language is tricky even for humans, making automated detection more complicated still. This research proffers a hybrid deep learning model for fine-grained sentiment prediction in real-time multimodal data. It reinforces the strengths of deep learning networks in combination with machine learning to deal with two specific semiotic systems, namely the textual (written text) and the visual (still images), and their combination within online content, using decision-level multimodal fusion. The proposed contextual ConvNet-SVMBoVW model has four modules, namely the discretization, text analytics, image analytics, and decision modules. The input to the model is multimodal text, m ∈ {text, image, info-graphic}. The discretization module uses Google Lens to separate the text from the image; the two are then processed as discrete entities and sent to the respective text analytics and image analytics modules. The text analytics module determines the sentiment using a hybrid of a convolution neural network (ConvNet) enriched with the contextual semantics of SentiCircle; an aggregation scheme is introduced to compute the hybrid polarity. A support vector machine (SVM) classifier trained using bag-of-visual-words (BoVW) predicts the sentiment of the visual content. A Boolean decision module with a logical OR operation is appended to the architecture, which validates and categorizes the output into five fine-grained sentiment categories (truth values), namely 'highly positive', 'positive', 'neutral', 'negative', and 'highly negative'. The accuracy achieved by the proposed model is nearly 91%, an improvement over the accuracy obtained by the text and image modules individually.
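The decision-level fusion described in the abstract can be sketched as follows. This is an illustrative assumption of how the pieces might fit together, not the authors' implementation: the thresholds, function names, and the averaging step are all hypothetical; only the OR-style acceptance of either modality's output and the five truth-value categories come from the abstract.

```python
from typing import Optional

def to_category(score: float) -> str:
    """Map a polarity score in [-1, 1] to one of the five fine-grained
    sentiment categories named in the abstract. Thresholds are illustrative."""
    if score > 0.6:
        return "highly positive"
    if score > 0.2:
        return "positive"
    if score >= -0.2:
        return "neutral"
    if score >= -0.6:
        return "negative"
    return "highly negative"

def fuse(text_score: Optional[float], image_score: Optional[float]) -> str:
    """Decision-level fusion with a logical OR: a prediction is emitted if
    either the text module or the image module produced a polarity score.
    The scores of the available modalities are averaged (an assumption;
    the paper's aggregation scheme may differ)."""
    present = [s for s in (text_score, image_score) if s is not None]
    if not present:  # neither modality fired: fall back to neutral
        return "neutral"
    return to_category(sum(present) / len(present))

print(fuse(0.8, 0.5))    # both modalities strongly positive
print(fuse(None, -0.7))  # image-only input, e.g. a picture without text
```

A decision-level (late) fusion like this keeps the two classifiers independent, so a missing modality (text-only or image-only posts) degrades gracefully instead of breaking the pipeline.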
