
Violence detection framework using spatiotemporal characteristic analysis of shading image based on deep learning


Abstract

The present invention relates to a violence detection framework using spatiotemporal characteristic analysis of a shading image based on deep learning, capable of improving the ability and accuracy of detecting violence in images. To this end, the framework detects feature points of violence in an input image consisting of image frames supplied by a video camera or a video file, through four steps:

1. Divide the real-time input image into per-frame images.
2. Exclude the red (R), green (G), and blue (B) components from each per-frame image to extract a 2D Y-frame monochrome image.
3. Sequentially accumulate the extracted 2D Y-frame monochrome images, converting them into a Y-frame monochrome image in a 3D environment.
4. Extract frames of a uniform layer from the converted 3D Y-frame monochrome image, accumulate them again to perform image convolution, and derive the desired detection scene using a 3*3*3 filter.

Accordingly, an image optimized for network lightweighting and for the spatiotemporal domain is created and applied to the algorithm, so that the feature points of violence are continuously remembered and re-learned on a specific layer during the image convolution process. As a result, the ability and accuracy of detecting violence in images are improved, analysis can be performed regardless of the length of the analysis frame, and consecutive actions can be analyzed.
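The four-step pipeline in the abstract can be sketched in Python with NumPy. This is a minimal illustration, not the patented implementation: the abstract does not specify the RGB-to-Y conversion, so standard BT.601 luma weights are assumed here, and the 3*3*3 filter is stood in for by a simple mean kernel in a naive valid-mode convolution.

```python
import numpy as np

def rgb_to_y(frame):
    """Step 2: drop the R, G, B components of an (H, W, 3) frame,
    keeping only a 2D luma (Y) monochrome image.
    BT.601 weights are an assumption; the patent does not name them."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def stack_y_frames(frames):
    """Step 3: sequentially accumulate 2D Y-frames into a 3D
    (T, H, W) volume -- the 'Y-frame image in a 3D environment'."""
    return np.stack([rgb_to_y(f) for f in frames], axis=0)

def conv3d(volume, kernel):
    """Step 4: naive valid-mode 3D convolution over the stacked
    volume, standing in for the 3*3*3 spatiotemporal filter."""
    t, h, w = volume.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(
                    volume[i:i + kt, j:j + kh, k:k + kw] * kernel)
    return out

# Step 1 (frame splitting) is simulated with 8 synthetic RGB frames.
frames = [np.random.rand(16, 16, 3) for _ in range(8)]
vol = stack_y_frames(frames)                    # shape (8, 16, 16)
feat = conv3d(vol, np.ones((3, 3, 3)) / 27.0)   # shape (6, 14, 14)
```

In a real system the 3*3*3 kernel would be a learned filter inside a 3D convolutional network rather than a fixed mean kernel, and the convolution would run on batched volumes in a deep-learning framework.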

Bibliographic Data

  • Publication No.: KR20200057834A
  • Publication date: 2020-05-27
  • Original format: PDF
  • Applicant: GYNETWORKS CO. LTD.
  • Application No.: KR20180140481
  • Inventor: BANG SEUNG ON
  • Filing date: 2018-11-15
  • Classification: G06T7/292; G06T15/80; G06T5; H04N7/18
  • Country: KR
  • Added to database: 2022-08-21 11:06:58

