Conference on Signal and Data Processing of Small Targets

Adaptive model-based 3-D target detection--Part II: statistical behavior



Abstract

Previous work in adaptive space-time processing has concentrated either on covariance estimation or on a region-segmentation approach in which a bank of filters is designed from previously collected data. The first method typically involves performing a generalized likelihood ratio test (GLRT), which for a typical 3-D target signature requires estimating an enormous number of covariance elements. The second relies on filter construction from previously collected data. Any covariance estimation technique incurs two major penalties: high computational complexity, and a lack of robustness in a changing environment owing to the large number of covariance samples required for statistical stability. The region-segmentation approach is useful when the clutter being processed resembles the clutter used for filter construction, but it suffers potentially large losses when the data being operated on has statistical properties different from those of the data used to construct the filters. The method addressed in this study for mitigating the problems associated with a space-time covariance estimation procedure and/or the dependence on a bank of fixed filters is to assume a low-degree-of-freedom model for the space-time clutter characteristics. This allows the adaptive filter to be estimated over a much smaller region, so the detection algorithm can track the clutter characteristics of a changing environment more closely while minimizing losses in a stationary environment. This paper addresses the statistical behavior of the model-based algorithms, analyzed as a function of the number of filter tap weights and of the size of the estimation region used for filter construction. Performance in a non-stationary environment is analyzed via Monte Carlo techniques on both simulated and recently collected longwave IR clutter.
The results indicate that the reduced-degree-of-freedom model-based algorithms can provide significant performance improvement when the dimensions of the test vector are large and only a small amount of data is available for covariance estimation.
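The trade-off the abstract describes can be illustrated with a minimal NumPy sketch. This is a hypothetical example, not the paper's actual algorithm: it assumes an AR(1) clutter model and a standard adaptive matched filter (AMF) statistic. A full sample covariance requires estimating N(N+1)/2 elements from K secondary samples, whereas a one-parameter structured model can be fit from far less data:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 16   # length of the space-time test vector (filter tap weights)
K = 48   # number of secondary samples in the estimation region
n = np.arange(N)

# Assumed AR(1)-structured clutter covariance: a one-parameter,
# low-degree-of-freedom model (the paper's clutter model may differ)
rho = 0.9
R = rho ** np.abs(np.subtract.outer(n, n))
L = np.linalg.cholesky(R)

# Secondary (training) data drawn from the clutter distribution
X = L @ rng.standard_normal((N, K))

# Full-DOF route: sample covariance, N*(N+1)/2 distinct elements
R_full = (X @ X.T) / K

# Reduced-DOF route: fit the single AR(1) parameter from the same data
rho_hat = np.sum(X[1:] * X[:-1]) / np.sum(X[:-1] ** 2)
R_low = rho_hat ** np.abs(np.subtract.outer(n, n))

def amf(x, R_est, s):
    """Adaptive matched filter statistic |s' R^-1 x|^2 / (s' R^-1 s)."""
    Ri = np.linalg.inv(R_est)
    return np.abs(s @ Ri @ x) ** 2 / (s @ Ri @ s)

s = np.ones(N) / np.sqrt(N)        # hypothetical target signature
x0 = L @ rng.standard_normal(N)    # clutter only
x1 = x0 + 20.0 * s                 # clutter plus target

t0 = amf(x0, R_low, s)
t1 = amf(x1, R_low, s)
print(t0, t1)   # the target-present statistic should be far larger
```

Because the structured estimate depends on a single parameter rather than hundreds of covariance elements, it remains stable over much smaller estimation regions, which is the mechanism behind the tracking advantage the abstract claims in non-stationary clutter.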
