Abstract: Fusion of multiple-sensor imagery is generally agreed to be an effective approach to clutter rejection in target detection and recognition. However, image registration at the pixel level, and even at the feature level, poses significant problems. A neural network computational scheme is developed that permits fusion of multiple-sensor information according to target motion characteristics. Because it is invariant among different types of sensors in different positions, motion-based segmentation provides a natural means by which different types of sensory data may be fused for target recognition. This paper describes two computational approaches developed to process image motion information. One scheme implements the Law of Common Fate to differentiate moving targets from dynamic background clutter on the basis of homogeneous velocity; here, spatio-temporal frequency analysis is applied to time-varying sensor imagery to detect and locate individual moving objects from their image motion. The other scheme applies Gabor filters and differential Gabor filters to calculate image flow and then employs a Lie group-based neural network to interpret the 2-D image flow in terms of 3-D motion and to delineate regions of homogeneous 3-D motion. The motion-keyed regions may then be correlated among sensor types to associate multi-attribute information with individual targets in the scene and to exclude clutter.
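The spatio-temporal frequency analysis mentioned above can be illustrated with a minimal sketch, not taken from the paper: a spatio-temporal Gabor filter pair tuned to one image velocity responds much more strongly to a pattern translating at that velocity than at another. All parameter values (spatial frequency, filter width, the two test velocities) are hypothetical, and a 1-D image line is used for brevity.

```python
import numpy as np

# Spatio-temporal grid over one image line x and time t (hypothetical extents).
x = np.linspace(-8.0, 8.0, 65)
t = np.linspace(-8.0, 8.0, 65)
X, T = np.meshgrid(x, t, indexing="ij")

KX = 1.0      # spatial frequency of the filter (assumed value)
SIGMA = 3.0   # Gaussian envelope width (assumed value)

def motion_energy(v_stim: float, v_tuned: float) -> float:
    """Phase-invariant energy of a Gabor quadrature pair tuned to velocity
    v_tuned, applied to a pattern translating at velocity v_stim."""
    # Translating sinusoidal pattern: s(x, t) = cos(KX * (x - v_stim * t)).
    stim = np.cos(KX * (X - v_stim * T))
    # Quadrature pair of spatio-temporal Gabors oriented along x = v_tuned * t.
    envelope = np.exp(-(X**2 + T**2) / (2.0 * SIGMA**2))
    even = envelope * np.cos(KX * (X - v_tuned * T))
    odd = envelope * np.sin(KX * (X - v_tuned * T))
    # Summing squared even and odd responses removes phase dependence.
    return float((stim * even).sum() ** 2 + (stim * odd).sum() ** 2)

# Filter tuned to the stimulus velocity responds far more strongly than one
# tuned to the opposite velocity -- the basis for velocity-keyed segmentation.
r_match = motion_energy(2.0, 2.0)
r_mismatch = motion_energy(2.0, -2.0)
```

A bank of such filters tuned to different velocities would, in the spirit of the Law of Common Fate, group image regions whose energy peaks at the same velocity and separate them from differently moving clutter.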