$\mathbf{M}^{\mathbf{2}}\mathbf{VAE}$: Jointly Trained Variational Autoencoder for Multi-Modal Sensor Fusion
Published in: International Conference on Information Fusion


Abstract

This work presents the novel multi-modal Variational Autoencoder approach $\mathbf{M}^{\mathbf{2}}\mathbf{VAE}$, which is derived from the complete marginal joint log-likelihood. This allows end-to-end training of Bayesian information fusion on raw data for all subsets of a sensor setup. Furthermore, we introduce the concept of in-place fusion – applicable to distributed sensing – where latent embeddings of prior observations need to be fused with newly arriving data. To facilitate in-place fusion even on raw data, we introduce the concept of a re-encoding loss, which stabilizes the decoding and makes visualization of latent statistics possible. We also show that the $\mathbf{M}^{\mathbf{2}}\mathbf{VAE}$ finds a coherent latent embedding, such that a single naïve Bayes classifier performs equally well on all permutations of a bi-modal Mixture-of-Gaussians signal. Finally, we show that our approach outperforms current VAE approaches on a bi-modal MNIST & fashion-MNIST data set and works sufficiently well as a preprocessing step on a tri-modal simulated camera & LiDAR data set from the Gazebo simulator.
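The abstract describes fusing per-modality posteriors into one coherent latent embedding for arbitrary sensor subsets. A common way to realize such subset-wise Gaussian fusion in multi-modal VAEs is a precision-weighted product of experts; the exact objective of $\mathbf{M}^{\mathbf{2}}\mathbf{VAE}$ is derived differently (from the marginal joint log-likelihood), so the following is only an illustrative sketch of the fusion rule, not the paper's method:

```python
import numpy as np

def poe_fuse(mus, logvars):
    """Fuse diagonal-Gaussian posteriors q(z|x_m) via a product of experts.

    Precisions add; the fused mean is the precision-weighted mean.
    """
    precisions = [np.exp(-lv) for lv in logvars]
    total_precision = sum(precisions)
    fused_var = 1.0 / total_precision
    fused_mu = fused_var * sum(p * mu for p, mu in zip(precisions, mus))
    return fused_mu, np.log(fused_var)

# Two unit-variance experts at means 0 and 2 fuse to mean 1, variance 0.5.
mu_ab, lv_ab = poe_fuse([np.array([0.0]), np.array([2.0])],
                        [np.array([0.0]), np.array([0.0])])
```

Because Gaussian products are associative, an already-fused embedding can later be fused with a new modality's posterior and yield the same result as fusing all modalities jointly — the property that makes an in-place, distributed fusion scheme consistent.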
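The re-encoding loss mentioned in the abstract penalizes drift between a latent code and the re-encoding of its own decoding, so that decoded samples map back to consistent latent statistics. A minimal sketch of that idea, using a hypothetical toy linear encoder/decoder pair (the paper's actual networks and loss weighting will differ):

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = 0.1 * rng.normal(size=(2, 4))  # toy linear "encoder" (hypothetical)
W_dec = 0.1 * rng.normal(size=(4, 2))  # toy linear "decoder" (hypothetical)

def encode(x):
    return W_enc @ x

def decode(z):
    return W_dec @ z

def reencoding_loss(z):
    # Squared distance between z and the re-encoding of its decoding:
    # minimizing this keeps decode/encode approximately cycle-consistent
    # in latent space, which stabilizes decoding of fused embeddings.
    return float(np.sum((encode(decode(z)) - z) ** 2))

loss = reencoding_loss(np.array([1.0, -0.5]))
```

In training, such a term would be added to the usual reconstruction and KL terms of the evidence lower bound.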
