Journal: Optical Fiber Technology

A novel Deep-Learning model for RDTS signal denoising based on graph neural networks

Abstract

To address the increased temperature-measurement deviation caused by random noise in Raman-based distributed temperature sensors (RDTS), a new denoising method based on a three-layer GraphSAGE graph neural network (3L-GraphSAGE) is proposed: the spatial relationships among the sampled signals are first constructed as a graph, and effective denoised results are then obtained from the developed 3L-GraphSAGE model. First, an experimental setup is built to collect fiber signals. The resulting datasets are then fed into the 3L-GraphSAGE network to train the model. Finally, test datasets are input into the well-trained 3L-GraphSAGE model to obtain effective denoised signals. To evaluate the performance of 3L-GraphSAGE, three evaluation indexes are calculated: maximum deviation (MD), root mean square error (RMSE), and smoothness. The experimental results show that, compared with direct demodulation of the raw data, the method efficiently suppresses random noise and reduces the temperature-measurement deviation of the RDTS, and that it significantly improves curve smoothness compared with wavelet transform with a soft threshold function (WT-soft) and fast waveform type (FWT). Therefore, the 3L-GraphSAGE model provides a viable approach for improving the performance of RDTS.
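
As a minimal sketch of how such a denoiser could be assembled, the following Python code assumes a PyTorch Geometric implementation: each sampling point along the fiber becomes a graph node connected to its spatial neighbours, three stacked SAGEConv layers regress a denoised value per node, and the three evaluation indexes are computed as simple tensor operations. The layer width, the neighbourhood size k, the input features, and the smoothness definition (sum of squared first differences) are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of a 3-layer GraphSAGE denoiser for RDTS traces
# (PyTorch Geometric). Hidden size, k, and the smoothness metric are assumptions.
import torch
import torch.nn as nn
from torch_geometric.nn import SAGEConv


class ThreeLayerGraphSAGE(nn.Module):
    """Three stacked GraphSAGE layers mapping noisy per-point features
    (e.g. raw Raman intensities) to a denoised value per sampling point."""

    def __init__(self, in_channels: int, hidden_channels: int = 64, out_channels: int = 1):
        super().__init__()
        self.conv1 = SAGEConv(in_channels, hidden_channels)
        self.conv2 = SAGEConv(hidden_channels, hidden_channels)
        self.conv3 = SAGEConv(hidden_channels, out_channels)

    def forward(self, x, edge_index):
        x = torch.relu(self.conv1(x, edge_index))
        x = torch.relu(self.conv2(x, edge_index))
        return self.conv3(x, edge_index)


def chain_graph(num_points: int, k: int = 2) -> torch.Tensor:
    """Connect each sampling point to its k nearest neighbours along the fiber,
    encoding the spatial relationship between signals as graph edges."""
    src, dst = [], []
    for i in range(num_points):
        for d in range(1, k + 1):
            if i + d < num_points:
                src += [i, i + d]
                dst += [i + d, i]
    return torch.tensor([src, dst], dtype=torch.long)


# Evaluation indexes named in the abstract (smoothness definition assumed here).
def max_deviation(pred: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    return torch.max(torch.abs(pred - ref))


def rmse(pred: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    return torch.sqrt(torch.mean((pred - ref) ** 2))


def smoothness(pred: torch.Tensor) -> torch.Tensor:
    return torch.sum(torch.diff(pred) ** 2)


if __name__ == "__main__":
    n = 1000                              # sampling points along the fiber
    x = torch.randn(n, 2)                 # placeholder Stokes / anti-Stokes features
    edge_index = chain_graph(n, k=2)
    model = ThreeLayerGraphSAGE(in_channels=2)
    denoised = model(x, edge_index).squeeze(-1)
    print(rmse(denoised, torch.zeros(n)), smoothness(denoised))
```

In this sketch the chain graph simply links neighbouring fiber positions; the paper's actual construction of the spatial relationship between signals may differ, and the model would be trained against reference (low-noise) temperature traces before the test datasets are passed through it.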
