IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control

Deep Convolutional Neural Networks for Displacement Estimation in ARFI Imaging



Abstract

Ultrasound elasticity imaging of soft tissue with acoustic radiation force requires the estimation of displacements, typically on the order of several microns, from serially acquired raw-data A-lines. In this work, we implement a fully convolutional neural network (CNN) for ultrasound displacement estimation. We present a novel method for generating ultrasound training data, in which synthetic 3-D displacement volumes composed of randomly seeded ellipsoids are created and used to displace scatterers, from which simulated ultrasonic imaging is performed using Field II. Network performance was tested on these virtual displacement volumes, as well as on an experimental ARFI phantom data set and a human in vivo prostate ARFI data set. In the simulated data, the proposed neural network performed comparably to Loupas's algorithm, a conventional phase-based displacement estimation algorithm; the RMS error was 0.62 μm for the CNN and 0.73 μm for Loupas. Similarly, in the phantom data, the contrast-to-noise ratio (CNR) of a stiff inclusion was 2.27 for the CNN-estimated image and 2.21 for the Loupas-estimated image. Applying the trained network to in vivo data enabled the visualization of prostate cancer and prostate anatomy. The proposed training method provided 26 000 training cases, which allowed robust network training. The CNN had a computation time comparable to that of Loupas's algorithm; further refinements to the network architecture may improve the computation time. We conclude that deep neural network-based displacement estimation from ultrasonic data is feasible, providing accuracy and speed comparable to current standard time-delay estimation approaches.
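The phase-based baseline referenced above estimates micron-scale displacement from the phase shift between successive baseband IQ A-lines. As a minimal sketch of the idea (a Kasai/Loupas-style lag-one autocorrelator with axial averaging, not the paper's exact implementation; the function name, kernel size, and speed-of-sound default are illustrative assumptions):

```python
import numpy as np

def phase_shift_displacement(iq_ref, iq_disp, fc, c=1540.0, kernel=8):
    """Estimate axial displacement (meters) at each depth sample from the
    phase of the inter-frame complex autocorrelation, averaged over an
    axial kernel. iq_ref, iq_disp are complex baseband A-lines; fc is the
    demodulation center frequency (Hz); c is the sound speed (m/s)."""
    prod = iq_disp * np.conj(iq_ref)            # per-sample inter-frame correlation
    n = len(prod)
    disp = np.empty(n - kernel)
    for i in range(n - kernel):
        r = prod[i:i + kernel].sum()            # axial averaging for noise robustness
        phi = np.angle(r)                       # phase shift in radians, (-pi, pi]
        disp[i] = c * phi / (4.0 * np.pi * fc)  # round-trip phase -> displacement
    return disp
```

Because a scatterer displacement d imparts a round-trip phase shift of 4π·fc·d/c on the baseband signal, the estimator is exact for a pure phase rotation and unambiguous as long as the displacement stays below c/(8·fc), roughly 38 μm at 5 MHz; the full Loupas 2-D estimator additionally tracks the local center frequency.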
