
Multimodality brain atlas based on the visible human project dataset

Abstract

A multimodality brain atlas is implemented based on the Visible Human Dataset (VHD). First, the VHD images of the different modalities are interpolated and registered into a common cubic space. A global rotation and translation then transforms the registered dataset into the standard Talairach coordinate system. The Talairach atlas is mapped onto the VHD images with a piecewise linear scaling. A texture segmentation method is developed to label the different tissue types according to the knowledge provided by the Talairach atlas. Finally, 3D rendering results are presented.
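The piecewise linear scaling step can be illustrated with a minimal sketch (not the authors' code; all landmark values below are hypothetical): each axis is divided at anatomical landmarks such as the anterior commissure (AC), posterior commissure (PC), and brain-edge extremes, and every segment is linearly stretched so the subject's landmarks line up with their standard Talairach positions.

```python
import numpy as np

def talairach_scale(coords, subject_landmarks, atlas_landmarks):
    """Piecewise-linearly map 1-D coordinates from subject space to atlas space.

    subject_landmarks and atlas_landmarks are matching, increasing sequences
    of landmark positions (mm) along one axis; coordinates between two
    landmarks are linearly interpolated between their atlas counterparts.
    """
    return np.interp(coords, subject_landmarks, atlas_landmarks)

# Hypothetical y-axis landmarks (mm): posterior edge, PC, AC, anterior edge.
subject_y = [-110.0, -28.0, 0.0, 75.0]   # positions measured in the registered volume
atlas_y   = [-102.0, -24.0, 0.0, 68.0]   # standard Talairach positions (illustrative)

y = np.array([-28.0, 0.0, 37.5])
print(talairach_scale(y, subject_y, atlas_y))  # PC and AC land exactly on their atlas positions
```

Applying the same one-dimensional mapping independently along x, y, and z gives the full piecewise linear Talairach normalization; points between landmarks are stretched proportionally within their segment.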
