PointNeuron: 3D Neuron Reconstruction via Geometry and Topology Learning
of Point Clouds
Runkai Zhao
University of Sydney
rzha9419@uni.sydney.edu.au
Heng Wang
University of Sydney
hwan9147@uni.sydney.edu.au
Chaoyi Zhang
University of Sydney
czha5168@uni.sydney.edu.au
Weidong Cai
University of Sydney
tom.cai@sydney.edu.au
Abstract
Digital neuron reconstruction from 3D microscopy im-
ages is an essential technique for investigating brain con-
nectomics and neuron morphology. Existing reconstruction
frameworks use convolution-based segmentation networks
to partition the neuron from noisy backgrounds before ap-
plying the tracing algorithm. The tracing results are sensi-
tive to the raw image quality and segmentation accuracy. In
this paper, we propose a novel framework for 3D neuron re-
construction. Our key idea is to use the geometric represen-
tation power of the point cloud to better explore the intrin-
sic structural information of neurons. Our proposed frame-
work adopts one graph convolutional network to predict
the neural skeleton points and another one to produce the
connectivity of these points. We finally generate the target
SWC file through interpretation of the predicted point coor-
dinates, radius, and connections. Evaluated on the Janelia-
Fly dataset from the BigNeuron project, we show that our
framework achieves competitive neuron reconstruction per-
formance. Our geometry and topology learning of point
clouds could further benefit 3D medical image analysis,
such as cardiac surface reconstruction. Our code is avail-
able at https://github.com/RunkaiZhao/PointNeuron.
1. Introduction
Neuron morphology plays an essential role in the anal-
ysis of brain functionality. Digital 3D neuron reconstruc-
tion, also named 3D neuron tracing, is a computer-aided
process to extract the anatomical structure and connectiv-
ity of neuron circuits from the volumetric microscopy im-
age. Acquisitions of neuron morphology models in the past
several decades have relied on manual annotation from neu-
roscientists.

Figure 1: We re-think the structural representation of neurons by using point clouds. Left: the voxel-wise neuron microscopy image; Right: the point-wise neuron after the format transformation. Due to the limits of optical microscope imaging, there are gaps along the neuron's tree-like arbors (red arrows), and the neuron structure is surrounded by background noise (green box). Note that the voxel-based representation is inherently dense in three dimensions, while our point-based one is more memory-efficient (e.g., 200×100×150 voxels versus 4,500 points).

Due to the diversity and complexity of neuron morphology, this manual annotation work is extremely time-consuming and labor-intensive. The annotations are recorded as SWC files, which digitally store a neuron morphology as a set of connected points constituting the hierarchical neuronal tree; each record holds the identity of one neuron node, such as its ID, type, position, radius, and parent ID.

Figure 2: The main procedures of our proposed method, PointNeuron, for neuron reconstruction from a volumetric microscopy image: Raw Volumetric Image → (Transformation) → Neuronal Point Cloud → (Skeletonization) → Neuronal Skeleton → (Reconstruction) → Single Neuron Reconstruction Result. The green and red boxes highlight parts of our reconstruction improvements.

Recently, many researchers have devoted more attention to completing neuron reconstruction in an automatic or semi-automatic manner. The BigNeuron challenge [34] and the DIADEM challenge [6] have been hosted to develop automatic tracing algorithms by providing a sizeable single-neuron morphology database and open-source software tools for neuroscience studies.
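For reference, an SWC file as described above stores one node per line with seven fields: node ID, structure type, x, y, z, radius, and parent ID (with parent −1 for the root); lines beginning with "#" are comments. Below is a minimal parsing sketch; the file name is hypothetical and this snippet is not part of the paper's released code.

```python
from collections import namedtuple

SWCNode = namedtuple("SWCNode", ["id", "type", "x", "y", "z", "radius", "parent"])

def read_swc(path):
    """Parse an SWC file into a list of nodes, skipping '#' comment lines."""
    nodes = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            nid, ntype, x, y, z, radius, parent = line.split()[:7]
            nodes.append(SWCNode(int(nid), int(ntype), float(x), float(y),
                                 float(z), float(radius), int(parent)))
    return nodes

# Example usage: count terminal tips (nodes that no other node lists as parent).
nodes = read_swc("neuron_example.swc")   # hypothetical file name
parents = {n.parent for n in nodes}
tips = [n for n in nodes if n.id not in parents]
print(f"{len(nodes)} nodes, {len(tips)} terminal tips")
```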
Early digital neuron reconstruction algorithms relied on
sophisticated mathematical models, which can be catego-
rized into global and local algorithms. The global ap-
proaches, such as open-curve snake [48], APP [35], APP2
[51], FMST [52], and others [24, 33, 40, 17, 38, 45], consist of multiple stages: pre-processing to denoise the raw image, tree-like structure initialization, and post-processing to refine the reconstructed traces. The local approaches [55, 3, 12] trace the neuronal tree from a seed point location obtained by manual intervention or automatic detection.
Nevertheless, reconstructing neuron morphologies from microscopy images is still error-prone, especially when given low-quality neuron image data. Due to inhomogeneous fluorescence illumination and the inherent limits of light microscopy imaging, the raw neuron image stacks are often contaminated by substantial background noise. In addition, the voxels in dendrites and branch termini have a much lower intensity than those in soma and axon regions, which results in discontinuous neuron branches and impedes predicting the intact connections of neuron circuits.
These two challenges are highlighted in the two neuron ex-
amples of Figure 1. Lastly, since the 3D neuron images in the BigNeuron dataset are collected from research laboratories worldwide with varied light microscopy measurements, the exhibited neuron morphologies are diverse and complex.
Various deep learning techniques have been success-
fully applied in medical image processing [37, 32, 15, 14,
21], which has inspired researchers to utilize the hierar-
chical feature learning ability of convolution-based mod-
els to solve the challenging neuron reconstruction prob-
lem [26, 42]. In order to identify the neuron structures
from a larger receptive field, recent works focus on in-
troducing global contextual features into the convolution-
based segmentation networks, such as the inception learning mod-
ule [26], multi-scale kernel fusion [46], Atrous Spatial Pyra-
mid Pooling (ASPP) [25], and graph reasoning module [43].
In this paper, we re-think the spatial representation
of neuron morphology. Rather than following the tradi-
tional 3D volumetric representation, we propose to explic-
itly leverage the sparsely organized point clouds to repre-
sent neuronal arbours and dendrites. As shown in Figure
1, we transform the voxels of original 3D neuron images
into points, then reformulate this reconstruction task to pre-
dict the geometric and topological properties of the points in a Cartesian coordinate system. We design a novel frame-
work, named PointNeuron, to extract neuronal structures
from these point cloud data. Specifically, our framework
consists of two major stages. The first stage is to extract
a succinct neuron skeleton from the noisy input points and
formulate the geometric feature. The connectivity among
the unordered points is predicted at the second stage. The
general idea of our method is stated in Figure 2. Our key
contributions are summarized as follows: 1) we propose to describe neuron circuits in point format, instead of the original volumetric image stacks, for a better understanding of their spatial information in 3D space; 2) we present a novel pipeline, PointNeuron, as an automatic 3D neuron reconstruction method that learns the characterization of point clouds and can be generalized to improve the reconstruction performance of all tracing algorithms; and 3) we present a point-based module that effectively captures geometric information to generate a compact neuron skeleton.
2. Related Works
Traditional neuron reconstruction algorithms consist of
three main steps: pre-processing the raw 3D microscopy
image stacks, initializing the tree-like neuronal graph map,
and then pruning the reconstruction map until the com-
pact result is obtained. APP [35] and APP2 [51] cover
all the potential neuron signals on the raw image input for
the initial reconstruction map and remove the surplus neu-
ron branches for a compact structure at the pruning step.
Like the APP family, FMST [52] applies the fast march-
ing algorithm with edge weights to initialize neuron traces
and prunes them based on the coverage ratio of two inter-
sected neuron nodes. NeuTube [16] implements free editing
functions and the multi-branch tracing algorithm from seed
source points. Conversely, Rivulet [54] and Rivulet2 [29] capture the neuron traces from the furthest branch termini back to the seed point. LCMBoost [20] and SmartTracing [8] incorporate deep learning-based modules into automatic tracing algorithms without human intervention.
With the emergence of 3D U-Net [13] showing great suc-
cess in medical image segmentation tasks, learning-based
segmentation prior to applying the tracing algorithm is able
to highlight the neuron signal and enhance the input neuron
image quality. Some advanced deep learning techniques are
applied to improve the image segmentation performance,
such as inception learning module [26], multi-scale ker-
nel fusion [46], atrous convolution [9], and Atrous Spa-
tial Pyramid Pooling (ASPP) [10, 25]. [43, 41] incorporate a graph reasoning module into the multi-scale encoder-decoder network to eliminate the semantic gap in image feature learning. To save computation and accelerate inference, [47] proposes a lightweight student inference model guided by a more complex teacher model via knowledge distillation. To handle small neuron datasets, [44] improves neuron image segmentation performance with a VCV-RL module that gathers intra- and inter-volume voxels of the same semantics into the latent space. [39] builds a
GAN-based framework to synthesize neuron training im-
ages from the manually annotated skeletons.
As deep learning advances in medical image analysis, researchers have become increasingly interested in analyzing 3D medical images with these techniques. Although most existing works process medical images in a voxel-wise representation, a growing number of researchers are studying 3D structures from the perspective of point clouds. They leverage the 3D point cloud representation to
learn more discriminative object features for different med-
ical image tasks [53], such as cardiac mesh reconstruction
[11], volumetric segmentation [23, 2], and vessel centerline
extraction [22]. For example, [23, 22, 2] use the character-
ization of point clouds to learn the global context feature
for enhancing the CNN-based image segmentation perfor-
mance. Also, [1] and [4] take into account the anatomical
properties of streamline and mesh structure in the form of
point cloud representation.
The great success of introducing point cloud concepts
into the domain of medical image analysis and the fact that
existing tracing methods have not considered the usage of
the point cloud encourage us to address the challenging neu-
ron reconstruction task from a novel perspective. We aim to
improve 3D neuron reconstruction performance through the
powerful geometric and topological representation of point
clouds. Therefore, we shift one of the most challenging
medical image tasks to the scope of point clouds.
3. Method
We propose a novel pipeline, PointNeuron, to perform 3D neuron morphology reconstruction in a point-based manner. Given the voxel-wise neuron image input, we initially convert it to a point cloud in Section 3.1. Then we forward the neuron point cloud into the Skeleton Prediction module to generate a series of neuron skeletal points in Section 3.2. After that, we design the Connectivity Prediction module to link these skeletal points by analyzing node relationships in a graph data structure in Section 3.3.
Lastly, we present the specific training losses in Section 3.4.
Our pipeline is shown in Figure 3.
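To make this two-stage design concrete, the sketch below strings the described stages together on a toy volume. Every function here is a simplified stand-in (random subsampling for skeleton prediction, nearest-neighbour linking for connectivity), not the actual PointNeuron modules or their interfaces.

```python
import numpy as np

# Hypothetical stage functions standing in for the modules of Sections 3.1-3.4;
# they only mimic the data flow and tensor shapes, not the real learned models.

def voxel_to_point(volume, theta):
    # Section 3.1: keep voxels brighter than the threshold as [x, y, z, intensity] points.
    xyz = np.argwhere(volume > theta).astype(np.float32)
    intensity = volume[volume > theta].astype(np.float32)[:, None]
    return np.hstack([xyz, intensity])                        # (N, 4)

def skeleton_prediction(points, n_skeletal=128):
    # Section 3.2: the real module regresses skeletal points and radii;
    # here we merely subsample coordinates and emit unit radii.
    idx = np.random.choice(len(points), min(n_skeletal, len(points)), replace=False)
    skeletal = points[idx, :3]
    return skeletal, np.ones(len(skeletal), dtype=np.float32)

def connectivity_prediction(skeletal):
    # Section 3.3: the real module predicts the tree topology; as a placeholder
    # we naively link every point to its nearest neighbour.
    d = np.linalg.norm(skeletal[:, None] - skeletal[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.argmin(axis=1)                                   # parent index per point

def to_swc(path, skeletal, radii, parents):
    # Serialize predicted coordinates, radii, and connections as SWC records.
    with open(path, "w") as f:
        for i, ((x, y, z), r, p) in enumerate(zip(skeletal, radii, parents), start=1):
            f.write(f"{i} 3 {x:.2f} {y:.2f} {z:.2f} {r:.2f} {p + 1}\n")

volume = np.random.rand(64, 64, 64)        # toy stand-in for a microscopy stack
points = voxel_to_point(volume, theta=0.99)
skeletal, radii = skeleton_prediction(points)
parents = connectivity_prediction(skeletal)
to_swc("reconstruction.swc", skeletal, radii, parents)
```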
3.1. Voxel-to-Point Transformation
Given a raw volumetric neuron image in $\mathbb{R}^{H \times W \times D}$, a threshold value $\theta$ is pre-defined to segment the neuron structure and remove the majority of the noise. Every voxel with an intensity larger than $\theta$ is positioned and transformed into a point. To handle the large number of points, we split all the neuron points into $K$ patches. Hence, the neuron point input can be represented as $P = K \times \{p_i : [x_i; I_i]\}_{i=1}^{N_p}$, where $N_p$ is the number of points per patch, $x_i \in \mathbb{R}^3$ is the Cartesian coordinate, and $I_i \in \mathbb{R}$ is the intensity.
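A minimal sketch of this voxel-to-point transformation is given below, assuming NumPy; the random split into K patches is only a placeholder, since this excerpt does not specify the paper's patching strategy.

```python
import numpy as np

def voxel_to_point_patches(volume, theta, k_patches, n_points):
    """Threshold an (H, W, D) volume and return K patches of N_p points,
    each point carrying [x, y, z, intensity]."""
    mask = volume > theta
    xyz = np.argwhere(mask).astype(np.float32)             # (N, 3) voxel coordinates
    intensity = volume[mask].astype(np.float32)[:, None]   # (N, 1) intensities
    points = np.hstack([xyz, intensity])                   # (N, 4)

    # Random split into K patches of N_p points each; this is only a placeholder,
    # since a spatially coherent split would better preserve local structure.
    order = np.random.permutation(len(points))
    return [points[order[k * n_points:(k + 1) * n_points]] for k in range(k_patches)]

volume = np.random.rand(200, 100, 150).astype(np.float32)   # toy stand-in for a stack
patches = voxel_to_point_patches(volume, theta=0.998, k_patches=4, n_points=1024)
print([p.shape for p in patches])
```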
3.2. Neuron Skeleton Prediction
In this module, we extract $N_s$ skeletal points from the neuronal point cloud input to constitute a neuron skeleton with point-wise $F$-dimensional geometric features.
There are three primary steps: learning the deep geometric
features of neuron points through a graph-based encoder,
generating the center proposals at local regions, and pro-
ducing the compact neuron skeleton.
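As an illustration of the first step, one common way to realize a graph-based point encoder is to build a k-nearest-neighbour graph over the points and aggregate edge features within each neighbourhood (EdgeConv-style). The sketch below shows only this generic construction in NumPy; it is not the paper's actual encoder, and the choices of k and the aggregation are arbitrary.

```python
import numpy as np

def knn_graph(xyz, k=16):
    """Return the indices of the k nearest neighbours of every point."""
    d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)  # (N, N) distances
    np.fill_diagonal(d, np.inf)                 # exclude self-loops
    return np.argsort(d, axis=1)[:, :k]         # (N, k) neighbour indices

def edge_feature_aggregation(feats, neighbors):
    """One EdgeConv-style step: for each point, aggregate [x_i, x_j - x_i]
    over its neighbours with a max pool (learned weights omitted for brevity)."""
    center = np.repeat(feats[:, None, :], neighbors.shape[1], axis=1)  # (N, k, C)
    diff = feats[neighbors] - center                                   # (N, k, C)
    edge = np.concatenate([center, diff], axis=-1)                     # (N, k, 2C)
    return edge.max(axis=1)                                            # (N, 2C)

pts = np.random.rand(512, 3).astype(np.float32)   # toy neuron points
nbrs = knn_graph(pts, k=16)
feats = edge_feature_aggregation(pts, nbrs)
print(feats.shape)   # (512, 6)
```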
Point cloud geometry learning. Since the point clouds
representing neuron structures are uneven and unordered
in coordinate space, they cannot be simply processed by
a regular gridded convolution kernel like typical pixel- or