Image fusion of CT and MR with sparse representation in NSST domain (Q1784147)
From MaRDI portal
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Image fusion of CT and MR with sparse representation in NSST domain | scientific article | |
Statements
Image fusion of CT and MR with sparse representation in NSST domain (English)
26 September 2018
Summary: Multimodal image fusion techniques can integrate the information from different medical images to obtain an informative image that is better suited for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. The fusion of CT images with images from different MR modalities is studied in this paper. First, the CT and MR images are both transformed into the nonsubsampled shearlet transform (NSST) domain, yielding their low-frequency and high-frequency components. The high-frequency components are then merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation- (SR-) based approach; a dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed method provides better fusion results in terms of both subjective quality and objective evaluation.
Keywords: image fusion; CT; MR; nonsubsampled shearlet transform domain
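The summary above outlines a two-branch pipeline: NSST decomposition of both images, absolute-maximum merging of the high-frequency bands, SR-based merging of the low-frequency components (with the proposed DGSR algorithm), and inverse NSST reconstruction. The Python sketch below illustrates only the overall structure of that pipeline under stated assumptions: `nsst_decompose` and `nsst_reconstruct` are hypothetical placeholders for a real NSST implementation, and the paper's DGSR-based sparse-coding rule is replaced by a crude patch-energy proxy, so this is a minimal sketch rather than the authors' method.

```python
import numpy as np

def fuse_high_frequency(h_ct, h_mr):
    # Absolute-maximum rule: at each position keep the coefficient
    # with the larger magnitude.
    return np.where(np.abs(h_ct) >= np.abs(h_mr), h_ct, h_mr)

def fuse_low_frequency(l_ct, l_mr, patch=8):
    # Stand-in for the SR-based rule: choose, patch by patch, the source
    # whose activity level is larger (l1 energy is used here as a crude
    # proxy for the sparsity-based activity measure of the paper).
    fused = l_ct.copy()
    rows, cols = l_ct.shape
    for i in range(0, rows - patch + 1, patch):
        for j in range(0, cols - patch + 1, patch):
            p_ct = l_ct[i:i + patch, j:j + patch]
            p_mr = l_mr[i:i + patch, j:j + patch]
            if np.abs(p_mr).sum() > np.abs(p_ct).sum():
                fused[i:i + patch, j:j + patch] = p_mr
    return fused

def fuse_ct_mr(ct, mr, nsst_decompose, nsst_reconstruct):
    # End-to-end skeleton: decompose both images, merge the bands,
    # and reconstruct the fused image with the inverse transform.
    # nsst_decompose / nsst_reconstruct are assumed callables standing
    # in for a real (inverse) nonsubsampled shearlet transform.
    low_ct, highs_ct = nsst_decompose(ct)
    low_mr, highs_mr = nsst_decompose(mr)
    fused_low = fuse_low_frequency(low_ct, low_mr)
    fused_highs = [fuse_high_frequency(a, b)
                   for a, b in zip(highs_ct, highs_mr)]
    return nsst_reconstruct(fused_low, fused_highs)
```

In the paper itself, the low-frequency rule compares sparse codes recovered by the DGSR algorithm rather than raw patch energies; the sketch keeps only the decompose/merge/reconstruct skeleton described in the summary.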