
Turkish Journal of Electrical Engineering and Computer Sciences

DOI: 10.3906/elk-2105-170

Abstract

Multimodal medical image fusion approaches are widely used in disease diagnosis: they merge multiple images of different modalities to achieve superior image quality and to reduce uncertainty and redundancy, thereby increasing clinical applicability. In this paper, we propose a new medical image fusion algorithm based on a convolutional neural network (CNN) that generates a weight map for multiscale transform (curvelet / non-subsampled shearlet transform) domains, enhancing textural and edge properties. The aim of the method is to achieve the best visualization and the highest level of detail in a single fused image without losing spectral or anatomical information. In the proposed method, first, the non-subsampled shearlet transform (NSST) and the curvelet transform (CvT) are used to decompose the source images into low-frequency and high-frequency coefficients. Second, the low-frequency and high-frequency coefficients are fused using the weight map generated by a Siamese convolutional neural network (SCNN); the weight map is obtained from a series of feature maps and fuses the pixel activity information from the different sources. Finally, the fused image is reconstructed by the inverse multiscale transform (MST). To test the proposed method, standard gray-scale magnetic resonance (MR) images and color positron emission tomography (PET) images taken from the Brain Atlas dataset were used. The proposed method effectively preserves detailed structural information and performs well in terms of both visual quality and objective assessment. The fusion results were evaluated quantitatively (using quality metrics) and qualitatively.
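The decompose-fuse-reconstruct pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: a simple box-filter low-pass split stands in for the NSST/curvelet decomposition, the weight map is supplied directly rather than predicted by an SCNN, and a max-absolute rule (a common choice in fusion literature) stands in for the high-frequency fusion step. All function names are hypothetical.

```python
import numpy as np

def decompose(img, ksize=5):
    """Two-band split: box-filter low-pass stand-in for NSST/CvT (simplified)."""
    pad = ksize // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    low = np.zeros(img.shape, dtype=float)
    for dy in range(ksize):
        for dx in range(ksize):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= ksize * ksize
    high = img - low  # residual detail band
    return low, high

def fuse(img_a, img_b, weight):
    """Fuse two source images of the same shape.

    `weight` plays the role of the SCNN-generated weight map (a scalar or a
    per-pixel array in [0, 1]); here it is given, not learned.
    """
    low_a, high_a = decompose(img_a)
    low_b, high_b = decompose(img_b)
    # Low-frequency bands: weighted average driven by the weight map.
    low = weight * low_a + (1 - weight) * low_b
    # High-frequency bands: keep the coefficient with larger activity (max-abs rule).
    high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    # "Inverse transform" of this additive two-band split is just the sum.
    return low + high
```

Because the split is additive (`low + high == img`), fusing an image with itself returns the image unchanged, which is a useful sanity check for any fusion rule of this shape.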

First Page: 2780

Last Page: 2794
