DTCWTASODCNN: DTCWT based Weighted Fusion Model for Multimodal Medical Image Quality Improvement with ASO Technique & DCNN
Medical image fusion approaches are sub-categorized into single-mode and multimodal fusion strategies. The limitations of single-mode fusion can be overcome by a multimodal approach, which integrates two or more medical images of the same or different modalities with the aim of enhancing image quality and preserving image information. Hence, this paper introduces a new way to merge multimodal medical images using a weighted fusion model based on the Dual-Tree Complex Wavelet Transform (DTCWT). Two medical images are considered for the fusion process, and the DTCWT is applied to each source image to generate a four-sub-band decomposition. A Renyi entropy-based weighted fusion model is then used to combine the weighted DTCWT coefficients of the images. The final fusion step is carried out using an Atom Search Sine Cosine Algorithm (ASSCA)-based Deep Convolutional Neural Network (DCNN). Simulation results demonstrate that the developed fusion model achieves superior outcomes on key indicators, namely Mutual Information (MI), Peak Signal-to-Noise Ratio (PSNR), and Root Mean Square Error (RMSE), with values of 1.554, 40.45 dB, and 5.554, respectively.
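The entropy-weighted combination step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Renyi order `alpha`, the histogram binning, and the normalization of the weights are all assumed choices, and the DTCWT decomposition itself is omitted (the function simply fuses two same-shaped sub-band coefficient arrays).

```python
import numpy as np

def renyi_entropy(coeffs, alpha=2.0, bins=64):
    """Renyi entropy H_alpha = log(sum(p_i^alpha)) / (1 - alpha) of the
    magnitude distribution of a coefficient array. alpha and bins are
    illustrative choices, not taken from the paper."""
    hist, _ = np.histogram(np.abs(coeffs), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to keep the log finite
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def entropy_weighted_fusion(c1, c2, alpha=2.0):
    """Fuse two same-shaped sub-band coefficient arrays, weighting each
    array by its Renyi entropy (weights normalized to sum to 1)."""
    h1 = renyi_entropy(c1, alpha)
    h2 = renyi_entropy(c2, alpha)
    w1 = h1 / (h1 + h2)
    return w1 * c1 + (1.0 - w1) * c2

# Toy example: fuse two random 8x8 "sub-bands" from two source images.
rng = np.random.default_rng(0)
a = rng.normal(size=(8, 8))
b = rng.normal(size=(8, 8))
fused = entropy_weighted_fusion(a, b)
print(fused.shape)  # (8, 8)
```

In the full pipeline this fusion rule would be applied per sub-band to the DTCWT coefficients of both source images before reconstruction and the ASSCA-DCNN stage.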