Title: Image Fusion Based Framework for Multimodal Medical Image Registration
Authors: Irshad, Muhammad Touseef
Keywords: Physical Sciences
Computer & IT
Issue Date: 2022
Publisher: National University of Computer & Emerging Sciences, Islamabad
Abstract: Digital images are rich in information and support many intelligent processes across domains. An important application area for digital images is the modern healthcare system, where a wide variety of images are used for purposes such as disease diagnosis, disease prognosis, and treatment planning. These images are divided into various modalities, such as X-ray imaging, Ultrasound Imaging (UI), Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET) and Single Photon Emission Computed Tomography (SPECT), to name a few. Images of the same organ captured using these various modalities contain information that is complementary in nature. A frequent problem arises when doctors need to analyze an organ captured by different sensors, resulting in heterogeneous images (such as an MRI and a CT of a patient's head). Such images can be combined using an image registration process to generate an image with greater information content. Image registration is a powerful tool for integrating information obtained from different images of the same scene. An important step that affects the performance of an image registration process is the image fusion stage. Owing to its importance, much recent research has been directed towards the development of new, and the improvement of existing, medical image fusion methods to increase the acceptability of registered medical images. However, most image fusion-based frameworks suffer from various limitations, such as the addition of noise and the transfer of incorrect or insufficient information from the source images to the registered image. In this work, we propose and evaluate two methods that address these problems. The first method, Gradient Compass based Adaptive Multimodal Medical Image Fusion, uses a gradient compass to extract multi-directional edges and uses the extracted edge profile to compute detailed images.
The detailed images are then used to generate weighted matrices that are then used in the adaptive fusion process. The developed method is adaptive and alters its behaviour according to the distribution of noise in the source images. This work is further extended in the second method, An Edge Strength based Adaptive Multimodal Medical Image Fusion, which incorporates edge profiles computed using the Marr-Hildreth filter and improves the overall information transfer. We benchmarked the developed algorithms on pairs of CT, MRI and PET images. Our methods showed improved performance on various image fusion quality parameters, both subjective and objective. Overall, in the comparative study of fusion algorithms, for subjective scores we achieved a fusion mutual information score of 4.64, a spatial frequency score of 26.49, an average gradient score of 12.89 and a fusion symmetry score of 1.99. Likewise, for objective scores, we achieved a total information transfer score of 0.84 and a combined information loss and artifacts-added percentage score of 15.47.
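The mechanics the abstract describes for the first method — directional edge responses turned into detail maps, which in turn yield per-pixel weight matrices for fusion — can be sketched as follows. This is an illustrative reconstruction under assumed choices (Sobel-style compass kernels at four orientations and a simple ratio weighting rule), not the thesis algorithm itself:

```python
import numpy as np

def conv2(img, k):
    """'Same'-size 2-D correlation with edge padding, in pure NumPy."""
    p = k.shape[0] // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def compass_detail(img):
    """Detail map: max absolute response over four directional kernels.

    A stand-in for the thesis's gradient-compass edge profile, using
    Sobel-style kernels at 0, 45, 90 and 135 degrees (assumed kernels).
    """
    k0 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    k45 = np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float)
    kernels = [k0, k45, np.rot90(k0), np.rot90(k45)]
    return np.max([np.abs(conv2(img, k)) for k in kernels], axis=0)

def fuse(a, b, eps=1e-9):
    """Adaptive weighted fusion: pixels with stronger detail dominate."""
    da, db = compass_detail(a), compass_detail(b)
    wa = (da + eps) / (da + db + 2 * eps)  # weight matrix in [0, 1]
    return wa * a + (1.0 - wa) * b
```

Because the weights form a pointwise convex combination, the fused image stays within the range of the two source images at every pixel; where neither source shows edge detail, the rule falls back to a plain average.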
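The second method's edge profiles come from the Marr-Hildreth filter, i.e. convolution with a Laplacian-of-Gaussian (LoG) kernel, whose magnitude serves as an edge-strength measure. A minimal sketch of that filter (the kernel size and sigma here are illustrative assumptions, not values from the thesis):

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Laplacian-of-Gaussian kernel used in Marr-Hildreth edge detection."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()  # zero-sum, so flat regions give zero response

def edge_strength(img, size=9, sigma=1.4):
    """Magnitude of the LoG response as a simple edge-strength map."""
    k = log_kernel(size, sigma)
    p = size // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(size):
        for j in range(size):
            out += k[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return np.abs(out)
```

The zero-sum normalization makes the response vanish on constant regions, so the strength map lights up only near intensity transitions (the LoG response itself crosses zero at the edge centre, peaking just beside it).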
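The spatial frequency and average gradient figures quoted above are standard fusion quality metrics with well-known definitions (row/column gradient energy and mean local gradient magnitude, respectively); a minimal NumPy rendering:

```python
import numpy as np

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2), from row-wise and column-wise differences."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def average_gradient(img):
    """AG: mean of sqrt((dx^2 + dy^2) / 2) over pixel differences."""
    dx = np.diff(img, axis=1)[:-1, :]  # horizontal differences
    dy = np.diff(img, axis=0)[:, :-1]  # vertical differences
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))
```

Higher values of either metric indicate more edge and texture activity in the fused image, which is why they are commonly reported alongside mutual-information-based scores in fusion comparisons.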
Gov't Doc #: 27328
Appears in Collections:PhD Thesis of All Public / Private Sector Universities / DAIs.

Files in This Item:
File: Muhammad Touseef Irshad Computer Science 2022 nu fast isb.pdf 21.10.22.pdf
Description: Ph.D thesis
Size: 11.16 MB
Format: Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.