
Traditional diagnostic methods suffer from high rates of error and misdiagnosis, with approximately 40 million radiologist errors reported annually. This underscores the need for more accurate, intelligent solutions. Generative Adversarial Networks (GANs) have shown promise in medical image translation tasks such as CT-to-MRI conversion, improving diagnostic accuracy and efficiency.

While unpaired image-to-image translation using GAN frameworks like CycleGAN has demonstrated potential, challenges such as mode collapse and structural degradation remain significant barriers.
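To make the unpaired setting concrete: CycleGAN avoids the need for paired scans by requiring that a translated image map back to its source. A minimal sketch of that cycle-consistency penalty, using toy one-dimensional "generators" rather than DeCGAN's actual networks:

```python
def cycle_consistency_loss(xs, G, F):
    """Mean absolute error between each sample x and its reconstruction
    F(G(x)) -- the cycle-consistency penalty that lets CycleGAN train on
    unpaired data. G maps domain A -> B (e.g. CT -> MRI), F maps B -> A."""
    recons = [F(G(x)) for x in xs]
    return sum(abs(r - x) for r, x in zip(recons, xs)) / len(xs)

# Toy illustration (not DeCGAN's generators): G doubles, F_good halves,
# a perfect inverse pair, so the cycle loss is zero; an imperfect inverse
# incurs a positive penalty proportional to its reconstruction error.
G = lambda x: 2.0 * x
F_good = lambda y: y / 2.0
F_bad = lambda y: y / 2.0 + 0.1

xs = [0.5, 1.0, 1.5]
loss_good = cycle_consistency_loss(xs, G, F_good)
loss_bad = cycle_consistency_loss(xs, G, F_bad)
```

Here `loss_good` is zero and `loss_bad` is positive, which is the signal that drives both generators toward mutually consistent mappings.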

This study presents DeCGAN, a novel CycleGAN-based framework that addresses mode collapse and structural degradation in unpaired medical image translation. Key highlights include: mitigation of mode collapse through spectral normalisation, a structural consistency loss, and an optional mode diversity loss, improving inception scores by up to 75%; and improved structural fidelity, with up to 11.9% higher SSIM on challenging translation tasks.
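Spectral normalisation stabilises GAN training by rescaling each weight matrix so its largest singular value is one, bounding the discriminator's Lipschitz constant. A minimal power-iteration sketch of the idea (illustrative only; DeCGAN's layer-wise implementation is not detailed in this summary):

```python
import math
import random

def matvec(M, v):
    # Matrix-vector product over plain nested lists.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def spectral_norm(W, n_iters=100):
    """Estimate the largest singular value sigma of W by power iteration
    and return W / sigma, so the scaled matrix has spectral norm ~1."""
    rng = random.Random(0)
    Wt = [list(col) for col in zip(*W)]  # transpose
    u = [rng.gauss(0, 1) for _ in range(len(W))]
    for _ in range(n_iters):
        v = matvec(Wt, u)
        nv = math.sqrt(sum(x * x for x in v))
        v = [x / nv for x in v]
        u = matvec(W, v)
        nu = math.sqrt(sum(x * x for x in u))
        u = [x / nu for x in u]
    sigma = sum(ui * wi for ui, wi in zip(u, matvec(W, v)))
    return [[w / sigma for w in row] for row in W]

# Diagonal example: singular values 3 and 1, so sigma = 3 and the
# normalised matrix becomes diag(1, 1/3).
W_sn = spectral_norm([[3.0, 0.0], [0.0, 1.0]])
```

In a GAN this rescaling is applied to the discriminator's weights at every forward pass, which is one standard remedy for mode collapse.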

DeCGAN was validated on CT-to-MRI translation tasks, with expert radiologists confirming the clinical relevance of the results. It thus establishes a new benchmark for unpaired medical image translation, balancing diversity, structural fidelity, and clinical utility.
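The SSIM metric used to report structural fidelity can be sketched in its single-window (global) form; real evaluations slide a local window over the image, but the formula is the same:

```python
def ssim(x, y, c1=1e-4, c2=9e-4):
    """Global SSIM between two equal-length intensity lists.
    c1 and c2 are the usual small stabilising constants; this is the
    single-window form of the structural similarity index, shown for
    illustration rather than as DeCGAN's evaluation code."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = [0.1, 0.4, 0.6, 0.9]
same = ssim(img, img)            # identical signals score exactly 1
worse = ssim(img, img[::-1])     # structural disagreement scores below 1
```

An identical pair scores 1 by construction, and any structural disagreement lowers the covariance term, which is why SSIM is a natural fidelity measure for translated scans.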

Matthew Cobbinah (PhD)