
Generating 3D TOF-MRA volumes and segmentation labels using generative adversarial networks

Pooja Subramaniam
Tabea Kossen
Kerstin Ritter
Anja Hennemuth
Kristian Hildebrand
Adam Hilbert
Jan Sobesky
Michelle Livne
Ivana Galinovic
Ahmed A. Khalil
Jochen B. Fiebach
Dietmar Frey
Vince I. Madai

February 24, 2022

Deep learning requires large labeled datasets that are difficult to gather in medical imaging due to data privacy issues and time-consuming manual labeling. Generative Adversarial Networks (GANs) can alleviate these challenges by enabling the synthesis of shareable data. While 2D GANs have been used to generate 2D images together with their corresponding labels, they cannot capture the volumetric information of 3D medical imaging. 3D GANs are better suited for this and have been used to generate 3D volumes, but not their corresponding labels. One reason might be that synthesizing 3D volumes is challenging owing to computational limitations. In this work, we present 3D GANs for the generation of 3D medical image volumes with corresponding labels, applying mixed precision to alleviate computational constraints.
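As a minimal sketch of how mixed precision reduces the memory and compute cost of 3D GAN training, the following PyTorch snippet shows a generator update using autocast and gradient scaling. All module and variable names (generator, critic, patch shape) are illustrative assumptions, not the authors' implementation.

```python
# Sketch: one mixed-precision generator update for a 3D GAN (PyTorch AMP).
import torch
from torch.cuda.amp import autocast, GradScaler

def generator_step(generator, critic, opt_g, scaler, batch_size, z_dim, device):
    """Run the forward/backward pass in float16 where safe, with loss scaling."""
    opt_g.zero_grad(set_to_none=True)
    z = torch.randn(batch_size, z_dim, device=device)
    with autocast():                      # forward pass in reduced precision
        fake = generator(z)               # e.g. a (B, 2, 32, 32, 32) patch+label volume
        g_loss = -critic(fake).mean()     # WGAN generator loss
    scaler.scale(g_loss).backward()       # scale loss to avoid float16 underflow
    scaler.step(opt_g)                    # unscale gradients, then optimizer step
    scaler.update()                       # adapt the loss scale for the next step
    return g_loss.item()
```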

We generated 3D Time-of-Flight Magnetic Resonance Angiography (TOF-MRA) patches with their corresponding brain blood vessel segmentation labels. We used four variants of the 3D Wasserstein GAN (WGAN): 1) gradient penalty (GP), 2) GP with spectral normalization (SN), 3) SN with mixed precision (SN-MP), and 4) SN-MP with doubled filters per layer (c-SN-MP). The generated patches were quantitatively evaluated using the Fréchet Inception Distance (FID) and Precision and Recall of Distributions (PRD). Further, 3D U-Nets were trained on patch-label pairs from the different WGAN models, and their performance was compared to that of a benchmark U-Net trained on real data. The segmentation performance of all U-Net models was assessed using the Dice Similarity Coefficient (DSC) and the balanced Average Hausdorff Distance (bAVD) for a) all vessels and b) intracranial vessels only.
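To illustrate the two critic regularizers distinguishing the WGAN variants, the sketch below shows a gradient penalty on interpolated 3D patches (GP) and a spectrally normalized 3D convolution (SN). Channel counts, kernel sizes, and the two-channel patch+label input are assumptions for the example, not the paper's exact architecture.

```python
# Sketch of the critic regularizers used in the WGAN variants (PyTorch).
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

def gradient_penalty(critic, real, fake, device):
    """WGAN-GP: penalize deviation of the critic's gradient norm from 1
    on random interpolations between real and generated 3D patches."""
    eps = torch.rand(real.size(0), 1, 1, 1, 1, device=device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grads = grads.view(grads.size(0), -1)
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

# Spectrally normalized 3D critic layer (the SN variants constrain the
# critic's Lipschitz constant this way instead of, or alongside, GP).
sn_conv = spectral_norm(nn.Conv3d(in_channels=2, out_channels=64,
                                  kernel_size=4, stride=2, padding=1))
```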

Our results show that patches generated with WGAN models using mixed precision (SN-MP and c-SN-MP) yielded the lowest FID scores and the best PRD curves. Among the 3D U-Nets trained on synthetic patch-label pairs, the one trained on c-SN-MP pairs achieved the highest DSC (0.841) and the lowest bAVD (0.508) for intracranial vessels, compared to the benchmark U-Net trained on real data (DSC 0.901; bAVD 0.294).

In conclusion, our solution generates realistic 3D TOF-MRA patches and labels for brain vessel segmentation. We demonstrate the benefit of using mixed precision for computational efficiency, which yielded the best-performing GAN architecture. Our work paves the way towards the sharing of labeled 3D medical data, which would increase the generalizability of deep learning models for clinical use.