JOURNAL ARTICLE

SYNTHETIC CT GENERATION FROM T1-WEIGHTED KNEE MRIs USING A UNET

Abstract

Despite recent advances in magnetic resonance imaging (MRI) techniques for musculoskeletal applications, computed tomography (CT) remains the reference modality for the assessment of bone structure. Generative deep learning models, such as U-Nets, have been shown to enable the synthesis of CT-like contrasts from MRI images. However, the development and validation of such tools have been hindered by the need for large datasets with paired CT and MRI acquisitions. In this preliminary work, we propose to train a U-Net, a supervised deep learning technique, to generate synthetic CT (sCT) knee images from three-dimensional T1-weighted MRI scans by leveraging a large knee dataset with paired acquisitions. The synthetic CT images were then assessed quantitatively and qualitatively. A cohort of 249 patients (39.7±16.0 years old, 133 females) received both a knee MR examination (3T MAGNETOM Prismafit, Siemens Healthcare, Erlangen, Germany) and a CT scan (Revolution, GE Healthcare). T1-weighted MR images (TR 700 ms, TE 11 ms, 0.5 mm isotropic) were spatially registered to the down-sampled CT data (originally 0.3 mm isotropic). To ensure a good voxel-to-voxel correspondence, the 99 best-registered image pairs were selected and split into training (80%), testing (10%), and validation (10%) sets. During training, 100 central slices were extracted for each orientation (axial, coronal, and sagittal) and fed to a 2.5D network as stacks of three consecutive MRI slices. At inference, sCT slices were generated for each orientation, and the voxel-wise median across orientations was computed. Qualitatively, our method successfully generated images with a CT-like contrast exhibiting satisfactory levels of anatomical detail, including bone contours and the femoral and tibial physes.
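The 2.5D input construction and the cross-orientation median fusion described in the abstract can be sketched as follows. This is a minimal NumPy illustration under assumed array conventions (the function names, shapes, and border handling are assumptions, not the authors' implementation):

```python
import numpy as np

def stack_25d(volume, axis, index):
    """Build a 2.5D input: three consecutive slices along `axis`,
    centered on `index`, with indices clamped at the volume borders."""
    n = volume.shape[axis]
    idx = [max(0, min(n - 1, index + o)) for o in (-1, 0, 1)]
    return np.stack([np.take(volume, i, axis=axis) for i in idx], axis=0)

def fuse_orientations(sct_axial, sct_coronal, sct_sagittal):
    """Voxel-wise median across the three per-orientation sCT volumes."""
    return np.median(np.stack([sct_axial, sct_coronal, sct_sagittal]), axis=0)
```

Taking the median (rather than the mean) across the three reconstructions suppresses artifacts that appear in only one orientation, since an outlier value in a single volume cannot dominate the fused voxel.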
However, the sCT images generally looked oversmoothed compared to the original CT data, hindering the visualization of some of the bone trabeculae, especially in the epiphyses. Some anatomical details, such as vascular canals, were not depicted accurately. In terms of quantitative evaluation, our model achieved a mean absolute error of 167±23.1 and 49.5±6.76 Hounsfield Units (HU) in bone and soft tissue, respectively. In this preliminary work, we showed the feasibility of generating sCT images from T1-weighted MR data with a good level of anatomical detail and quantitative HU estimation. Future work will focus on reducing the impact of registration errors to further improve the model's accuracy.
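The per-tissue errors reported above correspond to a masked voxel-wise mean absolute error in Hounsfield units. A minimal sketch of such an evaluation is shown below; the threshold-based masks are an assumption for illustration, as the abstract does not specify how bone and soft tissue were segmented:

```python
import numpy as np

def masked_mae_hu(ct, sct, mask):
    """Mean absolute error in Hounsfield units over a tissue mask."""
    return float(np.abs(ct[mask] - sct[mask]).mean())

def tissue_masks(ct, bone_thresh=150.0):
    """Crude bone / soft-tissue masks by thresholding the reference CT
    (hypothetical; the actual segmentation method is not specified)."""
    bone = ct >= bone_thresh
    soft = (ct > -200.0) & (ct < bone_thresh)
    return bone, soft
```

Evaluating bone and soft tissue separately is informative here because HU errors in dense bone (roughly 300 HU and above) are typically much larger than in soft tissue, as the abstract's 167 HU versus 49.5 HU figures illustrate.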

Keywords:
Voxel, Magnetic resonance imaging, Coronal plane, Sagittal plane, Artificial intelligence, Computer science, Deep learning, Orientation (vector space), Medicine, Nuclear medicine, Radiology, Mathematics

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 0
Citation Normalized Percentile: 0.04

Topics

Artificial Intelligence in Healthcare and Education
Health Sciences →  Medicine →  Health Informatics
Radiomics and Machine Learning in Medical Imaging
Health Sciences →  Medicine →  Radiology, Nuclear Medicine and Imaging
AI in cancer detection
Physical Sciences →  Computer Science →  Artificial Intelligence

Related Documents

JOURNAL ARTICLE

Synthetic T1-weighted brain image generation with incorporated coil intensity correction using DESPOT1

Sean Deoni, Brian K. Rutt, Terry M. Peters

Journal: Magnetic Resonance Imaging, Year: 2006, Vol: 24 (9), Pages: 1241-1248
JOURNAL ARTICLE

Synthetic derivative T2-weighted abdominal images from T1-weighted images using a generative adversarial network (GAN)

Shu Zhang, Phillip K. Martin, Nakul Gupta, María I. Altbach, Ali Bilgin, Diego Aponte

Journal: Proceedings of the International Society for Magnetic Resonance in Medicine, Scientific Meeting and Exhibition, Year: 2024