JOURNAL ARTICLE

Self-supervised and supervised deep learning for PET image reconstruction

Andrew J. Reader

Year: 2024 | Journal: AIP Conference Proceedings | Vol: 3062 | Pages: 030003 | Publisher: American Institute of Physics

Abstract

A unified self-supervised and supervised deep learning framework for PET image reconstruction is presented, including deep-learned filtered backprojection (DL-FBP) for sinograms, deep-learned backproject-then-filter (DL-BPF) for backprojected images, and a more general mapping that uses a deep network in both the sinogram and image domains (DL-FBP-F). The framework accommodates varying amounts and types of training data, from the case of having only a single dataset to reconstruct through to the case of having numerous measured datasets, which may or may not be paired with high-quality references. Purely self-supervised mappings require no reference or ground-truth data at all; at minimum, only the measured dataset to be reconstructed is needed. Instead of a supplied reference, the output reconstruction from the trainable mapping is forward modelled, and the measured input data serve as the reference target for these forward-modelled data. The self-supervised deep-learned reconstruction operators presented here all use a conventional image reconstruction objective within the loss function (e.g. maximum Poisson likelihood, maximum a posteriori). If the reconstruction networks are required to generalise (i.e. to need either no or minimal retraining for a new measured dataset, while being fast and ready to reuse), then these self-supervised networks show potential even when previously trained on just one dataset. For any given new measured dataset, however, fine-tuning is usually necessary for improved agreement with the reconstruction objective, and the initial training set should of course ideally go beyond a single dataset if a generalisable network is sought.
This work presents preliminary results for the purely self-supervised single-dataset case, but the proposed networks can be (i) trained uniquely for any measured dataset in hand, (ii) pretrained on multiple datasets and then used without retraining on new measured data, (iii) pretrained and then fine-tuned for new measured data, or (iv) optionally trained with high-quality references. The overall unified framework, with its optional inclusion of supervised learning, provides a wide spectrum of reconstruction approaches by making use of whatever training data quantities and types (if any) are available for image reconstruction. This spectrum of reconstruction methods (ranging from purely self-supervised and model-driven for only a single measured dataset in hand, through to non-model, fully data-driven) can balance a conventional reconstruction objective (e.g. data fidelity, with or without regularisation) against the potential risks and benefits of supervised regularisation, which uses training data with high-quality references.
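The self-supervised objective described in the abstract can be sketched with a toy example: the reconstruction is forward modelled, and the negative Poisson log-likelihood of the measured data under that forward-modelled estimate is minimised, with no reference image involved. The sketch below (not the paper's code) substitutes a small random matrix for the real PET system model and a plain non-negative image update for the deep network; all variable names are illustrative assumptions.

```python
import numpy as np

# Toy linear forward model standing in for a PET scanner: a hypothetical
# 6-bin sinogram from a 4-pixel image. All names here are illustrative.
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(6, 4))         # hypothetical system matrix
x_true = np.array([2.0, 5.0, 3.0, 1.0])        # unknown activity image
m = rng.poisson(A @ x_true).astype(float)      # measured sinogram counts

def neg_poisson_loglik(x, A, m, eps=1e-9):
    """Self-supervised loss: forward-model the reconstruction (A @ x) and
    score it against the measured data m under the Poisson likelihood --
    no reference or ground-truth image is used."""
    q = A @ x + eps                             # expected counts
    return float(np.sum(q - m * np.log(q)))     # NLL up to a constant in m

def grad(x, A, m, eps=1e-9):
    q = A @ x + eps
    return A.T @ (1.0 - m / q)                  # gradient of the Poisson NLL

# Projected gradient descent on the image itself; in the deep-learned case
# the same loss would instead drive updates of the network weights.
x = np.ones(4)
loss_start = neg_poisson_loglik(x, A, m)
for _ in range(500):
    x = np.maximum(x - 0.05 * grad(x, A, m), 0.0)  # keep activity >= 0
loss_end = neg_poisson_loglik(x, A, m)
```

Replacing the direct image update with a network whose input is the measured (or backprojected) data, as in DL-FBP or DL-BPF, changes only what is optimised, not the loss: the measured data remain the sole training target.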

Keywords:
Artificial intelligence, Computer science, Computer vision, Deep learning, Iterative reconstruction, Image (mathematics), Supervised learning, Pattern recognition (psychology), Artificial neural network

Metrics

Cited by: 8
FWCI (Field-Weighted Citation Impact): 6.55
References: 28
Citation Normalized Percentile: 0.93 (in top 1%, in top 10%)

Topics

Medical Imaging Techniques and Applications (Health Sciences → Medicine → Radiology, Nuclear Medicine and Imaging)
Advanced MRI Techniques and Applications (Health Sciences → Medicine → Radiology, Nuclear Medicine and Imaging)
Advanced X-ray and CT Imaging (Physical Sciences → Engineering → Biomedical Engineering)