THESIS

Explainable Deep Machine Learning for Medical Image Analysis

Abstract

Explanations justify the development and adoption of algorithmic solutions for prediction problems in medical image analysis. This thesis introduces two guiding principles for creating and exploiting explanations of deep networks and medical image data. The first guiding principle is to use explanations to expose inefficiencies in the design of models and image datasets. The second principle is to leverage tools of compression and fixed-weight methods that minimize learning to make more efficient and effective models and more usable medical image datasets. The outcome is more effective deep learning in medical image analysis. Application of these guiding principles in different settings results in five main contributions: (a) improved understanding of biases present in deep networks and medical images, (b) improved predictive and computational performance of predictive models, (c) creation of ante-hoc models that are interpretable by design, (d) creation of smaller image datasets, and (e) improved visual privacy. This thesis falls within the scope of the TAMI project for Transparent Artificial Machine Intelligence and focuses on explainable artificial intelligence (XAI) for medical image data.

Keywords:
Usable; Leverage (statistics); Deep learning; Image (mathematics); Scope (computer science); Medical imaging; Medical diagnosis; Image compression

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 0
Citation Normalized Percentile: 0.54
