Janek Gröhl, Melanie Schellenberg, Kris K. Dreher, Niklas Holzwarth, Minu D. Tizabi, Alexander Seitel, Lena Maier-Hein
Photoacoustic imaging (PAI) has the potential to revolutionize healthcare due to the valuable information on tissue physiology that is contained in multispectral signals. Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information. In this work, we present a deep learning-based approach to semantic segmentation of multispectral PA images to facilitate interpretability of recorded images. Based on a validation study with experimentally acquired data of healthy human volunteers, we show that a combination of tissue segmentation, sO2 estimation, and uncertainty quantification can create powerful analyses and visualizations of multispectral photoacoustic images.
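The analysis the abstract describes combines three pixel-wise outputs: a tissue segmentation map, an sO2 estimate, and an uncertainty map. A minimal sketch of how such outputs might be fused into an interpretable visualization is given below; the label id, function name, and uncertainty threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical illustration (not the authors' code): restrict the sO2
# map to vessel pixels whose estimation uncertainty is acceptably low,
# mirroring the combination of segmentation, sO2 estimation, and
# uncertainty quantification described in the abstract.

VESSEL = 1  # assumed label id for vessel tissue in the segmentation map

def visualize_so2(segmentation, so2, uncertainty, max_uncertainty=0.2):
    """Return an sO2 map showing only vessel pixels whose uncertainty
    is below the threshold; all other pixels are set to NaN."""
    result = np.full(so2.shape, np.nan)
    reliable_vessels = (segmentation == VESSEL) & (uncertainty <= max_uncertainty)
    result[reliable_vessels] = so2[reliable_vessels]
    return result

# Toy 2x2 example: one non-vessel pixel, one high-uncertainty pixel
seg = np.array([[1, 0], [1, 1]])
so2 = np.array([[0.95, 0.6], [0.7, 0.8]])
unc = np.array([[0.05, 0.01], [0.5, 0.1]])
out = visualize_so2(seg, so2, unc)
# only the reliable vessel pixels (0,0) and (1,1) retain sO2 values
```

In practice the NaN pixels would be rendered transparently over the anatomical image, so that clinicians see sO2 estimates only where both the tissue class and the model confidence support them.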