Abstract

Team TOXIC (“Understanding Computational Toxicology”) seeks to apply interpretability techniques to machine learning models that predict drug safety. Such models have achieved reasonable accuracy and are currently used in industry for drug development. However, because their predictions are not sufficiently grounded in chemical knowledge, they are not widely used in regulatory processes. To contribute toward a solution, we evaluate existing explanation methods for toxicity prediction models trained on open-source data sets. Additionally, we are working toward models built on more interpretable data representations. Ultimately, we hope to demonstrate a proof of concept for an interpretable drug-safety prediction model that can illustrate its reasoning.

Keywords:
Deep learning, Artificial intelligence, Computer science, Machine learning


Topics

Explainable Artificial Intelligence (XAI)