The integration of machine learning into financial decision-making systems has precipitated a paradigm shift in credit risk assessment, offering predictive accuracy superior to traditional scorecards. However, the deployment of deep neural networks in high-stakes domains is increasingly impeded by the black-box nature of these models, which often fail to satisfy regulatory requirements for transparency and the intuitive expectations of domain experts. A critical flaw in standard deep learning architectures is the violation of monotonicity constraints; for instance, a model might irrationally penalize an applicant for an increase in income because of localized noise in the training data. This paper proposes a novel framework that enforces strict monotonicity in neural networks through architectural constraints while simultaneously generating counterfactual explanations that provide actionable recourse for rejected applicants. By integrating a constrained weight optimization strategy with a gradient-based counterfactual search, we ensure that the decision boundary remains consistent with economic logic. Our experimental results on benchmark credit datasets demonstrate that the proposed method achieves predictive performance comparable to unconstrained ensembles while guaranteeing semantic consistency and providing sparse, valid, and actionable explanations.
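The two mechanisms named above can be illustrated with a minimal sketch. Everything here is an assumption for exposition, not the paper's actual architecture: a tiny two-layer scorer whose weights are kept positive by an `exp` reparameterization (positive weights composed with a monotone activation guarantee a score that is non-decreasing in every input), plus a finite-difference gradient-ascent search that nudges a rejected applicant's features until the score crosses an approval threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer scorer. exp() keeps every weight strictly positive,
# so the score is non-decreasing in each input feature (e.g. income).
W1_raw, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2_raw, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def score(x):
    h = np.tanh(np.exp(W1_raw) @ x + b1)   # tanh is monotone increasing
    return float(np.exp(W2_raw) @ h + b2)

def counterfactual(x, threshold, lr=0.05, eps=1e-4, steps=500):
    """Gradient ascent on the score (finite-difference gradients)
    until the score crosses `threshold` or the step budget runs out."""
    x = x.copy()
    for _ in range(steps):
        if score(x) >= threshold:
            break
        grad = np.array([
            (score(x + eps * np.eye(len(x))[i]) - score(x)) / eps
            for i in range(len(x))
        ])
        x += lr * grad
    return x

x = np.array([0.3, 0.5])
# Monotonicity check: raising income (feature 0) never lowers the score.
assert score(x + np.array([0.2, 0.0])) >= score(x)
# Counterfactual recourse: the search strictly increases the score.
cf = counterfactual(x, score(x) + 0.05)
assert score(cf) > score(x)
```

A production version would add sparsity and plausibility penalties to the counterfactual objective (as the abstract's "sparse, valid" explanations require) and would restrict updates to actionable features; both refinements are omitted here for brevity.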
Xolani Dastile, Turgay Çelik, Hans Vandierendonck