Visual explainability remains a significant challenge in artificial intelligence, particularly when decisions depend on intricate relationships between objects in a scene. This paper introduces a novel neuro-symbolic approach to scene graph reasoning designed to enhance visual explainability. By integrating neural networks for visual perception with symbolic reasoning techniques for structural analysis, our model generates scene graphs that capture objects, their attributes, and the relationships among them. These scene graphs then serve as the substrate for reasoning tasks, with the symbolic component providing a transparent, interpretable decision-making process. We propose a hybrid architecture that leverages the complementary strengths of neural and symbolic methods to improve accuracy, robustness, and explainability. We evaluate our approach on a diverse set of visual reasoning tasks, where it outperforms existing methods. The resulting system delivers not only accurate predictions but also clear, human-understandable explanations of its reasoning, thereby advancing the field of visual explainability.
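To make the pipeline concrete, the following is a minimal illustrative sketch (not the paper's implementation): a scene graph is represented as (subject, predicate, object) triples, such as a neural perception module might emit, and a simple symbolic rule queries it while returning a human-readable explanation trace. All names here (`objects_left_of`, the example graph) are hypothetical.

```python
# Illustrative sketch, not the paper's implementation: a scene graph as
# (subject, predicate, object) triples, queried by a simple symbolic rule
# that returns both an answer and a human-readable explanation trace.

def objects_left_of(graph, target):
    """Return objects related to `target` by the 'left_of' predicate,
    along with the triples used as an explanation."""
    matches = [(s, p, o) for (s, p, o) in graph if p == "left_of" and o == target]
    answer = [s for (s, _, _) in matches]
    explanation = [f"{s} is left_of {o}" for (s, _, o) in matches]
    return answer, explanation

# Hypothetical scene graph output of a neural perception module.
graph = {
    ("cup", "on", "table"),
    ("lamp", "left_of", "table"),
    ("chair", "left_of", "table"),
}

answer, why = objects_left_of(graph, "table")
```

Because the answer is derived by an explicit rule over explicit triples, every conclusion carries its supporting facts, which is the transparency property the symbolic component is meant to provide.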