THE ROLE OF EXPLAINABLE AI IN ETHICAL DECISION-MAKING SYSTEMS
DOI: https://doi.org/10.71465/mrcis141
Keywords: Explainable AI, Ethical Decision Making, Algorithmic Transparency, Trustworthy AI
Abstract
As artificial intelligence (AI) systems increasingly influence high-stakes ethical decisions—in healthcare, criminal justice, hiring, and autonomous systems—the need for transparency, interpretability, and accountability has become paramount. Explainable AI (XAI) is a critical enabler for ethical decision-making systems: by providing human-understandable insights into algorithmic decisions, it supports fairness, trust, and responsibility. This article explores the role of XAI in ethical decision-making systems. We discuss foundational principles, review frameworks and use cases, present two illustrative charts on adoption and performance trade-offs, and outline a deployment roadmap and research agenda. We show that while XAI can strengthen the ethical alignment of AI systems, significant trade-offs, operational constraints, and governance issues remain.
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
All articles published in Multidisciplinary Research in Computing Information Systems are licensed under an open-access model. Authors retain full copyright and grant the journal the right of first publication. The content can be freely accessed, distributed, and reused for non-commercial purposes, provided proper citation is given to the original work.
