Review of the use of ethical technologies for good governance of artificial intelligence

Main Article Content

José Yesán-Luján
Marcos Diaz-Tomas
Alberto Mendoza-De los Santos

Abstract

This descriptive literature review examines the implementation of ethical technologies within the framework of artificial intelligence governance, analyzing recent advances in integrating ethical principles into AI systems. As AI expands across critical sectors, significant challenges emerge around algorithmic bias, privacy, and explainability. Through an analysis of 19 peer-reviewed articles (2021-2025) selected for recency and thematic relevance, this study systematizes frameworks such as sociotechnical pragmatism and ethics-based audits. The primary contribution of this article is identifying the lack of an integrated approach that harmonizes technical requirements with social demands. It concludes that effective governance requires flexible, multi-stakeholder regulation that embeds ethics throughout the AI lifecycle, ensuring equity and the protection of human rights.

Article Details

How to Cite
Yesán-Luján, J., Diaz-Tomas, M., & Mendoza-De los Santos, A. (2026). Review of the use of ethical technologies for good governance of artificial intelligence. Tecnología En Marcha Journal, 39(5), pp. 16–27. https://doi.org/10.18845/tm.v39i5.8526
Section
Ética, gobernanza y regulación de la IA (Ethics, Governance, and Regulation of AI)

References

[1] J. Mökander and L. Floridi, “From algorithmic accountability to digital governance,” Nature Machine Intelligence, 2022, doi: https://doi.org/10.1038/s42256-022-00504-5.

[2] D. S. Watson, J. Mökander, and L. Floridi, “Competing narratives in AI ethics: a defense of sociotechnical pragmatism,” AI & Society, 2024, doi: https://doi.org/10.1007/s00146-024-02128-2.

[3] C. Thomas, H. Roberts, J. Mökander, A. Tsamados, M. Taddeo, and L. Floridi, “The case for a broader approach to AI assurance: addressing ‘hidden’ harms in the development of artificial intelligence,” AI & Society, 2024, doi: https://doi.org/10.1007/s00146-024-01950-y.

[4] J. Mökander and R. Schroeder, “Artificial Intelligence, Rationalization, and the Limits of Control in the Public Sector: The Case of Tax Policy Optimization,” Social Science Computer Review, vol. 42, no. 6, pp. 1359–1378, Mar. 2024, doi: https://doi.org/10.1177/08944393241235175.

[5] J. Amann et al., “To explain or not to explain?—Artificial intelligence explainability in clinical decision support systems,” PLOS Digital Health, vol. 1, no. 2, p. e0000016, Feb. 2022, doi: https://doi.org/10.1371/journal.pdig.0000016.

[6] T. Hagendorff, “Blind spots in AI ethics,” AI and Ethics, 2021, doi: https://doi.org/10.1007/s43681-021-00122-8.

[7] T. Hagendorff, “A virtue-based framework to support putting AI ethics into practice,” AI and Ethics, 2022, doi: https://doi.org/10.1007/s43681-022-00162-8.

[8] R. V. Zicari et al., “Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier,” Frontiers in Human Dynamics, vol. 3, Jul. 2021, doi: https://doi.org/10.3389/fhumd.2021.688152.

[9] R. V. Zicari et al., “On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls,” Frontiers in Human Dynamics, vol. 3, Jul. 2021, doi: https://doi.org/10.3389/fhumd.2021.673104.

[10] T. Hagendorff et al., “Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals,” AI and Ethics, 2022, doi: https://doi.org/10.1007/s43681-022-00199-9.

[11] R. V. Zicari et al., “Co-design of trustworthy AI in healthcare: deep learning as a tool,” 2022, doi: https://doi.org/10.3389/fhumd.2021.688152.

[12] “Why we need biased AI: How including cognitive biases can enhance AI systems,” Journal of Experimental & Theoretical Artificial Intelligence, 2024, doi: https://doi.org/10.1080/0952813X.2023.2178517.

[13] M. E. Caciano-Arroyo, A. F. Vasquez-Cabrera, and A. C. Mendoza-de-los-Santos, “Integración de inteligencia artificial en la gobernanza de TI: Una revisión sistemática,” Aibi Revista de Investigación, Administración e Ingeniería, vol. 13, no. 2, pp. 1–12, May 2025, doi: https://doi.org/10.15649/2346030X.4532.

[14] J. Mökander, “Auditing large language models: a three-layered approach,” AI and Ethics, 2023, doi: https://doi.org/10.1007/s43681-023-00289-2.

[15] S. Taeihagh, “Governance of Generative AI (Introductory Article),” Policy and Society, 2025, doi: https://doi.org/10.1093/polsoc/puaf001.

[16] “Ethical Considerations and Responsible Governance of Generative AI: A Systematic Review,” Premier Science, Apr. 26, 2025. [Online]. Available: https://premierscience.com/pjai-25-800/ (accessed Feb. 28, 2026).

[17] B. C. Cheong, “Transparency and accountability in AI systems: safeguarding wellbeing in the age of algorithmic decision-making,” Frontiers in Human Dynamics, vol. 6, Jul. 2024, doi: https://doi.org/10.3389/fhumd.2024.1421273.

[18] A. F. Winfield, K. Michael, J. Pitt, and V. Evers, “Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems [Scanning the Issue],” Proceedings of the IEEE, vol. 107, no. 3, pp. 509–517, Mar. 2019, doi: https://doi.org/10.1109/jproc.2019.2900622.