Alfa-Pi-Mi Intelligent Neural Network Model: optimizing learning in engineering through artificial intelligence
Abstract
This paper presents the design and implementation of the Alfa-Pi-Mi neural network model, developed to personalize and optimize learning for Industrial Engineering students through artificial intelligence (AI). The model identifies learning patterns and generates tailored instructional strategies. We detail the system architecture (input variables, activation functions, and optimization methods) and report pilot results with n = 30 students over four weeks: academic performance improved relative to baseline in Optimization and Mathematical Models (+60%) and Statistics and Quantitative Methods (+55%), with additional gains in Production Management (+50%), Project Management (+45%), and Quality Engineering (+50%). Overall, 62% of participants achieved moderate-to-high improvement, and learning trajectories steepened from week 3. In addition, as a methodological validation of the predictive component, a minimal multilayer perceptron (MLP) task reduced mean absolute error (MAE) by 23% and root mean squared error (RMSE) by 19% versus an initial, untuned baseline configuration, after input normalization/standardization and hyperparameter tuning. These findings should be interpreted as initial feasibility evidence from a pilot study, not as generalizable effects.
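The methodological validation described above (standardize inputs, train a small MLP, compare MAE/RMSE against an untuned baseline) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic data, layer size, activation, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data standing in for the pilot's predictive task
X = rng.uniform(-2, 2, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

# Standardize inputs (zero mean, unit variance), as the abstract notes
X = (X - X.mean(axis=0)) / X.std(axis=0)

def mae(pred, target):
    return np.mean(np.abs(pred - target))

def rmse(pred, target):
    return np.sqrt(np.mean((pred - target) ** 2))

# One hidden layer with tanh activation (illustrative choice)
hidden = 16
W1 = rng.normal(0, 0.5, size=(2, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.5, size=hidden)
b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2, h

# Untuned initial configuration serves as the baseline
baseline_pred, _ = forward(X)
baseline_mae, baseline_rmse = mae(baseline_pred, y), rmse(baseline_pred, y)

# Plain gradient descent on mean squared error
lr = 0.05
for _ in range(500):
    pred, h = forward(X)
    err = pred - y
    grad_W2 = h.T @ err / len(y)
    grad_b2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)   # backprop through tanh
    grad_W1 = X.T @ dh / len(y)
    grad_b1 = dh.mean(axis=0)
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1

final_pred, _ = forward(X)
print(f"MAE:  {baseline_mae:.3f} -> {mae(final_pred, y):.3f}")
print(f"RMSE: {baseline_rmse:.3f} -> {rmse(final_pred, y):.3f}")
```

On this toy task, both error metrics fall relative to the untuned baseline, mirroring the kind of before/after comparison the validation reports.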
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The authors retain copyright and grant the journal the right of first publication, including the right to edit, reproduce, distribute, display, and communicate the article in print and electronic media, both domestically and abroad. The authors also assume responsibility for any litigation or claim related to intellectual property rights, holding the Editorial Tecnológica de Costa Rica harmless. Furthermore, authors may enter into separate, additional contractual arrangements for the non-exclusive distribution of the version of the article published in this journal (e.g., depositing it in an institutional repository or publishing it in a book), provided they clearly state that the work was first published in this journal.