Alfa-Pi-Mi Intelligent Neural Network Model: optimizing learning in engineering through artificial intelligence


Diógenes Álvarez-Solórzano

Abstract

This paper presents the design and implementation of the Alfa-Pi-Mi neural network model, developed to personalize and optimize learning for Industrial Engineering students through artificial intelligence (AI). The model identifies learning patterns and generates tailored instructional strategies. We detail the system architecture (input variables, activation functions, and optimization methods) and report pilot results with n = 30 students over four weeks: academic performance improved relative to baseline in Optimization and Mathematical Models (+60%) and Statistics and Quantitative Methods (+55%), with additional gains in Production Management (+50%), Project Management (+45%), and Quality Engineering (+50%). Overall, 62% of participants achieved moderate-to-high improvement, and learning trajectories steepened from week 3. As a methodological validation of the predictive component, a minimal multilayer perceptron (MLP) task reduced mean absolute error (MAE) by 23% and root mean squared error (RMSE) by 19% versus an initial, untuned baseline configuration, after input normalization/standardization and hyperparameter tuning. These findings should be interpreted as initial feasibility evidence from a pilot study, not as generalizable effects.
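The abstract's methodological validation step (a minimal MLP whose error drops after normalization and tuning) can be sketched as follows. This is an illustrative assumption, not the paper's actual model or data: the toy dataset, network size, learning rate, and epoch counts are all invented for demonstration, and the "baseline" is simply an untuned run on unscaled inputs.

```python
# Sketch: standardize inputs, train a tiny one-hidden-layer MLP by gradient
# descent, and compare MAE/RMSE against an untuned baseline. Everything here
# (data, hyperparameters) is illustrative, not the paper's configuration.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: a near-linear target with features on very different
# scales, so that standardization visibly matters.
X = rng.normal(0.0, 1.0, size=(200, 2)) * np.array([1.0, 50.0])
y = 3.0 * X[:, 0] - 0.04 * X[:, 1] + rng.normal(0.0, 0.1, size=200)

# Standardization (zero mean, unit variance) -- the normalization step.
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sigma

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def train_mlp(X, y, hidden=8, lr=0.05, epochs=500):
    """One-hidden-layer tanh MLP trained with full-batch gradient descent."""
    n, d = X.shape
    W1 = rng.normal(0, 0.5, size=(d, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, size=hidden)
    b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden activations
        pred = H @ W2 + b2                  # linear output layer
        err = pred - y                      # gradient of 0.5 * MSE w.r.t. pred
        gW2 = H.T @ err / n
        gb2 = err.mean()
        dH = np.outer(err, W2) * (1 - H ** 2)   # backprop through tanh
        gW1 = X.T @ dH / n
        gb1 = dH.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ W2 + b2

# "Baseline": untuned (few epochs) on raw, unscaled inputs.
baseline = train_mlp(X, y, epochs=20)
# "Tuned": standardized inputs plus a longer training budget.
tuned = train_mlp(Xs, y, epochs=500)

print("baseline MAE/RMSE:", mae(y, baseline(X)), rmse(y, baseline(X)))
print("tuned    MAE/RMSE:", mae(y, tuned(Xs)), rmse(y, tuned(Xs)))
```

The design point mirrors the abstract: unscaled inputs saturate the tanh units and stall learning, so standardization plus a modest tuning pass is enough to cut both MAE and RMSE relative to the untuned run.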

Article Details

How to Cite
Álvarez-Solórzano, D. (2026). Alfa-Pi-Mi Intelligent Neural Network Model: optimizing learning in engineering through artificial intelligence. Tecnología En Marcha Journal, 39(5), pp. 338–351. https://doi.org/10.18845/tm.v39i5.8506
Section
Models, Algorithms, and Technological Development in AI
