Knowledge Retention and Mathematical Foundations in Machine Learning Education
Exploring the Role of Prior Mathematical Knowledge in Retaining Core Machine Learning Concepts
Abstract
As Machine Learning (ML) continues to shape advancements in academia and industry, ensuring effective ML education is essential. This study examines the retention of four core ML concepts two years after students completed a university-level ML course: Principal Component Analysis (PCA), Gradient Descent, Bayes’ Theorem, and Hierarchical Clustering. Using a survey-based methodology, it explores how prior mathematical knowledge, perceived difficulty, and confidence influence long-term retention. Results reveal a significant positive correlation between Calculus knowledge and retention of Gradient Descent, with weaker correlations between Linear Algebra knowledge and retention of PCA and between Probability knowledge and retention of Bayes’ Theorem. Perceived difficulty and confidence also shape retention outcomes. These findings underscore the need for targeted mathematical refreshers in ML courses to strengthen foundational knowledge and improve retention. The research provides actionable insights for curriculum design, aiming to bridge mathematical gaps, enhance learning outcomes, and sustain student engagement with advanced ML concepts.