Utilizing a Convolutional Neural Network (CNN) Algorithm for the Classification of Visual, Auditory, Read/Write, and Kinesthetic (VARK) Learning Styles Based on Real-Time Datasets
Keywords:
Hunter e-Academy, LMS, event listener, VARK, CNN

Abstract
Identifying learners’ preferred learning styles is essential for effective personalization in educational environments. The VARK model (Visual, Auditory, Read/Write, and Kinesthetic) is widely used for this purpose, yet traditional questionnaire-based assessments struggle with scalability, static data, and limited adaptability. This study introduced an optimized Convolutional Neural Network (CNN) framework for real-time, automated VARK classification using multimodal interaction data. Learner engagement was tracked through event listener techniques within a learning management system, capturing HTTP+play/pause events for visual and auditory media, HTTP+scroll events for reading/writing materials, and HTTP+focus/blur events for kinesthetic activities. These event listeners recorded the time spent in each modality, which was combined with the corresponding quiz performance scores to form a comprehensive dataset. The CNN model was trained on twelve thousand (12,000) learner records collected from the Hunter e-Academy (He-A) learning management system to classify individual learning styles, enabling dynamic adaptation of content delivery.

To evaluate performance, the CNN model was compared through A/B testing against other machine learning (ML) models, including Support Vector Machine (SVM), Random Forest, Naive Bayes, Decision Tree, and XGBoost. Metrics such as accuracy, precision, recall, and F1-score were used for assessment. The CNN achieved an accuracy of 99.05%, surpassing SVM (98.01%), XGBoost (98.0%), Random Forest (96.69%), Naive Bayes (96.45%), and Decision Tree (95.98%). It demonstrated perfect precision for the Auditory and Read/Write classes, perfect recall for the Visual and Auditory classes, and F1-scores ≥ 0.98 across all categories, addressing the bias and uneven performance observed in unimodal approaches such as KNN (89%).
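As a hedged illustration only (not the study's code), the per-class precision, recall, and F1 metrics reported above can be computed one-vs-rest from predicted and true labels; the label names and toy predictions below are hypothetical:

```python
# Hypothetical 4-class VARK predictions; labels and values are
# illustrative assumptions, not data from the study.
y_true = ["V", "A", "R", "K", "V", "A", "R", "K", "V", "A"]
y_pred = ["V", "A", "R", "K", "V", "A", "K", "K", "V", "A"]

def per_class_metrics(y_true, y_pred, label):
    """Precision, recall, and F1 for one class, one-vs-rest."""
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

for label in ("V", "A", "R", "K"):
    p, r, f = per_class_metrics(y_true, y_pred, label)
    print(f"{label}: precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```

Perfect precision for a class means no other class's samples were predicted as that class; perfect recall means every sample of that class was recovered.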
The study confirmed the effectiveness of multimodal data fusion for accurate, objective learning style assessment, offering a scalable, AI-driven alternative to surveys and supporting real-time adaptive learning environments.
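A minimal sketch of the multimodal fusion described above, assuming hypothetical event names, timestamps, and quiz scores (none of which come from the study): per-modality dwell times are accumulated from paired start/stop events and concatenated with quiz scores into one feature vector per learner.

```python
# Hypothetical event log for one learner; modality names and timestamps
# are illustrative assumptions, not data from the study.
events = [
    {"modality": "visual", "event": "play", "t": 0.0},
    {"modality": "visual", "event": "pause", "t": 42.5},
    {"modality": "read_write", "event": "scroll_start", "t": 50.0},
    {"modality": "read_write", "event": "scroll_end", "t": 110.0},
    {"modality": "kinesthetic", "event": "focus", "t": 120.0},
    {"modality": "kinesthetic", "event": "blur", "t": 150.0},
]

MODALITIES = ["visual", "auditory", "read_write", "kinesthetic"]

def dwell_times(events):
    """Sum (stop - start) seconds per modality from paired events."""
    totals = dict.fromkeys(MODALITIES, 0.0)
    open_t = {}  # modality -> timestamp of the last unmatched start event
    for e in events:
        if e["event"] in ("play", "scroll_start", "focus"):
            open_t[e["modality"]] = e["t"]
        elif e["modality"] in open_t:
            totals[e["modality"]] += e["t"] - open_t.pop(e["modality"])
    return totals

def feature_vector(events, quiz_scores):
    """Fuse per-modality dwell times with per-modality quiz scores."""
    t = dwell_times(events)
    return [t[m] for m in MODALITIES] + [quiz_scores[m] for m in MODALITIES]

quiz = {"visual": 0.9, "auditory": 0.4, "read_write": 0.7, "kinesthetic": 0.8}
print(feature_vector(events, quiz))
```

The resulting fixed-length vector is the kind of input a classifier such as the CNN described above could consume; the actual feature engineering used in the study may differ.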