Developing AI into explanatory supporting models

Using Artificial Intelligence (AI) and machine learning technologies to automatically mine latent patterns from educational data holds great potential to inform teaching and learning practices. However, current AI technology mostly works as a "black box": only the inputs and the corresponding outputs are visible, which largely prevents researchers from obtaining explainable feedback. This interdisciplinary work presents an explainable AI prototype that renders visualized explanations as feedback for computer-supported collaborative learning (CSCL). The study seeks to provide interpretable insights for multimodal learning analytics (MMLA) by introducing two explanatory machine learning models, a neural network and a Bayesian network, which operate in different manners (end-to-end learning and probabilistic analysis, respectively) toward the same goal: explainable and actionable feedback. The prototype is applied to a real-world collaborative learning scenario, using sensor data from multiple modalities to assess collaborative learning processes and render explanatory real-time feedback.
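
To illustrate how an end-to-end model can yield a visualized explanation of the kind the abstract describes, below is a minimal, self-contained sketch in plain NumPy. It is not the paper's model: the multimodal feature names (speech_overlap, turn_taking, shared_gaze, gesture_sync), the tiny network, and its random weights are all hypothetical, and input-gradient saliency stands in for whichever explanation technique the prototype actually uses; the Bayesian-network branch (probabilistic analysis) is omitted.

# Hypothetical sketch: input-gradient saliency as a visualized explanation
# for a collaboration-quality score predicted from multimodal features.
# All feature names, the network, and its weights are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative multimodal features per group (e.g., from sensors).
FEATURES = ["speech_overlap", "turn_taking", "shared_gaze", "gesture_sync"]

# A tiny one-hidden-layer network standing in for the end-to-end model.
W1 = rng.normal(size=(4, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)            # hidden activations
    return (h @ W2 + b2).item(), h      # scalar collaboration score

def saliency(x):
    """Gradient of the score w.r.t. each input feature (the explanation)."""
    _, h = forward(x)
    dh = (1.0 - h ** 2) * W2[:, 0]      # backprop through tanh
    return W1 @ dh                      # d(score)/d(x), one value per feature

x = rng.normal(size=4)                  # one observed feature vector
score, _ = forward(x)
grads = saliency(x)
print(f"collaboration score: {score:+.3f}")
for name, g in sorted(zip(FEATURES, grads), key=lambda t: -abs(t[1])):
    bar = "#" * int(10 * abs(g) / max(abs(grads)))
    print(f"{name:>14}: {g:+.3f} {bar}")  # crude text-based "visualization"

The per-feature gradients rank which modality most influenced the predicted score, which is the essence of turning a black-box prediction into actionable feedback; the paper's actual prototype visualizes its explanations rather than printing them.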

Chen Haoyu, Tan Esther, Lee Yoon, Praharaj Sambit, Specht Marcus, Zhao Guoying

A4 Article in conference proceedings

14th International Conference of the Learning Sciences: The Interdisciplinarity of the Learning Sciences, ICLS 2020

Chen, H., Tan, E., Lee, Y., Praharaj, S., Specht, M., & Zhao, G. (2020). Developing AI into explanatory supporting models: An explanation-visualized deep learning prototype for computer supported collaborative learning. In M. Gresalfi & I. S. Horn (Eds.), The Interdisciplinarity of the Learning Sciences, 14th International Conference of the Learning Sciences (ICLS) 2020, Volume 2 (pp. 1133-1140). Nashville, TN: International Society of the Learning Sciences.

https://repository.isls.org//handle/1/6305
http://urn.fi/urn:nbn:fi-fe2021110453715