A Survey of Static and Temporal Explainable Methods and Their Applications in Knowledge Tracing
-
Abstract
Deep learning has found widespread application across diverse domains owing to its exceptional performance. Nevertheless, the lack of transparency in deep learning models' decision-making processes undermines their usability, especially in critical contexts.
While researchers have made noteworthy advances in explaining these models, they have frequently overlooked the differences between static and temporal models when generating explanations.
In temporal models, features evolve over time, which poses new challenges for generating explanations.
Though extensive research has been dedicated to surmounting these hurdles, no survey summarizing these contributions currently exists. To bridge this gap, this paper summarizes existing methods and their contributions for both static and temporal models, highlighting the disparities between them.
Additionally, we propose a novel classification scheme based on the comprehensibility of explanations, demonstrating that explanation methods differ in how understandable they are to users.
Finally, to assess the limitations of existing methods' explanation capabilities, we take knowledge tracing as a case study and analyze the evolution of explanation methods in this temporal-modeling context.