Given recent advances in Artificial Intelligence (AI) and Machine Learning (ML), in particular in deep learning, more and more intelligent systems are entering the spheres of professional and private life – from credit-ranking systems to smartwatches. Given this growing involvement in (and influence over) people’s everyday activities, the question of the explainability, interpretability, and comprehensibility of intelligent systems, their decisions, and their behavior is receiving growing attention.
From the perspective of AI and ML research, the corresponding questions are varied, ranging from theory (What is an explanation in a technological context? What does it mean to comprehend a system?) to applied research (How can interfaces be made more comprehensible? How does information about system states and processing have to be presented so as to serve as an explanation?). These issues tie into several long-standing strands of research in AI and ML, including, among others, work in neural-symbolic integration and research into ontologies and knowledge representation, but also into human-computer interaction and actual system design.
As questions of comprehensibility and explanation in AI and ML are certain to become even more pressing in the near future, and as the AI and ML research community is driving the development of the underlying technologies, the Special Interest Group on Explainable AI (SIG XAI) offers a forum for exchange, interaction, and collaboration.
Tarek R. Besold, City, University of London.
Derek Doran, Wright State University.
Zack Chase Lipton, Carnegie Mellon University.
The main communication channel of the SIG XAI is the SIG XAI mailing list for announcements and discussions. It is a low-traffic mailing list open to everyone interested in explainable, interpretable, or comprehensible AI and ML.
General questions concerning the SIG XAI should be addressed to Tarek R. Besold at Tarek(hyphen)R(dot)Besold(at)city(dot)ac(dot)uk.
The SIG XAI is related to the Workshop on Comprehensibility and Explanation in AI and ML (CEX) and the series of International Workshops on Neural-Symbolic Learning and Reasoning (NeSy).