The use of AI and ML systems is increasingly commonplace in everyday life. From recommender systems for media streaming services to machine vision for clinical decision support, intelligent systems support both the personal and professional spheres of our society. However, explaining the outcomes and decision-making of these systems remains a challenge. As the prevalence of AI grows, so too do the complexity of these models and the expectation that autonomous systems can explain their actions.
Regulations increasingly support users' rights to fair and transparent processing in automated decision-making systems. This can be difficult to guarantee when the latest trends in data-driven ML, such as deep learning architectures, tend to produce black boxes with opaque decision-making processes. Furthermore, the need for accountability means that pipeline, ensemble and multi-agent systems may require complex combinations of explanations before they are understandable to their target audience. Beyond the models themselves, designing explainer algorithms for users remains a challenge due to the highly subjective nature of explanation itself.
The SICSA XAI workshop will provide a forum to share exciting research on methods targeting explanation of AI and ML systems. Our goal is to foster connections among SICSA researchers interested in Explainable AI by highlighting and documenting promising approaches, and encouraging further work. We expect to draw interest from AI researchers working in a number of related areas including NLP, ML, reasoning systems, intelligent user interfaces, conversational AI and adaptive user interfaces, causal modelling, computational analogy, constraint reasoning and cognitive theories of explanation and transparency.
Call for Papers
The SICSA XAI Workshop Organisation Committee invites submissions of novel theoretical and applied research targeting the explainability of AI and ML systems. Example submission areas include (but are not limited to):
- Design and implementation of new methods of explainability for intelligent systems of all types, particularly highlighting complex systems combining multiple AI components.
- Evaluation of explainers or explanations using automated metrics, novel methods of user-centred evaluation, or evaluation of explainers with users in real-world settings.
- Ethical considerations surrounding explanation of intelligent systems, including subjects such as accountability, accessibility, confidentiality and privacy.
Important Dates
- Submission Deadline: May 6th 2021
- Notification Date: May 20th 2021
- Camera-Ready Deadline: June 1st 2021
- Workshop Date: June 1st 2021