The framework for explainable AI and interactive machine learning. Making XAI accessible.

IML Cycle

The XAI Challenge

Available XAI Methods

The recent abundance of artificial intelligence and the resulting spread of black-box models in critical domains have led to an increasing demand for explainability. The field of eXplainable Artificial Intelligence (XAI) tackles this challenge by providing ways to interpret the decisions of such models. This, in turn, has resulted in a variety of explanation techniques, each with different dependencies and highly diverse outputs.

The User Challenge

Besides the variety of XAI methods, there is also a variety of user groups with different interests and requirements. Each group needs tailored access to XAI methods that suit their needs and support them in their daily workflow.

The Newcomer.

The model novice is the ‘new one’ in the machine-learning class. His goal is to learn about the concepts behind machine learning models; he wants to understand the building blocks of a model as well as its general workings. Learning resources are essential to him, be they examples or textual, visual, and external references.

The Operator.

He is the ‘user’ among the users, i.e., he uses existing machine learning models to solve specific tasks. For example, this could be a domain expert - let’s say a biologist - who needs to classify protein structures. To decide on a model, he wants to compare architectures, understand how they work, and verify his decision by executing XAI methods on some data samples.

The Expert.

The model developer is an expert on machine learning. He develops models from scratch, refines existing models, and optimizes parameters to improve a model’s performance. He is interested in the architecture of the model, including in-depth information such as layer sizes, initializers, and activation functions. To debug the model, he needs explanations on all abstraction levels. His insights might lead to a model update, covering the full development and refinement process.

XAI as a Process

The XAI Framework

XAI Framework

Our XAI framework structures the process of XAI. The figure is built around the abstract template of an explainer, showing its inputs, properties, and outputs. The iterative XAI workflow of model understanding, diagnosis, and refinement is captured in the XAI pipeline. Explainers have five properties; they take one or more model states as input, apply an XAI method, and output an explanation or a transition function. Global monitoring and steering mechanisms expand the pipeline to the full XAI framework, supporting the overall workflow by guiding, steering, or tracking the explainers during all steps.
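To make the abstract explainer template more concrete, here is a minimal sketch of how it could be expressed in code. The names (ModelState, Explanation, Explainer, explain, transition) are purely illustrative and not part of the explAIner API.

```python
from dataclasses import dataclass
from typing import Any, Callable, Sequence


@dataclass
class ModelState:
    """A snapshot of a model: the model itself, its inputs, and its outputs."""
    model: Any      # e.g. a tf.keras.Model
    inputs: Any     # the data samples under consideration
    outputs: Any    # the corresponding model predictions


@dataclass
class Explanation:
    """The result of applying an XAI method to one or more model states."""
    content: Any    # e.g. a saliency map, a rule set, or a surrogate model
    description: str


class Explainer:
    """Abstract explainer template: takes one or more model states as input,
    applies an XAI method, and outputs an explanation or a transition function."""

    def explain(self, states: Sequence[ModelState]) -> Explanation:
        raise NotImplementedError

    def transition(self, states: Sequence[ModelState]) -> Callable[[ModelState], ModelState]:
        """Optionally return a transition function that maps a model state to a
        refined model state, supporting the refinement step of the pipeline."""
        raise NotImplementedError
```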

Explainer Types

Model-Specific Explainers

This type of explainer considers the inputs and outputs as well as the inner workings of a machine learning model. Model-specific explainers are particularly useful for model developers: they can help in diagnosing the internal structure of a model and in refining it based on the interplay of inputs and outputs with respect to the given architecture.
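As one well-known instance of a model-specific explainer, the following sketch computes a vanilla gradient saliency map, which requires access to the model’s internals (its gradients). It assumes a differentiable tf.keras classifier and is a generic example, not the explAIner implementation.

```python
import tensorflow as tf


def gradient_saliency(model: tf.keras.Model, x: tf.Tensor, class_index: int) -> tf.Tensor:
    """Model-specific explainer: attribute the prediction for `class_index`
    to the input features by differentiating through the model."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)                          # track gradients w.r.t. the input
        predictions = model(x, training=False)
        target = predictions[:, class_index]   # score of the class to explain
    grads = tape.gradient(target, x)           # needs the model's internal gradients
    return tf.abs(grads)                       # per-feature sensitivity magnitude
```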

Model-Agnostic Explainers

In contrast to model-specific explainers, model-agnostic explainers consider the model to be a black box, i.e., they ignore the inner workings of the model. The explanation happens solely on the data level, explaining the transition between input and output. This is useful for model novices and model users who are not interested in the specific architecture of the model but rather in applying it to their data and tasks.
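For contrast, here is a sketch of a simple model-agnostic explainer: an occlusion test that only calls the model’s prediction function and never inspects its internals. The predict_fn signature (a batch of inputs in, class probabilities out) is an assumption for illustration.

```python
import numpy as np


def occlusion_importance(predict_fn, x, class_index, baseline=0.0):
    """Model-agnostic explainer: perturb one feature at a time and measure how
    much the predicted probability for `class_index` drops. `predict_fn` is
    treated as a black box mapping a batch of inputs to class probabilities."""
    x = np.asarray(x, dtype=float)
    reference = predict_fn(x[np.newaxis, ...])[0, class_index]
    importance = np.zeros(x.shape)
    for idx in np.ndindex(x.shape):
        perturbed = x.copy()
        perturbed[idx] = baseline                # occlude a single feature
        score = predict_fn(perturbed[np.newaxis, ...])[0, class_index]
        importance[idx] = reference - score      # drop in confidence = importance
    return importance
```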

Explainer Properties

To integrate the variety of XAI methods into our XAI framework, we categorize them using several criteria. This helps us tailor them to the intended field of application and manage constraints. Therefore, we introduce the following explainer properties (a sketch of how they could be encoded follows the list):

Number of Model States Considered

Does the explainer take a single model state (single-model explainer), or multiple model states (multi-model explainer) as input?

Parts of Model State Considered

Which parts of the model state does the explainer take as input? The model’s input, the model’s output, or the model itself?

Explainer Dependencies

Which inputs are needed to execute the explainer?


Explainer Level

Which part of the data search space is covered? An explainer taking all possible inputs and outputs into account is considered global, while an explainer that works on a subset or a single data sample is deemed to be local.

Explainer Abstraction

Which parts of the model are explained? An explainer focusing on parts or single components of the model is considered low-abstraction, while an explainer that explains the full model is regarded as high-abstraction.
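One way to work with these properties is to record them as structured metadata for each explainer, e.g., to filter explainers by a user’s needs. The following sketch uses illustrative enum and field names that are not part of the explAIner API, and the example values for the saliency explainer are an assumed categorization.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List


class Level(Enum):
    GLOBAL = "global"   # covers all possible inputs and outputs
    LOCAL = "local"     # covers a subset or a single data sample


class Abstraction(Enum):
    LOW = "low"         # explains parts or single components of the model
    HIGH = "high"       # explains the full model


class StatePart(Enum):
    INPUT = "input"
    OUTPUT = "output"
    MODEL = "model"


@dataclass
class ExplainerProperties:
    num_model_states: int            # 1 = single-model, >1 = multi-model explainer
    state_parts: List[StatePart]     # parts of the model state taken as input
    dependencies: List[str]          # inputs needed to execute the explainer
    level: Level                     # global vs. local
    abstraction: Abstraction         # low vs. high abstraction


# Illustrative categorization of a gradient saliency explainer:
saliency_props = ExplainerProperties(
    num_model_states=1,
    state_parts=[StatePart.MODEL, StatePart.INPUT],
    dependencies=["differentiable model", "input sample"],
    level=Level.LOCAL,
    abstraction=Abstraction.LOW,
)
```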

The explAIner System

Embedded in TensorBoard

explAIner System Screenshot

Bringing the XAI Framework to Life

Our XAI framework was not designed merely as another overarching theory of what should be done. It also answers the question: how can we bring this theory to life? To demonstrate that the concepts in our paper can be operationalized in an actual system, we present explAIner.

explAIner Plugging Into Your Daily Workflow

To achieve our goal of extending the daily workflow of our users with accessible XAI, we integrate explAIner into the widely used TensorBoard application. This is done by extending TensorBoard with four additional plugins, covering the tasks of understanding, diagnosis, refinement, and reporting.
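For readers unfamiliar with TensorBoard’s extension mechanism, here is a minimal sketch of a generic TensorBoard plugin skeleton using the base_plugin API. It is not the explAIner plugin code, and the exact API details (e.g., FrontendMetadata) vary across TensorBoard versions.

```python
from tensorboard.plugins import base_plugin
from werkzeug import wrappers


class HelloXaiPlugin(base_plugin.TBPlugin):
    """Hypothetical minimal plugin; TensorBoard mounts it under its own route."""

    plugin_name = "hello_xai"  # illustrative name, not an explAIner plugin

    def __init__(self, context):
        # `context` is the TBContext TensorBoard hands to every plugin.
        self._multiplexer = context.multiplexer

    def get_plugin_apps(self):
        # Map plugin-relative URL routes to WSGI handlers.
        return {"/hello": self._serve_hello}

    def is_active(self):
        # Whether the plugin should appear in the TensorBoard UI.
        return True

    def frontend_metadata(self):
        # Points TensorBoard to the plugin's frontend bundle (newer versions).
        return base_plugin.FrontendMetadata(es_module_path="/index.js")

    @wrappers.Request.application
    def _serve_hello(self, request):
        return wrappers.Response("hello from the XAI plugin", content_type="text/plain")
```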

Take me to the demo!

Or try and extend it yourself: [GitHub Repo]

Paper

BibTeX Entry