Towards Metaphors for Cascading AI

In the future, more and more systems will be powered by AI. This may exacerbate existing blind spots in explainability research, such as the focus on the outputs of an individual AI pipeline rather than a holistic, integrative view of the system dynamics of data, algorithms, stakeholders, context and their respective interactions. AI systems will increasingly rely on the patterns and models of other AI systems: in this world of Cascading AI (CAI), AI systems use the outputs of other AI systems as their inputs. This shift will likely reshape the desiderata of interpretability, explainability and transparency. Typical formulations of these desiderata for explaining AI decision-making, such as post-hoc interpretability or model-agnostic explanations, may simply not hold under CAI. In this paper, we propose two metaphors that may help designers frame their efforts when designing CAI systems.
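To make the cascading pattern concrete, consider a minimal, purely illustrative sketch (not taken from this paper): a downstream model that is trained only on the prediction scores of an independently trained upstream model. The model choices, variable names (`upstream`, `downstream`) and use of scikit-learn here are assumptions for illustration only.

```python
# Hypothetical minimal sketch of a cascading-AI pipeline: the downstream
# model consumes the upstream model's predictions as its input features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Upstream system: trained independently, e.g. by another organization.
upstream = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Downstream system: sees only the upstream scores, not the raw data.
train_scores = upstream.predict_proba(X_train)
test_scores = upstream.predict_proba(X_test)
downstream = LogisticRegression().fit(train_scores, y_train)

# A post-hoc, model-agnostic explanation of `downstream` alone would
# attribute its decision to the upstream scores -- features that are
# themselves opaque model outputs, not human-interpretable quantities.
print("cascade accuracy:", downstream.score(test_scores, y_test))
```

Even in this toy setup, explaining the downstream decision in isolation stops short: its "features" are already the product of another model, which is the core difficulty the paper's metaphors aim to address.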