Deep learning is currently used for most machine learning applications, as it often achieves very good results and limits the need for manual feature engineering. However, deep learning models are intrinsically black boxes, which causes problems in domains such as medicine, where mistakes can have serious consequences, and more generally whenever humans need to integrate and interpret the outcomes of deep-learning-based decision support together with other data. In these situations it becomes important to explain how a model reached a decision, both to create trust with users and to avoid serious mistakes, for example those linked to changes in the underlying data or to bias in the models. Interpretability of AI tools, as well as quantification and visualization of the uncertainty in decisions and decision boundaries, can help users apply such tools, make informed decisions, and avoid automation bias.