Complex artificial intelligence models, such as deep neural networks, have shown exceptional capabilities in detecting early-stage polyps and tumors in the gastrointestinal tract, and these technologies are already beginning to assist gastroenterologists in the endoscopy suite. Model explanations can help reveal how these complex models work and where their limitations lie. Moreover, medical doctors specializing in gastroenterology can provide valuable feedback on such explanations. This study explores three explainable artificial intelligence methods for explaining a deep neural network that detects gastrointestinal abnormalities. The resulting model explanations are presented to gastroenterologists, and the clinical applicability of the explanation methods is discussed from the perspective of healthcare personnel. Our findings indicate that the explanation methods do not yet meet the requirements for clinical use, but that they can provide valuable information to researchers and model developers. Higher-quality datasets and careful consideration of how explanations are presented may lead to solutions that are better received in the clinic.