Deep learning (DL) models are not yet widely deployed for critical tasks such as those in healthcare. This is largely attributable to their "black box" nature, which makes it difficult to gain the trust of practitioners. This paper proposes the use of visualizations to enhance performance verification, improve monitoring, and provide the understandability and interpretability needed to gain practitioners' confidence. These are demonstrated through the development of a CapsNet model for the recognition of gastrointestinal tract infections. The gastrointestinal tract comprises several organs joined in a long tube from the mouth to the anus. It is susceptible to diseases that are difficult for medics to diagnose, since physical access to the affected regions is limited. Consequently, manual acquisition and analysis of images of the unhealthy parts require the skills of an expert and are tedious, error-prone, and costly. Experimental results show that visualizations in the form of post-hoc interpretability can demonstrate the reliability and interpretability of the CapsNet model applied to gastrointestinal tract diseases. The model's outputs can also be explained to gain practitioners' confidence, paving the way for adoption in critical areas of society.
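The paper does not specify its visualization method in the abstract, but one common family of post-hoc interpretability techniques is occlusion-based saliency: systematically masking image regions and measuring the drop in the classifier's score, so regions whose occlusion hurts the score most are deemed most important. The sketch below is a generic, model-agnostic illustration of that idea (the `occlusion_saliency` helper and the toy scoring function are assumptions for demonstration, not the authors' implementation).

```python
import numpy as np

def occlusion_saliency(model_fn, image, patch=8, stride=8, baseline=0.0):
    """Post-hoc saliency sketch: occlude square patches with a baseline
    value and record the drop in the model's score for each patch.
    Larger drops indicate regions the model relies on more heavily."""
    h, w = image.shape[:2]
    base_score = model_fn(image)
    heatmap = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            drop = base_score - model_fn(occluded)
            heatmap[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    counts[counts == 0] = 1  # avoid division by zero at uncovered edges
    return heatmap / counts

# Toy stand-in for a trained classifier's class score: the mean
# brightness of the centre region, so the saliency map should
# highlight the centre of the image.
def toy_model(img):
    return float(img[12:20, 12:20].mean())

rng = np.random.default_rng(0)
img = rng.random((32, 32))
sal = occlusion_saliency(toy_model, img)
```

In practice `model_fn` would return the predicted class probability (for a CapsNet, the length of the winning capsule's output vector), and the resulting heatmap would be overlaid on the endoscopic image for a practitioner to inspect.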
Keywords: capsule network, explainable artificial intelligence (XAI), convolutional neural network
Ayidzoe, Mighty Abra; Yongbin, Yu; Mensah, Patrick Kwabena; Cai, Jingye; and Bawah, Faiza Umar, "Visual Interpretability of Capsule Network for Medical Image Analysis," Turkish Journal of Electrical Engineering and Computer Sciences: Vol. 30: No. 3, Article 31.
Available at: https://journals.tubitak.gov.tr/elektrik/vol30/iss3/31