Recent developments in deep networks allow us to train networks with more parameters, yielding better performance given a sufficient amount of data. However, medical image segmentation is still constrained by the limited availability of labelled data, a problem exacerbated by the high intra- and inter-subject variability of anatomical structures. To bypass this problem without compromising network performance, this study introduces PERI-Net, which promises higher performance than its counterparts while having a smaller parameter count, on the order of 0.8 million. The network benefits from rich features generated by our versions of inception modules, better communication between the encoding and decoding paths, and an effective way of generating segmentation masks. We evaluate the performance of our architecture on the segmentation of retinal vasculature in the DRIVE, CHASE_DB1, and IOSTAR fundus image datasets, and on the segmentation of axons in a two-photon microscopy image dataset. According to our experiments, PERI-Net achieves state-of-the-art performance on the sensitivity and G-mean metrics by a significant margin on all three fundus datasets, outperforming a U-Net that we trained with the same properties and training strategies as PERI-Net.
Uslu, Fatmatülzehra; Bass, Cher; and Bharath, Anil A.
"PERI-Net: a parameter efficient residual inception network for medical image segmentation,"
Turkish Journal of Electrical Engineering and Computer Sciences: Vol. 28:
No. 4, Article 31.
Available at: https://journals.tubitak.gov.tr/elektrik/vol28/iss4/31