Turkish Journal of Electrical Engineering and Computer Sciences

DOI

10.3906/elk-1808-208

Abstract

Deep learning networks have proven helpful for salient object detection, achieving superior performance compared to methods based on low-level hand-crafted features. In this paper, we propose a novel spatial-aware contrast cube-based convolutional neural network (CNN) that further improves detection performance. The contrast of each superpixel is extracted from this cube data structure, while the spatial information is preserved during the transformation. The proposed method has two advantages over existing deep learning-based saliency methods. First, instead of feeding the networks raw image patches or pixels, we use the spatial-aware contrast cubes of superpixels as training samples for the CNN. This is advantageous because the saliency of a region depends more on its contrast with other regions than on its own appearance. Second, to adapt to the diversity of real scenes, both color and textural cues are considered. Two CNNs, a color CNN and a textural CNN, are constructed to extract the corresponding features, and the saliency maps generated from the two cues are concatenated dynamically to achieve the optimal result. The proposed method achieves maximum precision of 0.9856, 0.9250, and 0.8949 on three benchmark datasets, MSRA1000, ECSSD, and PASCAL-S, respectively, showing improved performance in comparison with state-of-the-art saliency detection methods.
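
To make the two-branch design concrete, the following is a minimal sketch of the architecture described above, not the authors' implementation: the cube dimensions, layer sizes, and the confidence-based weighting used to combine the two cue-specific scores are all assumptions for illustration, since the abstract does not specify them.

```python
# Hypothetical sketch: each superpixel is represented by a spatial-aware contrast
# cube (its contrast against other regions, arranged so spatial layout is kept),
# and two small CNNs score the color and textural cubes before their outputs are
# combined with a dynamic weight. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn


class ContrastCubeCNN(nn.Module):
    """Scores one superpixel's contrast cube as salient / non-salient."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)

    def forward(self, cube: torch.Tensor) -> torch.Tensor:
        x = self.features(cube).flatten(1)
        return torch.sigmoid(self.classifier(x))  # saliency score in [0, 1]


# Two branches: one for color-contrast cubes, one for textural-contrast cubes.
color_cnn = ContrastCubeCNN(in_channels=3)    # e.g. color-contrast channels
texture_cnn = ContrastCubeCNN(in_channels=8)  # e.g. filter-bank responses

# Toy batch of cubes for 5 superpixels (16x16 cube size is an assumption).
color_cubes = torch.randn(5, 3, 16, 16)
texture_cubes = torch.randn(5, 8, 16, 16)

s_color = color_cnn(color_cubes)
s_texture = texture_cnn(texture_cubes)

# Dynamic combination of the two cue-specific scores: here each cue is weighted
# by its confidence (distance from 0.5); the paper's actual scheme may differ.
conf_c = (s_color - 0.5).abs()
conf_t = (s_texture - 0.5).abs()
w = conf_c / (conf_c + conf_t + 1e-8)
saliency = w * s_color + (1 - w) * s_texture
print(saliency.squeeze(1))
```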

Keywords

Saliency detection, convolutional neural networks, spatial-aware, global contrast cube

First Page

2412

Last Page

2429
