Saliency Integration: An Arbitrator Model

Saliency integration, which unifies the saliency maps produced by multiple saliency models, has attracted much attention. Previous offline integration methods usually face two challenges: 1) if most of the candidate saliency models misjudge the saliency on an image, the integration result leans heavily on those inferior candidate models; and 2) the lack of ground-truth saliency labels makes it difficult to estimate the expertise of each candidate model. To address these problems, in this paper we propose an arbitrator model (AM) for saliency integration. First, we incorporate the consensus of multiple saliency models and external knowledge into a reference map that effectively rectifies the misleading results of inferior candidate models. Second, we propose two distinct online methods for estimating the expertise of each saliency model without ground-truth labels. Finally, we derive a Bayesian integration framework that reconciles the saliency models of varying expertise with the reference map. To extensively evaluate the proposed AM model, we test 27 state-of-the-art saliency models, covering both traditional and deep-learning-based ones, in various combinations on four datasets. The evaluation results show that the AM model substantially improves performance over existing state-of-the-art integration methods, regardless of the chosen candidate saliency models.
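For intuition, the minimal sketch below illustrates the general idea of consensus-weighted saliency fusion. The reference-map construction, the agreement-based weights, and the fusion rule are illustrative simplifications chosen for this sketch; they are not the Bayesian formulation or the expertise-estimation methods proposed in the paper.

```python
# Illustrative sketch only: a generic consensus-weighted fusion of saliency
# maps. All design choices here (mean consensus, agreement-based weights,
# softmax-style normalization) are assumptions for illustration, not the AM
# model from the paper.
import numpy as np

def fuse_saliency(candidate_maps, prior_map=None, eps=1e-6):
    """Fuse candidate saliency maps (each HxW, values in [0, 1]).

    candidate_maps : list of np.ndarray, outputs of the candidate models
    prior_map      : optional HxW map encoding external knowledge
                     (e.g., a center prior); hypothetical input
    """
    stack = np.stack(candidate_maps, axis=0)          # shape (M, H, W)

    # Reference map: consensus of the candidates, optionally blended with
    # the external prior.
    reference = stack.mean(axis=0)
    if prior_map is not None:
        reference = 0.5 * reference + 0.5 * prior_map

    # Crude "expertise" proxy without ground truth: each candidate's
    # agreement with the reference map (higher agreement -> larger weight).
    agreement = -np.mean(np.abs(stack - reference[None]), axis=(1, 2))
    weights = np.exp(agreement / (agreement.std() + eps))
    weights /= weights.sum()

    # Expertise-weighted fusion, then rescale back to [0, 1].
    fused = np.tensordot(weights, stack, axes=(0, 0))
    fused = (fused - fused.min()) / (fused.max() - fused.min() + eps)
    return fused

# Usage example with three random 8x8 "saliency maps".
maps = [np.random.rand(8, 8) for _ in range(3)]
print(fuse_saliency(maps).shape)  # (8, 8)
```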

Xu Yingyue, Hong Xiaopeng, Porikli Fatih, Liu Xin, Chen Jie, Zhao Guoying

A1 Journal article – refereed

Y. Xu, X. Hong, F. Porikli, X. Liu, J. Chen and G. Zhao, "Saliency Integration: An Arbitrator Model," in IEEE Transactions on Multimedia, vol. 21, no. 1, pp. 98-113, Jan. 2019. doi: 10.1109/TMM.2018.2856126

http://urn.fi/urn:nbn:fi-fe2019060318242