SalLiDAR: Saliency Knowledge Transfer Learning for 3D Point Cloud Understanding


Guanqun Ding (University of Tsukuba), Nevrez Imamoglu (National Institute of Advanced Industrial Science and Technology (AIST), Tokyo, Japan), Ali Caglayan (AIST), Masahiro Murakawa (AIST), Ryosuke Nakamura (AIST)
The 33rd British Machine Vision Conference

Abstract

Saliency prediction has achieved significant progress on color images owing to deep neural networks trained on annotated human eye-fixation data or ground-truth saliency maps. Unlike in the image and video domains, only a few works have used saliency information to guide 3D point cloud understanding, mainly due to the lack of annotated training data. Moreover, collecting eye-fixation or saliency-density ground truth for point clouds from human subjects is difficult, if not infeasible, because of the irregular, unordered, and sparse characteristics of 3D point cloud data. To alleviate this issue, we present a universal framework that transfers saliency distribution knowledge from color images to point clouds. We first apply pre-trained RGB saliency models to predict saliency maps for the images. We then assign a saliency value to each 3D point by registering the point cloud to the corresponding 2D multi-view color images and sampling the RGB saliency predictions. On this basis, we construct a pseudo-saliency dataset (FordSaliency) that provides 2D-to-3D transferred saliency labels for point clouds. Furthermore, we adopt existing point cloud-based models to learn saliency distributions from these pseudo-labels. Experimental results on our FordSaliency dataset verify that point cloud-based models can learn saliency distributions from the pseudo-labels. Finally, we demonstrate an application of point cloud saliency prediction to 3D semantic segmentation: we propose an attention-guided learning model that combines the learned saliency knowledge with semantic features for large-scale point cloud segmentation. Extensive experiments on the SemanticKITTI dataset show that the learned saliency knowledge effectively improves the performance of 3D semantic segmentation.
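The 2D-to-3D transfer step described above can be pictured as a standard camera projection followed by a pixel lookup. The following is a minimal NumPy sketch of that idea, assuming a pinhole camera with known intrinsics K and LiDAR-to-camera extrinsics; the function name and the zero-saliency handling of points outside the camera frustum are illustrative assumptions, not the paper's exact pipeline.

import numpy as np

def transfer_saliency_to_points(points, saliency_map, K, T_cam_from_lidar):
    """Assign each LiDAR point a pseudo-saliency label by projecting it
    into a camera image and sampling the 2D saliency prediction.
    points: (N, 3) xyz in the LiDAR frame; saliency_map: (H, W) in [0, 1];
    K: (3, 3) camera intrinsics; T_cam_from_lidar: (4, 4) extrinsics."""
    H, W = saliency_map.shape
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera (positive depth).
    in_front = pts_cam[:, 2] > 1e-6
    # Pinhole projection to pixel coordinates (invalid depths are
    # filtered out below by the visibility mask).
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / np.where(np.abs(uvw[:, 2:3]) > 1e-6, uvw[:, 2:3], 1e-6)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    visible = in_front & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    # Points outside the camera frustum get zero saliency.
    labels = np.zeros(points.shape[0], dtype=np.float32)
    labels[visible] = saliency_map[v[visible], u[visible]]
    return labels

With multi-view imagery, a label per view can be computed this way and the per-point values merged (e.g., by taking the maximum over views) to cover the full sweep.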
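For the attention-guided segmentation model, the abstract states only that the learned saliency knowledge is combined with semantic features. One plausible form of such guidance is channel-wise multiplicative attention driven by the per-point saliency score; the PyTorch sketch below illustrates that general idea under stated assumptions (the gating architecture and residual form are hypothetical, not the paper's exact design).

import torch
import torch.nn as nn

class SaliencyGuidedFusion(nn.Module):
    """Reweight per-point semantic features with a saliency-driven gate.
    The residual multiplicative form is an assumption for illustration."""
    def __init__(self, feat_dim):
        super().__init__()
        # Map the scalar saliency score to a channel-wise attention vector.
        self.gate = nn.Sequential(nn.Linear(1, feat_dim), nn.Sigmoid())

    def forward(self, feats, saliency):
        # feats: (N, C) point features; saliency: (N, 1) predicted saliency.
        attn = self.gate(saliency)       # (N, C) channel-wise attention
        return feats * (1.0 + attn)      # residual reweighting keeps all features

The residual term keeps the original semantic features intact while letting salient points contribute more strongly to the segmentation head.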

Citation

@inproceedings{Ding_2022_BMVC,
author    = {Guanqun Ding and Nevrez Imamoglu and Ali Caglayan and Masahiro Murakawa and Ryosuke Nakamura},
title     = {SalLiDAR: Saliency Knowledge Transfer Learning for 3D Point Cloud Understanding},
booktitle = {33rd British Machine Vision Conference 2022, {BMVC} 2022, London, UK, November 21-24, 2022},
publisher = {{BMVA} Press},
year      = {2022},
url       = {https://bmvc2022.mpi-inf.mpg.de/0584.pdf}
}

