3D Object Detection and Instance Segmentation from 3D & 2D Images


Volumetric CNN-based algorithms



Recently, a plethora of classification and detection systems have been proposed for RGB as well as 3D images. In this work, we describe a new 3D object detection system that operates on RGB-D or depth-only point clouds. Our system first detects objects in 2D (either in RGB, or in a pseudo-RGB image constructed from depth). It then detects 3D objects within the 3D frustums defined by these 2D detections. This is achieved by voxelizing only parts of the frustums (since frustums can be very large), instead of the whole frustums as done in earlier work. The main novelty of our system lies in determining which parts (3D proposals) of the frustums to voxelize, which allows us to provide high-resolution representations around the objects of interest while reducing memory requirements. These 3D proposals are fed to an efficient ResNet-based 3D Fully Convolutional Network (FCN). Our 3D detection system is fast and can be integrated into a robotics platform. Unlike systems that do not perform voxelization (such as PointNet), our method can operate without subsampling the dataset. We have also introduced a pipelining approach that further improves the efficiency of our system. Results on the SUN RGB-D dataset show that our system, which is based on a small network, can process 20 frames per second with detection results comparable to the state-of-the-art [16], achieving a 2x speedup.
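The core idea above, voxelizing only a 3D proposal region inside a frustum rather than the whole frustum, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name, axis-aligned box parameterization, and binary occupancy encoding are all simplifying assumptions.

```python
import numpy as np

def voxelize_proposal(points, box_min, box_max, grid_size=32):
    """Voxelize only the points inside a 3D proposal box into a fixed-size
    occupancy grid (illustrative sketch; axis-aligned box assumed)."""
    points = np.asarray(points, dtype=np.float64)
    box_min = np.asarray(box_min, dtype=np.float64)
    box_max = np.asarray(box_max, dtype=np.float64)

    # Keep only the points that fall inside the proposal box; points in the
    # rest of the frustum are discarded, which is what keeps memory low.
    inside = np.all((points >= box_min) & (points < box_max), axis=1)
    pts = points[inside]

    # Map each surviving point to a voxel index within the grid. Because the
    # grid covers only the proposal, resolution is high around the object.
    extent = box_max - box_min
    idx = np.floor((pts - box_min) / extent * grid_size).astype(int)
    idx = np.clip(idx, 0, grid_size - 1)

    # Binary occupancy grid, ready to be fed to a 3D FCN.
    grid = np.zeros((grid_size, grid_size, grid_size), dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid
```

Note that because no fixed-size point subsample is required, the full point cloud can be passed in and only the proposal's occupancy pattern is retained.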

The addition of an instance segmentation module and more experiments are presented here (results on the SUN RGB-D dataset):
Instance Segmentation

 



This work was partially supported by NSF Award CNS1625843 and Google Faculty Research Award 2017 ("Classification of urban objects in 3D point clouds"). We acknowledge the support of NVIDIA with the donation of the Titan-X GPU used for this work.
Finally, support has been provided by CUNY PSC-CUNY and Bridge funds.