3D/2D Registration and Photorealistic Modeling

NSF IIS - 0237878:  CAREER: Photorealistic 3-D Modeling of Large-Scale Scenes: Integration of 3-D Range and 2-D Intensity Sensing in a Complete System

Additional support provided by: NSF Major Research Instrumentation - Grant No. 021596 (range scanning and computing equipment), and PSC-CUNY and CUNY Institute of Software Design and Development (CISDD) awards.

Vision and Graphics Lab


Ioannis Stamos - PI

Lingyun Liu - PhD student, CUNY Graduate Center.
Cecilia Chao Chen - PhD student, CUNY Graduate Center.
Marius Leordeanu - Undergraduate student, Hunter College (now a PhD student at CMU).


[publications]


(a) Registered range scans of a large urban area; a point-based model is shown. (b) Detail of a texture map after automated 3D range to 2D image registration and calibration.


Complete texture-mapped 3D model of an urban site. The locations (white dots) and local coordinate frames (green lines indicate the orientations) of the 2D cameras are shown as well.


Our goal is the production of highly accurate photorealistic descriptions of the 3D world with a minimum of human interaction and increased computational efficiency. Our input is a large number of unregistered 3D range scans and 2D photographs of an urban site. The generated 3D representations, after automated registration, are useful for urban planning, historical preservation, and virtual reality applications.

A major bottleneck in the process of 3D scene acquisition is the automated registration of a large number of geometrically complex 3D range scans and high-resolution 2D images in a common frame of reference. We have developed novel methods for the accurate and efficient registration of a large number of 3D range scans; these methods utilize range segmentation and feature extraction algorithms. We have also developed a context-sensitive user interface to overcome problems that arise from scene symmetry. Finally, we developed a novel and efficient algorithm for the 3D range to 2D image registration problem in urban scene settings. This algorithm calibrates each 2D image and computes an optimized transformation between the 2D images and 3D range scans. We have also produced a mesh-simplification method for the final 3D model based on the segmentation results of each range image. Graduate and undergraduate students are being introduced to our research through the 3D Photography class taught by the PI.


3D Range to 3D Range Registration

Two novel range-to-range registration algorithms have been developed. The automated method performs two major functions: scan-pair registration and global stitching. Pairwise registration involves three steps: line clustering, rotation estimation, and translation estimation. Once the transformations between all pairs have been computed and verified by the user, a global registration procedure computes the transformation of every other scan with respect to a selected pivot scan, as in the sketch below.
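The following is a minimal sketch of the global-stitching step only: given verified pairwise rigid transforms represented as 4x4 homogeneous matrices, it composes them along a breadth-first traversal so that every scan is expressed in the pivot scan's coordinate frame. The graph layout and the function and variable names (`stitch_to_pivot`, `pairwise`, etc.) are illustrative assumptions, not the project's actual code.

```python
# Global stitching sketch: compose verified pairwise rigid transforms so that
# every scan is expressed in the coordinate frame of a chosen pivot scan.
from collections import deque
import numpy as np

def stitch_to_pivot(num_scans, pairwise, pivot=0):
    """Return a dict scan_id -> 4x4 transform into the pivot's frame.

    pairwise: {(i, j): T_ij} where T_ij maps points of scan j into scan i.
    """
    # Build an undirected adjacency list; reversed edges use the inverse.
    adj = {k: [] for k in range(num_scans)}
    for (i, j), T in pairwise.items():
        adj[i].append((j, T))                   # j -> i uses T directly
        adj[j].append((i, np.linalg.inv(T)))    # i -> j uses the inverse

    global_T = {pivot: np.eye(4)}
    queue = deque([pivot])
    while queue:
        i = queue.popleft()
        for j, T_ij in adj[i]:
            if j not in global_T:
                # Points of scan j map into scan i by T_ij, then into the pivot.
                global_T[j] = global_T[i] @ T_ij
                queue.append(j)
    return global_T

# Toy example: scan 1 is translated 2 m along x relative to scan 0 (the pivot).
T_01 = np.eye(4); T_01[0, 3] = 2.0
print(stitch_to_pivot(2, {(0, 1): T_01})[1])
```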



Registered sets of lines and points, Thomas Hunter Building, NYC.


3D Range to 2D Image Registration


We developed a novel and efficient algorithm for the 3D range to 2D image registration problem in urban scene settings. Our input is a set of unregistered 3D range scans and a set of unregistered and uncalibrated 2D images of the scene, both capturing the real scene in extremely high detail. A new automated algorithm calibrates each 2D image and computes an optimized transformation between the 2D images and 3D range scans. This transformation is based on a match of 3D with 2D features that maximizes an overlap criterion (sketched below). Our algorithm attacks the hard 3D range to 2D image registration problem in a systematic, efficient, and automatic way. Images captured by a freely moving and adjusting high-resolution 2D camera are mapped onto a centimeter-accurate 3D model of the scene, providing photorealistic renderings of high quality.
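Below is a minimal sketch of the kind of overlap criterion that can drive such a 2D-3D match: project candidate 3D features through a hypothesized pinhole camera and score how many land near extracted 2D features. The pinhole model is standard; the point-feature representation and the pixel tolerance are simplifying assumptions of this sketch, not the project's actual formulation, which matches richer features.

```python
# Overlap-criterion sketch: score a candidate camera calibration/pose by how
# well projected 3D features coincide with detected 2D image features.
import numpy as np

def project(K, R, t, X):
    """Project Nx3 world points into pixels with intrinsics K and pose (R, t)."""
    Xc = R @ X.T + t.reshape(3, 1)             # world -> camera coordinates
    x = K @ Xc                                 # camera -> homogeneous pixels
    return (x[:2] / x[2]).T                    # perspective divide

def overlap_score(K, R, t, X3d, x2d, tol_px=3.0):
    """Count 3D features whose projection falls within tol_px of a 2D feature."""
    proj = project(K, R, t, X3d)
    # For each projected point, distance to its nearest detected 2D feature.
    d = np.linalg.norm(proj[:, None, :] - x2d[None, :, :], axis=2).min(axis=1)
    return int((d < tol_px).sum())

# Toy example: a camera 10 m back from two points on a facade.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 10.0])
X3d = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
x2d = project(K, R, t, X3d)                    # pretend these were detected
print(overlap_score(K, R, t, X3d, x2d))        # -> 2
```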




Camera configurations with respect to a texture-mapped point-based 3D scene model.


Details of the texture maps for building 1 (image c), building 2 (image d), and building 3 (images a and b) verify the high accuracy of the automated algorithm. Note that for building 3 we show results using images taken under different lighting conditions.



Modeling & Simplification


We developed a mesh-simplification method for the final 3D model based on the segmentation results of each range image; the method does not depend on the 3D modeling technique used. Our goal is to retain the geometric detail of the 3D model in areas where planar segmentation is not possible, and to simplify the model in areas where planar segments from the segmentation module are available (see the sketch below). Our ultimate goal is the automated generation of a scene CAD model. Relying on the original segmentation results for simplification increases the accuracy of our algorithms, since the final 3D model may diverge from the original scans due to mis-registrations or averaging.
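Here is a minimal sketch of the decision at the heart of segmentation-driven simplification: fit a plane to a range-image segment and, when the fit residual is small, mark the segment as replaceable by a few boundary triangles rather than a dense mesh. The PCA plane fit and the residual threshold are illustrative stand-ins for the project's actual segmentation module.

```python
# Segmentation-driven simplification sketch: decide per segment whether it is
# planar enough to be replaced by a coarse planar patch.
import numpy as np

def fit_plane(points):
    """Least-squares plane fit via PCA; returns (centroid, unit normal, rms)."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c, full_matrices=False)
    n = vt[-1]                                  # direction of least variance
    rms = np.sqrt((((points - c) @ n) ** 2).mean())
    return c, n, rms

def is_simplifiable(points, tol=0.01):
    """True when the segment is planar to within tol (same units as points)."""
    _, _, rms = fit_plane(points)
    return rms < tol

# Toy example: a flat 1 m patch with millimeter noise is simplifiable.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 1, size=(500, 2))
z = rng.normal(0, 0.001, size=500)
print(is_simplifiable(np.column_stack([xy, z])))   # -> True
```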
