

Work Package 4 - Dynamic Semantic 3D Models

This work package builds upon the preprocessed 3D range data handled in "Work Package 3 - 3D Sensing and Perception". Its main goal is to further process and integrate this information into a dynamic 3D map with semantic information, to be used for adaptive modelling and planning. The software created here functions as the central intelligence of the demonstrators: it processes the information about the scene and coordinates the movement of the robot to ensure successful cycles. Additionally, the software performs these key functions (a minimal interface sketch follows the list):

- Retain the system's memory of known objects that can be recognized
- Save new items to the object database
- Orchestrate the flow of information
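
The sketch below illustrates how such an object memory could look; the class and method names (ObjectModel, ObjectDatabase, add, recognize) and the descriptor representation are illustrative assumptions, not the project's actual interfaces.

```python
# Minimal sketch of an object memory: known objects are retained, new items
# can be saved, and a query is matched against stored visual and shape cues.
# All names and the matching rule are assumptions for illustration only.
from dataclasses import dataclass, field


@dataclass
class ObjectModel:
    """A stored model combining shape and texture cues for one known object."""
    name: str
    shape_descriptor: list      # e.g. a geometric feature vector
    texture_descriptor: list    # e.g. a visual feature vector


def _distance(a, b):
    # Simple Euclidean distance between two equal-length descriptors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


@dataclass
class ObjectDatabase:
    """Retains known objects and lets new items be saved at run time."""
    models: dict = field(default_factory=dict)

    def add(self, model: ObjectModel) -> None:
        # Save a newly observed item so it can be recognized later.
        self.models[model.name] = model

    def recognize(self, shape_descriptor, texture_descriptor, threshold=0.5):
        # Return the best-matching known object, or None if nothing is close enough.
        best_name, best_score = None, float("inf")
        for name, model in self.models.items():
            score = (_distance(shape_descriptor, model.shape_descriptor)
                     + _distance(texture_descriptor, model.texture_descriptor))
            if score < best_score:
                best_name, best_score = name, score
        return best_name if best_score < threshold else None
```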

3D Range Sampling

The workflow of the recognition system for detecting and localizing objects from a single viewpoint is depicted below. Recognition is done by combining texture information obtained from a color image with geometric properties of the scene observed in a depth image. To do this, an object database of point-cloud-based 3D models with visual and shape cues is created during the training phase. First, the geometric properties of the sensor data are examined by dividing the range image into regions of smooth surface patches; this is achieved with the Scan-Line-Edge-Detection algorithm. The scene is further analyzed during the segmentation process, which divides the range image into connected components, or segments, each of which is a surface patch separated from its neighbors. These models are then used to generate hypotheses about the objects in the observed environment, or a "global mask" that can be used to extract texture features of the objects.
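To make the segmentation step concrete, the sketch below labels a range image as connected smooth surface patches using a simple depth-discontinuity check and flood fill. This is a generic stand-in, not the Scan-Line-Edge-Detection algorithm named above (whose details are not given here); the function name and the `max_jump` threshold are assumptions.

```python
# Label connected regions of a range image whose neighboring depth values
# differ by less than max_jump; each label corresponds to one surface patch.
import numpy as np
from collections import deque


def segment_range_image(depth: np.ndarray, max_jump: float = 0.02) -> np.ndarray:
    """Return an integer label image; 0 marks invalid (non-finite) pixels."""
    h, w = depth.shape
    labels = np.zeros((h, w), dtype=np.int32)   # 0 means "not yet labeled"
    next_label = 1
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != 0 or not np.isfinite(depth[sy, sx]):
                continue
            # Flood-fill one smooth surface patch starting from (sy, sx).
            labels[sy, sx] = next_label
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0
                            and np.isfinite(depth[ny, nx])
                            and abs(depth[ny, nx] - depth[y, x]) < max_jump):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels


if __name__ == "__main__":
    # Two planes separated by a depth step: the scene splits into two patches.
    scene = np.full((8, 8), 1.0)
    scene[:, 4:] = 1.5
    print(np.unique(segment_range_image(scene)))   # -> [1 2]
```

The resulting patches could then be matched against the database models to form object hypotheses, or merged into a mask for extracting texture features from the color image, as described above.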

Jacobs Workflow of the Perception Challenge

Work Package 4 Deliverables

del_3.1---evaluation-of-available-sensors-moun.pdf [6.906 KB]