

Package Summary

This package estimates Next-Best-Views as well as robot configurations (target positions and orientations) from which a robot is most likely to find and recognize searched objects, whose poses have been predicted, e.g., by means of ISM trees.

  • Maintainer: Meißner Pascal <asr-ros AT lists.kit DOT edu>
  • Author: Aumann Florian, Borella Jocelyn, Heller Florian, Meißner Pascal, Schleicher Ralf, Stöckle Patrick, Stroh Daniel, Trautmann Jeremias, Walter Milena, Wittenbeck Valerij
  • License: BSD-derived
  • Source: git https://github.com/asr-ros/asr_next_best_view.git (branches: master, melodic)

Description

This package estimates Next-Best-Views as well as robot configurations (target positions and orientations) from which a robot is most likely to find and recognize searched objects, whose poses have been predicted, e.g., by means of ISM trees.

Functionality

This tool uses an iterative generate-and-test algorithm. Each iteration yields a NextBestView; the algorithm stops when the NextBestViews of two consecutive iteration steps show almost no improvement. The idea of the NextBestView is that it tells you where objects are most likely to be found.

The following images illustrate the basic concepts needed to understand the next_best_view node: frustum.png (view frustum), hypothesis.png (object hypotheses) and normals.png (object normals).

The first image shows the robot and its frustum (view field), which indicates the area in which objects can be found. This view field is also called a 'view'. The second image shows object hypotheses. An object hypothesis is a predicted pose (position and orientation) at which an object could be found. The more hypotheses lie in a cluster, the more likely the recognizers are to find objects at that location; the variety of positions and orientations of the object hypotheses therefore determines the possible information gain. The third image shows the object normals. The camera should search for the object from these directions to increase the likelihood of detecting it.
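
This intuition becomes concrete when a view is rated. The following is a minimal, illustrative sketch of such a rating, not the package's actual rating module: it scores a camera pose by how many hypotheses fall into a simplified cone-shaped stand-in for the frustum and how well the view direction opposes their normals. All names (rate_view, fov_cos, ...) are hypothetical.

import numpy as np

def rate_view(view_pos, view_dir, hyp_positions, hyp_normals, fov_cos=0.7):
    """Toy rating of a camera view (NOT the package's rating module):
    reward hypotheses that lie inside a cone-shaped stand-in for the
    frustum and whose normals face the camera."""
    score = 0.0
    for pos, normal in zip(hyp_positions, hyp_normals):
        to_obj = pos - view_pos
        dist = np.linalg.norm(to_obj)
        if dist == 0.0:
            continue
        to_obj = to_obj / dist
        # hypothesis must lie inside the simplified frustum cone
        if np.dot(view_dir, to_obj) < fov_cos:
            continue
        # the camera should look against the object normal (third image)
        score += max(0.0, -np.dot(to_obj, normal))
    return score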

In each iteration step, views are generated and then rated. The view with the best rating is chosen as the NextBestView for that iteration step.
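
A compact sketch of this generate-and-test loop, including the stopping criterion described above; it assumes candidate generation and rating are given as functions, and sample_views, rate and epsilon are hypothetical names:

def next_best_view_search(sample_views, rate, epsilon=1e-3, max_iter=20):
    """Generate-and-test loop: generate candidate views, rate them, keep
    the best; stop once two consecutive NextBestViews barely improve."""
    best_view, last_rating = None, float("-inf")
    for step in range(max_iter):
        candidates = sample_views(step)        # generation
        view = max(candidates, key=rate)       # rating: best view wins
        rating = rate(view)
        if rating - last_rating < epsilon:     # almost no improvement
            break
        best_view, last_rating = view, rating
    return best_view, last_rating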

The next_best_view node uses a set of modules.

Each module has at least one implementation, which can be chosen through parameter configuration.

Implementations:

To get a set of viewports from a set of positions and a set of orientations, the Cartesian product of the two sets is formed, i.e. every position is combined with every orientation. The resulting set of viewports is then filtered using world and hypothesis information (the MapHelper and CameraModelFilter classes).
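
A minimal sketch of this combination step, with hypothetical sample data (in the package, the sets come from the sampling described below and are filtered afterwards):

from itertools import product

# Hypothetical sampled sets; in the package they come from the space and
# unit-sphere sampling described below.
positions = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]       # robot positions
orientations = [(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]      # view directions

# Cartesian product: every position combined with every orientation.
viewports = list(product(positions, orientations))
print(len(viewports))  # 3 * 2 = 6 candidate viewports, filtered afterwards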

The following two images show the orientation and position sampling. Each arrow on the sphere marks the end point of one direction vector starting at the sphere's center. The red rectangle limits the space-sampling area; all hexagon corners and the center of the rectangle are used to generate viewports: orientationsampling.png, spacesampling.png
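
For illustration, here is one common way to obtain roughly evenly spread direction vectors on the unit sphere (a Fibonacci spiral); the package's actual unit-sphere sampling may use a different scheme:

import numpy as np

def sphere_directions(n=64):
    """Roughly even direction vectors on the unit sphere via the Fibonacci
    spiral; a stand-in for the package's orientation sampling."""
    i = np.arange(n)
    z = 1.0 - 2.0 * (i + 0.5) / n              # evenly spaced heights
    r = np.sqrt(1.0 - z * z)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i     # golden-angle increments
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)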

Filter steps:

  1. Positions that can't be navigated by the robot are removed when generating positions.
  2. Orientations that can't be reached by the PTU are removed.
  3. Positions that are too far away from any hypothesis are removed using a kd-tree radius search; the same query also yields a first set of nearby hypotheses that might lie in the frustum of a view (see the sketch after this list).
  4. Views that have no samples in their frustum are removed.
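
A minimal sketch of the radius-search idea from step 3, using SciPy's kd-tree on hypothetical 2D data (the package itself implements this in C++):

import numpy as np
from scipy.spatial import cKDTree

# Hypothetical data: sampled robot positions and hypothesis positions.
rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 10.0, size=(200, 2))
hypotheses = rng.uniform(0.0, 10.0, size=(50, 2))

tree = cKDTree(hypotheses)            # kd-tree over the object hypotheses
radius = 2.0                          # assumed maximum useful camera range

# Keep only positions with at least one hypothesis within `radius`; the same
# query yields the nearby hypotheses to test against the view frustum later.
nearby = tree.query_ball_point(positions, r=radius)
kept = [(pos, idx) for pos, idx in zip(positions, nearby) if idx]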

The NextBestViewCalculator class manages all modules.

Usage

To find a NextBestView, we must first invoke the SetAttributedPointCloud and SetInitRobotState services; afterwards, the GetNextBestView service returns our NextBestView. SetAttributedPointCloud sets and visualizes the point cloud shown above in the second and third image, while triggerFrustumVisualization displays the blue frustum seen above in the first image.

To invalidate the hypotheses inside a frustum, call the UpdatePointCloud service with the corresponding camera pose; a client sketch of the whole sequence follows.
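
A minimal client sketch of this sequence, assuming rospy. The service names are taken from this page, but the advertised service paths, .srv modules and request/response fields are assumptions that must be checked against the package sources:

#!/usr/bin/env python
import rospy
# Assumption: the services are generated into asr_next_best_view/srv.
from asr_next_best_view.srv import (SetAttributedPointCloud, SetInitRobotState,
                                    GetNextBestView, UpdatePointCloud)

def find_next_best_view(object_hypotheses, init_robot_state, camera_pose):
    """Set the hypothesis cloud and robot state, then request a NextBestView."""
    rospy.wait_for_service("/nbv/get_next_best_view")   # path is an assumption
    set_cloud = rospy.ServiceProxy("/nbv/set_point_cloud",
                                   SetAttributedPointCloud)
    set_state = rospy.ServiceProxy("/nbv/set_init_robot_state",
                                   SetInitRobotState)
    get_nbv = rospy.ServiceProxy("/nbv/get_next_best_view", GetNextBestView)
    set_cloud(object_hypotheses)    # hypotheses, e.g. predicted by ISM trees
    set_state(init_robot_state)     # current robot configuration
    return get_nbv(camera_pose)     # response layout is an assumption

def invalidate_frustum(camera_pose):
    """Invalidate all hypotheses inside the frustum of the given camera pose."""
    update = rospy.ServiceProxy("/nbv/update_point_cloud", UpdatePointCloud)
    return update(camera_pose)      # request layout is an assumption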

Needed packages

Needed software

Start system

Run roslaunch asr_next_best_view next_best_view_core_sim.launch to start the next_best_view node for the simulation, or roslaunch asr_next_best_view next_best_view_core_real.launch to start it in a real environment.

ROS Nodes

Subscribed Topics

Published Topics

The topics published by the next_best_view node are meant only for visualization.

Parameters

Note that components from the asr_robot_model_services and costmap_2d packages are used, and these also require parameters. The launch files of the asr_next_best_view package therefore include parameter files from the asr_robot_model_services package, and a configuration file for the costmap_2d package is provided under param/costmap_params.yaml.

Needed Services

Provided Services

Tutorials

  • AsrNextBestViewSetPointCloud
  • GetNextBestView
  • UpdatePointCloud

When using this package, please cite the following publication. Thank you!

