Robot Setup
Bring up the robot and position it at the edge of a table, facing the table. When manipulating objects on tables, the workspace of the arms is generally increased by raising the robot torso all the way up.
Place an object on the table, within reach of the robot, and point the head of the robot so that the narrow_stereo (or Kinect, if you are using that instead) point cloud contains part of the table as well as the object. Also make sure that the arms of the robot are off to the side, so that they do not obstruct the robot's view of the table.
When you are done, the rviz view of the robot and the narrow_stereo point cloud should show both the table and the object.
For a list of useful rviz markers to be monitored while running the manipulation pipeline see the Troubleshooting page.
Starting the Package
We will create a new package for our application, called pr2_pick_and_place_tutorial. This tutorial depends on four other packages, listed in the package creation command.
roscreate-pkg pr2_pick_and_place_tutorial actionlib object_manipulation_msgs tabletop_object_detector tabletop_collision_map_processing
Writing the Code
We will be adding all the code below in a single source file, in the main function. First we will add the necessary #include directives and a stub for the main function.
#include <ros/ros.h>

#include <actionlib/client/simple_action_client.h>
#include <tabletop_object_detector/TabletopDetection.h>
#include <tabletop_collision_map_processing/TabletopCollisionMapProcessing.h>
#include <object_manipulation_msgs/PickupAction.h>
#include <object_manipulation_msgs/PlaceAction.h>

int main(int argc, char **argv)
{
}
All the code snippets below then get added, in order, to the main function. At the end of this tutorial you will find a complete snapshot of the source file with all the functionality added in.
Initialization of ROS Service and Action Clients
We will first initialize our ROS node and create the service and action clients we need. We need four clients:
- a service client for object detection
- a service client for the service that takes the detection result and integrates it into the collision environment
- an action client for the Pickup action
- an action client for the Place action
We initialize all these clients like this:
//initialize the ROS node
ros::init(argc, argv, "pick_and_place_app");
ros::NodeHandle nh;

//set service and action names
const std::string OBJECT_DETECTION_SERVICE_NAME =
  "/object_detection";
const std::string COLLISION_PROCESSING_SERVICE_NAME =
  "/tabletop_collision_map_processing/tabletop_collision_map_processing";
const std::string PICKUP_ACTION_NAME =
  "/object_manipulator/object_manipulator_pickup";
const std::string PLACE_ACTION_NAME =
  "/object_manipulator/object_manipulator_place";

//create service and action clients
ros::ServiceClient object_detection_srv;
ros::ServiceClient collision_processing_srv;
actionlib::SimpleActionClient<object_manipulation_msgs::PickupAction>
  pickup_client(PICKUP_ACTION_NAME, true);
actionlib::SimpleActionClient<object_manipulation_msgs::PlaceAction>
  place_client(PLACE_ACTION_NAME, true);

//wait for detection client
while ( !ros::service::waitForService(OBJECT_DETECTION_SERVICE_NAME,
                                      ros::Duration(2.0)) && nh.ok() )
{
  ROS_INFO("Waiting for object detection service to come up");
}
if (!nh.ok()) exit(0);
object_detection_srv =
  nh.serviceClient<tabletop_object_detector::TabletopDetection>
  (OBJECT_DETECTION_SERVICE_NAME, true);

//wait for collision map processing client
while ( !ros::service::waitForService(COLLISION_PROCESSING_SERVICE_NAME,
                                      ros::Duration(2.0)) && nh.ok() )
{
  ROS_INFO("Waiting for collision processing service to come up");
}
if (!nh.ok()) exit(0);
collision_processing_srv =
  nh.serviceClient
  <tabletop_collision_map_processing::TabletopCollisionMapProcessing>
  (COLLISION_PROCESSING_SERVICE_NAME, true);

//wait for pickup client
while(!pickup_client.waitForServer(ros::Duration(2.0)) && nh.ok())
{
  ROS_INFO_STREAM("Waiting for action client " << PICKUP_ACTION_NAME);
}
if (!nh.ok()) exit(0);

//wait for place client
while(!place_client.waitForServer(ros::Duration(2.0)) && nh.ok())
{
  ROS_INFO_STREAM("Waiting for action client " << PLACE_ACTION_NAME);
}
if (!nh.ok()) exit(0);
Object Detection
We will now call the tabletop_object_detector, using the TabletopDetection service:
//call the tabletop detection
ROS_INFO("Calling tabletop detector");
tabletop_object_detector::TabletopDetection detection_call;
//we want the individual object point clouds returned as well
detection_call.request.return_clusters = true;
//we want recognized database objects returned
//set this to false if you are using the pipeline without the database
detection_call.request.return_models = true;
detection_call.request.num_models = 1;
if (!object_detection_srv.call(detection_call))
{
  ROS_ERROR("Tabletop detection service failed");
  return -1;
}
if (detection_call.response.detection.result !=
    detection_call.response.detection.SUCCESS)
{
  ROS_ERROR("Tabletop detection returned error code %d",
            detection_call.response.detection.result);
  return -1;
}
if (detection_call.response.detection.clusters.empty() &&
    detection_call.response.detection.models.empty() )
{
  ROS_ERROR("The tabletop detector detected the table, "
            "but found no objects");
  return -1;
}
Note that we are asking both for recognized database models and for the individual point clouds of all objects, known and unknown, to be returned. If you have started the manipulation pipeline without using the database, simply set detection_call.request.return_models = false; in this service call. Objects will still be returned as unrecognized point clouds.
Also note that the 'table' is simply detected as the largest plane in the scene; currently there is no orientation constraint. If the largest plane in the scene is a wall or the floor, the robot will look for objects on that surface. Also, pieces of the robot's arms can be mistaken for objects, so make sure they are well off to the side and out of view before running.
When the object detection call is run, you can inspect its result in rviz. Enable Markers on the following topics (need to be enabled before the call):
/tabletop_object_segmentation_markers - these markers show the results of segmentation, with the table and clusters on top
/tabletop_object_recognition_markers - these markers will show the results of the tabletop object detection, with meshes overlaying recognized objects
Collision Map Processing
Once we have the detection results, we will send them to the tabletop_collision_map_processing node. Here they will be added to the collision environment, and receive collision names that we can use to refer to them later. Note that the collision map processing service takes as part of the input the result of the detection service we just called.
//call collision map processing
ROS_INFO("Calling collision map processing");
tabletop_collision_map_processing::TabletopCollisionMapProcessing
  processing_call;
//pass the result of the tabletop detection
processing_call.request.detection_result =
  detection_call.response.detection;
//ask for the existing map and collision models to be reset
processing_call.request.reset_collision_models = true;
processing_call.request.reset_attached_models = true;
//ask for the results to be returned in base link frame
processing_call.request.desired_frame = "base_link";
if (!collision_processing_srv.call(processing_call))
{
  ROS_ERROR("Collision map processing service failed");
  return -1;
}
//the collision map processor returns instances of graspable objects
if (processing_call.response.graspable_objects.empty())
{
  ROS_ERROR("Collision map processing returned no graspable objects");
  return -1;
}
Note that we request any previous information about objects in the collision environment be reset before the new objects are added.
The TabletopCollisionMapProcessing service will return a list of GraspableObjects, which can be directly sent to a pickup action request, as shown below.
Object Pickup
We will now request a pickup of the first object in the list of GraspableObjects returned by the collision map processing service:
//call object pickup
ROS_INFO("Calling the pickup action");
object_manipulation_msgs::PickupGoal pickup_goal;
//pass one of the graspable objects returned
//by the collision map processor
pickup_goal.target = processing_call.response.graspable_objects.at(0);
//pass the name that the object has in the collision environment
//this name was also returned by the collision map processor
pickup_goal.collision_object_name =
  processing_call.response.collision_object_names.at(0);
//pass the collision name of the table, also returned by the collision
//map processor
pickup_goal.collision_support_surface_name =
  processing_call.response.collision_support_surface_name;
//pick up the object with the right arm
pickup_goal.arm_name = "right_arm";
//we will be lifting the object along the "vertical" direction
//which is along the z axis in the base_link frame
geometry_msgs::Vector3Stamped direction;
direction.header.stamp = ros::Time::now();
direction.header.frame_id = "base_link";
direction.vector.x = 0;
direction.vector.y = 0;
direction.vector.z = 1;
pickup_goal.lift.direction = direction;
//request a vertical lift of 10cm after grasping the object
pickup_goal.lift.desired_distance = 0.1;
pickup_goal.lift.min_distance = 0.05;
//do not use tactile-based grasping or tactile-based lift
pickup_goal.use_reactive_lift = false;
pickup_goal.use_reactive_execution = false;
//send the goal
pickup_client.sendGoal(pickup_goal);
while (!pickup_client.waitForResult(ros::Duration(10.0)))
{
  ROS_INFO("Waiting for the pickup action...");
}
object_manipulation_msgs::PickupResult pickup_result =
  *(pickup_client.getResult());
if (pickup_client.getState() != actionlib::SimpleClientGoalState::SUCCEEDED)
{
  ROS_ERROR("The pickup action has failed with result code %d",
            pickup_result.manipulation_result.value);
  return -1;
}
Note the following:
- we are passing to the pickup action goal the names used in the collision environment to refer to both our target object and the table it is resting on (we have these names from the previous step). This helps the pickup action reason about when contacts with various parts of the environment are acceptable. If one or both of these names are blank or incorrect, the pickup will still be attempted, but legal grasps might be rejected by the collision environment because they bring either the robot or the grasped object into contact with the environment.
- we are requesting that the pickup be executed by an arm/hand combination named "right_arm". This name references a group of parameters on the parameter server, which is where the manipulation pipeline will get its information about the characteristics of the hand being used. These parameters are loaded on the server as part of starting the manipulation pipeline. For details on the arm description parameters look in the object_manipulator documentation, while an example parameter file for the PR2 gripper can be found in pr2_object_manipulation_launch.
- a grasp is typically executed by first moving the arm to a "pre-grasp" position, using padding to ensure the robot does not collide with the environment, then going to the grasp position. This ensures that objects are always approached the same way and allows grasps to be optimized for this.
- the direction of the pre-grasp to grasp motion is part of the characteristics of the gripper, and cannot be specified in the pickup goal.
- after the object is grasped, the pickup action will attempt to lift it to a safe location, out of contact with its support surface, from where the motion planner can be used to move the arm to some other desired position.
- the direction of the lift motion is specified in the pickup goal. In this example, we are requesting that the object be lifted along the "vertical" direction, or along the positive z direction in the base_link frame.
- the length of the lift motion is also specified in the pickup goal. Ideally, we would like the object to be lifted by 10cm, but we are willing to execute grasps for which only 5cm of lift after grasping are possible.
- as an alternative, we could request that the object be lifted in the opposite of the grasp approach direction. In this case, the lift direction would be along negative x, specified in the r_wrist_roll_link frame (see the sketch after this list). This can be useful when a vertical lift is not appropriate, for example when retrieving an object from a fridge shelf.
- if you prefer that no lift be performed and that the pickup action return as soon as the gripper has closed on the object, set the desired lift distance (pickup_goal.lift.desired_distance) to 0. However, the arm_navigation code may then have difficulty moving the arm out of that position, as it will consider the grasped object to be in collision with its support surface.
- in this example, we are leaving the desired_grasps field of the pickup goal empty. This instructs the object_manipulator to call one of its default grasp planners to generate a list of possible grasps for the object. Alternatively, you can specify your own list of grasps to be attempted by filling in this field; the grasps will be tried in order until one is deemed feasible given the current collision environment and kinematic constraints.
- if you want to specify your own grasps, they must be encapsulated in the Grasp message. This message contains:
- the hand posture (finger joint angles) for the pre-grasp and the grasp
- the pose of the hand relative to the object for the grasp. For an explanation of the object pose used when specifying grasps, see the "Object Location" section of this tutorial. A minimal, hypothetical sketch of filling in a user-specified grasp is shown at the end of this section.
- during the pickup call, a call to set the planning scene is made. This takes the current octomap, generated by the tilting laser rangefinder, as well as the table and any objects we added to the scene, and sends it to the relevant motion planner(s) to use for collision avoidance.
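For reference, the alternative lift mentioned above (along the reverse of the grasp approach, expressed in the wrist frame) could be requested as in the sketch below. This is only an illustration and is not used in the rest of this tutorial:

//sketch only: lift along the reverse of the grasp approach direction,
//expressed in the PR2's right wrist frame, instead of the vertical lift
//used in this tutorial
geometry_msgs::Vector3Stamped lift_direction;
lift_direction.header.stamp = ros::Time::now();
lift_direction.header.frame_id = "r_wrist_roll_link";
lift_direction.vector.x = -1;
lift_direction.vector.y = 0;
lift_direction.vector.z = 0;
pickup_goal.lift.direction = lift_direction;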
You can look at the planning scene that was last used in rviz, as well as the current Octomap point cloud, and the attached object collision box (the last only if the object was successfully picked up). Enable Markers on the following topics:
/planning_scene_markers - these markers show the last planning scene used for planning.
/occupied_cells - these markers will show the current Octomap point cloud.
/attached_objects - these markers show any objects that are attached to the grippers.
If the pickup failed, it can be helpful to look at where the robot was trying to grasp, as well as what the robot thinks is in collision. Enable Markers on the following topics:
/object_manipulator/grasp_execution_markers - these markers show arrows for grasps that are being attempted.
/kinematics_collisions - these markers show small red spheres for collisions with grasps that were attempted (if you change from 'Move Camera' to 'Select' in rviz, and select one such sphere, then enable the 'Selection' view, you can see what two bodies are colliding), as well as a ghostly arm in the colliding postures.
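For illustration only, a user-specified grasp could be filled in as in the sketch below and added to the goal before pickup_client.sendGoal is called. The joint name and all numeric values are placeholders, not a working PR2 gripper configuration; the real joint names and postures for your gripper come from the hand description parameters mentioned above.

//sketch only: a hypothetical user-specified grasp
object_manipulation_msgs::Grasp grasp;
//hand posture before approaching the object (fingers open);
//"my_gripper_joint" and the numeric values are placeholders
grasp.pre_grasp_posture.name.push_back("my_gripper_joint");
grasp.pre_grasp_posture.position.push_back(0.08);
//hand posture for the grasp itself (fingers closed)
grasp.grasp_posture.name.push_back("my_gripper_joint");
grasp.grasp_posture.position.push_back(0.0);
//pose of the hand relative to the object's reference frame
grasp.grasp_pose.position.x = -0.18;
grasp.grasp_pose.orientation.w = 1.0;
//grasps listed here are tried in order instead of calling the grasp planner
pickup_goal.desired_grasps.push_back(grasp);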
Object Location
Before placing the object, we must decide where we want it placed. We will create a place location by shifting the pickup location by 10cm. The frame_id in the headers of all the poses in the place_locations array should be the same, and should also be the frame in which the grasp is specified (usually copied straight from the pickup result). A place_location is a PoseStamped containing a transform (which we will call place_trans here) that, together with the transform in the grasp message's grasp_pose, determines where the robot's wrist should be at the moment the object is put down. More specifically, place_wrist_pose = place_trans * grasp_pose.
Here is how we shift the pickup location away from the robot by 10cm to obtain our place location:
//create a place location, offset by 10 cm from the pickup location
geometry_msgs::PoseStamped place_location;
place_location.header.frame_id =
  processing_call.response.graspable_objects.at(0).reference_frame_id;
//identity pose
place_location.pose.orientation.w = 1;
place_location.header.stamp = ros::Time::now();
place_location.pose.position.x += 0.1;
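Purely as an illustration of the relationship described above (the pipeline performs this computation internally, so nothing here needs to be added for the tutorial to work), the following sketch combines the place location just created with the grasp returned by the pickup action to compute where the wrist will be when the object is released. It assumes #include <tf/transform_datatypes.h> and, if not already pulled in transitively, a tf dependency in your manifest:

//illustration only: where the wrist will be when the object is released
tf::Pose place_trans, grasp_pose;
tf::poseMsgToTF(place_location.pose, place_trans);
tf::poseMsgToTF(pickup_result.grasp.grasp_pose, grasp_pose);
tf::Pose place_wrist_pose = place_trans * grasp_pose;
ROS_INFO("Place will put the wrist at (%.3f, %.3f, %.3f) in frame %s",
         place_wrist_pose.getOrigin().x(),
         place_wrist_pose.getOrigin().y(),
         place_wrist_pose.getOrigin().z(),
         place_location.header.frame_id.c_str());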
Object Place
Now that we have our desired place location, let's send it to the Place action:
//put the object down
ROS_INFO("Calling the place action");
object_manipulation_msgs::PlaceGoal place_goal;
//place at the prepared location
place_goal.place_locations.push_back(place_location);
//the collision names of both the object and the table
//same as in the pickup action
place_goal.collision_object_name =
  processing_call.response.collision_object_names.at(0);
place_goal.collision_support_surface_name =
  processing_call.response.collision_support_surface_name;
//information about which grasp was executed on the object,
//returned by the pickup action
place_goal.grasp = pickup_result.grasp;
//use the right arm to place
place_goal.arm_name = "right_arm";
//padding used when determining if the requested place location
//would bring the object in collision with the environment
place_goal.place_padding = 0.02;
//how much the gripper should retreat after placing the object
place_goal.desired_retreat_distance = 0.1;
place_goal.min_retreat_distance = 0.05;
//we will be putting down the object along the "vertical" direction
//which is along the z axis in the base_link frame
direction.header.stamp = ros::Time::now();
direction.header.frame_id = "base_link";
direction.vector.x = 0;
direction.vector.y = 0;
direction.vector.z = -1;
place_goal.approach.direction = direction;
//request a vertical put down motion of 10cm before placing the object
place_goal.approach.desired_distance = 0.1;
place_goal.approach.min_distance = 0.05;
//we are not using tactile based placing
place_goal.use_reactive_place = false;
//send the goal
place_client.sendGoal(place_goal);
while (!place_client.waitForResult(ros::Duration(10.0)))
{
  ROS_INFO("Waiting for the place action...");
}
object_manipulation_msgs::PlaceResult place_result =
  *(place_client.getResult());
if (place_client.getState() !=
    actionlib::SimpleClientGoalState::SUCCEEDED)
{
  ROS_ERROR("Place failed with error code %d",
            place_result.manipulation_result.value);
  return -1;
}
Note the following:
- just like in the Pickup action, we supply the collision names of both the grasped object and the support surface. They serve similar roles as in the pickup action.
- mirroring the pickup action, we have an "approach direction", which will be used by the gripper holding the object to approach the drop location, and a "retreat" which will be performed by the gripper after releasing the object. We specify the approach direction in the place request, but the retreat direction is a characteristic of the gripper. We only specify how much we want the gripper to retreat along this direction after releasing the object.
- just like for pickup, we specify desired approach and retreat distances, but those might not be feasible due to obstacles in the environment or kinematic constraints.
- in this example, we approach the table by going "down", along negative z in the base_link frame. We ask for the approach motion to be 10cm, but are willing to accept the place location even if only 5cm of approach are possible.
- we also ask for the gripper to retreat for 10cm after releasing the object, but are willing to accept the place location if at least 5cm are possible.
- the additional parameter place_padding is used to decide if the requested place location brings the object into collision with the environment. The object's collision model is padded by this amount when the check is performed; use a smaller padding to fit objects into smaller spaces, or a larger padding to be safer when avoiding collisions.
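Note also that place_locations is an array: you can pass several candidate place poses in a single goal, and the place action should then try them in order until it finds a feasible one. A small sketch, with an arbitrary extra 5cm offset:

//sketch only: offer a second, slightly farther candidate place location
//(this would go before place_client.sendGoal in the code above)
geometry_msgs::PoseStamped alternative_location = place_location;
alternative_location.pose.position.x += 0.05;
place_goal.place_locations.push_back(alternative_location);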
Putting It All Together
After assembling the code above into your main function, you should end up with something resembling the complete file included below. Add the appropriate entry in your CMakeLists.txt to build this source file into an executable, then:
- make sure the robot is positioned as described at the beginning of this tutorial
- start the manipulation pipeline (see the relevant tutorial here)
- run this executable
The robot should pick up the object, move it by 10cm, then place it back down.
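For the CMakeLists.txt entry mentioned above, a single rosbuild line is usually enough. The executable and source file names below are only examples; adjust them to match your own file:

rosbuild_add_executable(pick_and_place_app src/pick_and_place_app.cpp)

You can then build with rosmake pr2_pick_and_place_tutorial and run the resulting executable with rosrun.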
Troubleshooting
You can find a list of useful rviz markers for monitoring pick and place operations on the troubleshooting page.
Next Steps
From here, you should be able to integrate pick and place functionality into your desired application. You can find more details in the documentation pages of the individual manipulation pipeline stacks.
Additional examples in both C++ and Python can be found in the pr2_pick_and_place_demos package.
The complete source code
Here is the complete source code for this tutorial.
#include <ros/ros.h>

#include <actionlib/client/simple_action_client.h>
#include <tabletop_object_detector/TabletopDetection.h>
#include <tabletop_collision_map_processing/TabletopCollisionMapProcessing.h>
#include <object_manipulation_msgs/PickupAction.h>
#include <object_manipulation_msgs/PlaceAction.h>

int main(int argc, char **argv)
{
  //initialize the ROS node
  ros::init(argc, argv, "pick_and_place_app");
  ros::NodeHandle nh;

  //set service and action names
  const std::string OBJECT_DETECTION_SERVICE_NAME =
    "/object_detection";
  const std::string COLLISION_PROCESSING_SERVICE_NAME =
    "/tabletop_collision_map_processing/tabletop_collision_map_processing";
  const std::string PICKUP_ACTION_NAME =
    "/object_manipulator/object_manipulator_pickup";
  const std::string PLACE_ACTION_NAME =
    "/object_manipulator/object_manipulator_place";

  //create service and action clients
  ros::ServiceClient object_detection_srv;
  ros::ServiceClient collision_processing_srv;
  actionlib::SimpleActionClient<object_manipulation_msgs::PickupAction>
    pickup_client(PICKUP_ACTION_NAME, true);
  actionlib::SimpleActionClient<object_manipulation_msgs::PlaceAction>
    place_client(PLACE_ACTION_NAME, true);

  //wait for detection client
  while ( !ros::service::waitForService(OBJECT_DETECTION_SERVICE_NAME,
                                        ros::Duration(2.0)) && nh.ok() )
  {
    ROS_INFO("Waiting for object detection service to come up");
  }
  if (!nh.ok()) exit(0);
  object_detection_srv =
    nh.serviceClient<tabletop_object_detector::TabletopDetection>
    (OBJECT_DETECTION_SERVICE_NAME, true);

  //wait for collision map processing client
  while ( !ros::service::waitForService(COLLISION_PROCESSING_SERVICE_NAME,
                                        ros::Duration(2.0)) && nh.ok() )
  {
    ROS_INFO("Waiting for collision processing service to come up");
  }
  if (!nh.ok()) exit(0);
  collision_processing_srv =
    nh.serviceClient
    <tabletop_collision_map_processing::TabletopCollisionMapProcessing>
    (COLLISION_PROCESSING_SERVICE_NAME, true);

  //wait for pickup client
  while(!pickup_client.waitForServer(ros::Duration(2.0)) && nh.ok())
  {
    ROS_INFO_STREAM("Waiting for action client " << PICKUP_ACTION_NAME);
  }
  if (!nh.ok()) exit(0);

  //wait for place client
  while(!place_client.waitForServer(ros::Duration(2.0)) && nh.ok())
  {
    ROS_INFO_STREAM("Waiting for action client " << PLACE_ACTION_NAME);
  }
  if (!nh.ok()) exit(0);

  //call the tabletop detection
  ROS_INFO("Calling tabletop detector");
  tabletop_object_detector::TabletopDetection detection_call;
  //we want the individual object point clouds returned as well
  detection_call.request.return_clusters = true;
  //we want recognized database objects returned
  //set this to false if you are using the pipeline without the database
  detection_call.request.return_models = true;
  detection_call.request.num_models = 1;
  if (!object_detection_srv.call(detection_call))
  {
    ROS_ERROR("Tabletop detection service failed");
    return -1;
  }
  if (detection_call.response.detection.result !=
      detection_call.response.detection.SUCCESS)
  {
    ROS_ERROR("Tabletop detection returned error code %d",
              detection_call.response.detection.result);
    return -1;
  }
  if (detection_call.response.detection.clusters.empty() &&
      detection_call.response.detection.models.empty() )
  {
    ROS_ERROR("The tabletop detector detected the table, "
              "but found no objects");
    return -1;
  }

  //call collision map processing
  ROS_INFO("Calling collision map processing");
  tabletop_collision_map_processing::TabletopCollisionMapProcessing
    processing_call;
  //pass the result of the tabletop detection
  processing_call.request.detection_result =
    detection_call.response.detection;
  //ask for the existing map and collision models to be reset
  processing_call.request.reset_collision_models = true;
  processing_call.request.reset_attached_models = true;
  //ask for the results to be returned in base link frame
  processing_call.request.desired_frame = "base_link";
  if (!collision_processing_srv.call(processing_call))
  {
    ROS_ERROR("Collision map processing service failed");
    return -1;
  }
  //the collision map processor returns instances of graspable objects
  if (processing_call.response.graspable_objects.empty())
  {
    ROS_ERROR("Collision map processing returned no graspable objects");
    return -1;
  }

  //call object pickup
  ROS_INFO("Calling the pickup action");
  object_manipulation_msgs::PickupGoal pickup_goal;
  //pass one of the graspable objects returned
  //by the collision map processor
  pickup_goal.target = processing_call.response.graspable_objects.at(0);
  //pass the name that the object has in the collision environment
  //this name was also returned by the collision map processor
  pickup_goal.collision_object_name =
    processing_call.response.collision_object_names.at(0);
  //pass the collision name of the table, also returned by the collision
  //map processor
  pickup_goal.collision_support_surface_name =
    processing_call.response.collision_support_surface_name;
  //pick up the object with the right arm
  pickup_goal.arm_name = "right_arm";
  //we will be lifting the object along the "vertical" direction
  //which is along the z axis in the base_link frame
  geometry_msgs::Vector3Stamped direction;
  direction.header.stamp = ros::Time::now();
  direction.header.frame_id = "base_link";
  direction.vector.x = 0;
  direction.vector.y = 0;
  direction.vector.z = 1;
  pickup_goal.lift.direction = direction;
  //request a vertical lift of 10cm after grasping the object
  pickup_goal.lift.desired_distance = 0.1;
  pickup_goal.lift.min_distance = 0.05;
  //do not use tactile-based grasping or tactile-based lift
  pickup_goal.use_reactive_lift = false;
  pickup_goal.use_reactive_execution = false;
  //send the goal
  pickup_client.sendGoal(pickup_goal);
  while (!pickup_client.waitForResult(ros::Duration(10.0)))
  {
    ROS_INFO("Waiting for the pickup action...");
  }
  object_manipulation_msgs::PickupResult pickup_result =
    *(pickup_client.getResult());
  if (pickup_client.getState() != actionlib::SimpleClientGoalState::SUCCEEDED)
  {
    ROS_ERROR("The pickup action has failed with result code %d",
              pickup_result.manipulation_result.value);
    return -1;
  }

  //create a place location, offset by 10 cm from the pickup location
  geometry_msgs::PoseStamped place_location;
  place_location.header.frame_id =
    processing_call.response.graspable_objects.at(0).reference_frame_id;
  //identity pose
  place_location.pose.orientation.w = 1;
  place_location.header.stamp = ros::Time::now();
  place_location.pose.position.x += 0.1;

  //put the object down
  ROS_INFO("Calling the place action");
  object_manipulation_msgs::PlaceGoal place_goal;
  //place at the prepared location
  place_goal.place_locations.push_back(place_location);
  //the collision names of both the object and the table
  //same as in the pickup action
  place_goal.collision_object_name =
    processing_call.response.collision_object_names.at(0);
  place_goal.collision_support_surface_name =
    processing_call.response.collision_support_surface_name;
  //information about which grasp was executed on the object,
  //returned by the pickup action
  place_goal.grasp = pickup_result.grasp;
  //use the right arm to place
  place_goal.arm_name = "right_arm";
  //padding used when determining if the requested place location
  //would bring the object in collision with the environment
  place_goal.place_padding = 0.02;
  //how much the gripper should retreat after placing the object
  place_goal.desired_retreat_distance = 0.1;
  place_goal.min_retreat_distance = 0.05;
  //we will be putting down the object along the "vertical" direction
  //which is along the z axis in the base_link frame
  direction.header.stamp = ros::Time::now();
  direction.header.frame_id = "base_link";
  direction.vector.x = 0;
  direction.vector.y = 0;
  direction.vector.z = -1;
  place_goal.approach.direction = direction;
  //request a vertical put down motion of 10cm before placing the object
  place_goal.approach.desired_distance = 0.1;
  place_goal.approach.min_distance = 0.05;
  //we are not using tactile based placing
  place_goal.use_reactive_place = false;
  //send the goal
  place_client.sendGoal(place_goal);
  while (!place_client.waitForResult(ros::Duration(10.0)))
  {
    ROS_INFO("Waiting for the place action...");
  }
  object_manipulation_msgs::PlaceResult place_result =
    *(place_client.getResult());
  if (place_client.getState() !=
      actionlib::SimpleClientGoalState::SUCCEEDED)
  {
    ROS_ERROR("Place failed with error code %d",
              place_result.manipulation_result.value);
    return -1;
  }

  //success!
  ROS_INFO("Success! Object moved.");
  return 0;
}