Contents
Overview
Estimates the drone's position based on received navdata, the sent control commands, and PTAM. Requires messages to be published on both /ardrone/navdata (>100Hz) and /ardrone/image_raw (>10Hz), i.e. a connected drone with a running ardrone_autonomy node, or a .bag replay of at least those two channels.
ardrone_autonomy should be started with:
rosrun ardrone_autonomy ardrone_driver _navdata_demo:=0 _loop_rate:=500
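Before starting this node, you can verify that both required topics arrive at sufficient rates with the standard rostopic tool:
rostopic hz /ardrone/navdata
rostopic hz /ardrone/image_raw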
To properly estimate PTAM's scale, it is best to fly up and down a little bit (e.g. 1m up and 1m down) immediately after initialization. The methods used are described in the following publications:
Camera-Based Navigation of a Low-Cost Quadrocopter (J. Engel, J. Sturm, D. Cremers)
Accurate Figure Flying with a Quadrocopter Using Onboard Visual and Inertial Sensing (J. Engel, J. Sturm, D. Cremers)
This node in particular implements
- the described EKF for state-estimation, including the dynamic model of the drone
- time delay compensation
- scale estimation
- incorporation of PTAM
Functionality
There are two windows: one shows the video and PTAM's map points, the other shows the map. To issue key commands, focus the respective window and hit a key. This generates a command on /tum_ardrone/com, which is in turn parsed and acted upon:
Video Window
Key Assignment:
r -> "p reset": resets PTAM. u -> "p toggleUI": toggles debug info to be displayed space -> "p space": takes first / second keyframe for PTAM's initialization k -> "p keyframe": forces PTAM to take a keyframe. l -> "toggleLog": starts / stops extensive logging of all kinds of values to a file. m -> "p toggleLockMap": locks map, equivalent to parameter PTAMMapLock. May be overwritten by the dyn. config parameter setting. n -> "p toggleLockSync": locks sync, equivalent to parameter PTAMSyncLock. May be overwritten by the dyn. config parameter setting.
Setting target using the mouse:
Clicking on the video window will generate way points, which are sent to drone_autopilot (if running):
- left-click: fly (x,y,0)m relative to the current position. The image center corresponds to (0,0), the image borders to 2m in each direction.
- right-click: fly (0,0,y)m and rotate yaw by x degrees. The image center corresponds to (0,0), the image borders to 2m and 90 degrees respectively.
Map Window
Key Assignment:
r -> "f reset": resets EKF and PTAM. u -> "m toggleUI": toggles debug info to be displayed v -> "m resetView": resets viewpoint of viewer l -> "toggleLog": starts / stops extensive logging of all kinds of values to a file. v -> "m clearTrail": clears green drone-trail.
Tips for good PTAM and Scale Estimation Performance
- PTAM works best in indoor environments with plenty of "structure" in the camera's field of view, that is, cupboards, desks, furniture, objects, texture etc. It does not work well with trees and plants, in particular in combination with wind, as they tend to violate the "static-world" assumption.
- PTAM cannot track well through rotation without adequate translation, so try not to change the drone's yaw too much.
- Keypoints should ideally be at a distance of 2m to 10m. At smaller distances the drone moves too quickly; at larger distances the pose estimate becomes too inaccurate (although this has not been tested extensively).
- Ideally, keypoints should be found at different depths, i.e. they should NOT all lie on the same plane (a single textured wall is, contrary to popular belief, not good for PTAM's performance when using a small field of view). With only coplanar points it is difficult to distinguish between e.g. vertical motion and pitch-rotation.
- Flying up and down makes the scale estimate more accurate, as the main source of metric information is the ultrasound altimeter. Flying only horizontally, in particular over uneven ground (where the altimeter becomes useless), leads to bad scale estimates and hence bad flight performance.
Recording & Playing back flights
You can record data from a (manual) flight using
rosbag record -O flight.bag /ardrone/image_raw /ardrone/navdata /cmd_vel
and then play it back using
rosbag play -l flight.bag
This can be used to test the state estimation etc. without risking a crash of your drone or having to actually fly at the same time. Remember to shut down the ardrone_driver node and to set the Control Source to "None" (or to shut down drone_gui) when replaying a flight, otherwise the live and replayed video / navdata / control commands will mix.
Parameters
~publishFreq: frequency at which the drone's estimated position is calculated & published. Default: 30Hz
~calibFile: camera calibration file. If not set, the defaults are used (camcalib/ardroneX_default.txt).
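For example, both parameters can be set as private parameters when starting the node; the executable name drone_stateestimation is assumed here, and the calibration file path is a placeholder:
rosrun tum_ardrone drone_stateestimation _publishFreq:=60 _calibFile:=/path/to/camcalib.txt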
Topics
- reads /ardrone/navdata
- reads /ardrone/image_raw
- reads /cmd_vel
- writes /ardrone/predictedPose
- reads & writes /tum_ardrone/com
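While the node is running, the published estimate can be inspected with the standard rostopic tool:
rostopic echo /ardrone/predictedPose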
Dynamically Reconfigurable Parameters
See the dynamic_reconfigure package for details on dynamically reconfigurable parameters.
- UseControlGains: whether to use control gains for EKF prediction.
- UsePTAM: whether to use PTAM pose estimates as EKF update.
- UseNavdata: whether to use navdata information for EKF update.
  => If UsePTAM and UseNavdata are both set to false, the EKF is never updated and acts as a pure simulator, predicting the pose based on the control commands received (on /cmd_vel). Good for experimenting.
- PTAMMapLock: lock PTAM map (no more keyframes are added).
- PTAMSyncLock: lock PTAM map sync (fix scale and pose offsets etc.).
- PTAMMaxKF: maximum number of keyframes PTAM takes.
- PTAMMinKFDist: min. distance between two keyframes (in meters).
- PTAMMinKFWiggleDist: min. distance between two keyframes (relative to mean scene depth).
- PTAMMinKFTimeDiff: min. time between two keyframes.
  => PTAM takes a new keyframe if (PTAMMinKFTimeDiff && (PTAMMinKFDist || PTAMMinKFWiggleDist)), and tracking is good etc.
- RescaleFixOrigin: if the scale of the map is re-estimated, only one point in the mapping PTAM <-> world remains fixed.
  => If RescaleFixOrigin == false, this is the current pose of the drone (to avoid sudden, large "jumps"). This however makes the map "drift".
  => If RescaleFixOrigin == true, by default this is the initialization point where the second keyframe was taken (the drone position may jump suddenly, but the map remains fixed). The fixpoint may be set by the command "lockScaleFP".
- c1 ... c8: prediction model parameters of the EKF. See the publications.
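These parameters can be changed at runtime with the standard dynamic_reconfigure tools, for example (the node name /drone_stateestimation is an assumption, check rosnode list for the actual name):
rosrun rqt_reconfigure rqt_reconfigure
rosrun dynamic_reconfigure dynparam set /drone_stateestimation UseControlGains false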