2-D global (3 DoF) localization questions #18
Hi Carlos!
First of all, thanks for sharing your research work, which is quite fascinating.
I've successfully installed drl from source and recently I've been looking through your code. I'm trying to run your algorithm on my own 2-D laser data to accomplish 3 DoF (position_x, position_y, angle_yaw) localization tasks. However, I have a few questions which I hope you could clarify for me:
By the way, I wrongly posted this as issue #17; you can delete it. Sorry about that.
Thanks a lot, and looking forward to your early reply!
Best Regards,
William
Comments
Hello :)
The default configurations of drl should be generic enough to be usable for your 3 DoF localization use case. Let me know how drl performs in your use case. P.S. Have a nice day :)
Good morning Carlos! I've specified the critical parameters and implemented the algorithm following your suggestions. However, there seems to be a problem with the tf tree. The tf tree, modified launch file, .yaml file, .pgm file, and 3 DoF localization rosbag are uploaded for your reference. Any idea what could be causing this problem? Cheers :) Best Regards, William
Good night :) I looked at the files you uploaded and tested the rosbag.
drl uses the ros tf conventions for navigation (REP 105), which state that the localization system should publish the tf odom -> map while the odometry system publishes the tf base_link -> odom.
In your case, if you do not have odometry information, you can just specify an identity transformation with a static transform publisher.
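A minimal sketch of such an identity transform (frame names taken from this thread; the node name is illustrative):

```xml
<!-- Identity odometry: static parent (odom) -> child (base_link) transform,
     republished every 100 ms (args: x y z yaw pitch roll parent child period_ms) -->
<node pkg="tf" type="static_transform_publisher" name="static_identity_odometry"
      args="0 0 0 0 0 0 odom base_link 100" />
```

Have a nice day :)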
Hi Carlos! I followed your steps and there are no more warnings about the tf tree.
I tried to fine-tune the parameters following your suggestions about the initial algorithm parameterization. However, the results seem no different. I even manually set the ground-truth initial 3 DoF pose in the launch file. I also tried another repetitive environment and the problems are much the same. So I'm wondering: is there any parameter that needs specific tuning? Or is it because the experimental corridor environment is too repetitive/ambiguous for the robot to match the features? Best Regards, William
Hello :)
drl publishes two topics with the inliers and outliers that you can view in rviz (inliers -> points that are considered correctly aligned, outliers -> sensor points that are too far from any map data). This can give you a good idea of how drl evaluates the overlap of the sensor data with the map.

For initial alignment, the critical parameters are the keypoint detector and descriptor, along with the sample_consensus_initial_alignment modules. For tracking, the critical parameters are the correspondence distance, the maximum number of iterations, and the time limit given to iterative closest point.

If you have a sensor with a lot of noise, you may need to increase the distance threshold of the outlier detector. If you expect that the robot will only see a small part of the map (due to occlusions or people walking around), you should increase the maximum allowed percentage of outliers (1 -> 100%; I do not recommend setting it higher than 0.9, since an overlap of at least 10% is required: the confidence in any given map matching localization system is inversely proportional to the percentage of outliers, so the higher the overlap between the sensor data and the map, the more confident you can be that the localization system is working properly).

For testing the tracking only, you can give an initial alignment pose using rviz (in the map tf frame). You can see which modules are being loaded in dynamic_robot_localization_system.launch.
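As an illustration of the outlier-related settings discussed above (the parameter names and structure below are assumptions, not the real drl schema; see drl_configs.yaml for the actual names):

```yaml
# Hypothetical parameter names, for illustration only
outlier_detectors:
  max_inliers_distance: 0.1      # increase for noisy sensors (meters)
tracking_matchers:
  max_outliers_percentage: 0.75  # raise toward 0.9 when large occlusions are expected
```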
More information about the parameters is available in drl_configs.yaml. You can adjust the logging level of drl and pcl using the ros_verbosity_level and pcl_verbosity_level parameters.
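For example (these two parameter names appear later in this thread; setting them as ROS params in the launch file is an assumption):

```xml
<!-- Verbosity levels for drl and pcl (e.g. DEBUG, INFO, WARN) -->
<param name="ros_verbosity_level" type="str" value="DEBUG" />
<param name="pcl_verbosity_level" type="str" value="DEBUG" />
```

Have a nice day,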
Good afternoon Carlos! I've looked through all your suggestions. Then I've recorded a new rosbag including the odometry topic as well. So I'm wondering what the possible solutions could be for my case? Using the same rosbag, I've included a video of running the AMCL algorithm with the specified ground-truth initial pose. I was hoping I could achieve something similar with your algorithm, or even better than AMCL.
By the way, I'm wondering what the function of the logging level is? I tried to adjust it, but the results seem no different. Cheers! :) Best Regards, William
Good night :) drl follows the ros navigation recommendations, and since it does not need the twist information present in nav_msgs/Odometry, it retrieves the 6 DoF pose of the odometry from the tf base_link -> odom.
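You can inspect that transform directly with the standard tf tools, for example:

```bash
# Print the odom -> base_link transform as it is being published
rosrun tf tf_echo odom base_link
```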
The odometry parameter present in dynamic_robot_localization_system.launch was for the laserscan_to_pointcloud assembler, to adjust how many scans it assembles for each point cloud it publishes (which is subscribed to by drl). You must not set the odometry topic for the pose_to_tf_publisher.

Below is a video showing how drl performed in the second bag file (drl stopped tracking when the robot reached the wall because the laser scans in the /scan topic were temporarily missing for a few seconds). You can see that drl is much more accurate and efficient than amcl (when using lookup tables, drl took less than 5 ms to process your laser data; you can inspect this information in the localization_times topic).
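For example (the topic namespace below is an assumption; check the exact name with rostopic list):

```bash
# Watch the per-scan processing times reported by drl
rostopic echo /dynamic_robot_localization/localization_times
```

The logging levels allow you to control the level of detail of the messages that are shown in the terminal. P.S. Have a nice day :)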
Hi Carlos! Following your suggestions, I've removed the odometry_topic in pose_to_tf_publisher.launch.
I was wondering, should it be publishing the tf base_link -> odom or the tf odom -> map?
Now I can see the laser scans moving in rviz. Besides, I'm also wondering where exactly I should set the initial pose. Do I only need to set it in dynamic_robot_localization_system.launch, or should I also set it in pose_to_tf_publisher.launch? Or should I manually set it in the rviz interface by using the 2D Pose Estimate tool? Thanks a lot :) Best Regards, William
Good night :) The pose_to_tf_publisher package was initially developed for publishing the tf odom -> map from pose stamped msgs (which led to the names you see in the launch file), but it evolved to publish other things, namely tf from odometry, which should be between the tf frames base_link -> odom. You also cannot have two sources of the tf base_link -> odom.

I did not tune drl for your rosbags. You can set the initial pose of the robot in its respective section in dynamic_robot_localization_system.launch, which will forward it to drl and pose_to_tf_publisher.
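For reference, that section looks something like the sketch below (the arg names are assumptions; use the ones actually present in the launch file — the example values are the ground-truth pose of the test3 rosbag mentioned later in this thread):

```xml
<!-- Hypothetical initial pose args, for illustration only -->
<arg name="robot_initial_x" default="1.5" />
<arg name="robot_initial_y" default="8.0" />
<arg name="robot_initial_yaw" default="0.0" />
```

Have a nice day :)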
Hi Carlos: Good Monday! Thanks for your patient explanations. I've looked through your suggestions. I chose to use odom.launch instead of the static transform publisher.
However, I still can not get a localization result similar to yours. I found the laser scans moving but sometimes oscillating. See below: I'm still wondering what could be the possible reason for the different localization performance with the same rosbag on your side and on my side. I can not figure this out. Am I missing any configuration? Have a nice day! :) Best Regards, William
Good night :) First, confirm that you have the latest version of drl:
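(A sketch, assuming drl was built from source in a standard catkin workspace at ~/catkin_ws:)

```bash
# Pull the latest drl sources and rebuild the workspace
cd ~/catkin_ws/src/dynamic_robot_localization
git pull
cd ~/catkin_ws
catkin_make
```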
Then, be careful how you start the launch files; otherwise you might not be running the launch files that you modified.
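For example, make sure the workspace containing your modified files is the one sourced in the terminal before launching (the workspace path is an assumption):

```bash
# Source the overlay with the modified launch files, then launch from it
source ~/catkin_ws/devel/setup.bash
roslaunch dynamic_robot_localization dynamic_robot_localization_system.launch
```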
The oscillations seen in the video I made (from 10 s to 15 s) were due to the laser field of view changing drastically: the wall in front of the robot temporarily became invisible to the lidar, and since the robot was seeing just the two parallel corridor walls, one degree of freedom became unconstrained by the sensor data, allowing the matching algorithms to slide the sensor data along the corridor. You can see that in the beginning of the video the oscillations did not happen, because the sensor data had several wall corners that constrained the 3 DoF pose of the robot, allowing stable matching.

For minimizing the oscillations further, the map should have walls that are 1 pixel thick; otherwise the matching algorithms may oscillate the sensor data within the walls.

You can set ros_verbosity_level to DEBUG to see in the terminal what drl is doing and diagnose the problem, and if necessary also set pcl_verbosity_level to DEBUG.

You should know that drl was preconfigured with timeouts for the registration algorithms. If your PC is less powerful than mine, then you may need to increase these timeouts to give the algorithms more time to converge to better solutions, and tune the convergence thresholds, such as:
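(The concrete values were lost from this thread; the sketch below only illustrates the kind of settings meant, with assumed parameter names; see drl_configs.yaml for the real ones:)

```yaml
# Hypothetical parameter names, for illustration only
initial_pose_estimators_matchers:
  max_time_seconds: 5.0       # allow more time for feature matching on slower CPUs
tracking_matchers:
  max_time_seconds: 0.05      # per-scan time budget for ICP
  convergence_epsilon: 1e-6   # threshold for deciding that registration converged
```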
Have a nice day :)
Good morning Carlos! First of all, sincere thanks again for your detailed and patient explanations. I followed and tried your suggestions. However, I still can not obtain the ideal localization results on my laptop, even with these changes. I've also recorded the whole DEBUG process, which can be seen here. I think it is probably because my laptop is not powerful enough to run DRL. Hence, I borrowed another laptop, installed DRL, and ran it. It works!!! :)

Now I have another question. I found that if the initial pose is not given, it is quite hard for DRL to successfully localize the robot. I've recorded another two rosbags, namely test2 and test3. The ground-truth initial poses for these two rosbags are (44, 18.8, 0) and (1.5, 8, 0). With the initial pose given, DRL can quickly and accurately localize the robot, as shown: Without the initial pose, DRL fails to localize the robot in most cases (more than 90% of runs fail on my laptop), as shown: So I'm wondering, is it because the feature matching technique that DRL uses performs poorly in such repetitive and ambiguous environments? Could you run these two rosbags without giving the initial pose on your laptop and check the performance of DRL? Thanks again and cheers! Best Regards, William
Hello :) I made some improvements to drl and its default configurations.
However, feature matching is more reliable for small maps with unique environment geometry. While testing the new rosbags and using rviz to set the initial pose, I noticed that you need to add to odom.launch the line below; otherwise the pose_to_tf_publisher that is sending the tf base_link -> odom will receive the initial pose from rviz and cause problems.
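(The exact line was lost from this thread; one plausible form, with the remap target name purely illustrative, would be to remap the rviz initial pose topic away from this node:)

```xml
<!-- Keep the odometry pose_to_tf_publisher from consuming rviz's /initialpose -->
<remap from="initialpose" to="initialpose_unused_by_odom" />
```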
I also added the option to disable the discarding of older poses. This way, you can restart the rosbags without needing to restart drl and odom.launch.

According to these benchmarks, my CPU is around 40% faster than yours. But you can tune drl to use less CPU, for example:
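(A sketch with assumed parameter names; see drl_configs.yaml for the real ones:)

```yaml
# Hypothetical parameter names, for illustration only
filters:
  voxel_grid_leaf_size: 0.05    # larger leaf size -> fewer points -> less CPU
tracking_matchers:
  max_number_of_iterations: 25  # fewer ICP iterations per scan
```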
Have a nice day :)
Good afternoon Carlos! Thanks a lot for your step-by-step guidance and explanations. I followed your suggestions.
As for the localization problem in repetitive environments, I agree with your points. However, I can not crop the map with gimp to the regions in which I have sensor data in the rosbags, because in a real global localization case no one knows where exactly the robot is (which is what I suppose the global localization algorithm is meant to figure out). Thanks again! Best Regards, William
Hello :) I developed and tested drl initially for ship building environments within the FP7 Carlos project. In the project use cases the robot would be operating inside a large room. In your use case, you have the robot moving inside the long corridors of a large building, so a different approach to global localization will likely be required, which means you have the opportunity to do research in this area :)

As a quick test, you could try other algorithms for global localization: for example, running amcl in the beginning until it converges to an acceptable solution, and then sending the initial pose from amcl to drl for better tracking accuracy. If for some reason drl loses tracking after a while, you could revert to amcl for finding a new initial pose, which later on would be used by drl again. This approach would amount to a supervisor for the two localization systems (alternating between them). Have a nice day :)
Hi Carlos. Sincere thanks again for all your help and guidance! Take care and all the best :) Best Regards, William
Hello :) Let me know how drl performs when integrated with amcl or your own algorithms. All the best :)
Hi Carlos! @carlosmccosta How have you been recently? I hope everything's fine. Thanks a lot :) Best Regards, William
Hello :) A few links as a starting point:
If you find packages that work well for global localization, please let me know :) Have a nice day :)
Dear Carlos, Noted with thanks! Take care! Best Regards, William
@WilliamWoo45 and @carlosmccosta
Hello, I am also facing a similar issue with repetitive, ambiguous, and featureless environments. It would be really great if, by any chance, you have come across any localization algorithm which performs well in such scenarios. If you could provide a link, that would be nice. (The links mentioned earlier do not really fit my case.)