To navigate *3D* environments with wheeled platforms, where the ground can have positive or negative slopes, obstacles can be at any height, and holes or edges may be present, *3D* information is required. This implementation uses a 360° 3D LIDAR (Velodyne Puck VLP-16 or Robosense RS-LiDAR-16) and a frontal depth camera (Intel RealSense D435). Other equivalent sensors may be used instead.
A set of filter nodes has been developed for both sensors to take obstacles, holes and ramps into account. The output of these filters feeds the 2D navigation costmaps, so it is a way of projecting the useful 3D information into 2D.
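Each filter publishes a pointcloud that the 2D costmaps consume like any other observation source. As a minimal sketch, an obstacle layer of [costmap_2d](http://wiki.ros.org/costmap_2d) could be fed like this (the topic names are placeholders, not the actual ones, which depend on each node's launch file):

```yaml
# Sketch of a costmap_2d obstacle layer fed by the filter outputs.
obstacle_layer:
  observation_sources: lidar_obstacles camera_obstacles camera_holes
  lidar_obstacles:
    data_type: PointCloud2
    topic: /lidar_obstacle_detection/obstacles    # hypothetical topic name
    marking: true
    clearing: true
  camera_obstacles:
    data_type: PointCloud2
    topic: /obstacle_detection_normals/obstacles  # hypothetical topic name
    marking: true
    clearing: true
  camera_holes:
    data_type: PointCloud2
    topic: /point_cloud_hole_detection/holes      # hypothetical topic name
    marking: true
    clearing: false
```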
The next figure shows the basic structure of the filter nodes for both the 3D LIDAR and the depth camera.
<imgsrc="images/3d_nav.jpg"alt="Basic structure of the filter nodes for both the 3D LIDAR and the depth camera sensors"width="960"height="260">
<imgsrc="images/3d_navigation.png"alt="Basic structure of the filter nodes for both the 3D LIDAR and the depth camera sensors">
The filters used for the 3D LIDAR are:
* [iri_point_cloud_angle_filter](https://gitlab.iri.upc.edu/labrobotica/ros/navigation/3d_navigation/iri_point_cloud_angle_filter): it filters out some angular sections of the LIDAR data (to avoid the robot detecting itself, for example); a minimal sketch of this kind of filter is shown after this list. Follow the link for more detailed information.
* [iri_lidar_obstacle_detection](https://gitlab.iri.upc.edu/labrobotica/ros/navigation/3d_navigation/iri_lidar_obstacle_detection): it uses the pointcloud data to detect the slope (both positive and negative) of the surrounding environment and removes from the output pointcloud those regions that are traversable by the robot. It is also capable of detecting holes. Follow the link for more detailed information.
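To illustrate the angular filtering step, the sketch below drops every point whose azimuth falls inside an excluded sector. This is a minimal example of the idea under assumed sector limits and point type, not the actual implementation:

```cpp
#include <cmath>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Remove all points whose azimuth (angle around the sensor's vertical
// axis) lies inside [min_angle, max_angle], e.g. the sector occupied
// by the robot's own body. Angles in radians, in (-pi, pi].
pcl::PointCloud<pcl::PointXYZ>::Ptr
filterAngularSector(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr &in,
                    float min_angle, float max_angle)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr out(new pcl::PointCloud<pcl::PointXYZ>);
  out->header = in->header;
  for (const auto &p : in->points)
  {
    const float azimuth = std::atan2(p.y, p.x);
    if (azimuth < min_angle || azimuth > max_angle)
      out->points.push_back(p);  // keep points outside the excluded sector
  }
  out->width = static_cast<uint32_t>(out->points.size());
  out->height = 1;
  out->is_dense = in->is_dense;
  return out;
}
```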
The filters used to pre-process the pointcloud from the depth camera before the actual detection are the following (a sketch of an equivalent chain built with the underlying PCL classes is shown after the list):
* [pc_throttle](https://gitlab.iri.upc.edu/labrobotica/ros/navigation/3d_navigation/iri_average_point_cloud): it reduces the rate of the incoming pointcloud to one more suitable for the navigation stack, reducing the computational cost of the whole filter chain. Follow the link for more detailed information.
* [iri_average_point_cloud](https://gitlab.iri.upc.edu/labrobotica/ros/navigation/3d_navigation/iri_average_point_cloud): it averages a number of pointclouds using a moving window to reduce the noise. Follow the link for more detailed information.
* [VoxelGrid](http://wiki.ros.org/pcl_ros/Tutorials/filters): it down-samples the input pointcloud to reduce the number of points to handle in the next filters. Follow the link for more detailed information.
* [PassThrough](http://wiki.ros.org/pcl_ros/Tutorials/filters): it limits the spatial region (x, y and z) to consider in the next filters. Follow the link for more detailed information.
* [StatisticalOutlierRemoval](http://wiki.ros.org/pcl_ros/Tutorials/filters): it eliminates the points whose mean distance to their neighbors deviates too much from the average of the whole cloud. Follow the link for more detailed information.
* [RadiusOutlierRemoval](http://wiki.ros.org/pcl_ros/Tutorials/filters): it eliminates all the points that don't have enough neighbors within a given search radius. Follow the link for more detailed information.
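These pcl_ros nodelets wrap standard PCL filter classes, so the chain (after the throttling and averaging steps) can be sketched offline in a few lines of C++. All parameter values below (leaf size, distance limits, neighbor counts) are illustrative assumptions, not the values used by the actual launch files:

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/passthrough.h>
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/filters/radius_outlier_removal.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// Pre-process a depth-camera cloud: down-sample, crop, remove outliers.
Cloud::Ptr preprocess(const Cloud::ConstPtr &input)
{
  Cloud::Ptr a(new Cloud), b(new Cloud);

  // Down-sample to a 5 cm voxel grid (assumed leaf size).
  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setInputCloud(input);
  voxel.setLeafSize(0.05f, 0.05f, 0.05f);
  voxel.filter(*a);

  // Keep only points between 0.2 m and 3.0 m along the camera z axis.
  pcl::PassThrough<pcl::PointXYZ> pass;
  pass.setInputCloud(a);
  pass.setFilterFieldName("z");
  pass.setFilterLimits(0.2, 3.0);
  pass.filter(*b);

  // Drop points whose mean distance to their 20 nearest neighbors is
  // more than one standard deviation above the cloud-wide average.
  pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
  sor.setInputCloud(b);
  sor.setMeanK(20);
  sor.setStddevMulThresh(1.0);
  sor.filter(*a);

  // Drop points with fewer than 5 neighbors within a 10 cm radius.
  pcl::RadiusOutlierRemoval<pcl::PointXYZ> ror;
  ror.setInputCloud(a);
  ror.setRadiusSearch(0.10);
  ror.setMinNeighborsInRadius(5);
  ror.filter(*b);

  return b;
}
```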
The node used to detect ramps and positive obstacles using the depth camera is:
* [iri_obstacle_detection_normals](https://gitlab.iri.upc.edu/labrobotica/ros/navigation/3d_navigation/iri_obstacle_detection_normals): it computes the normal of each point by fitting a plane to its neighbors, and uses the normals to estimate the local slope and decide whether it is traversable by the robot or not (see the sketch below). Follow the link for more detailed information.
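The core of this idea can be sketched with PCL's normal estimation: a point on a traversable surface has a normal close to vertical, so the angle between the normal and the vertical axis gives the local slope. This is a minimal sketch, assuming the cloud is already expressed in a frame with z pointing up and an illustrative maximum slope:

```cpp
#include <cmath>
#include <pcl/point_types.h>
#include <pcl/features/normal_estimation.h>
#include <pcl/search/kdtree.h>

// Extract the points whose local surface is too steep to traverse.
// max_slope is the steepest ramp the robot can climb (radians, assumed).
// Assumes the cloud is in a frame whose z axis is vertical; the camera
// cloud would first be transformed into such a frame.
pcl::PointCloud<pcl::PointXYZ>::Ptr
extractObstacles(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr &cloud,
                 float max_slope)
{
  // Estimate a normal per point by fitting a plane to its 30 neighbors.
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  ne.setSearchMethod(pcl::search::KdTree<pcl::PointXYZ>::Ptr(
      new pcl::search::KdTree<pcl::PointXYZ>));
  ne.setKSearch(30);
  pcl::PointCloud<pcl::Normal> normals;
  ne.compute(normals);

  // A normal aligned with the vertical axis means flat ground; the
  // further it tilts, the steeper the local slope.
  pcl::PointCloud<pcl::PointXYZ>::Ptr obstacles(
      new pcl::PointCloud<pcl::PointXYZ>);
  for (std::size_t i = 0; i < cloud->size(); ++i)
  {
    const float slope = std::acos(std::fabs(normals[i].normal_z));
    if (slope > max_slope)
      obstacles->push_back((*cloud)[i]);
  }
  return obstacles;
}
```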
The node used to detect holes and negative obstacles using the depth camera is:
* [iri_point_cloud_hole_detection](https://gitlab.iri.upc.edu/labrobotica/ros/navigation/3d_navigation/iri_point_cloud_hole_detection): it counts the points at ground level inside a given region in front of the robot. If the number of points does not reach a certain threshold, a hole is detected (see the sketch below). Follow the link for more detailed information.
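The underlying check can be sketched with a crop-box filter: select the points that fall inside a ground-level box ahead of the robot and compare the count against a threshold. The box dimensions and the threshold below are illustrative assumptions:

```cpp
#include <pcl/point_types.h>
#include <pcl/filters/crop_box.h>

// Return true if a hole (missing ground) is detected in front of the
// robot. Assumes the cloud is in the robot's base frame: x forward, z up.
bool holeDetected(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr &cloud,
                  std::size_t min_ground_points = 100)  // assumed threshold
{
  // Ground-level box 0.3-1.0 m ahead, 0.6 m wide, +/-5 cm around z = 0.
  pcl::CropBox<pcl::PointXYZ> box;
  box.setInputCloud(cloud);
  box.setMin(Eigen::Vector4f(0.3f, -0.3f, -0.05f, 1.0f));
  box.setMax(Eigen::Vector4f(1.0f,  0.3f,  0.05f, 1.0f));

  pcl::PointCloud<pcl::PointXYZ> ground;
  box.filter(ground);

  // Too few returns at ground level means the ground is not there.
  return ground.size() < min_ground_points;
}
```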
# How to install the packages
Starting from an existing workspace ([tutorial](http://wiki.ros.org/catkin/Tutorials/create_a_workspace)), merge your workspace with the following rosinstall files, following the steps in this [tutorial](http://wiki.ros.org/wstool#Merge_in_Additional_rosinstall_Files).
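A typical session would look like the following sketch, assuming the workspace lives at `~/catkin_ws` and using a placeholder name for the rosinstall file:

```bash
cd ~/catkin_ws
# Merge the rosinstall entries into the existing workspace
# (placeholder file name, substitute the actual rosinstall files).
wstool merge -t src 3d_navigation.rosinstall
# Clone or update the newly added repositories.
wstool update -t src
# Install missing system dependencies and build.
rosdep install --from-paths src --ignore-src -r -y
catkin_make
```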