diff --git a/README.md b/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..45a4d30923f252a600417461373c1e56fb1b284e
--- /dev/null
+++ b/README.md
@@ -0,0 +1,216 @@
+# OpenPose ROS
+ROS wrapper for OpenPose ( https://github.com/CMU-Perceptual-Computing-Lab/openpose ), which allows estimating 2d multi-person poses from a single RGB camera (see commit `a1e0a5f4136e702b5731a268c2993fb75ca4753c`). Based on https://github.com/stevenjj/openpose_ros .
+
+When a depth image is synchronized with the RGB image (RGB-D image), a 3d extractor node obtains a 3d pose estimate by projecting the 2d pose estimation given by OpenPose onto the point cloud of the depth image. A visualization node for the 3d results has also been implemented.
+
+
+
+# Installing the OpenPose ROS Wrapper
+
+1. Install OpenPose and its dependencies ( https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/installation.md ).
+
+**NOTE**: OpenCV 3.2 is recommended; OpenCV 2.4 might cause some errors.
+
+2. Enable the package by running, in the openpose_ros directory:
+````
+./install_openpose_and_enable_package.sh
+````
+3. If it succeeds, compile:
+````
+cd ~/catkin_ws/src
+catkin build
+cd ..
+source devel/setup.bash
+````
+
+# Running the OpenPose ROS Wrapper and 3d Pose Extractor
+The package can be divided into two modules that work independently: one for 2d pose detection, with a visualization tool like the one in OpenPose but implemented in ROS, and another for 3d pose detection, with a visualization node to view the results in RViz. Both modules use the same node to get OpenPose 2d detections, but the node and the services it provides are duplicated under different names to avoid conflicts when the 2d visualization tool and the 3d extractor node are used simultaneously.
+
+## 2d Detection Module
+This module is composed of a service node for getting 2d detections and a node for output visualization.
+
+First of all, you might want to change some things in the code to adapt it to your needs:
+
+* Go to `/openpose_ros_pkg/src/openpose_ros_node_firephinx.cpp` and change `/usb_cam/image_raw` to the topic your camera publishes its `sensor_msgs::Image` messages on:
+
+
+
+* You may also change the output image resolution. To do so, go to `/openpose_ros_pkg/src/openpose_ros_node_firephinx.cpp` and change:
+
+
+
+Once you have set those parameters, repeat step 3 of the installation.
+
+Now you can run the code. First connect an RGB camera and run the corresponding ROS drivers to start publishing the images (they must be image_raw). For example, you can connect a webcam and use https://github.com/ros-drivers/usb_cam . With these drivers, run:
+````
+roslaunch usb_cam usb_cam-test.launch
+````
+
+You should get something like:
+
+
+
+Then, initialize the 2d detection service node:
+````
+rosrun openpose_ros_pkg openpose_ros_node
+````
+If everything works fine you should see the following output in your shell:
+````
+[ INFO] [1501140533.950685432]: Initialization Successful!
+````
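+As a quick sanity check, the service this node advertises (it is registered under the name `detect_poses`) should show up in the ROS service list:
+````
+rosservice list | grep detect_poses
+````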
+Finally, to get the 2d poses of the images from the camera and visualize the output, run:
+````
+rosrun openpose_ros_pkg openpose_ros_node_firephinx
+````
+You should obtain something similar to:
+
+
+
+### Topics
+
+If everything is running correctly, the package should be publishing on the topics:
+````
+/openpose_ros/input_image
+/openpose_ros/detected_poses_image
+/openpose_ros/detected_poses_keypoints
+````
+
+* **/openpose_ros/input_image:** the images the 2d detection node takes to make the segmentation are published here.
+
+
+
+* **/openpose_ros/detected_poses_image:** images with the segmentation skeleton drawn on them are published here.
+
+
+
+* **/openpose_ros/detected_poses_keypoints:** the messages with the 2d detection keypoints (OpenPose body joints) are published on this topic. The messages have the following fields:
+  * num_people_detected: number of people in the image.
+  * person_ID: ID number assigned to each person.
+  * bodypart (e.g. nose): each bodypart has the following fields:
+    * x: x OpenPose keypoint pixel coordinate.
+    * y: y OpenPose keypoint pixel coordinate.
+    * confidence: confidence of the detection.
+
+If you write the data to a CSV, it should look like this:
+
+
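+For reference, the fields above mirror the `openpose_ros_msgs` definitions added by this package (abridged):
+````
+# openpose_ros_msgs/msg/BodypartDetection.msg
+uint32 x
+uint32 y
+float32 confidence
+
+# openpose_ros_msgs/msg/PersonDetection.msg (first fields)
+uint32 num_people_detected
+uint32 person_ID
+BodypartDetection nose
+BodypartDetection neck
+````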
+## 3d Detection Module
+This module is composed of the same 2d extractor node described in the previous section, a node for getting 3d pose detections, and a node for visualizing the output in RViz. With the visualization node we can see the resulting 3d human skeleton, the resulting 3d detections for the joints, or both at the same time. An RGB-D camera is needed to run this module.
+
+First of all, you might want to change some things in the code to adapt it to your needs:
+
+* Go to `/skeleton_extract_3d/launch/openpose_skeleton_extract.launch`. You will see this:
+
+
+
+  Here you should change `/usb_cam_3d/image_raw/` to the topic your camera publishes its `sensor_msgs::Image` messages on (RGB images). You should also change `/kinect2/qhd/points` to the topic your camera publishes its `sensor_msgs::PointCloud2` messages on (depth images).
+
+* Go to `/skeleton_extract_3d/src/skeleton_extract_3d_node.cpp` and set the input resolution of the images (the resolution of the depth and the RGB images must be the same):
+
+
+
+* Go to `/skeleton_extract_3d/src/skeleton_extract_3d_visualization_node.cpp`. You might want to change the color, shape, size, etc. of the markers. For the available options, see http://wiki.ros.org/rviz/DisplayTypes/Marker. To set the options of the body joints:
+
+
+
+To set the options of the skeleton, go to:
+
+
+
+Once you have set the options, repeat step 3 of the installation process. Now that you have configured it, you can run the code. First connect your RGB-D camera and run the corresponding ROS drivers.
+
+For example, you can use a Kinect One with the https://github.com/code-iai/iai_kinect2 ROS drivers.
+To initialize the camera with these drivers, run:
+````
+roslaunch kinect2_bridge kinect2_bridge.launch
+rosrun kinect2_viewer kinect2_viewer
+````
+
+Then you will see the camera output:
+
+
+
+Once your camera is publishing, launch the 2d extractor node and the 3d extractor node by running:
+````
+roslaunch skeleton_extract_3d openpose_skeleton_extract.launch
+````
+If everything is working fine, you should have something similar to:
+
+
+
+Then you can run the visualization node:
+````
+rosrun skeleton_extract_3d skeleton_extract_3d_visualization_node
+````
+**NOTE**: To have the fixed frame for visualization, you must run:
+````
+rosrun kinect2_bridge kinect2_bridge _publish_tf:=true
+````
+Now open RViz and select as fixed frame the one you have set for the markers. For example, I have chosen:
+````
+kinect2_ir_optical_frame
+````
+Select the topics:
+````
+/openpose_ros/skeleton_3d/visualization_markers
+/openpose_ros/skeleton_3d/visualization_skeleton
+````
+and you should have something similar to:
+
+
+
+You can also select the pointcloud at the same time:
+
+
+
+### Topics
+````
+/openpose_ros/skeleton_3d/detected_poses_image
+/openpose_ros/skeleton_3d/detected_poses_keypoints
+/openpose_ros/skeleton_3d/detected_poses_keypoints_3d
+/openpose_ros/skeleton_3d/input_pointcloud
+/openpose_ros/skeleton_3d/input_rgb
+/openpose_ros/skeleton_3d/visualization_markers
+/openpose_ros/skeleton_3d/visualization_skeleton
+````
+
+* **/openpose_ros/skeleton_3d/detected_poses_image:** the same kind of messages are published on this topic as on `/openpose_ros/detected_poses_image` in the 2d module.
+
+* **/openpose_ros/skeleton_3d/detected_poses_keypoints:** the same kind of messages are published on this topic as on `/openpose_ros/detected_poses_keypoints` in the 2d module.
+
+* **/openpose_ros/skeleton_3d/detected_poses_keypoints_3d:** the 3d detections are published on this topic. The fields are the same as in the messages published on `/openpose_ros/skeleton_3d/detected_poses_keypoints`, but the fields of each bodypart change. Now they are:
+  * x: x real world coordinate of the joint.
+  * y: y real world coordinate of the joint.
+  * z: depth real world coordinate of the joint.
+  * confidence: confidence of the 2d detection.
+
+  If you write the message to a .csv, it should look like this:
+
+
+
+* **/openpose_ros/skeleton_3d/input_pointcloud:** the point cloud synchronized with the RGB image, from which we extract the x, y, z real world coordinates of the keypoints, is published here.
+
+
+
+* **/openpose_ros/skeleton_3d/input_rgb:** the RGB image that we use to make the 2d detections is published on this topic; it is synchronized with the input point cloud.
+
+
+
+* **/openpose_ros/skeleton_3d/visualization_markers:** the markers to visualize the 3d detections of the joints in RViz are published on this topic.
+
+
+
+* **/openpose_ros/skeleton_3d/visualization_skeleton:** the skeleton to visualize the 3d detections in RViz is published on this topic.
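+If you want to use the 3d detections in your own code, a minimal subscriber could look like the sketch below (untested; the node and callback names are illustrative, while the topic and the `PersonDetection_3d` fields come from `openpose_ros_msgs`, which your package must depend on):
+````
+#include <ros/ros.h>
+#include <openpose_ros_msgs/PersonDetection_3d.h>
+
+// Print the 3d position of the nose joint for every detection received.
+void keypointsCallback(const openpose_ros_msgs::PersonDetection_3d& msg)
+{
+  ROS_INFO("person %u of %u: nose at (%.3f, %.3f, %.3f), confidence %.2f",
+           msg.person_ID, msg.num_people_detected,
+           msg.nose.x, msg.nose.y, msg.nose.z, msg.nose.confidence);
+}
+
+int main(int argc, char** argv)
+{
+  ros::init(argc, argv, "keypoints_3d_listener");  // illustrative node name
+  ros::NodeHandle nh;
+  ros::Subscriber sub = nh.subscribe(
+      "/openpose_ros/skeleton_3d/detected_poses_keypoints_3d", 1, keypointsCallback);
+  ros::spin();
+  return 0;
+}
+````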
+
+
+
+# Author
+Miguel Arduengo García
+
+Undergraduate student at CFIS-UPC (Universitat Politècnica de Catalunya)
+
+# Citation
+Please cite the tech report if it helps in your publications:
+
+Tech report to be uploaded soon.
diff --git a/openpose_ros_msgs/CMakeLists.txt b/openpose_ros_msgs/CMakeLists.txt
index a4805af7f015d469c75d2fe7f2a008cc60a26dc6..84781355f022084a5d54b3da0758a6bd87ee4f9a 100644
--- a/openpose_ros_msgs/CMakeLists.txt
+++ b/openpose_ros_msgs/CMakeLists.txt
@@ -16,6 +16,8 @@ add_message_files(
   FILES
   BodypartDetection.msg
   PersonDetection.msg
+  BodypartDetection_3d.msg
+  PersonDetection_3d.msg
 )
 
 ## Generate services in the 'srv' folder
diff --git a/openpose_ros_msgs/msg/BodypartDetection.msg b/openpose_ros_msgs/msg/BodypartDetection.msg
index 41e069a8044219c9000a0ed800defb24c544ec27..83e5cf028867181b2a32ac18aa5d10aaf8dd638e 100644
--- a/openpose_ros_msgs/msg/BodypartDetection.msg
+++ b/openpose_ros_msgs/msg/BodypartDetection.msg
@@ -1,3 +1,3 @@
 uint32 x
 uint32 y
-float32 confidence
\ No newline at end of file
+float32 confidence
diff --git a/openpose_ros_msgs/msg/BodypartDetection_3d.msg b/openpose_ros_msgs/msg/BodypartDetection_3d.msg
new file mode 100644
index 0000000000000000000000000000000000000000..072d967e2b5ff7d378ad9d61cbeeeb4c6e56d535
--- /dev/null
+++ b/openpose_ros_msgs/msg/BodypartDetection_3d.msg
@@ -0,0 +1,4 @@
+float32 x
+float32 y
+float32 z
+float32 confidence
diff --git a/openpose_ros_msgs/msg/PersonDetection.msg b/openpose_ros_msgs/msg/PersonDetection.msg
index 8a76f105999531720c91175dbd16384e3ca722b1..ea9e46b1fa72d50f0cb194f7ac63751089f9eb6b 100644
--- a/openpose_ros_msgs/msg/PersonDetection.msg
+++ b/openpose_ros_msgs/msg/PersonDetection.msg
@@ -1,3 +1,5 @@
+uint32 num_people_detected
+uint32 person_ID
 BodypartDetection nose
 BodypartDetection neck
 BodypartDetection right_shoulder
@@ -16,4 +18,4 @@ BodypartDetection right_eye
 BodypartDetection left_eye
 BodypartDetection right_ear
 BodypartDetection left_ear
-BodypartDetection chest
\ No newline at end of file
+BodypartDetection chest
diff --git a/openpose_ros_msgs/msg/PersonDetection_3d.msg b/openpose_ros_msgs/msg/PersonDetection_3d.msg
new file mode 100644
index 0000000000000000000000000000000000000000..d7a031667fcd4c734e1ea2057182d2e122986b16
--- /dev/null
+++ b/openpose_ros_msgs/msg/PersonDetection_3d.msg
@@ -0,0 +1,21 @@
+uint32 num_people_detected
+uint32 person_ID
+BodypartDetection_3d nose
+BodypartDetection_3d neck
+BodypartDetection_3d right_shoulder
+BodypartDetection_3d right_elbow
+BodypartDetection_3d right_wrist
+BodypartDetection_3d left_shoulder
+BodypartDetection_3d left_elbow
+BodypartDetection_3d left_wrist
+BodypartDetection_3d right_hip
+BodypartDetection_3d right_knee
+BodypartDetection_3d right_ankle
+BodypartDetection_3d left_hip
+BodypartDetection_3d left_knee
+BodypartDetection_3d left_ankle
+BodypartDetection_3d right_eye
+BodypartDetection_3d left_eye
+BodypartDetection_3d right_ear
+BodypartDetection_3d left_ear
+BodypartDetection_3d chest
diff --git a/openpose_ros_pkg/CMakeLists.txt b/openpose_ros_pkg/CMakeLists.txt
index b645830c7b01368449490d2d2eff7c6abe7fa3b4..802d08d267f9db068d783f027eac5f59b85ef68f 100644
--- a/openpose_ros_pkg/CMakeLists.txt
+++ b/openpose_ros_pkg/CMakeLists.txt
@@ -43,7 +43,7 @@ catkin_package(
 ###########
 
 include_directories(
 # include
-# ${catkin_INCLUDE_DIRS}
+  ${catkin_INCLUDE_DIRS}
   ${OPENPOSE_INCLUDE_DIRS}
 )
@@ -71,6 +71,15 @@ target_link_libraries(openpose_ros_node ${catkin_LIBRARIES}
 )
 add_dependencies(openpose_ros_node ${catkin_EXPORTED_TARGETS})
+add_executable(openpose_ros_node_3d src/openpose_ros_node_3d.cpp) +target_link_libraries(openpose_ros_node_3d ${catkin_LIBRARIES} + ${OPENPOSE_LIBRARY} + ${CAFFE_LIBRARY} + ${CUDA_LIBRARY} + ) +add_dependencies(openpose_ros_node_3d ${catkin_EXPORTED_TARGETS}) + + add_executable(openpose_ros_node_firephinx src/openpose_ros_node_firephinx.cpp) target_link_libraries(openpose_ros_node_firephinx ${catkin_LIBRARIES} ${OPENPOSE_LIBRARY} @@ -83,4 +92,4 @@ add_dependencies(openpose_ros_node_firephinx ${catkin_EXPORTED_TARGETS}) add_executable(test_openpose_ros_service_call src/test_openpose_ros_service_call.cpp) target_link_libraries(test_openpose_ros_service_call ${catkin_LIBRARIES}) -add_dependencies(test_openpose_ros_service_call ${catkin_EXPORTED_TARGETS}) \ No newline at end of file +add_dependencies(test_openpose_ros_service_call ${catkin_EXPORTED_TARGETS}) diff --git a/openpose_ros_pkg/src/openpose_ros_node.cpp b/openpose_ros_pkg/src/openpose_ros_node.cpp index 9d62b8887cd24ba44aa1e48937110a4cda01b666..293203bb8b396990772b36e16bfbbc0e511d2a8a 100644 --- a/openpose_ros_pkg/src/openpose_ros_node.cpp +++ b/openpose_ros_pkg/src/openpose_ros_node.cpp @@ -22,6 +22,7 @@ #include <sensor_msgs/Image.h> #include <openpose_ros_msgs/GetPersons.h> + std::shared_ptr<op::PoseExtractor> g_pose_extractor; std::shared_ptr<op::PoseRenderer> poseRenderer; std::map<unsigned int, std::string> g_bodypart_map; @@ -30,7 +31,9 @@ op::Point<int> output_size; int g_num_scales; double g_scale_gap; + ros::Publisher image_skeleton_pub; + //! //! \brief getParam Get parameter from node handle //! \param nh The nodehandle @@ -54,6 +57,27 @@ T getParam(const ros::NodeHandle& nh, const std::string& param_name, T default_v return value; } + +//declare publisher keypoints_pub +ros::Publisher keypoints_pub; + +template <typename K> +K getParamK(const ros::NodeHandle& nh, const std::string& param_name, K default_value) +{ + K value; + if (nh.hasParam(param_name)) + { + nh.getParam(param_name, value); + } + else + { + ROS_WARN_STREAM("Parameter '" << param_name << "' not found, defaults to '" << default_value << "'"); + value = default_value; + } + return value; +} + + op::PoseModel stringToPoseModel(const std::string& pose_model_string) { if (pose_model_string == "COCO") @@ -86,9 +110,11 @@ std::map<unsigned int, std::string> getBodyPartMapFromPoseModel(const op::PoseMo } } + openpose_ros_msgs::BodypartDetection getBodyPartDetectionFromArrayAndIndex(const op::Array<float>& array, size_t idx) { openpose_ros_msgs::BodypartDetection bodypart; + bodypart.x = array[idx]; bodypart.y = array[idx+1]; bodypart.confidence = array[idx+2]; @@ -118,6 +144,7 @@ bool detectPosesCallback(openpose_ros_msgs::GetPersons::Request& req, openpose_r return false; } + cv::Mat image = cv_ptr->image; // std::string path = ros::package::getPath("openpose_ros_pkg"); @@ -197,8 +224,18 @@ bool detectPosesCallback(openpose_ros_msgs::GetPersons::Request& req, openpose_r for (size_t person_idx = 0; person_idx < num_people; person_idx++) { ROS_INFO(" Person ID: %zu", person_idx); - // Initialize all bodyparts with nan + openpose_ros_msgs::PersonDetection person_msg; + + //add number of people detected + person_msg.num_people_detected = num_people; + + //add person ID + person_msg.person_ID = person_idx; + + + // Initialize all bodyparts with nan + // (commented this line) --> openpose_ros_msgs::PersonDetection person_msg; person_msg.nose = getNANBodypart(); person_msg.neck = getNANBodypart(); person_msg.right_shoulder = getNANBodypart(); @@ -224,6 +261,7 @@ bool 
detectPosesCallback(openpose_ros_msgs::GetPersons::Request& req, openpose_r size_t final_idx = 3*(person_idx*num_bodyparts + bodypart_idx); std::string body_part_string = g_bodypart_map[bodypart_idx]; + openpose_ros_msgs::BodypartDetection bodypart_detection = getBodyPartDetectionFromArrayAndIndex(poses, final_idx); if (body_part_string == "Nose") person_msg.nose = bodypart_detection; @@ -254,7 +292,12 @@ bool detectPosesCallback(openpose_ros_msgs::GetPersons::Request& req, openpose_r ROS_INFO(" (x, y, confidence): %i, %i, %f", bodypart_detection.x, bodypart_detection.y, bodypart_detection.confidence); } + + //publish keypoints data of person_msg + keypoints_pub.publish(person_msg); + res.detections.push_back(person_msg); + } ROS_INFO("Detected %d persons", (int) res.detections.size()); @@ -284,7 +327,6 @@ int main(int argc, char** argv) unsigned int num_gpu_start = getParam(local_nh, "num_gpu_start", 0); - std::string package_path = ros::package::getPath("openpose_ros_pkg"); std::string folder_location = package_path + "/../openpose/models/"; @@ -293,11 +335,14 @@ int main(int argc, char** argv) g_bodypart_map = getBodyPartMapFromPoseModel(pose_model); ros::NodeHandle nh; - ros::ServiceServer service = nh.advertiseService("detect_poses", detectPosesCallback); + ros::ServiceServer service = nh.advertiseService("detect_poses", detectPosesCallback); image_skeleton_pub = nh.advertise<sensor_msgs::Image>( "/openpose_ros/detected_poses_image", 0 ); + //declare publisher of type openpose_ros_msgs::PersonDetection in topic /openpose_ros/detected_poses_keypoints + keypoints_pub = nh.advertise<openpose_ros_msgs::PersonDetection>( "/openpose_ros/detected_poses_keypoints" , 0 ); + g_pose_extractor = std::shared_ptr<op::PoseExtractorCaffe>( /* new op::PoseExtractorCaffe(g_net_input_size, net_output_size, output_size, g_num_scales, g_scale_gap, pose_model, model_folder, num_gpu_start));*/ diff --git a/openpose_ros_pkg/src/openpose_ros_node_3d.cpp b/openpose_ros_pkg/src/openpose_ros_node_3d.cpp new file mode 100644 index 0000000000000000000000000000000000000000..bf86168179cfd4608e10f79d8a32a44e22568191 --- /dev/null +++ b/openpose_ros_pkg/src/openpose_ros_node_3d.cpp @@ -0,0 +1,363 @@ +//#define USE_CAFFE +/*#include <openpose/pose/poseExtractor.hpp> +#include <openpose/pose/poseExtractorCaffe.hpp> +#include <openpose/pose/poseParameters.hpp> + +#include <openpose/core/headers.hpp> +#include <openpose/filestream/headers.hpp> +#include <openpose/gui/headers.hpp> +#include <openpose/pose/headers.hpp> +#include <openpose/utilities/headers.hpp> +*/ +#include <openpose/headers.hpp> + +#include <std_srvs/Empty.h> +#include <ros/node_handle.h> +#include <ros/service_server.h> +#include <ros/init.h> +#include <cv_bridge/cv_bridge.h> + +#include <ros/package.h> + +#include <sensor_msgs/Image.h> +#include <openpose_ros_msgs/GetPersons.h> + + +std::shared_ptr<op::PoseExtractor> g_pose_extractor; +std::shared_ptr<op::PoseRenderer> poseRenderer; +std::map<unsigned int, std::string> g_bodypart_map; +op::Point<int> g_net_input_size; +op::Point<int> output_size; +int g_num_scales; +double g_scale_gap; + + +ros::Publisher image_skeleton_pub; + +//! +//! \brief getParam Get parameter from node handle +//! \param nh The nodehandle +//! \param param_name Key string +//! \param default_value Default value if not found +//! \return The parameter value +//! 
+template <typename T>
+T getParam(const ros::NodeHandle& nh, const std::string& param_name, T default_value)
+{
+  T value;
+  if (nh.hasParam(param_name))
+  {
+    nh.getParam(param_name, value);
+  }
+  else
+  {
+    ROS_WARN_STREAM("Parameter '" << param_name << "' not found, defaults to '" << default_value << "'");
+    value = default_value;
+  }
+  return value;
+}
+
+// declare publisher keypoints_pub
+ros::Publisher keypoints_pub;
+
+template <typename K>
+K getParamK(const ros::NodeHandle& nh, const std::string& param_name, K default_value)
+{
+  K value;
+  if (nh.hasParam(param_name))
+  {
+    nh.getParam(param_name, value);
+  }
+  else
+  {
+    ROS_WARN_STREAM("Parameter '" << param_name << "' not found, defaults to '" << default_value << "'");
+    value = default_value;
+  }
+  return value;
+}
+
+op::PoseModel stringToPoseModel(const std::string& pose_model_string)
+{
+  if (pose_model_string == "COCO")
+    return op::PoseModel::COCO_18;
+  else if (pose_model_string == "MPI")
+    return op::PoseModel::MPI_15;
+  else if (pose_model_string == "MPI_4_layers")
+    return op::PoseModel::MPI_15_4;
+  else
+  {
+    ROS_ERROR("String does not correspond to any model (COCO, MPI, MPI_4_layers)");
+    return op::PoseModel::COCO_18;
+  }
+}
+
+std::map<unsigned int, std::string> getBodyPartMapFromPoseModel(const op::PoseModel& pose_model)
+{
+  if (pose_model == op::PoseModel::COCO_18)
+  {
+    return op::POSE_COCO_BODY_PARTS;
+  }
+  else if (pose_model == op::PoseModel::MPI_15 || pose_model == op::PoseModel::MPI_15_4)
+  {
+    return op::POSE_MPI_BODY_PARTS;
+  }
+  else
+  {
+    ROS_FATAL("Invalid pose model, no map present");
+    exit(1);
+  }
+}
+
+
+openpose_ros_msgs::BodypartDetection getBodyPartDetectionFromArrayAndIndex(const op::Array<float>& array, size_t idx)
+{
+  openpose_ros_msgs::BodypartDetection bodypart;
+
+  bodypart.x = array[idx];
+  bodypart.y = array[idx+1];
+  bodypart.confidence = array[idx+2];
+  return bodypart;
+}
+
+openpose_ros_msgs::BodypartDetection getNANBodypart()
+{
+  openpose_ros_msgs::BodypartDetection bodypart;
+  bodypart.confidence = NAN;
+  return bodypart;
+}
+
+bool detectPosesCallback(openpose_ros_msgs::GetPersons::Request& req, openpose_ros_msgs::GetPersons::Response& res)
+{
+  ROS_INFO("detectPosesCallback");
+
+  // Convert ROS message to OpenCV image
+  cv_bridge::CvImagePtr cv_ptr;
+  try
+  {
+    cv_ptr = cv_bridge::toCvCopy(req.image, req.image.encoding);
+  }
+  catch (cv_bridge::Exception& e)
+  {
+    ROS_ERROR("detectPosesCallback cv_bridge exception: %s", e.what());
+    return false;
+  }
+
+
+  cv::Mat image = cv_ptr->image;
+
+//  std::string path = ros::package::getPath("openpose_ros_pkg");
+//  std::string filename = path + "/examples/COCO_val2014_000000000192.jpg";
+//  cv::Mat image = op::loadImage(filename, CV_LOAD_IMAGE_COLOR);
+
+  ROS_INFO("Parsed image");
+  ROS_INFO_STREAM("Perform forward pass with the following settings:");
+  ROS_INFO_STREAM("- net_input_size: " << g_net_input_size.x << " " << g_net_input_size.y);
+  ROS_INFO_STREAM("- num_scales: " << g_num_scales);
+  ROS_INFO_STREAM("- scale_gap: " << g_scale_gap);
+  ROS_INFO_STREAM("- image_size: " << image.size());
+  op::CvMatToOpInput cv_mat_to_op_input(g_net_input_size, g_num_scales, g_scale_gap);
+//  g_pose_extractor->forwardPass(cv_mat_to_op_input.format(image), image.size());
+
+  ROS_INFO("Initialized Net Size");
+
+  op::Array<float> netInputArray;
+  std::vector<float> scaleRatios;
+
+  std::tie(netInputArray, scaleRatios) = cv_mat_to_op_input.format(image);
+  ROS_INFO("Preparing for forward pass");
+
+
+
g_pose_extractor->forwardPass(netInputArray, {image.cols, image.rows}, scaleRatios); + + ROS_INFO("g_pose_extractor->forwardPass done"); + + + + + //const op::Array<float> poses; + const op::Array<float> poses = g_pose_extractor->getPoseKeypoints(); + + // VISUALIZE OUTPUT + //const auto poseKeypoints = g_pose_extractor->getPoseKeypoints(); + + op::Point<int> outputSize(output_size.x, output_size.y); + op::CvMatToOpOutput cvMatToOpOutput{outputSize}; + op::OpOutputToCvMat opOutputToCvMat{outputSize}; + + const op::Point<int> windowedSize = outputSize; + op::FrameDisplayer frameDisplayer{windowedSize, "OpenPose Example"}; + + double scaleInputToOutput; + op::Array<float> outputArray; + std::tie(scaleInputToOutput, outputArray) = cvMatToOpOutput.format(image); + + //poseRenderer->renderPose(outputArray, poseKeypoints); + poseRenderer->renderPose(outputArray, poses); + auto outputImage = opOutputToCvMat.formatToCvMat(outputArray); + + + sensor_msgs::Image ros_image; + cv_bridge::CvImagePtr cv_ptr_out = cv_bridge::toCvCopy(req.image, req.image.encoding); + cv_ptr_out->image = outputImage; + ros_image = *(cv_ptr_out->toImageMsg()); + + + //frameDisplayer.displayFrame(outputImage, 0); // Alternative: cv::imshow(outputImage) + cv::waitKey(0) + image_skeleton_pub.publish(ros_image); + + // End Visualize Output + + + if (!poses.empty() && poses.getNumberDimensions() != 3) + { + ROS_ERROR("pose.getNumberDimensions(): %d != 3", (int) poses.getNumberDimensions()); + return false; + } + + int num_people = poses.getSize(0); + int num_bodyparts = poses.getSize(1); + + ROS_INFO("num people: %d", num_people); + + for (size_t person_idx = 0; person_idx < num_people; person_idx++) + { + ROS_INFO(" Person ID: %zu", person_idx); + + openpose_ros_msgs::PersonDetection person_msg; + + //add number of people detected + person_msg.num_people_detected = num_people; + + //add person ID + person_msg.person_ID = person_idx; + + // Initialize all bodyparts with nan + // (commented this line) --> openpose_ros_msgs::PersonDetection person_msg; + person_msg.nose = getNANBodypart(); + person_msg.neck = getNANBodypart(); + person_msg.right_shoulder = getNANBodypart(); + person_msg.right_elbow = getNANBodypart(); + person_msg.right_wrist = getNANBodypart(); + person_msg.left_shoulder = getNANBodypart(); + person_msg.left_elbow = getNANBodypart(); + person_msg.left_wrist = getNANBodypart(); + person_msg.right_hip = getNANBodypart(); + person_msg.right_knee = getNANBodypart(); + person_msg.right_ankle = getNANBodypart(); + person_msg.left_hip = getNANBodypart(); + person_msg.left_knee = getNANBodypart(); + person_msg.left_ankle = getNANBodypart(); + person_msg.right_eye = getNANBodypart(); + person_msg.left_eye = getNANBodypart(); + person_msg.right_ear = getNANBodypart(); + person_msg.left_ear = getNANBodypart(); + person_msg.chest = getNANBodypart(); + + for (size_t bodypart_idx = 0; bodypart_idx < num_bodyparts; bodypart_idx++) + { + size_t final_idx = 3*(person_idx*num_bodyparts + bodypart_idx); + + std::string body_part_string = g_bodypart_map[bodypart_idx]; + + openpose_ros_msgs::BodypartDetection bodypart_detection = getBodyPartDetectionFromArrayAndIndex(poses, final_idx); + + if (body_part_string == "Nose") person_msg.nose = bodypart_detection; + else if (body_part_string == "Neck") person_msg.neck = bodypart_detection; + else if (body_part_string == "RShoulder") person_msg.right_shoulder = bodypart_detection; + else if (body_part_string == "RElbow") person_msg.right_elbow = bodypart_detection; + else if 
(body_part_string == "RWrist") person_msg.right_wrist = bodypart_detection; + else if (body_part_string == "LShoulder") person_msg.left_shoulder = bodypart_detection; + else if (body_part_string == "LElbow") person_msg.left_elbow = bodypart_detection; + else if (body_part_string == "LWrist") person_msg.left_wrist = bodypart_detection; + else if (body_part_string == "RHip") person_msg.right_hip = bodypart_detection; + else if (body_part_string == "RKnee") person_msg.right_knee = bodypart_detection; + else if (body_part_string == "RAnkle") person_msg.right_ankle = bodypart_detection; + else if (body_part_string == "LHip") person_msg.left_hip = bodypart_detection; + else if (body_part_string == "LKnee") person_msg.left_knee = bodypart_detection; + else if (body_part_string == "LAnkle") person_msg.left_ankle = bodypart_detection; + else if (body_part_string == "REye") person_msg.right_eye = bodypart_detection; + else if (body_part_string == "LEye") person_msg.left_eye = bodypart_detection; + else if (body_part_string == "REar") person_msg.right_ear = bodypart_detection; + else if (body_part_string == "LEar") person_msg.left_ear = bodypart_detection; + else if (body_part_string == "Chest") person_msg.chest = bodypart_detection; + else + { + ROS_ERROR("Unknown bodypart %s, this should never happen!", body_part_string.c_str()); + } + + ROS_INFO(" body part: %s", body_part_string.c_str()); + ROS_INFO(" (x, y, confidence): %i, %i, %f", bodypart_detection.x, bodypart_detection.y, bodypart_detection.confidence); + + } + + //publish keypoints data of person_msg + keypoints_pub.publish(person_msg); + + res.detections.push_back(person_msg); + + } + + ROS_INFO("Detected %d persons", (int) res.detections.size()); + + return true; +} + +int main(int argc, char** argv) +{ + ros::init(argc, argv, "openpose_ros_service_node_3d"); + + ros::NodeHandle local_nh("~"); +// g_net_input_size = op::Point(getParam(local_nh, "net_input_width", 656), getParam(local_nh, "net_input_height", 368)); + + g_net_input_size.x = getParam(local_nh, "net_input_width", 656); + g_net_input_size.y = getParam(local_nh, "net_input_height", 368); + + op::Point<int> net_output_size(getParam(local_nh, "net_output_width", 656), getParam(local_nh, "net_output_height", 368)); + //op::Point<int> output_size(getParam(local_nh, "output_width", 1280), getParam(local_nh, "output_height", 720)); + //op::Point<int> output_size(getParam(local_nh, "output_width", 1024), getParam(local_nh, "output_height", 1024)); + + output_size.x = getParam(local_nh, "output_width", 1024); + output_size.y = getParam(local_nh, "output_height", 1024); + + g_num_scales = getParam(local_nh, "num_scales", 1); + g_scale_gap = getParam(local_nh, "scale_gap", 0.3); + unsigned int num_gpu_start = getParam(local_nh, "num_gpu_start", 0); + + + std::string package_path = ros::package::getPath("openpose_ros_pkg"); + std::string folder_location = package_path + "/../openpose/models/"; + + std::string model_folder = getParam(local_nh, "model_folder", folder_location); + op::PoseModel pose_model = stringToPoseModel(getParam(local_nh, "pose_model", std::string("COCO"))); + g_bodypart_map = getBodyPartMapFromPoseModel(pose_model); + + ros::NodeHandle nh; + + ros::ServiceServer service = nh.advertiseService("detect_poses_3d", detectPosesCallback); + + image_skeleton_pub = nh.advertise<sensor_msgs::Image>( "/openpose_ros/skeleton_3d/detected_poses_image", 0 ); + + //declare publisher of type openpose_ros_msgs::PersonDetection in topic /openpose_ros/detected_poses_keypoints + 
keypoints_pub = nh.advertise<openpose_ros_msgs::PersonDetection>( "/openpose_ros/skeleton_3d/detected_poses_keypoints" , 0 ); + + g_pose_extractor = std::shared_ptr<op::PoseExtractorCaffe>( +/* new op::PoseExtractorCaffe(g_net_input_size, net_output_size, output_size, g_num_scales, + g_scale_gap, pose_model, model_folder, num_gpu_start));*/ + + new op::PoseExtractorCaffe(g_net_input_size, net_output_size, output_size, g_num_scales, pose_model, + model_folder, num_gpu_start)); + + poseRenderer = std::shared_ptr<op::PoseRenderer>( + new op::PoseRenderer(net_output_size, output_size, pose_model, nullptr, true, 0.6)); + //poseRenderer{netOutputSize, outputSize, poseModel, nullptr, !FLAGS_disable_blending, (float)FLAGS_alpha_pose}; + + g_pose_extractor->initializationOnThread(); + poseRenderer->initializationOnThread(); + + ROS_INFO("Initialization Successful!"); + ros::spin(); + + return 0; +} + + diff --git a/openpose_ros_pkg/src/openpose_ros_node_firephinx.cpp b/openpose_ros_pkg/src/openpose_ros_node_firephinx.cpp index acb66c89f38ac423cf011e6763a5a42dd2264808..6369de9621e539e4ba5741fbff773b5364adfae1 100644 --- a/openpose_ros_pkg/src/openpose_ros_node_firephinx.cpp +++ b/openpose_ros_pkg/src/openpose_ros_node_firephinx.cpp @@ -31,6 +31,14 @@ #include <openpose/pose/headers.hpp> #include <openpose/utilities/headers.hpp> +#include <ros/ros.h> +#include <ros/package.h> +#include <sensor_msgs/Image.h> +#include <cv_bridge/cv_bridge.h> +#include <opencv2/highgui/highgui.hpp> +#include <openpose_ros_msgs/GetPersons.h> + + //#include <openpose/headers.hpp> @@ -42,7 +50,7 @@ DEFINE_int32(logging_level, 4, "The logging level. Inte " 255 will not output any. Current OpenPose library messages are in the range 0-4: 1 for" " low priority messages and 4 for important ones."); // Camera Topic -DEFINE_string(camera_topic, "/multisense/left/image_rect_color_rotated_180", "Image topic that OpenPose will process."); +DEFINE_string(camera_topic, "/usb_cam/image_raw", "Image topic that OpenPose will process."); // OpenPose std::string package_path = ros::package::getPath("openpose_ros_pkg"); std::string model_folder_location = package_path + "/../openpose/models/"; @@ -145,6 +153,22 @@ int openPoseROSTutorial() op::Point<int> netInputSize; netInputSize.x = NET_RES_X; //656; netInputSize.y = NET_RES_Y; //368; + + // Declare Node Handle + ros::NodeHandle nh; + + // Declare Service + ros::ServiceClient client = nh.serviceClient<openpose_ros_msgs::GetPersons>("detect_poses"); + openpose_ros_msgs::GetPersons srv; + + // Declare Publisher + ros::Publisher input_image_pub = nh.advertise<sensor_msgs::Image>( "/openpose_ros/input_image", 0 ); + + // Initialize cv_ptr + sensor_msgs::Image ros_image; + ros_image.encoding = sensor_msgs::image_encodings::BGR8; + cv_bridge::CvImagePtr cv_ptr; + cv_ptr = cv_bridge::toCvCopy(ros_image, sensor_msgs::image_encodings::BGR8); // netInputSize //const auto netInputSize = op::flagsToPoint(FLAGS_net_resolution, "656x368"); @@ -191,7 +215,10 @@ int openPoseROSTutorial() if(cvImagePtr != nullptr) { cv::Mat inputImage = cvImagePtr->image; - + + // Convert to ros Image msg + ros_image = *(cvImagePtr->toImageMsg()); + // Step 2 - Format input image to OpenPose input and output formats op::Array<float> netInputArray; std::vector<float> scaleRatios; @@ -201,6 +228,25 @@ int openPoseROSTutorial() std::tie(scaleInputToOutput, outputArray) = cvMatToOpOutput.format(inputImage); // Step 3 - Estimate poseKeypoints ROS_INFO("Performing Forward Pass"); + + // Begin Service Call + // You need to run 
the node openpose_ros_node for the service to work + srv.request.image = ros_image; + if (client.call(srv)) + { + + //publish input image, subscribe /openpose_ros/input_image to get the input image + input_image_pub.publish(ros_image); + + //subscribe to /openpose_ros/detected_poses_image to get the images + //subscribe to /openpose_ros/detected_poses_keypoints to get the keypoints in (x,y,score) format + ROS_INFO("Call Successful"); + } + else + { + ROS_ERROR("Failed to call service detect_poses"); + } + poseExtractorCaffe.forwardPass(netInputArray, {inputImage.cols, inputImage.rows}, scaleRatios); std::cout << "Forward Pass Success" << std::endl; @@ -241,4 +287,4 @@ int main(int argc, char** argv) ros::init(argc, argv, "openpose_ros_node"); return openPoseROSTutorial(); -} \ No newline at end of file +} diff --git a/readme_images/Fig1.png b/readme_images/Fig1.png new file mode 100644 index 0000000000000000000000000000000000000000..b9e2a78a96479eda447853d8de410e6aa1b2fe95 Binary files /dev/null and b/readme_images/Fig1.png differ diff --git a/readme_images/Fig10.png b/readme_images/Fig10.png new file mode 100644 index 0000000000000000000000000000000000000000..485cf879273fd4fbc3d38a8d273027dda2bac24e Binary files /dev/null and b/readme_images/Fig10.png differ diff --git a/readme_images/Fig11.png b/readme_images/Fig11.png new file mode 100644 index 0000000000000000000000000000000000000000..c21246a9de88285d46440e9f6fc971233844ca47 Binary files /dev/null and b/readme_images/Fig11.png differ diff --git a/readme_images/Fig12.png b/readme_images/Fig12.png new file mode 100644 index 0000000000000000000000000000000000000000..e4eb5bb37016cc8056b31b5cef12ed0d9530d232 Binary files /dev/null and b/readme_images/Fig12.png differ diff --git a/readme_images/Fig13.png b/readme_images/Fig13.png new file mode 100644 index 0000000000000000000000000000000000000000..c780af6104e838e03a06f454332e77a6f98a3a75 Binary files /dev/null and b/readme_images/Fig13.png differ diff --git a/readme_images/Fig14.png b/readme_images/Fig14.png new file mode 100644 index 0000000000000000000000000000000000000000..ab45a3557ddebb70d65648d7f3fe76f1d96cde95 Binary files /dev/null and b/readme_images/Fig14.png differ diff --git a/readme_images/Fig15.png b/readme_images/Fig15.png new file mode 100644 index 0000000000000000000000000000000000000000..b7e839350ea3a84d13c72eee30235dc4211a64d0 Binary files /dev/null and b/readme_images/Fig15.png differ diff --git a/readme_images/Fig16.png b/readme_images/Fig16.png new file mode 100644 index 0000000000000000000000000000000000000000..bd2107bc232f844a62ccfcc792afa67c84b4661b Binary files /dev/null and b/readme_images/Fig16.png differ diff --git a/readme_images/Fig17.png b/readme_images/Fig17.png new file mode 100644 index 0000000000000000000000000000000000000000..d5b33701df31f237d468ca02e9f15081dfa34a13 Binary files /dev/null and b/readme_images/Fig17.png differ diff --git a/readme_images/Fig18.png b/readme_images/Fig18.png new file mode 100644 index 0000000000000000000000000000000000000000..00d04ff1f1f08dbea07798696e52957cb5641fd8 Binary files /dev/null and b/readme_images/Fig18.png differ diff --git a/readme_images/Fig19.png b/readme_images/Fig19.png new file mode 100644 index 0000000000000000000000000000000000000000..a6d4d0d7a2666d218718a469b917054ce36e983b Binary files /dev/null and b/readme_images/Fig19.png differ diff --git a/readme_images/Fig2.png b/readme_images/Fig2.png new file mode 100644 index 0000000000000000000000000000000000000000..4c91f95762a04644ebf4dc92b9b79b4927a396fb Binary 
files /dev/null and b/readme_images/Fig2.png differ diff --git a/readme_images/Fig20.png b/readme_images/Fig20.png new file mode 100644 index 0000000000000000000000000000000000000000..b84c69e1fd1d61f3a70b1c9aaf11061641b69171 Binary files /dev/null and b/readme_images/Fig20.png differ diff --git a/readme_images/Fig21.png b/readme_images/Fig21.png new file mode 100644 index 0000000000000000000000000000000000000000..bfe0dd0c84228bad140ddface2f86ecb9d084078 Binary files /dev/null and b/readme_images/Fig21.png differ diff --git a/readme_images/Fig3.png b/readme_images/Fig3.png new file mode 100644 index 0000000000000000000000000000000000000000..33e84d57e2551d04d9331ea9d9b753ed3543cef1 Binary files /dev/null and b/readme_images/Fig3.png differ diff --git a/readme_images/Fig4.png b/readme_images/Fig4.png new file mode 100644 index 0000000000000000000000000000000000000000..68f4389937e0d13e43eee481a162c836197001b2 Binary files /dev/null and b/readme_images/Fig4.png differ diff --git a/readme_images/Fig5.png b/readme_images/Fig5.png new file mode 100644 index 0000000000000000000000000000000000000000..97160653ff29f45670faaf4890bb546ff3aa725e Binary files /dev/null and b/readme_images/Fig5.png differ diff --git a/readme_images/Fig6.png b/readme_images/Fig6.png new file mode 100644 index 0000000000000000000000000000000000000000..6fac96af30f6d8c4e959c3fa9083c69c0aa3410f Binary files /dev/null and b/readme_images/Fig6.png differ diff --git a/readme_images/Fig7.png b/readme_images/Fig7.png new file mode 100644 index 0000000000000000000000000000000000000000..d0927de772dd3576fbda3333649337195908964a Binary files /dev/null and b/readme_images/Fig7.png differ diff --git a/readme_images/Fig8.png b/readme_images/Fig8.png new file mode 100644 index 0000000000000000000000000000000000000000..6409838593d992ac09be2b106ce8fefd8d0196e0 Binary files /dev/null and b/readme_images/Fig8.png differ diff --git a/readme_images/Fig9.png b/readme_images/Fig9.png new file mode 100644 index 0000000000000000000000000000000000000000..f57a5e6c6e4c551c16233d7269424a6761ebbf7c Binary files /dev/null and b/readme_images/Fig9.png differ diff --git a/skeleton_extract_3d/CMakeLists.txt b/skeleton_extract_3d/CMakeLists.txt index 40fa05c9b8c1bf470dac0197fd142e5396b73e50..b4c316bda1c4424756d79a6ed77347e481338727 100644 --- a/skeleton_extract_3d/CMakeLists.txt +++ b/skeleton_extract_3d/CMakeLists.txt @@ -12,6 +12,7 @@ find_package(catkin REQUIRED COMPONENTS cv_bridge message_filters openpose_ros_msgs + visualization_msgs ) find_package(PCL 1.7 REQUIRED) @@ -22,9 +23,14 @@ catkin_package( include_directories( ${PCL_INCLUDE_DIRS} + ${catkin_INCLUDE_DIRS} ) add_executable(skeleton_extract_3d_node src/skeleton_extract_3d_node.cpp) target_link_libraries(skeleton_extract_3d_node ${catkin_LIBRARIES} ${PCL_LIBRARIES}) add_dependencies(skeleton_extract_3d_node ${catkin_EXPORTED_TARGETS}) + +add_executable(skeleton_extract_3d_visualization_node src/skeleton_extract_3d_visualization_node.cpp) +target_link_libraries(skeleton_extract_3d_visualization_node ${catkin_LIBRARIES}) +add_dependencies(skeleton_extract_3d_visualization_node ${catkin_EXPORTED_TARGETS}) diff --git a/skeleton_extract_3d/launch/openpose_skeleton_extract.launch b/skeleton_extract_3d/launch/openpose_skeleton_extract.launch index 966a1b954a2808d452de03ad363db01cfdb3b79e..67befb1e9ccaff430baf8994a2ac8f52796908ab 100644 --- a/skeleton_extract_3d/launch/openpose_skeleton_extract.launch +++ b/skeleton_extract_3d/launch/openpose_skeleton_extract.launch @@ -1,11 +1,11 @@ <launch> <!-- Launch 
openpose service--> - <node name="openpose_ros_node" pkg="openpose_ros_pkg" type="openpose_ros_node" output="screen" /> + <node name="openpose_ros_node_3d" pkg="openpose_ros_pkg" type="openpose_ros_node_3d" output="screen" /> <!-- Launch the skeleton extractor node --> - <node name="skeleton_extract_3d_node" pkg="skeleton_extract_3d" type="skeleton_extract_3d_node" > - <remap from="~point_cloud" to="/multisense/organized_image_points2_color_reverse_order" /> - <remap from="~image" to="/multisense/left/image_rect_color_rotated_180" /> - <remap from="~skeleton_2d_detector" to="detect_poses" /> + <node name="skeleton_extract_3d_node" pkg="skeleton_extract_3d" type="skeleton_extract_3d_node"> + <remap from="~point_cloud" to="/kinect2/qhd/points" /> + <remap from="~image" to="/kinect2/qhd/image_color" /> + <remap from="~skeleton_2d_detector" to="detect_poses_3d" /> </node> -</launch> \ No newline at end of file +</launch> diff --git a/skeleton_extract_3d/package.xml b/skeleton_extract_3d/package.xml index 0d9abaeb8f21454b83b8816aeaa5c79bd6da1dd4..6b9e5d3f4c5a3ce1b56c76d0fbf870fb084e7740 100644 --- a/skeleton_extract_3d/package.xml +++ b/skeleton_extract_3d/package.xml @@ -48,7 +48,8 @@ <build_depend>pcl_conversions</build_depend> <build_depend>cv_bridge</build_depend> <build_depend>message_filters</build_depend> - <build_depend>openpose_ros_msgs</build_depend> + <build_depend>openpose_ros_msgs</build_depend> + <build_depend>visualization_msgs</build_depend> <run_depend>roscpp</run_depend> <run_depend>sensor_msgs</run_depend> @@ -56,7 +57,8 @@ <run_depend>pcl_conversions</run_depend> <run_depend>cv_bridge</run_depend> <run_depend>message_filters</run_depend> - <run_depend>openpose_ros_msgs</run_depend> + <run_depend>openpose_ros_msgs</run_depend> + <run_depend>visualization_msgs</run_depend> <!-- The export tag contains other, unspecified, tags --> <export> diff --git a/skeleton_extract_3d/src/skeleton_extract_3d_node.cpp b/skeleton_extract_3d/src/skeleton_extract_3d_node.cpp index b9ed547783cb751d980c03f17b660988b95241a2..aae828e5eb70fb256dd96f40bb5b229871252c2d 100644 --- a/skeleton_extract_3d/src/skeleton_extract_3d_node.cpp +++ b/skeleton_extract_3d/src/skeleton_extract_3d_node.cpp @@ -12,10 +12,182 @@ #include <cv_bridge/cv_bridge.h> #include <openpose_ros_msgs/GetPersons.h> + +#include <pcl/common/common.h> +#include <pcl/filters/passthrough.h> +#include <iostream> + +#include <vector> +#include <cmath> + +#include <openpose_ros_msgs/BodypartDetection_3d.h> +#include <openpose_ros_msgs/PersonDetection_3d.h> + + +// Set resolution + +#define width 960 +#define height 540 + + + // Declare Publishers ros::Publisher pc_pub; ros::Publisher image_pub; + +// Declare 3d keypoints publisher +ros::Publisher keypoints_3d_pub; + + +// Function to initialize all skeletal detections + +openpose_ros_msgs::BodypartDetection_3d getNANBodypart() +{ + openpose_ros_msgs::BodypartDetection_3d bodypart_depth; + bodypart_depth.x = NAN; + bodypart_depth.y = NAN; + bodypart_depth.z = NAN; + bodypart_depth.confidence = NAN; + return bodypart_depth; +} + + +// Function for when a bodypart is not detected +void notDetectedBodyPart(std::string bodypart_name) +{ + std::cerr << bodypart_name << " not detected!!" 
<< std::endl;
+
+  std::cerr << bodypart_name << " pixel coordinates (x, y) and depth (z): " << std::endl;
+  std::cerr << "( " << "nan" << ", "
+            << "nan" << ", "
+            << "nan" << ")" << std::endl;
+
+  std::cerr << bodypart_name << " real world coordinates (x,y,z): " << std::endl;
+  std::cerr << "( " << "nan" << ", "
+            << "nan" << ", "
+            << "nan" << ")" << std::endl;
+
+}
+
+
+// Function to compute the average of a vector
+
+double Average(std::vector<double> v)
+{
+  double total = 0.0;
+  double size = 0.0;
+  for (int n = 0; n < v.size(); n++){total += v[n]; size++;}
+
+  return total/size;
+}
+
+
+// Function to get 3d detections
+
+openpose_ros_msgs::BodypartDetection_3d get3dcoordinates(const openpose_ros_msgs::BodypartDetection bodypart_2d, const pcl::PointCloud<pcl::PointXYZ>::Ptr temp_cloud, const double maxx, const double minx, const double maxy, const double miny, const double maxz, const double minz, const std::string bodypart_name)
+{
+  openpose_ros_msgs::BodypartDetection_3d bodypart_depth;
+
+  // Copy the 2d confidence into the 3d detections message
+  bodypart_depth.confidence = bodypart_2d.confidence;
+
+  // If the bodypart was not detected
+  if (std::isnan(bodypart_depth.confidence) || bodypart_depth.confidence == 0.0 || (bodypart_2d.x == 0 && bodypart_2d.y == 0) || bodypart_2d.x > width || bodypart_2d.y > height )
+  {
+    notDetectedBodyPart(bodypart_name);
+    bodypart_depth.x = NAN;
+    bodypart_depth.y = NAN;
+    bodypart_depth.z = NAN;
+  }
+
+  // If the bodypart was detected
+  else
+  {
+
+    // Get keypoint pixel coordinates
+    unsigned long long int x_pixel = bodypart_2d.x;
+    unsigned long long int y_pixel = bodypart_2d.y;
+
+    // Vector for storing the keypoint index and the surrounding indices
+    std::vector<unsigned long long int> indices;
+    int index = 0;
+
+    // Number of columns and rows of indices surrounding the keypoint to get (both must be odd)
+    int rows = 3;
+    int columns = 3;
+
+    // Store in the vector the indices surrounding the keypoint
+    for (int i = -(rows-1)/2 ; i <= (rows-1)/2; i++)
+    {
+      for (int j = -(columns-1)/2; j <= (columns-1)/2; j++)
+      {
+        index = width*(y_pixel+i)+x_pixel+j+1;
+        indices.push_back(index);
+      }
+    }
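+    // NOTE (assumption): this index arithmetic treats the cloud as organized
+    // and row-major with exactly width x height points (the values #defined at
+    // the top of this file), so that pixel (x, y) lands near index width*y + x.
+    // If your camera publishes a different resolution, update those defines,
+    // otherwise the sampled indices will not correspond to the keypoint.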
<< "( "<< bodypart_2d.x << ", " + << bodypart_2d.y << ", " + << bodypart_depth.z << ")" << std::endl; + + std::cerr << bodypart_name << " real world coordinates (x,y,z): " << std::endl; + std::cerr << "( " << bodypart_depth.x << ", " + << bodypart_depth.y << ", " + << bodypart_depth.z << ")" << std::endl; + + + } + + } + +return bodypart_depth; + +} // Declare Service Client ros::ServiceClient client; openpose_ros_msgs::GetPersons srv; @@ -26,28 +198,204 @@ typedef message_filters::sync_policies::ApproximateTime<sensor_msgs::PointCloud2 void callback(const sensor_msgs::PointCloud2ConstPtr& cloud_msg, const sensor_msgs::ImageConstPtr& image_msg){ ROS_INFO("Cloud and Image Messages Received!"); ROS_INFO(" Cloud Time Stamp: %f", cloud_msg->header.stamp.toSec()); - ROS_INFO(" Image Time Stamp: %f", image_msg->header.stamp.toSec()); + ROS_INFO(" Image Time Stamp: %f", image_msg->header.stamp.toSec()); + + // Publish input pointcloud and image + pc_pub.publish(*cloud_msg); + image_pub.publish(*image_msg); + srv.request.image = *image_msg; if (client.call(srv)) { ROS_INFO("ROS Service call Successful"); // Prepare a new ROS Message for all skeletal detections - // Prepare a new point cloud to publish - // for each detection, + + // Initialize message for skeletal detections + openpose_ros_msgs::PersonDetection_3d person_msg; + + // Number of people detected + int num_people = srv.response.detections.size(); + + // Number of bodyparts, we suppose we are working with COCO + int num_bodyparts = 18; + + // for each detection (person), + + for (size_t person_idx = 0; person_idx < num_people; person_idx++) + { + // Prepare a new ROS Message for this skeletal detection - // Prepare a new color to use + + // Add number of people detected + person_msg.num_people_detected = num_people; + + // Add person ID + person_msg.person_ID = person_idx; + + // Initialize all bodyparts (x,y,z=0 & confidence = nan) + + person_msg.nose = getNANBodypart(); + person_msg.neck = getNANBodypart(); + person_msg.right_shoulder = getNANBodypart(); + person_msg.right_elbow = getNANBodypart(); + person_msg.right_wrist = getNANBodypart(); + person_msg.left_shoulder = getNANBodypart(); + person_msg.left_elbow = getNANBodypart(); + person_msg.left_wrist = getNANBodypart(); + person_msg.right_hip = getNANBodypart(); + person_msg.right_knee = getNANBodypart(); + person_msg.right_ankle = getNANBodypart(); + person_msg.left_hip = getNANBodypart(); + person_msg.left_knee = getNANBodypart(); + person_msg.left_ankle = getNANBodypart(); + person_msg.right_eye = getNANBodypart(); + person_msg.left_eye = getNANBodypart(); + person_msg.right_ear = getNANBodypart(); + person_msg.left_ear = getNANBodypart(); + person_msg.chest = getNANBodypart(); + + + // Declare pcl<xyz> pointcloud + pcl::PointCloud<pcl::PointXYZ>::Ptr temp_cloud (new pcl::PointCloud<pcl::PointXYZ>); + + //Load pointcloud data from cloud_msg (sensor_msgs::Pointcloud2) to temp_cloud of type pcl::Pointcloud<pcl::PointXYZ> + pcl::fromROSMsg (*cloud_msg, *temp_cloud); + + // Get max and min coordinates + + // Declare pcl points + pcl::PointXYZ min; + pcl::PointXYZ max; + + // Get min and max depth coordinate + pcl::getMinMax3D<pcl::PointXYZ>(*temp_cloud,min,max); + + // Get minimun x,y,z in the screen + std::cerr << "Minimum pointcloud (x,y,z) coordinates: " << std::endl; + std::cerr << " (" << min.x << ", " + << min.y << ", " + << min.z << " )"<< std::endl; + + // Get maximum x,y,z in the screen + std::cerr << "Maximum pointcloud (x,y,z) coordinates: " << std::endl; + std::cerr << 
" (" << max.x << ", " + << max.y << ", " + << max.z << " )"<< std::endl; // for each body part - // grab a small area of pixels surrounding the x,y image coordinate - //for each pixel in that area, check if it is a valid point: (the point exists in the organized point cloud and that its z-depth is less than Z_MAX) - // mean-shift valid 3D points and store largest cluster + for (size_t bodypart_idx = 0; bodypart_idx < num_bodyparts; bodypart_idx++) + { + // Get corresponding 2d skeleton detection for person with ID idx + openpose_ros_msgs::PersonDetection skeleton_detections = srv.response.detections[person_idx]; + + // Initialize bodypart msg + openpose_ros_msgs::BodypartDetection bodypart_detections; - // + if (bodypart_idx == 0) + { + bodypart_detections = skeleton_detections.nose; + person_msg.nose = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Nose"); + } + else if (bodypart_idx == 1) + { + bodypart_detections = skeleton_detections.neck; + person_msg.neck = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Neck"); + } + else if (bodypart_idx == 2) + { + bodypart_detections = skeleton_detections.right_shoulder; + person_msg.right_shoulder = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Right shoulder"); + } + else if (bodypart_idx == 3) + { + bodypart_detections = skeleton_detections.right_elbow; + person_msg.right_elbow = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Right elbow"); + } + else if (bodypart_idx == 4) + { + bodypart_detections = skeleton_detections.right_wrist; + person_msg.right_wrist = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Right wrist"); + } + else if (bodypart_idx == 5) + { + bodypart_detections = skeleton_detections.left_shoulder; + person_msg.left_shoulder = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Left shoulder"); + } + else if (bodypart_idx == 6) + { + bodypart_detections = skeleton_detections.left_elbow; + person_msg.left_elbow = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Left elbow"); + } + else if (bodypart_idx == 7) + { + bodypart_detections = skeleton_detections.left_wrist; + person_msg.left_wrist = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Left wrist"); + } + else if (bodypart_idx == 8) + { + bodypart_detections = skeleton_detections.right_hip; + person_msg.right_hip = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Right hip"); + } + else if (bodypart_idx == 9) + { + bodypart_detections = skeleton_detections.right_knee; + person_msg.right_knee = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Right knee"); + } + else if (bodypart_idx == 10) + { + bodypart_detections = skeleton_detections.right_ankle; + person_msg.right_ankle = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Right ankle"); + } + else if (bodypart_idx == 11) + { + bodypart_detections = skeleton_detections.left_hip; + person_msg.left_hip = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Left hip"); + } + else if (bodypart_idx == 12) + { + bodypart_detections = skeleton_detections.left_knee; + 
person_msg.left_knee = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Left knee"); + } + else if (bodypart_idx == 13) + { + bodypart_detections = skeleton_detections.left_ankle; + person_msg.left_ankle = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Left ankle"); + } + else if (bodypart_idx == 14) + { + bodypart_detections = skeleton_detections.right_eye; + person_msg.right_eye = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Right eye"); + } + else if (bodypart_idx == 15) + { + bodypart_detections = skeleton_detections.left_eye; + person_msg.left_eye = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Left eye"); + } + else if (bodypart_idx == 16) + { + bodypart_detections = skeleton_detections.right_ear; + person_msg.right_ear = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Right ear"); + } + else if (bodypart_idx == 17) + { + bodypart_detections = skeleton_detections.left_ear; + person_msg.left_ear = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Left ear"); + } +/* else if (bodypart_idx == 18) + { + bodypart_detections = skeleton_detections.chest; + person_msg.chest = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Chest"); + } +*/ + } -// ROS_INFO("Hello!: %ld", (long int)srv.response.sum); + // Publish 3D detection + keypoints_3d_pub.publish(person_msg); + + } } else { @@ -69,6 +417,17 @@ int main(int argc, char** argv){ message_filters::Subscriber<sensor_msgs::Image> image_sub(nh, "image", 1); client = nh.serviceClient<openpose_ros_msgs::GetPersons>("skeleton_2d_detector"); + + // Pointcloud publisher topic /openpose_ros/input_pointcloud + pc_pub = nh.advertise<sensor_msgs::PointCloud2>( "/openpose_ros/skeleton_3d/input_pointcloud", 0); + + // Image publisher topic /openpose_ros/input_rgb + image_pub = nh.advertise<sensor_msgs::Image>( "/openpose_ros/skeleton_3d/input_rgb", 0); + + // Keypoints in 3D topic /openpose_ros/detected_poses_keypoints_3d + keypoints_3d_pub = nh.advertise<openpose_ros_msgs::PersonDetection_3d>( "/openpose_ros/skeleton_3d/detected_poses_keypoints_3d", 0); + + // ApproximateTime takes a queue size as its constructor argument, hence MySyncPolicy(10) message_filters::Synchronizer<MySyncPolicy> sync(MySyncPolicy(10), cloud_sub, image_sub); sync.registerCallback(boost::bind(&callback, _1, _2)); @@ -78,4 +437,4 @@ int main(int argc, char** argv){ return 0; } - \ No newline at end of file + diff --git a/skeleton_extract_3d/src/skeleton_extract_3d_visualization_node.cpp b/skeleton_extract_3d/src/skeleton_extract_3d_visualization_node.cpp new file mode 100644 index 0000000000000000000000000000000000000000..3325077db6651a59e5920c816f05c9e34e6a328f --- /dev/null +++ b/skeleton_extract_3d/src/skeleton_extract_3d_visualization_node.cpp @@ -0,0 +1,339 @@ +#include <sensor_msgs/Image.h> +#include <sensor_msgs/PointCloud2.h> + +#include <ros/ros.h> +#include <ros/publisher.h> +#include <visualization_msgs/Marker.h> +#include <visualization_msgs/MarkerArray.h> + +#include <vector> +#include <string> +#include <sstream> +#include <iostream> +#include <math.h> + +#include <openpose_ros_msgs/BodypartDetection_3d.h> +#include <openpose_ros_msgs/PersonDetection_3d.h> + +// Declare marker and skeleton publishers + +ros::Publisher marker_pub; 
+// Returns true only if none of the 3d detection's coordinates is NaN
+bool PointISValid(const openpose_ros_msgs::BodypartDetection_3d& bodypart){
+  return !(std::isnan(bodypart.x) || std::isnan(bodypart.y) || std::isnan(bodypart.z));
+}
+
+// Copies the joint coordinates of a 3d detection into a marker point
+geometry_msgs::Point AddPoint(const openpose_ros_msgs::BodypartDetection_3d& bodypart){
+  geometry_msgs::Point p;
+  p.x = bodypart.x;
+  p.y = bodypart.y;
+  p.z = bodypart.z;
+  return p;
+}
+
+// Class to subscribe to 3d detections and publish visualization markers
+class SubscribeAndPublish
+{
+public:
+  SubscribeAndPublish()
+  {
+    // Topics to publish
+    marker_pub = n_.advertise<visualization_msgs::Marker>("/openpose_ros/skeleton_3d/visualization_markers", 1);
+    skeleton_pub = n_.advertise<visualization_msgs::Marker>("/openpose_ros/skeleton_3d/visualization_skeleton", 1);
+
+    // Topic to subscribe to
+    sub_ = n_.subscribe("/openpose_ros/skeleton_3d/detected_poses_keypoints_3d", 1, &SubscribeAndPublish::callback, this);
+  }
+
+  // Callback for each incoming 3d person detection
+  void callback(const openpose_ros_msgs::PersonDetection_3d& person_msg)
+  {
+    ROS_INFO("3D Detection Received!");
+
+    // Declare marker for the body joints
+    visualization_msgs::Marker marker;
+
+    // Configure the body joint markers
+    marker.header.frame_id = "/kinect2_ir_optical_frame";
+    marker.id = person_msg.person_ID;
+    marker.ns = "joints";
+    marker.header.stamp = ros::Time();
+    // Joints will be spheres
+    marker.type = visualization_msgs::Marker::SPHERE_LIST;
+    marker.action = visualization_msgs::Marker::ADD;
+    marker.scale.x = 0.05;
+    marker.scale.y = 0.05;
+    marker.scale.z = 0.05;
+    // Joints are red
+    marker.color.a = 1.0;
+    marker.color.r = 1.0;
+    marker.color.g = 0.0;
+    marker.color.b = 0.0;
+
+    // Joint markers expire after 150 ms unless they are refreshed
+    marker.lifetime = ros::Duration(0.15);
+
+    // Declare line list marker for the skeleton
+    visualization_msgs::Marker skeleton;
+
+    skeleton.id = person_msg.person_ID;
+    skeleton.header.frame_id = "/kinect2_ir_optical_frame";
+    skeleton.ns = "skeleton";
+    skeleton.header.stamp = ros::Time();
+    // The skeleton is drawn as line segments between connected joints
+    skeleton.type = visualization_msgs::Marker::LINE_LIST;
+    skeleton.action = visualization_msgs::Marker::ADD;
+    // LINE_LIST markers only use scale.x (the line width)
+    skeleton.scale.x = 0.03;
+    // Skeleton is blue
+    skeleton.color.a = 1.0;
+    skeleton.color.r = 0.0;
+    skeleton.color.g = 0.0;
+    skeleton.color.b = 1.0;
+
+    // Skeleton segments expire after 150 ms as well
+    skeleton.lifetime = ros::Duration(0.15);
+
+    // Assign the 3d joint coordinates to each marker.
+    // For each joint, if the joint it connects to is also valid, draw a line.
+    for (size_t bodypart_idx = 0; bodypart_idx < 18; bodypart_idx++){
+
+      if (bodypart_idx == 0)
+      {
+        if (PointISValid(person_msg.nose))
+        {marker.points.push_back(AddPoint(person_msg.nose));}
+      }
+      else if (bodypart_idx == 1)
+      {
+        if (PointISValid(person_msg.neck)){
+          marker.points.push_back(AddPoint(person_msg.neck));
+          // Draw line between neck and nose
+          if (PointISValid(person_msg.nose)){
+            skeleton.points.push_back(AddPoint(person_msg.nose));
+            skeleton.points.push_back(AddPoint(person_msg.neck));
+          }
+        }
+      }
+      else if (bodypart_idx == 2)
+      {
+        if (PointISValid(person_msg.right_shoulder)){
+          marker.points.push_back(AddPoint(person_msg.right_shoulder));
+          // Draw line between right shoulder and neck
+          if (PointISValid(person_msg.neck)){
+            skeleton.points.push_back(AddPoint(person_msg.neck));
+            skeleton.points.push_back(AddPoint(person_msg.right_shoulder));
+          }
+        }
+      }
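+      // Cases 3-17 below repeat the pattern established above: push the
+      // joint as a sphere, then, when the neighboring joint it connects to
+      // is also valid, push the two end points of one LINE_LIST segment.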
+      else if (bodypart_idx == 3)
+      {
+        if (PointISValid(person_msg.right_elbow)){
+          marker.points.push_back(AddPoint(person_msg.right_elbow));
+          // Draw line between right shoulder and right elbow
+          if (PointISValid(person_msg.right_shoulder)){
+            skeleton.points.push_back(AddPoint(person_msg.right_shoulder));
+            skeleton.points.push_back(AddPoint(person_msg.right_elbow));
+          }
+        }
+      }
+      else if (bodypart_idx == 4)
+      {
+        if (PointISValid(person_msg.right_wrist)){
+          marker.points.push_back(AddPoint(person_msg.right_wrist));
+          // Draw line between right elbow and right wrist
+          if (PointISValid(person_msg.right_elbow)){
+            skeleton.points.push_back(AddPoint(person_msg.right_elbow));
+            skeleton.points.push_back(AddPoint(person_msg.right_wrist));
+          }
+        }
+      }
+      else if (bodypart_idx == 5)
+      {
+        if (PointISValid(person_msg.left_shoulder)){
+          marker.points.push_back(AddPoint(person_msg.left_shoulder));
+          // Draw line between left shoulder and neck
+          if (PointISValid(person_msg.neck)){
+            skeleton.points.push_back(AddPoint(person_msg.neck));
+            skeleton.points.push_back(AddPoint(person_msg.left_shoulder));
+          }
+        }
+      }
+      else if (bodypart_idx == 6)
+      {
+        if (PointISValid(person_msg.left_elbow)){
+          marker.points.push_back(AddPoint(person_msg.left_elbow));
+          // Draw line between left shoulder and left elbow
+          if (PointISValid(person_msg.left_shoulder)){
+            skeleton.points.push_back(AddPoint(person_msg.left_shoulder));
+            skeleton.points.push_back(AddPoint(person_msg.left_elbow));
+          }
+        }
+      }
+      else if (bodypart_idx == 7)
+      {
+        if (PointISValid(person_msg.left_wrist)){
+          marker.points.push_back(AddPoint(person_msg.left_wrist));
+          // Draw line between left elbow and left wrist
+          if (PointISValid(person_msg.left_elbow)){
+            skeleton.points.push_back(AddPoint(person_msg.left_elbow));
+            skeleton.points.push_back(AddPoint(person_msg.left_wrist));
+          }
+        }
+      }
+      else if (bodypart_idx == 8)
+      {
+        if (PointISValid(person_msg.right_hip)){
+          marker.points.push_back(AddPoint(person_msg.right_hip));
+          // Draw line between right hip and neck
+          if (PointISValid(person_msg.neck)){
+            skeleton.points.push_back(AddPoint(person_msg.neck));
+            skeleton.points.push_back(AddPoint(person_msg.right_hip));
+          }
+        }
+      }
+      else if (bodypart_idx == 9)
+      {
+        if (PointISValid(person_msg.right_knee)){
+          marker.points.push_back(AddPoint(person_msg.right_knee));
+          // Draw line between right hip and right knee
+          if (PointISValid(person_msg.right_hip)){
+            skeleton.points.push_back(AddPoint(person_msg.right_hip));
+            skeleton.points.push_back(AddPoint(person_msg.right_knee));
+          }
+        }
+      }
+      else if (bodypart_idx == 10)
+      {
+        if (PointISValid(person_msg.right_ankle)){
+          marker.points.push_back(AddPoint(person_msg.right_ankle));
+          // Draw line between right knee and right ankle. The validity check
+          // must be on the knee, since its coordinates are pushed as the
+          // other end of the segment (checking right_hip here would let NaN
+          // knee coordinates slip into the marker).
+          if (PointISValid(person_msg.right_knee)){
+            skeleton.points.push_back(AddPoint(person_msg.right_knee));
+            skeleton.points.push_back(AddPoint(person_msg.right_ankle));
+          }
+        }
+      }
+      else if (bodypart_idx == 11)
+      {
+        if (PointISValid(person_msg.left_hip)){
+          marker.points.push_back(AddPoint(person_msg.left_hip));
+          // Draw line between left hip and neck
+          if (PointISValid(person_msg.neck)){
+            skeleton.points.push_back(AddPoint(person_msg.neck));
+            skeleton.points.push_back(AddPoint(person_msg.left_hip));
+          }
+        }
+      }
+      else if (bodypart_idx == 12)
+      {
+        if (PointISValid(person_msg.left_knee)){
+          marker.points.push_back(AddPoint(person_msg.left_knee));
+          // Draw line between left knee and left hip
+          if (PointISValid(person_msg.left_hip)){
+            
skeleton.points.push_back(AddPoint(person_msg.left_hip));
+            skeleton.points.push_back(AddPoint(person_msg.left_knee));
+          }
+        }
+      }
+      else if (bodypart_idx == 13)
+      {
+        if (PointISValid(person_msg.left_ankle)){
+          marker.points.push_back(AddPoint(person_msg.left_ankle));
+          // Draw line between left knee and left ankle
+          if (PointISValid(person_msg.left_knee)){
+            skeleton.points.push_back(AddPoint(person_msg.left_knee));
+            skeleton.points.push_back(AddPoint(person_msg.left_ankle));
+          }
+        }
+      }
+      else if (bodypart_idx == 14)
+      {
+        if (PointISValid(person_msg.right_eye)){
+          marker.points.push_back(AddPoint(person_msg.right_eye));
+          // Draw line between right eye and nose
+          if (PointISValid(person_msg.nose)){
+            skeleton.points.push_back(AddPoint(person_msg.nose));
+            skeleton.points.push_back(AddPoint(person_msg.right_eye));
+          }
+        }
+      }
+      else if (bodypart_idx == 15)
+      {
+        if (PointISValid(person_msg.left_eye)){
+          marker.points.push_back(AddPoint(person_msg.left_eye));
+          // Draw line between left eye and nose
+          if (PointISValid(person_msg.nose)){
+            skeleton.points.push_back(AddPoint(person_msg.nose));
+            skeleton.points.push_back(AddPoint(person_msg.left_eye));
+          }
+        }
+      }
+      else if (bodypart_idx == 16)
+      {
+        if (PointISValid(person_msg.right_ear)){
+          marker.points.push_back(AddPoint(person_msg.right_ear));
+          // Draw line between right eye and right ear
+          if (PointISValid(person_msg.right_eye)){
+            skeleton.points.push_back(AddPoint(person_msg.right_eye));
+            skeleton.points.push_back(AddPoint(person_msg.right_ear));
+          }
+        }
+      }
+      else if (bodypart_idx == 17)
+      {
+        if (PointISValid(person_msg.left_ear)){
+          marker.points.push_back(AddPoint(person_msg.left_ear));
+          // Draw line between left eye and left ear
+          if (PointISValid(person_msg.left_eye)){
+            skeleton.points.push_back(AddPoint(person_msg.left_eye));
+            skeleton.points.push_back(AddPoint(person_msg.left_ear));
+          }
+        }
+      }
+/*    else if (bodypart_idx == 18)
+      {
+        // Chest is not part of the 18-keypoint COCO output, so this case stays disabled
+        if (PointISValid(person_msg.chest))
+        {marker.points.push_back(AddPoint(person_msg.chest));}
+      }
+*/
+    }
+
+    // Publish the joint and skeleton markers
+    marker_pub.publish(marker);
+    skeleton_pub.publish(skeleton);
+  }
+
+private:
+  ros::NodeHandle n_;
+  ros::Publisher marker_pub;
+  ros::Publisher skeleton_pub;
+  ros::Subscriber sub_;
+
+}; // End of class SubscribeAndPublish
+
+int main(int argc, char **argv)
+{
+  // Initialize ROS
+  ros::init(argc, argv, "skeleton_extract_3d_visualization_node");
+
+  // The SubscribeAndPublish object wires up the subscriber and both publishers
+  SubscribeAndPublish SAPObject;
+
+  ros::spin();
+
+  return 0;
+}
+
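+// Viewing the output (a minimal sketch; the executable name is assumed to
+// match the node name registered above):
+//   rosrun skeleton_extract_3d skeleton_extract_3d_visualization_node
+//   rosrun rviz rviz
+// In RViz, set the Fixed Frame to kinect2_ir_optical_frame and add two Marker
+// displays subscribed to /openpose_ros/skeleton_3d/visualization_markers and
+// /openpose_ros/skeleton_3d/visualization_skeleton.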