ROS IMU services, client and server node templates

I would like to create IMU ROS services. I have a server (in my case a microcontroller, an ESC32) that can obtain IMU readings and should pass the IMU data to the client (in my case a Raspberry Pi 4B) when requested, for further processing. So I just need to pass the raw IMU data. I would like to start with some ROS service templates for the IMU server and client nodes. This is my .srv file:
float64 x_orient_in
float64 y_orient_in
float64 z_orient_in
float64 w_orient_in
float64 x_veloc_in
float64 y_veloc_in
float64 z_veloc_in
float64 x_accel_in
float64 y_accel_in
float64 z_accel_in
---
float64 x_orient_out
float64 y_orient_out
float64 z_orient_out
float64 w_orient_out
float64 x_veloc_out
float64 y_veloc_out
float64 z_veloc_out
float64 x_accel_out
float64 y_accel_out
float64 z_accel_out
bool success
This is my server .cpp file:
#include "ros/ros.h"
#include <sensor_msgs/Imu.h>
#include "ros_services/ImuValue.h"

bool get_val(ros_services::ImuValue::Request &req, ros_services::ImuValue::Response &res)
{
    ROS_INFO("sending back response");
    res.x_orient_out = req.x_orient_in;
    res.y_orient_out = req.y_orient_in;
    res.z_orient_out = req.z_orient_in;
    res.w_orient_out = req.w_orient_in;
    res.x_veloc_out = req.x_veloc_in;
    res.y_veloc_out = req.y_veloc_in;
    res.z_veloc_out = req.z_veloc_in;
    res.x_accel_out = req.x_accel_in;
    res.y_accel_out = req.y_accel_in;
    res.z_accel_out = req.z_accel_in;
    return true; // a service handler must return true on success
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "imu_status_server");
    ros::NodeHandle n;
    ros::ServiceServer service = n.advertiseService("imu_status_server", get_val);
    ROS_INFO("Starting server...");
    ros::spin();
    return 0;
}
And this is what my IMU ROS topic looks like:
header:
  seq: 45672
  stamp:
    secs: 956
    nsecs: 962000000
  frame_id: "thrbot/imu_link"
orientation:
  x: 0.0697171053094
  y: 0.00825242210747
  z: 0.920964387685
  w: -0.383270164991
orientation_covariance: [0.0001, 0.0, 0.0, 0.0, 0.0001, 0.0, 0.0, 0.0, 0.0001]
angular_velocity:
  x: 0.00156996015527
  y: 0.0263644782572
  z: -0.0617661883137
angular_velocity_covariance: [1.1519236000000001e-07, 0.0, 0.0, 0.0, 1.1519236000000001e-07, 0.0, 0.0, 0.0, 1.1519236000000001e-07]
linear_acceleration:
  x: 1.32039048367
  y: -0.376341478938
  z: 9.70643773249
linear_acceleration_covariance: [1.6e-05, 0.0, 0.0, 0.0, 1.6e-05, 0.0, 0.0, 0.0, 1.6e-05]

I would heavily suggest checking out the ROS wiki tutorials. They have a lot of info that can also save you some time. That being said, you would just set it up like any other service call. You will probably also have to create your own service definition, which might look something like this:
float64 x_accel
float64 y_accel
float64 z_accel
---
bool success
Then you just need a server node to handle incoming requests, like this:
#include "ros/ros.h"
#include "your_service/service_file.h"

bool get_val(your_service::service_file::Request &req,
             your_service::service_file::Response &res)
{
    res.x_accel = get_accel_x(); // Defined somewhere else by you
    res.y_accel = get_accel_y(); // Defined somewhere else by you
    res.z_accel = get_accel_z(); // Defined somewhere else by you
    return true;
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "imu_status_server");
    ros::NodeHandle n;
    ros::ServiceServer service = n.advertiseService("imu_status_server", get_val);
    ROS_INFO("Starting server...");
    ros::spin();
    return 0;
}
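The get_accel_* functions are left for you to define. A minimal sketch of one way to do it, assuming the readings arrive on a sensor_msgs/Imu topic (the topic name thrbot/imu below is a guess based on the frame_id in your echo output):

#include "ros/ros.h"
#include <sensor_msgs/Imu.h>

// Cache of the most recent IMU message, updated by the subscriber callback.
static sensor_msgs::Imu latest_imu;

void imuCallback(const sensor_msgs::Imu::ConstPtr& msg)
{
    latest_imu = *msg;
}

double get_accel_x() { return latest_imu.linear_acceleration.x; }
double get_accel_y() { return latest_imu.linear_acceleration.y; }
double get_accel_z() { return latest_imu.linear_acceleration.z; }

// In main(), alongside the advertiseService call:
//   ros::Subscriber imu_sub = n.subscribe("thrbot/imu", 1, imuCallback);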
Finally, the client can make calls to the server like this:
ros::ServiceClient client = n.serviceClient<your_service::service_file>("imu_status_server");
your_service::service_file srv;
if (client.call(srv))
    std::cout << "Got accel x: " << srv.response.x_accel << std::endl;
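Putting that together, a minimal client node might look like this (a sketch, assuming the package and service file are named your_service and service_file as above):

#include "ros/ros.h"
#include "your_service/service_file.h"
#include <iostream>

int main(int argc, char **argv)
{
    ros::init(argc, argv, "imu_status_client");
    ros::NodeHandle n;

    ros::ServiceClient client =
        n.serviceClient<your_service::service_file>("imu_status_server");

    your_service::service_file srv;
    if (client.call(srv))
        ROS_INFO("Got accel: %f %f %f",
                 srv.response.x_accel, srv.response.y_accel, srv.response.z_accel);
    else
        ROS_ERROR("Failed to call imu_status_server");
    return 0;
}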

Related

How to use RealSense's spatial_filter on an OpenCV Mat?

I want to apply the RealSense library's depth filtering (rs2::spatial_filter) on an OpenCV Mat, but it seems like the filter is not being applied. The original depth image and the supposedly filtered depth image look exactly the same.
To load raw depth data into rs2::frame, I used a modified version of @Groch88's answer. One of the changes I made was changing the depth format from RS2_FORMAT_Z16 to RS2_FORMAT_DISTANCE (to be able to load a float depth map) and not loading the RGB part of the frame. The whole source is below.
Why do the original and the filtered images look exactly the same? Am I missing something obvious?
main.cpp:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <librealsense2/rs.hpp>
#include "rsImageConverter.h"

int main()
{
    cv::Mat rawDepthImg = load_raw_depth("/path/to/depth/image"); // loads float depth image
    rsImageConverter ic{rawDepthImg.cols, rawDepthImg.rows, sizeof(float)};

    if ( !ic.convertFrame(rawDepthImg.data, rawDepthImg.data) )
    {
        fprintf(stderr, "Could not load depth.\n");
        exit(1);
    }

    rs2::frame rsDepthFrame = ic.getDepth();

    // Filter
    // https://dev.intelrealsense.com/docs/post-processing-filters
    rs2::spatial_filter spat_filter;
    spat_filter.set_option(RS2_OPTION_FILTER_MAGNITUDE, 2.0f);
    spat_filter.set_option(RS2_OPTION_FILTER_SMOOTH_ALPHA, 0.5f);
    spat_filter.set_option(RS2_OPTION_FILTER_SMOOTH_DELTA, 20.0f);

    // Apply filtering
    rs2::frame rsFilteredDepthFrame;
    rsFilteredDepthFrame = spat_filter.process(rsDepthFrame);

    // Copy filtered depth to OpenCV Mat
    cv::Mat filteredDepth = cv::Mat::zeros(rawDepthImg.size(), CV_32F);
    memcpy(filteredDepth.data, rsFilteredDepthFrame.get_data(), rawDepthImg.cols * rawDepthImg.rows * sizeof(float));

    // Display (un)filtered images
    cv::imshow("Original depth", rawDepthImg);       // Original image is being shown
    cv::imshow("Filtered depth", filteredDepth);     // A depth image that looks exactly like the original unfiltered depth map is shown
    cv::imshow("Diff", filteredDepth - rawDepthImg); // A black image is being shown
    cv::waitKey(0);
    return 0;
}
rsImageConverter.h (edited version of @Doch88's code):
#include <librealsense2/rs.hpp>
#include <librealsense2/hpp/rs_internal.hpp>

class rsImageConverter
{
public:
    rsImageConverter(int w, int h, int bppDepth);
    bool convertFrame(uint8_t* depth_data, uint8_t* color_data);
    rs2::frame getDepth() const;

private:
    int w = 640;
    int h = 480;
    int bppDepth = sizeof(float);
    rs2::software_device dev;
    rs2::software_sensor depth_sensor;
    rs2::stream_profile depth_stream;
    rs2::syncer syncer;
    rs2::frame depth;
    int ind = 0;
};
rsImageConverter.cpp (edited version of @Doch88's code):
#include "rsImageConverter.h"

rsImageConverter::rsImageConverter(int w, int h, int bppDepth) :
    w(w),
    h(h),
    bppDepth(bppDepth),
    depth_sensor(dev.add_sensor("Depth")) // initializing depth sensor
{
    rs2_intrinsics depth_intrinsics{ w, h, (float)(w / 2), (float)(h / 2), (float) w, (float) h, RS2_DISTORTION_BROWN_CONRADY, { 0,0,0,0,0 } };
    depth_stream = depth_sensor.add_video_stream({ RS2_STREAM_DEPTH, 0, 0,
                                                   w, h, 60, bppDepth,
                                                   RS2_FORMAT_DISTANCE, depth_intrinsics });
    depth_sensor.add_read_only_option(RS2_OPTION_DEPTH_UNITS, 1.f); // setting depth units option to the virtual sensor
    depth_sensor.open(depth_stream);
    depth_sensor.start(syncer);
}

bool rsImageConverter::convertFrame(uint8_t* depth_data, uint8_t* color_data)
{
    depth_sensor.on_video_frame({ depth_data,   // Frame pixels
                                  [](void*) {}, // Custom deleter (if required)
                                  w*bppDepth, bppDepth, // Stride and Bytes-per-pixel
                                  (rs2_time_t)ind * 16, RS2_TIMESTAMP_DOMAIN_HARDWARE_CLOCK, ind, // Timestamp, Frame# for potential sync services
                                  depth_stream });
    ind++;
    rs2::frameset fset = syncer.wait_for_frames();
    depth = fset.first_or_default(RS2_STREAM_DEPTH);
    return depth;
}

rs2::frame rsImageConverter::getDepth() const
{
    return depth;
}

Can opencv draw a float-coordinate rect in cv::Mat?

I am trying to use cv::rectangle() to draw a rect in a cv::Mat, but can I draw a rect whose four corner coordinates have float precision (like what Qt does in QPainter)?
Try the Blend2D library; it's fast and integrates easily with OpenCV.
A simple example:
#include <blend2d.h>
#include "opencv2/opencv.hpp"

int main(int argc, char* argv[])
{
    BLImage img(480, 480, BL_FORMAT_PRGB32);
    BLContext ctx(img);

    // Read an image from file.
    cv::Mat I = cv::imread("F:/ImagesForTest/lena.jpg");
    cv::cvtColor(I, I, cv::COLOR_RGB2RGBA);

    BLImage texture;
    //BLResult err = texture.readFromFile("texture.jpeg");
    texture.create(512, 512, BL_FORMAT_XRGB32);
    memcpy((uchar*)texture.impl->pixelData, (uchar*)I.data, 512 * 512 * 4);

    // Create a pattern and use it to fill a rounded-rect.
    BLPattern pattern(texture);
    ctx.setFillStyle(pattern);
    ctx.setCompOp(BL_COMP_OP_SRC_COPY);
    ctx.fillAll();

    // Coordinates can be specified now or changed later.
    BLGradient linear(BLLinearGradientValues(0, 0, 0, 480));

    // Color stops can be added in any order.
    linear.addStop(0.0, BLRgba32(0xFFFFFFFF));
    linear.addStop(0.5, BLRgba32(0xFF5FAFDF));
    linear.addStop(1.0, BLRgba32(0xFF2F5FDF));

    // `setFillStyle()` can be used for both colors and styles.
    ctx.setFillStyle(linear);
    ctx.setCompOp(BL_COMP_OP_MODULATE);
    ctx.fillRoundRect(40.0, 40.0, 400.0, 400.0, 45.5);

    ctx.setStrokeStyle(BLRgba32(0xFFFF0000));
    ctx.setStrokeWidth(3);
    ctx.strokeLine(0, 0, 480, 480);
    ctx.end();

    //BLImageCodec codec;
    //codec.findByName("BMP");
    //img.writeToFile("bl-getting-started-2.bmp", codec);

    cv::Mat cvImg(img.height(), img.width(), CV_8UC4, img.impl->pixelData);
    cv::imshow("res", cvImg);
    cv::waitKey(0);
    return 0;
}
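As an aside, OpenCV itself can approximate sub-pixel drawing without an extra library: the drawing functions take a fixed-point shift parameter. A minimal sketch (my own example, not part of the Blend2D answer above; with shift = 4, coordinates are interpreted in 1/16-pixel units):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::Mat::zeros(480, 480, CV_8UC3);

    const int shift = 4;           // 4 fractional bits -> 1/16-pixel resolution
    const float scale = 1 << shift;

    // Float corner coordinates, rounded into fixed-point integers.
    cv::Point p1(cvRound(40.25f * scale), cvRound(40.75f * scale));
    cv::Point p2(cvRound(400.5f * scale), cvRound(400.5f * scale));

    cv::rectangle(img, p1, p2, cv::Scalar(0, 0, 255), 1, cv::LINE_AA, shift);
    cv::imshow("sub-pixel rect", img);
    cv::waitKey(0);
    return 0;
}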

OpenGL in Dart not showing triangle

I've written a simple "hello world triangle" in OpenGL/C++, and it works perfectly. Then, once I was sure the code worked, I decided to switch to Dart and convert all my code.
These are the plugins that I've used: OpenGL - GLFW (mingw x64 dll)
And this is the OpenGL code without all the GLFW abstraction:
int shader;

@override
void onCreate() {
  List<double> pos = [
    -0.5, -0.5,
    0.0, 0.5,
    0.5, -0.5,
  ];
  Pointer<Float> pPos = allocate<Float>(count: pos.length);
  for(int i = 0; i < pos.length; i++)
    pPos.elementAt(i).value = pos[i];

  Pointer<Uint32> buffer = allocate();
  glGenBuffers(1, buffer);
  glBindBuffer(GL_ARRAY_BUFFER, buffer.value);
  glBufferData(GL_ARRAY_BUFFER, 6 * sizeOf<Float>(), pPos, GL_STATIC_DRAW);

  glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeOf<Float>(), 0);
  glEnableVertexAttribArray(0);

  String vs = """
#version 330 core
layout(location = 0) in vec4 position;
void main()
{
  gl_Position = position;
}
""";
  String fs = """
#version 330 core
layout(location = 0) out vec4 color;
void main()
{
  color = vec4(1.0, 0.0, 0.0, 1.0);
}
""";

  shader = _createShader(vs, fs);
}

@override
void onUpdate() {
  glDrawArrays(GL_TRIANGLES, 0, 3);
}
These are the functions to compile the shaders:
int _compileShader(String source, int type) {
  int id = glCreateShader(type);
  Pointer<Pointer<Utf8>> ptr = allocate<Pointer<Utf8>>();
  ptr.elementAt(0).value = NativeString.fromString(source).cast();
  glShaderSource(id, 1, ptr, nullptr);
  glCompileShader(id);

  Pointer<Int32> result = allocate<Int32>();
  glGetShaderiv(id, GL_COMPILE_STATUS, result);
  if(result.value == GL_FALSE) {
    Pointer<Int32> length = allocate<Int32>();
    glGetShaderiv(id, GL_INFO_LOG_LENGTH, length);
    Pointer<Utf8> message = allocate<Utf8>(count: length.value);
    glGetShaderInfoLog(id, length.value, length, message);
    glDeleteShader(id);
    print("Failed to compile ${type == GL_VERTEX_SHADER ? "vertex" : "fragment"} shader");
    print(message.ref);
    return 0;
  }
  return id;
}

int _createShader(String vertexShader, String fragmentShader) {
  int program = glCreateProgram();
  int vs = _compileShader(vertexShader, GL_VERTEX_SHADER);
  int fs = _compileShader(fragmentShader, GL_FRAGMENT_SHADER);

  glAttachShader(program, vs);
  glAttachShader(program, fs);
  glLinkProgram(program);
  glValidateProgram(program);

  glDeleteShader(vs);
  glDeleteShader(fs);
  return program;
}
Since this exact code works perfectly in C++, the only thing I can think of is that I've messed up some pointer with ffi. Can you find any mistake I don't see?
EDIT:
One could think that OpenGL cannot interact with GLFW, but glClear does work.
This is the GitHub repository if you need to investigate more code.
I've been doing something very similar and have the same problem. I thought I'd tracked it down to the call to glDrawArrays, which crashed with a blue screen of death on Windows and a "thread stuck in device driver" error message. However, with further investigation I believe it is caused by the attribute array:
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeOf<Float>(), 0);
glEnableVertexAttribArray(0);
Commenting out those lines lets it run without crashing, but no triangle appears. Uncommenting results in the BSOD.
I also found an interesting post from 2013 which appears to be relevant: Link
That's as far as I know at the moment.
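One more thing that may be worth checking (my own guess, not confirmed in this thread; sketched in C++ for brevity since that is where the equivalent code is known to work): a 3.3 core-profile context requires a vertex array object to be bound before glVertexAttribPointer and glDrawArrays are legal, and drivers handle the missing-VAO case inconsistently.

// Hedged sketch: create and bind a VAO once during setup, before configuring
// vertex attributes. Without a bound VAO, core profile rejects these calls.
GLuint vao = 0;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// ... then glBindBuffer / glBufferData / glVertexAttribPointer as in the question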

Having issues with opencv/dnn with caffe model

I am trying to make use of the OpenPose example in OpenCV using a Caffe model and opencv/dnn.hpp.
The tutorial I have been following: https://www.learnopencv.com/deep-learning-based-human-pose-estimation-using-opencv-cpp-python/
We require 2 files for the network, as said in the tutorial:
1 - prototxt - https://github.com/spmallick/learnopencv/blob/master/OpenPose/pose/coco/pose_deploy_linevec.prototxt
2 - caffe model - posefs1.perception.cs.cmu.edu/OpenPose/models/pose/coco/pose_iter_440000.caffemodel
The ROS node that I made following the tutorial:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/dnn/dnn.hpp>
#include <sensor_msgs/image_encodings.h>
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <iostream>

using namespace std;
using namespace cv;
using namespace cv::dnn;

static const std::string OPENCV_WINDOW = "Image window";

#define COCO
#ifdef COCO
const int POSE_PAIRS[17][2] =
{
    {1,2}, {1,5}, {2,3},
    {3,4}, {5,6}, {6,7},
    {1,8}, {8,9}, {9,10},
    {1,11}, {11,12}, {12,13},
    {1,0}, {0,14},
    {14,16}, {0,15}, {15,17}
};
static const std::string protoFile = "pose/coco/pose_deploy_linevec.prototxt";
static const std::string weightsFile = "pose/coco/pose_iter_440000.caffemodel";
int nPoints = 18;
#endif

class ImageConverter
{
    ros::NodeHandle nh_;
    image_transport::ImageTransport it_;
    image_transport::Subscriber image_sub_;

public:
    ImageConverter()
        : it_(nh_)
    {
        image_sub_ = it_.subscribe("/zed/rgb/image_raw_color", 1, &ImageConverter::imageCb, this);
    }

    ~ImageConverter()
    {
        cv::destroyWindow(OPENCV_WINDOW);
    }

    void imageCb(const sensor_msgs::ImageConstPtr& msg)
    {
        cv_bridge::CvImagePtr cv_ptr;
        try
        {
            cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
        }
        catch (cv_bridge::Exception& e)
        {
            ROS_ERROR("cv_bridge exception: %s", e.what());
            return;
        }
        if (cv_ptr->image.rows > 60 && cv_ptr->image.cols > 60)
            detect_people(cv_ptr->image);
        cv::waitKey(3);
    }

    void detect_people(cv::Mat msg)
    {
        int inWidth = msg.cols;
        int inHeight = msg.rows;
        float thresh = 0.1;

        cv::Mat frame;
        msg.copyTo(frame);
        cv::Mat frameCopy = frame.clone();
        int frameWidth = frame.cols;
        int frameHeight = frame.rows;

        cv::dnn::Net net = cv::dnn::readNetFromCaffe("pose_deploy_linevec.prototxt", "pose_iter_440000.caffemodel");
        cv::Mat inpBlob = blobFromImage(frame, 1.0/255, cv::Size(inWidth, inHeight), cv::Scalar(0, 0, 0), false, false);
        net.setInput(inpBlob);
        cv::Mat output = net.forward();

        int H = output.size[2];
        int W = output.size[3];

        std::vector<cv::Point> points(nPoints);
        for (int n = 0; n < nPoints; n++)
        {
            // Probability map of corresponding body's part.
            cv::Mat probMap(H, W, CV_32F, output.ptr(0, n));
            cv::Point2f p(-1, -1);
            cv::Point maxLoc;
            double prob;
            cv::minMaxLoc(probMap, 0, &prob, 0, &maxLoc);
            if (prob > thresh)
            {
                p = maxLoc;
                p.x *= (float)frameWidth / W;
                p.y *= (float)frameHeight / H;
                cv::circle(frameCopy, cv::Point((int)p.x, (int)p.y), 8, Scalar(0,255,255), -1);
                cv::putText(frameCopy, cv::format("%d", n), cv::Point((int)p.x, (int)p.y), cv::FONT_HERSHEY_COMPLEX, 1, cv::Scalar(0, 0, 255), 2);
            }
            points[n] = p;
        }

        int nPairs = sizeof(POSE_PAIRS)/sizeof(POSE_PAIRS[0]);
        for (int n = 0; n < nPairs; n++)
        {
            // lookup 2 connected body/hand parts
            Point2f partA = points[POSE_PAIRS[n][0]];
            Point2f partB = points[POSE_PAIRS[n][1]];
            if (partA.x <= 0 || partA.y <= 0 || partB.x <= 0 || partB.y <= 0)
                continue;
            cv::line(frame, partA, partB, cv::Scalar(0,255,255), 8);
            cv::circle(frame, partA, 8, cv::Scalar(0,0,255), -1);
            cv::circle(frame, partB, 8, cv::Scalar(0,0,255), -1);
        }
        cv::imshow("Output-Skeleton", frame);
    }
};

int main(int argc, char** argv)
{
    ros::init(argc, argv, "image_converter");
    ros::NodeHandle nh_;
    ros::Publisher pub;
    ImageConverter ic;
    ros::spin();
    return 0;
}
The code compiles without any errors, but when I run the node it gives the following error message:
error - OpenCV Error: Unspecified error (FAILED: fs.is_open(). Can't open "pose_deploy_linevec.prototxt") in ReadProtoFromTextFile, file /tmp/binarydeb/ros-kinetic-opencv3-3.3.1/modules/dnn/src/caffe/caffe_io.cpp, line 1119
terminate called after throwing an instance of 'cv::Exception'
what(): /tmp/binarydeb/ros-kinetic-opencv3-3.3.1/modules/dnn/src/caffe/caffe_io.cpp:1119: error: (-2) FAILED: fs.is_open(). Can't open "pose_deploy_linevec.prototxt" in function ReadProtoFromTextFile
Aborted (core dumped)
Please help me solve this issue.
This issue probably affects Windows users only.
Solve the issue by:
Using/adding the absolute path when referring to the prototxt file.
Add the extension too. For example:
"pose/coco/pose_deploy_linevec.prototxt.txt"
Spent 3 hours debugging this myself. Hope it helps someone else.
You are selecting the wrong file path.
Just replace this line:
static const std::string protoFile = "pose/coco/pose_deploy_linevec.prototxt";
with the path of the prototxt file on your laptop, like this:
static const std::string protoFile = "C:/Users/lenovo/Desktop/learnopencv-master/OpenPose/pose/coco/pose_deploy_linevec.prototxt";
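On Linux/ROS, another option (my own suggestion, not from the answers above) is to resolve the files relative to your package at runtime with ros::package::getPath from roslib; the package name my_pose_pkg below is hypothetical:

#include <ros/package.h> // requires a roslib dependency in package.xml

// "my_pose_pkg" is a hypothetical package name; substitute your own.
const std::string pkg = ros::package::getPath("my_pose_pkg");
const std::string protoFile   = pkg + "/pose/coco/pose_deploy_linevec.prototxt";
const std::string weightsFile = pkg + "/pose/coco/pose_iter_440000.caffemodel";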

Convert OpenCV Mat to Frame type of Realsense in Librealsense SDK

I have RGB data as an rs2::frame. I converted it to cv::Mat and sent it via a TCP connection; on the server (receiver) side I am storing the buffer in a cv::Mat. My question is: how can I convert a cv::Mat back to an rs2::frame on the receiver side, so I can use the SDK functions that expect the rs2::frame type?
You need to simulate a software device in order to have an rs2::frame.
Following this example you can write your own class that creates the synthetic streams, taking data from the cv::Mat instances.
For example, here's something I have done to solve the problem.
rsImageConverter.h
#include <librealsense2/rs.hpp>
#include <librealsense2/hpp/rs_internal.hpp>

class rsImageConverter
{
public:
    rsImageConverter(int w, int h, int bpp);
    bool convertFrame(uint8_t* depth_data, uint8_t* color_data);
    rs2::frame getDepth() const;
    rs2::frame getColor() const;

private:
    int w = 640;
    int h = 480;
    int bpp = 2;
    rs2::software_device dev;
    rs2::software_sensor depth_sensor;
    rs2::software_sensor color_sensor;
    rs2::stream_profile depth_stream;
    rs2::stream_profile color_stream;
    rs2::syncer syncer;
    rs2::frame depth;
    rs2::frame color;
    int ind = 0;
};
rsImageConverter.cpp
#include "rsImageConverter.h"

rsImageConverter::rsImageConverter(int w, int h, int bpp) :
    w(w),
    h(h),
    bpp(bpp),
    depth_sensor(dev.add_sensor("Depth")), // initializing depth sensor
    color_sensor(dev.add_sensor("Color"))  // initializing color sensor
{
    rs2_intrinsics depth_intrinsics{ w, h, (float)(w / 2), (float)(h / 2), (float) w, (float) h, RS2_DISTORTION_BROWN_CONRADY, { 0,0,0,0,0 } };
    depth_stream = depth_sensor.add_video_stream({ RS2_STREAM_DEPTH, 0, 0,
                                                   w, h, 60, bpp,
                                                   RS2_FORMAT_Z16, depth_intrinsics });
    depth_sensor.add_read_only_option(RS2_OPTION_DEPTH_UNITS, 0.001f); // setting depth units option to the virtual sensor

    rs2_intrinsics color_intrinsics = { w, h,
                                        (float)w / 2, (float)h / 2,
                                        (float)w / 2, (float)h / 2,
                                        RS2_DISTORTION_BROWN_CONRADY, { 0,0,0,0,0 } };
    color_stream = color_sensor.add_video_stream({ RS2_STREAM_COLOR, 0, 1, w,
                                                   h, 60, bpp,
                                                   RS2_FORMAT_RGB8, color_intrinsics });

    dev.create_matcher(RS2_MATCHER_DLR_C); // create the matcher with the RGB frame
    depth_sensor.open(depth_stream);
    color_sensor.open(color_stream);
    depth_sensor.start(syncer);
    color_sensor.start(syncer);
    depth_stream.register_extrinsics_to(color_stream, { { 1,0,0,0,1,0,0,0,1 }, { 0,0,0 } });
}

bool rsImageConverter::convertFrame(uint8_t* depth_data, uint8_t* color_data)
{
    depth_sensor.on_video_frame({ depth_data,   // Frame pixels
                                  [](void*) {}, // Custom deleter (if required)
                                  w*bpp, bpp,   // Stride and Bytes-per-pixel
                                  (rs2_time_t)ind * 16, RS2_TIMESTAMP_DOMAIN_HARDWARE_CLOCK, ind, // Timestamp, Frame# for potential sync services
                                  depth_stream });
    color_sensor.on_video_frame({ color_data,   // Frame pixels from capture API
                                  [](void*) {}, // Custom deleter (if required)
                                  w*bpp, bpp,   // Stride and Bytes-per-pixel
                                  (rs2_time_t)ind * 16, RS2_TIMESTAMP_DOMAIN_HARDWARE_CLOCK, ind, // Timestamp, Frame# for potential sync services
                                  color_stream });
    ind++;

    rs2::frameset fset = syncer.wait_for_frames();
    depth = fset.first_or_default(RS2_STREAM_DEPTH);
    color = fset.first_or_default(RS2_STREAM_COLOR);
    return (depth && color); // return true if everything went well
}

rs2::frame rsImageConverter::getDepth() const
{
    return depth;
}

rs2::frame rsImageConverter::getColor() const
{
    return color;
}
And then you can use it like this (assuming depth and rgb are two cv::Mat instances, where depth has been converted to CV_16U and rgb is CV_8UC3 after a BGR-to-RGB conversion):
rsImageConverter* converter = new rsImageConverter(640, 480, 2);
...
if (converter->convertFrame(depth.data, rgb.data))
{
    rs2::frame rs2depth = converter->getDepth();
    rs2::frame rs2rgb = converter->getColor();
    ... // Here you use these frames
}
By the way, I designed this class to handle both depth and RGB. To convert only one of these you can simply pass an empty frame as the other argument, or change the class.
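For reference, a hedged sketch of the conversions that usage assumes (the names depthFloat and bgr are mine, not from the answer):

// depthFloat: CV_32F depth in meters; bgr: CV_8UC3 image as loaded by cv::imread.
cv::Mat depth16, rgb;
depthFloat.convertTo(depth16, CV_16U, 1000.0); // meters -> millimeters, matching the 0.001 depth-units option
cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB);     // RS2_FORMAT_RGB8 expects RGB order, imread gives BGR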
