I am currently trying to use ROS' cv_bridge to convert between OpenCV Mat images and ROS sensor_msgs/Image messages. I am not posting this question on the ROS answer site but here, because I've already read in this answer that cv_bridge apparently does not fill in or carry over the ROS Image header (with the timestamp) during this conversion.
So my remaining question is more on the OpenCV's side:
Do OpenCV Mat images have any timestamp info embedded in them? If so, how can I access it?
OpenCV Mat images don't have any timing info built into them. You can see the class reference for them here.
You can, however, get a timestamp from your video capture source. VideoCapture has a property, CAP_PROP_POS_MSEC, that returns the position of the current frame in the video source, in milliseconds. You can put that into your ROS message header, although you might have to do some extra work to convert the video time into the same timebase ROS uses, as in the sketch below.
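For example, a rough sketch of what that could look like with rospy and cv_bridge (the topic name, video path, and the naive timebase offset are assumptions you would adapt):

```python
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

rospy.init_node("video_publisher")
pub = rospy.Publisher("/camera/image_raw", Image, queue_size=10)  # example topic
bridge = CvBridge()

cap = cv2.VideoCapture("input.mp4")
start_time = rospy.Time.now()  # naive timebase: treat the video start as "now"

while cap.isOpened() and not rospy.is_shutdown():
    ok, frame = cap.read()
    if not ok:
        break
    # Position of the current frame within the video, in milliseconds
    pos_msec = cap.get(cv2.CAP_PROP_POS_MSEC)

    msg = bridge.cv2_to_imgmsg(frame, encoding="bgr8")
    # cv_bridge does not fill the header, so set the stamp yourself
    msg.header.stamp = start_time + rospy.Duration(pos_msec / 1000.0)
    pub.publish(msg)

cap.release()
```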
I would like to generate a point cloud from stereo videos using the ZED SDK from Stereolabs.
What I have now is some rosbags with left and right images (and other data from different sensors).
My problem comes when I extract the images and create videos from them: what I get with ffmpeg are videos in a standard format (e.g. .mp4), but the ZED SDK needs the .svo format, and I don't know how to generate it.
Is there some way to obtain .svo videos from rosbags?
Also, I would like to ask: once I get the .svo files, how could I generate the point cloud with the SDK if I am not able to use a graphical interface? I am working on a DGX workstation using ROS (Melodic, Ubuntu 18.04) in Docker, and I have not been able to get rviz or any other graphical tool to work inside the Docker image, so I think I need to automate the point cloud generation, but I don't know how.
I have to say that this is my first project using ROS, the ZED SDK and Docker, so that's why I am asking these (maybe) basic questions.
Thank you in advance.
You can't. The .svo file format is a proprietary format that can only be recorded with a ZED camera through their SDK (or wrapper), and can only be read and exported by that same SDK/wrapper.
To provide some helpful direction: essentially all of the functionality and processing you would get out of the images via the SDK's features can also be done with trusted, open-source, third-party community projects. Examples include OpenCV (which bundles many AI/DNN algorithms for object detection, pose estimation and 3D world reconstruction), PCL, their ROS wrappers, or other excellent algorithms whose chief API and reference is their ROS node. A rough stereo-reconstruction sketch with OpenCV is shown below.
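For instance, if you extract rectified left/right image pairs from your rosbag, a headless (no GUI) point-cloud sketch with plain OpenCV could look like this; the file names, the SGBM parameters and especially the Q matrix are assumptions you would replace with your own calibration:

```python
import cv2
import numpy as np

# Rectified left/right grayscale images extracted from the rosbag (placeholder names)
left = cv2.imread("left_0001.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_0001.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; these are generic starting parameters
stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # must be divisible by 16
    blockSize=5,
    P1=8 * 5 ** 2,
    P2=32 * 5 ** 2,
)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Q is the 4x4 disparity-to-depth matrix from your stereo calibration
# (e.g. from cv2.stereoRectify or the ZED factory calibration); identity is only a placeholder
Q = np.eye(4, dtype=np.float32)
points = cv2.reprojectImageTo3D(disparity, Q)

# Keep valid disparities only and write an ASCII .ply - no graphical tool required
mask = disparity > 0
xyz = points[mask]
with open("cloud.ply", "w") as f:
    f.write("ply\nformat ascii 1.0\nelement vertex %d\n" % len(xyz))
    f.write("property float x\nproperty float y\nproperty float z\nend_header\n")
    np.savetxt(f, xyz, fmt="%.4f")
```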
The Raspberry Pi Camera v1 contains an OmniVision OV5647 sensor, which offers up to 10-bit raw RGB data. Using OpenCV's cvQueryFrame I get only 8-bit data. I am only interested in grayscale imagery - how do I get 10-bit data?
There may be simpler options available, but here are a couple of possible ideas. I have not coded or tested either, like I normally would - sorry.
Option 1.
Use "Video for Linux" (v4l2) and open the camera, do the ioctl()s and manage the buffers yourself - great link here.
Option 2.
Use popen() to start raspistill with the raw option (--raw) and grab the raw data appended to the end of the JPEG, with information on Bayer decoding from - here. Other, somewhat simpler to follow information is available at section 5.11 here.
Assuming you want to capture RAW data from still images and not necessarily video, you have 2 options I know of:
Option 1: picamera
picamera is a Python library that will let you capture the raw Bayer data to a stream. Be sure to read the docs, as it's pretty tricky to work with; a rough sketch follows below.
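Something along these lines, based on picamera's PiBayerArray helper (untested here, so treat it as a starting point):

```python
import picamera
import picamera.array

with picamera.PiCamera() as camera:
    with picamera.array.PiBayerArray(camera) as stream:
        # bayer=True appends the raw 10-bit Bayer data to the JPEG;
        # PiBayerArray unpacks it into a 16-bit numpy array for you
        camera.capture(stream, "jpeg", bayer=True)
        raw = stream.array          # raw Bayer data, 10-bit values stored in uint16
        rgb = stream.demosaic()     # naive demosaic, if you want an RGB/grayscale view
```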
Option 2: raspistill
You can also shell out to raspistill to capture your image file, and then process that however you want - if you want to process the raw data (captured with raspistill --raw), you can use picamraw on- or off-board the Pi. A rough sketch of this follows at the end of this answer.
Even though we're a heavily Python shop, my team went with option 2 (in combination with picamraw, which we released ourselves) because picamera was not stable enough.
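A minimal shell-out sketch for option 2; the 'BRCM' marker and the unpacking details are assumptions about how the raw block is appended to the JPEG, and picamraw exists precisely to hide them from you:

```python
import subprocess
import numpy as np

# Capture a JPEG with the raw Bayer block appended to it
subprocess.run(["raspistill", "--raw", "-o", "capture.jpg"], check=True)

with open("capture.jpg", "rb") as f:
    data = f.read()

# The appended raw block starts with a Broadcom header near the tail of the file
offset = data.rfind(b"BRCM")
if offset == -1:
    raise RuntimeError("No raw Bayer block found - was --raw passed?")
raw_block = np.frombuffer(data[offset:], dtype=np.uint8)
# From here you still have to strip the header and unpack the packed 10-bit
# Bayer values; picamraw (or the recipe in the picamera docs) does exactly that.
```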
Is it possible to process (filter) HDR images through Core Image? I couldn't find much documentation on this, so I was wondering if someone possibly had an answer to it. I do know that it is possible to do the working-space computations with RGBAh when you initialize a CIContext, so I figured that if we can do computations with floating-point image formats, it should be possible.
If it is not possible, what are the alternatives for producing HDR effects on iOS?
EDIT: I thought I'd try to be a bit more concise. It is my understanding that HDR images can be clamped and saved as .jpg, .png, and other image formats by clamping the pixel values. However, I'm more interested in doing tone mapping through Core Image on an HDR image that has not been converted yet. The issue is creating a CIImage from an HDR image, supposedly one with the .hdr extension.
EDIT 2: Maybe it would be useful to use CGImageCreate, along with CGDataProviderCreateWithFilename?
I hope you have a basic understanding of how HDR works. An HDR file is generated by capturing two or more images at different exposures and combining them. So even if there were something like an .HDR file, it would be a container format with more than one JPEG in it. Technically you cannot give two image files at once as input to a generic CIFilter.
And in iOS, as I remember, it's not possible to access the original set of photos behind an HDR capture, only the processed final output. Even if you could, you'd have to do the HDR merge manually and generate a single HDR png/jpg anyway before feeding it to a CIFilter.
Since there are people asking for a Core Image HDR algorithm, I decided to share my code on GitHub. See:
https://github.com/schulz0r/CoreImage-HDR
It implements the Robertson HDR algorithm, so you cannot use RAW images. Please see the unit tests if you want to know how to get the camera response and obtain the HDR image. Core Image saturates pixel values outside [0.0 ... 1.0], so the HDR result is scaled into that interval.
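If you just want to see what Robertson merging produces without setting up Metal, OpenCV's photo module ships an implementation of the same algorithm; a minimal Python sketch (file names and exposure times are example values):

```python
import cv2
import numpy as np

# Bracketed LDR captures and their exposure times in seconds (example values)
images = [cv2.imread(p) for p in ("ev_minus2.jpg", "ev_0.jpg", "ev_plus2.jpg")]
times = np.array([1 / 200.0, 1 / 50.0, 1 / 12.5], dtype=np.float32)

# Recover the camera response curve, then merge into a float32 HDR image
calibrate = cv2.createCalibrateRobertson()
response = calibrate.process(images, times)
merge = cv2.createMergeRobertson()
hdr = merge.process(images, times, response)

cv2.imwrite("result.hdr", hdr)  # Radiance .hdr preserves the float range
```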
Coding with Metal always leads to messy code for me, so I decided to use MetalKitPlus, which you have to include in your project. You can find it here:
https://github.com/LRH539/MetalKitPlus
I think you have to check out the dev/v2.0.0 branch. I will merge this into master in the future.
edit: Just clone the master branch of MetalKitPlus. Also, I added a more detailed description to my CI-HDR project.
You can now (iOS 10+) capture RAW images (encoded on 12 bits) and then filter them the way you like using CIFilter. You might not get a dynamic range as wide as the one you get with bracketed captures; nevertheless, it is still wider than capturing 8-bit images.
Check Apple's documentation for capturing and processing RAW images.
I also recommend you watch Apple's WWDC 2016 video (skip to the RAW processing part).
I have a .swf file which I want to load in OpenCV, overlay on a camera stream, and display to the user. So far I have not found a solution through a simple Google search. I would appreciate it if anyone has an idea how to approach this.
Thanks
OpenCV doesn't deal with .swf files, so you need to use some other technology like FFMPEG or GStreamer to retrieve the frames and decode them to BGR to be able to create a valid IplImage (or cv::Mat if you are interested in the C++ interface). A rough sketch of the FFMPEG route is shown below.
GStreamer also provides a simple mechanism to stream video over the network.
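Here is what the FFMPEG route could look like from Python, piping decoded BGR frames into numpy/OpenCV. The file name and frame size are assumptions (you could query the real size with ffprobe), and whether ffmpeg can decode your particular .swf depends on what is embedded in it:

```python
import subprocess
import numpy as np
import cv2

WIDTH, HEIGHT = 640, 480  # assumed frame size of the .swf; probe it with ffprobe

# Ask ffmpeg to decode the .swf and write raw BGR frames to stdout
proc = subprocess.Popen(
    ["ffmpeg", "-i", "overlay.swf", "-f", "rawvideo", "-pix_fmt", "bgr24", "-"],
    stdout=subprocess.PIPE,
)

frame_size = WIDTH * HEIGHT * 3
while True:
    buf = proc.stdout.read(frame_size)
    if len(buf) < frame_size:
        break
    frame = np.frombuffer(buf, dtype=np.uint8).reshape(HEIGHT, WIDTH, 3)
    # 'frame' is now an ordinary BGR image you can blend over your camera stream
    cv2.imshow("swf frame", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

proc.terminate()
cv2.destroyAllWindows()
```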
I need some ideas about how to stream a video feed coming from OpenCV to a webpage. I currently have GStreamer, but I don't know if it is the right tool for the job. Any advice on using GStreamer, or any hyperlinks to tutorials, would be helpful and appreciated!
Thanks!
OpenCV doesn't provide an interface for streaming video, which means that you'll need to use some other technology for this purpose.
I've used GStreamer in several professional projects: this is the droid you are looking for.
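As a starting point, if your OpenCV build has GStreamer support, you can push frames into a GStreamer pipeline directly from cv2.VideoWriter. A rough sketch (the pipeline, port and camera index are assumptions to adapt; a browser cannot play a raw MPEG-TS/TCP stream by itself, so you would still put an HLS- or WebRTC-capable element or server in front of it):

```python
import cv2

cap = cv2.VideoCapture(0)  # camera index is just an example
fps, width, height = 30, 640, 480

# appsrc receives the OpenCV frames; the rest encodes them and serves an MPEG-TS stream over TCP
pipeline = (
    "appsrc ! videoconvert ! x264enc tune=zerolatency bitrate=800 "
    "! mpegtsmux ! tcpserversink host=0.0.0.0 port=8080"
)
writer = cv2.VideoWriter(pipeline, cv2.CAP_GSTREAMER, 0, float(fps), (width, height))

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (width, height))
    writer.write(frame)

writer.release()
cap.release()
```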
I do not have any experience with streaming OpenCV output to a website. However, I'm sure this is possible using GStreamer.
Using a GStreamer stream, it is possible to get the data and convert it into OpenCV format. I recommend you read up on GstAppSink and GstBuffer.
Basically, if I remember correctly, you must run the pipeline in a background thread. Then, using one of the gst_app_sink functions, you can pull the buffer data from the sink.
A quick lookup on the issue suggests you had to use GST_BUFFER_DATA for this (that is the GStreamer 0.10 API; in GStreamer 1.x you map or extract the buffer instead).
I remember having to convert the result from YCbCr to BGR; a colleague had problems because OpenCV's conversion was inadequate, so you might have to write your own. (This was back in the IplImage* days.)
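For reference, the same idea with today's GStreamer 1.x Python bindings; the videotestsrc pipeline is just a stand-in for your real source, and row padding is ignored for simplicity:

```python
import numpy as np
import cv2
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Example pipeline: replace videotestsrc with your actual source
pipeline = Gst.parse_launch(
    "videotestsrc ! videoconvert ! video/x-raw,format=BGR ! appsink name=sink"
)
sink = pipeline.get_by_name("sink")
pipeline.set_state(Gst.State.PLAYING)

for _ in range(100):
    sample = sink.emit("pull-sample")          # blocks until a buffer is available
    if sample is None:
        break
    caps = sample.get_caps().get_structure(0)
    width, height = caps.get_value("width"), caps.get_value("height")
    buf = sample.get_buffer()
    data = buf.extract_dup(0, buf.get_size())  # the 1.x replacement for GST_BUFFER_DATA
    frame = np.frombuffer(data, dtype=np.uint8).reshape(height, width, 3)
    cv2.imwrite("frame.png", frame)            # or hand it to the rest of your OpenCV code

pipeline.set_state(Gst.State.NULL)
```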