opencv VideoCapture.set greyscale?

I would like to avoid converting each frame taken by the video camera with cvtColor(frame, image, CV_RGB2GRAY);
Is there any way to set VideoCapture to grab frames directly in greyscale?
Example:
VideoCapture cap(0);
cap.set(CV_CAP_PROP_FRAME_WIDTH,420);
cap.set(CV_CAP_PROP_FRAME_HEIGHT,340);
cap.set(CV_CAP_GREYSCALE,1); //< ???

If your camera supports YUV420 then you could just take the Y channel:
http://en.wikipedia.org/wiki/YUV
How to do that is well explained here:
Access to each separate channel in OpenCV
Warning: the Y channel might not be the first Mat you get with split(), so do an imshow() of each of them separately and choose the one that looks like the "real" grey image. The others will be way out of contrast, so it'll be obvious. For me it was the second Mat.
Usually any camera should be able to do YUV420, since sending frames directly in RGB is slower; YUV is used by pretty much every camera. :)
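For illustration, a minimal sketch of that split()/imshow() inspection, assuming the capture is actually handing you YUV planes rather than BGR:

#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

cv::VideoCapture cap(0);
cv::Mat frame;
cap >> frame;

std::vector<cv::Mat> planes;
cv::split(frame, planes); // one single-channel Mat per plane
for (size_t i = 0; i < planes.size(); ++i)
    cv::imshow("plane " + std::to_string(i), planes[i]);
cv::waitKey(0); // keep the plane that looks like a real grey image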

This is impossible. Here's the list of all property codes:
CV_CAP_PROP_POS_MSEC - position in milliseconds from the file beginning
CV_CAP_PROP_POS_FRAMES - position in frames (only for video files)
CV_CAP_PROP_POS_AVI_RATIO - position in relative units (0 - start of the file, 1 - end of the file)
CV_CAP_PROP_FRAME_WIDTH - width of frames in the video stream (only for cameras)
CV_CAP_PROP_FRAME_HEIGHT - height of frames in the video stream (only for cameras)
CV_CAP_PROP_FPS - frame rate (only for cameras)
CV_CAP_PROP_FOURCC - 4-character code of codec (only for cameras).
Or (if your camera allows it, using some utility) you can set up the camera itself to deliver a grayscale image.
To convert a color image to grayscale, call cvtColor with the code CV_BGR2GRAY. This shouldn't take much time.
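A minimal sketch of that per-frame conversion, for reference:

#include <opencv2/opencv.hpp>

cv::VideoCapture cap(0);
cv::Mat frame, gray;
cap >> frame;                            // frames arrive as BGR by default
cv::cvtColor(frame, gray, CV_BGR2GRAY);  // one cheap conversion per frame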

This is not possible if you use v4l (the default cv capture method on desktop Linux). The CV_CAP_PROP_FORMAT property exists but is simply ignored. You have to convert the images to grayscale manually. If your device supports it, you may want to modify cap_v4l.cpp so that it asks v4l for a grayscale format directly.
On Android this is possible with the following native code (for the 0th device):
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/highgui/highgui_c.h>

cv::VideoCapture camera(0); // the constructor already opens device 0
cv::Mat dest(480, 640, CV_8UC1);
if (camera.grab())
    camera.retrieve(dest, CV_CAP_ANDROID_GREY_FRAME);
Here, passing CV_CAP_ANDROID_GREY_FRAME to the channel parameter of cv::VideoCapture::retrieve(cv::Mat,int) causes the YUV NV21 (a.k.a. yuv420sp) image to be color converted to grayscale. This is just a mapping of the Y channel to the grayscale image, which does not involve any actual conversion or memcpy, and is therefore very fast. You can check this behavior in https://github.com/Itseez/opencv/blob/master/modules/videoio/src/cap_android.cpp#L407 and the "color conversion" in https://github.com/Itseez/opencv/blob/master/modules/videoio/src/cap_android.cpp#L511. I agree that this behavior is not documented at all and is very awkward, but it saved a lot of CPU time for me.

If you use <raspicam/raspicam_cv.h> you can do it.
You need to open a device like this:
RaspiCam_Cv m_rapiCamera;
Set any parameters you need using the code below:
m_rapiCamera.set(CV_CAP_PROP_FORMAT, CV_8UC1);
And then open the stream:
m_rapiCamera.open();
And you will get only one channel.
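Putting those pieces together with a grab/retrieve pair (a sketch against the raspicam API used above; error checking omitted):

RaspiCam_Cv m_rapiCamera;
m_rapiCamera.set(CV_CAP_PROP_FORMAT, CV_8UC1); // ask for single-channel frames
m_rapiCamera.open();

cv::Mat gray;
m_rapiCamera.grab();
m_rapiCamera.retrieve(gray); // gray now holds one 8-bit channel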
Good luck!

Related

Image Capture & Analysis iOS

I'm having a bit of a headache with this. I am using the iPhone camera to capture live images using AVFoundation. In the captureOutput function I am creating a UIImage from the sampleBuffer as per Apple's developer notes; if I save this image to the camera roll I can view it and it looks as expected (I am not doing this every time captureOutput is called!). However, I do not want to save it; instead I want to look at some of its pixel values.
So, again using Apple's notes, I get pixel values at given X and Y points. I also noted that the colour order is BGRA (not RGBA), so I can read these values, and they all look OK, only they appear to be wrong...
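For what it's worth, reading one BGRA pixel out of a raw buffer normally looks like this (a sketch; base and bytesPerRow stand in for the values you would get from CVPixelBufferGetBaseAddress and CVPixelBufferGetBytesPerRow after locking the buffer):

#include <stddef.h>
#include <stdint.h>

/* Read the BGRA components of the pixel at (x, y). */
static void read_bgra(const uint8_t *base, size_t bytesPerRow, size_t x, size_t y,
                      uint8_t *r, uint8_t *g, uint8_t *b, uint8_t *a)
{
    const uint8_t *px = base + y * bytesPerRow + x * 4; /* 4 bytes per pixel */
    *b = px[0]; *g = px[1]; *r = px[2]; *a = px[3];     /* note the B-G-R-A order */
}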
If I save the exact same image to the camera roll, email it to myself, then put it into my Xcode project so I can create a UIImage from it in my app and pass it through the exact same set of methods, I get a different set of figures for the pixel RGB data (I did switch back to RGBA for this image, but even allowing for BGRA being wrong, the numbers don't match). To confirm, I wrote the same routines on a PC in C#, used the same image, and got the same figures.
Is there a difference between a UIImage created in memory and one that is then saved and reloaded?
Thanks for any advice!

How to find the frame type (I-frame, P-frame, B-frame)?

I am working in OpenCV 2.4.7. Is there any function to determine whether the captured frame is an I-frame, P-frame or B-frame?
If not, which libraries must I use to identify the frame types?
Caveat - I don't use opencv.
There doesn't seem to be a well-documented library function for it, but would either of these at least suggest a route?
How to get number of I/P/B frames of a video file?
OpenCV. How to identify i-frames only for a video file encoded in MPEG format
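Not an OpenCV answer either, but if FFmpeg's ffprobe is available it can list each frame's picture type directly; a hedged example (the file name is a placeholder):

ffprobe -select_streams v:0 -show_frames -show_entries frame=pict_type input.mp4

Each frame is then reported with pict_type=I, P or B.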

OpenCV + Linux + badly supported camera = wrong pixel format

I'm trying to grab frames from a webcam using OpenCV. I also tried 'cheese'. Both give me a pretty weird picture: distorted, with wrong colors. Using mplayer I was able to figure out the correct codec, "yuy2". Even mplayer sometimes selects the wrong codec ("yuv"), which makes the picture look just like the one captured with OpenCV / cheese.
Can I somehow tell OpenCV which codec to use?
Thanks!
In the latest version of opencv you can set the capture format from the camera with the same fourcc style code you would use for video. See http://docs.opencv.org/modules/highgui/doc/reading_and_writing_images_and_video.html#videocapture
It may still take a bit of trial and error; terms like YUV, YUYV and YUY2 are used a bit loosely by the camera maker, the driver maker, the operating system, the directshow layer and opencv!
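In C++ that request looks roughly like this (a sketch; whether 'Y','U','Y','2' is the spelling your particular driver wants is exactly the trial and error mentioned above):

#include <opencv2/opencv.hpp>

cv::VideoCapture cap(0);
cap.set(CV_CAP_PROP_FOURCC, CV_FOURCC('Y', 'U', 'Y', '2'));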
OpenCV automatically selects the first available capture backend (see here), so it may not be picking V4L2 automatically.
Also set both -D WITH_V4L=ON and -D WITH_LIBV4L=ON if building from source.
In order to set the pixel format to be used set the CAP_PROP_FOURCC property of the capture:
capture = cv2.VideoCapture(self.cam_id, cv2.CAP_V4L2)
capture.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'))
width = 1920
height = 1080
capture.set(cv2.CAP_PROP_FRAME_WIDTH, width)
capture.set(cv2.CAP_PROP_FRAME_HEIGHT, height)

cv::VideoCapture in android native code

I am using a cv::VideoCapture in native code and I am having issues with it:
In Android Java code, VideoCapture gives a YUV420 frame; in native code it's a BGR one. Since I need a gray image, having a YUV image would be better (I read there is no cost in converting YUV to GRAY).
Here are my questions:
I am using an Asus TF201, and acquiring a frame takes about 26 ms, which is a lot... Since the standard Android camera API gives YUV, does the native version of VideoCapture perform a conversion? (That would explain the time cost.)
Is it possible to change the format with CV_CAP_PROP_FORMAT? Whenever I try mycapture.get(CV_CAP_PROP_FORMAT) my app crashes...
EDIT: Andrey Kamaev answered this one. I have to use the grab/retrieve methods, adding an argument to the second one:
capture.retrieve(frame, CV_CAP_ANDROID_GRAY_FRAME);
Thanks
Look at the OpenCV samples for Android. Most of them are getting gray image from a VideoCapture object:
capture.retrieve(mGray, Highgui.CV_CAP_ANDROID_GREY_FRAME);
Internally this gray image is "converted" from the yuv420 frame in the most efficient way, even without extra copying.

iOS: Video as GL texture with alpha transparency

I'm trying to figure out the best approach to display a video on a GL texture while preserving the transparency of the alpha channel.
Information about video as GL texture is here: Is it possible using video as texture for GL in iOS? and iOS4: how do I use video file as an OpenGL texture?.
An approach that uses ffmpeg to help with alpha transparency, but is not App Store friendly, is here:
iPhone: Display a semi-transparent video on top of a UIView?
The video source would be filmed in front of a green screen for chroma keying. The video could be left untouched, keeping the green screen, or processed in a video editing suite and exported to QuickTime Animation or Apple ProRes 4444 with Alpha.
There are multiple approaches that I think could potentially work, but I haven't found a full solution.
Realtime threshold processing of the video looking for green to remove
Figure out how to use the above mentioned Quicktime codecs to preserve the alpha channel
Blending two videos together: 1) Main video with RGB 2) separate video with alpha mask
I would love to get your thoughts on the best approach for iOS and OpenGL ES 2.0
Thanks.
The easiest way to do chroma keying for simple blending of a movie and another scene would be to use the GPUImageChromaKeyBlendFilter from my GPUImage framework. You can supply the movie source as a GPUImageMovie, and then blend that with your background content. The chroma key filter allows you to specify a color, a proximity to that color, and a smoothness of blending to use in the replacement operation. All of this is GPU-accelerated via tuned shaders.
Images, movies, and the live cameras can be used as sources, but if you wish to render this with OpenGL ES content behind your movie, I'd recommend rendering your OpenGL ES content to a texture-backed FBO and passing that texture in via a GPUImageTextureInput.
You could possibly use this to output a texture containing your movie frames with the keyed color replaced by a constant color with a 0 alpha channel, as well. This texture could be extracted using a GPUImageTextureOutput for later use in your OpenGL ES scene.
Apple showed a sample app at WWDC 2011 called ChromaKey that demonstrates, in a very performant way, how frames of video can be passed to an OpenGL texture, manipulated, and optionally written out to a video file.
It's written to use a feed from the video camera, and uses a very crude chromakey algorithm.
As the other poster said, you'll probably want to skip the chromakey code and do the color knockout yourself beforehand.
It shouldn't be that hard to rewrite the ChromaKey sample app to use a video file as input instead of a camera feed, and it's quite easy to disable the chromakey code.
You'd need to modify the setup on the video input to expect RGBA data instead of RGB or Y/UV. The sample app is set up to use RGB, but I've seen other example apps from Apple that use Y/UV instead.
Have a look at the free "APNG" app on the App Store. It shows how an animated PNG (.apng) can be rendered directly to an iOS view. The key is that APNG supports an alpha channel in the file format, so you don't need to mess around with chroma tricks that will not really work for all your video content. This approach is also more efficient than multiple layers or chroma tricks, since no extra round of processing is needed each time a texture is displayed in a loop.
If you want to have a look at a small example xcode project that displays an alpha channel animation on the side of a spinning cube with OpenGL ES2, it can be found at Load OpenGL textures with alpha channel on iOS. The example code shows a simple call to glTexImage2D() that uploads a texture to the graphics card once for each display link callback.
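The per-callback upload it describes is the standard glTexImage2D call; a minimal sketch, assuming an RGBA8 pixel buffer pixels of size w x h and a texture already bound to GL_TEXTURE_2D:

#include <OpenGLES/ES2/gl.h>

// Re-upload the current animation frame; called once per display link callback.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);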
