I am using a cv::VideoCapture in native code and I am having issues with it:
In Android Java code, VideoCapture gives a YUV420 frame; in native code it gives a BGR one. Since I need a gray image, having a YUV frame would be better (I read that converting YUV to gray costs nothing).
Here are my questions:
I'm using an Asus TF201, and acquiring a frame takes about 26 ms, which is a lot. Since the standard Android camera API gives YUV, does the native version of VideoCapture perform a conversion? (That would explain the time cost.)
Is it possible to change the format with CV_CAP_PROP_FORMAT? Whenever I try mycapture.get(CV_CAP_PROP_FORMAT) my app crashes...
EDIT: Andrey Kamaev answered this one. I have to use the grab/retrieve methods, adding an argument to the second one:
capture.retrieve(frame, CV_CAP_ANDROID_GREY_FRAME);
Thanks
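For reference, this is roughly how a per-frame acquisition time like the 26 ms above can be measured; the camera index and the plain capture-the-default-format setup are assumptions:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture capture(0);              // camera index 0 is an assumption
    cv::Mat frame;
    double t0 = (double)cv::getTickCount();
    capture >> frame;                         // grab + retrieve one frame
    double ms = ((double)cv::getTickCount() - t0) * 1000.0 / cv::getTickFrequency();
    std::cout << "frame acquisition took " << ms << " ms" << std::endl;
    return 0;
}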
Look at the OpenCV samples for Android. Most of them get a gray image from a VideoCapture object:
capture.retrieve(mGray, Highgui.CV_CAP_ANDROID_GREY_FRAME);
Internally this gray image is "converted" from the YUV420 frame in the most efficient way, without even an extra copy.
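On the native side, a minimal equivalent sketch looks like this (camera index 0 is an assumption):

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/highgui/highgui_c.h>

int main()
{
    cv::VideoCapture capture(0);      // 0th Android camera
    cv::Mat gray;
    if (capture.grab())
        capture.retrieve(gray, CV_CAP_ANDROID_GREY_FRAME);  // Y plane only, no color conversion
    return 0;
}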
I had no luck getting H.264 videos with RGBA pixels to work on iOS (tested on iOS 10.2). Is it possible? The Apple docs don't say much about it: https://developer.apple.com/library/content/documentation/Miscellaneous/Conceptual/iPhoneOSTechOverview/MediaLayer/MediaLayer.html
I don't have much interesting code to share, since the issue is simply that AVPlayer doesn't display videos with RGBA pixels.
It's not possible to decode H.264 into exactly 'RGBA'; however:
an AVPlayerItemVideoOutput can be set to kCVPixelFormatType_32BGRA using the kCVPixelBufferPixelFormatTypeKey
and
VTDecompressionSessionCreate also allows kCVPixelFormatType_32BGRA.
Then when rendering I swizzle the pixels like this: gl_FragColor.bgra = texture2D(SamplerRGB, v_textureCoordinate);
So the answer is yes, but you have to do the rendering and the swizzle yourself.
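For illustration, here is a minimal sketch (not from the original answer) of how that BGRA request can be expressed when creating a VTDecompressionSession; formatDesc and callbackRecord are assumed to already exist in your decoder setup:

#include <VideoToolbox/VideoToolbox.h>
#include <CoreVideo/CoreVideo.h>

VTDecompressionSessionRef createBGRASession(CMVideoFormatDescriptionRef formatDesc,
                                            const VTDecompressionOutputCallbackRecord *callbackRecord)
{
    // Ask for 32-bit BGRA output buffers instead of the default YUV.
    SInt32 pixelFormat = kCVPixelFormatType_32BGRA;
    CFNumberRef formatNumber = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &pixelFormat);
    const void *keys[]   = { kCVPixelBufferPixelFormatTypeKey };
    const void *values[] = { formatNumber };
    CFDictionaryRef destinationAttributes =
        CFDictionaryCreate(kCFAllocatorDefault, keys, values, 1,
                           &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    VTDecompressionSessionRef session = NULL;
    VTDecompressionSessionCreate(kCFAllocatorDefault, formatDesc,
                                 NULL,                    // decoder specification
                                 destinationAttributes,   // request BGRA pixel buffers
                                 callbackRecord, &session);
    CFRelease(destinationAttributes);
    CFRelease(formatNumber);
    return session;
}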
(Edit: added links to code)
Here is a really great project by Apple that will get you going: Real-time Video Processing Using AVPlayerItemVideoOutput.
Just change it from YUV to RGB and bypass the 'colorConversionMatrix' part of the shader.
Good luck!
The first question I want to ask is:
When I link OpenCV to my iOS project like this:
There is distortion when loading the picture into the app:
and the original picture is:
Do you see the difference here? Why does it happen? And is there any way to avoid it?
And another question is:
Are there any image processing books/tutorials/websites for OpenCV that use Mat-style code? I read an image processing book that uses IplImage-style code, so my current way of doing image processing on iOS is to first load the picture as an IplImage, convert it to a Mat, and finally convert it to a UIImage to show it in the view. But Mat is newer than IplImage; can you recommend anything?
Thank you very much!
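For what it's worth, here is a minimal Mat-only sketch of that pipeline (the file name is just a placeholder); the IplImage step can be skipped entirely with the C++ API:

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int main()
{
    cv::Mat bgr = cv::imread("picture.png");      // imread loads directly into a cv::Mat, in BGR order
    cv::Mat gray;
    cv::cvtColor(bgr, gray, CV_BGR2GRAY);         // all processing stays in Mat
    // ...convert the final Mat to UIImage only at the very end, for display
    return 0;
}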
I'm doing image processing with OpenGL ES 2.0.
The effects are written as shaders.
For acceleration on iOS, the results are drawn to textures (with a framebuffer bound) that are created by CV (Core Video) functions.
Everything is fine if I use the version without CV acceleration (OpenGL ES 2.0 functions only).
But there is a problem with the CV-accelerated version:
when the input picture is very small (such as 200×200 pixels), many unexpected lines appear after processing with several filters.
I have spent a long time on this problem, but it's still there.
glFinish() is called before each function that needs it, so that is not the issue.
Thanks for your help!
Here is the screenshot
I'm trying to grab frames from a webcam using OpenCV. I also tried 'cheese'. Both give me a pretty weird picture: distorted, with wrong colors. Using mplayer I was able to figure out the correct codec, "yuy2". Even mplayer sometimes selects the wrong codec ("yuv"), which makes the picture look just like what OpenCV / cheese capture.
Can I somehow tell OpenCV which codec to use?
Thanks!
In the latest version of OpenCV you can set the capture format from the camera with the same fourcc-style code you would use for video. See http://docs.opencv.org/modules/highgui/doc/reading_and_writing_images_and_video.html#videocapture
It may still take a bit of trial and error; terms like YUV, YUYV, and YUY2 are used a bit loosely by the camera maker, the driver maker, the operating system, the DirectShow layer, and OpenCV!
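For example, a minimal sketch of requesting YUY2 from the camera this way (the camera index and the exact fourcc your driver accepts are assumptions):

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/highgui/highgui_c.h>

int main()
{
    cv::VideoCapture cap(0);                                    // camera index assumed
    cap.set(CV_CAP_PROP_FOURCC, CV_FOURCC('Y', 'U', 'Y', '2')); // ask the driver for the YUY2 capture format
    cv::Mat frame;
    cap >> frame;   // frames are still returned as BGR Mats; the fourcc only changes what the driver sends
    return 0;
}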
OpenCV automatically selects the first available capture backend (see here). It may be that it is not using V4L2 automatically.
Also set both -D WITH_V4L=ON and -D WITH_LIBV4L=ON if building from source.
In order to set the pixel format to be used, set the CAP_PROP_FOURCC property of the capture:
capture = cv2.VideoCapture(self.cam_id, cv2.CAP_V4L2)
capture.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'))
width = 1920
height = 1080
capture.set(cv2.CAP_PROP_FRAME_WIDTH, width)
capture.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
I would like to avoid converting each frame taken by the video camera with cvtColor(frame, image, CV_RGB2GRAY);
Is there any way to set VideoCapture to get frames directly in greyscale?
Example:
VideoCapture cap(0);
cap.set(CV_CAP_PROP_FRAME_WIDTH,420);
cap.set(CV_CAP_PROP_FRAME_HEIGHT,340);
cap.set(CV_CAP_GREYSCALE,1); //< ???
If your camera supports YUV420 then you could just take the Y channel:
http://en.wikipedia.org/wiki/YUV
How to do that is well explained here:
Access to each separate channel in OpenCV
Warning: the Y channel might not be the first Mat you get with split(), so you should imshow() all of them separately and choose the one that looks like the "real" gray image. The others will be way out of contrast, so it will be obvious. For me it was the second Mat.
Usually any camera should be able to do YUV420, since sending frames directly in RGB is slower, so YUV is used by pretty much every camera. :)
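A minimal sketch of that split-and-inspect approach, assuming the capture really hands you a multi-channel YUV-style frame (camera index assumed):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>
#include <cstdio>

int main()
{
    cv::VideoCapture cap(0);                  // camera index assumed
    cv::Mat frame;
    cap >> frame;

    std::vector<cv::Mat> planes;
    cv::split(frame, planes);                 // one single-channel Mat per channel

    // Show each plane; the one that looks like a normal gray picture is Y.
    for (size_t i = 0; i < planes.size(); ++i)
    {
        char name[32];
        sprintf(name, "channel %d", (int)i);
        cv::imshow(name, planes[i]);
    }
    cv::waitKey(0);
    return 0;
}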
This is impossible. Here is the list of all the property codes:
CV_CAP_PROP_POS_MSEC - position in milliseconds from the file beginning
CV_CAP_PROP_POS_FRAMES - position in frames (only for video files)
CV_CAP_PROP_POS_AVI_RATIO - position in relative units (0 - start of the file, 1 - end of the file)
CV_CAP_PROP_FRAME_WIDTH - width of frames in the video stream (only for cameras)
CV_CAP_PROP_FRAME_HEIGHT - height of frames in the video stream (only for cameras)
CV_CAP_PROP_FPS - frame rate (only for cameras)
CV_CAP_PROP_FOURCC - 4-character code of codec (only for cameras).
Or (if it's possible, using some utility) you can set up your camera to output only a grayscale image.
To convert a color image to grayscale you have to call cvtColor with the code CV_BGR2GRAY. This shouldn't take much time.
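A minimal sketch of that per-frame conversion (camera index assumed):

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int main()
{
    cv::VideoCapture cap(0);                      // camera index assumed
    cv::Mat frame, gray;
    for (;;)
    {
        cap >> frame;                             // frames arrive as BGR
        if (frame.empty())
            break;
        cv::cvtColor(frame, gray, CV_BGR2GRAY);   // cheap per-frame grayscale conversion
        cv::imshow("gray", gray);
        if (cv::waitKey(1) == 27)                 // Esc quits
            break;
    }
    return 0;
}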
This is not possible if you use v4l (the default capture backend on desktop Linux). CV_CAP_PROP_FORMAT exists but is simply ignored. You have to convert the images to grayscale manually. If your device supports it, you may want to reimplement cap_v4l.cpp in order to make v4l set the capture format to grayscale.
On Android this is possible with the following native code (for the 0th device):
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/highgui/highgui_c.h>

cv::VideoCapture camera(0);            // open the 0th Android camera
cv::Mat dest(480, 640, CV_8UC1);       // single-channel destination image
if (camera.grab())
    camera.retrieve(dest, CV_CAP_ANDROID_GREY_FRAME);
Here, passing CV_CAP_ANDROID_GREY_FRAME to the channel parameter of cv::VideoCapture::retrieve(cv::Mat&, int) causes the YUV NV21 (a.k.a. yuv420sp) image to be color-converted to grayscale. This is just a mapping of the Y channel to the grayscale image, which does not involve any actual conversion or memcpy and is therefore very fast. You can check this behavior in https://github.com/Itseez/opencv/blob/master/modules/videoio/src/cap_android.cpp#L407 and the "color conversion" in https://github.com/Itseez/opencv/blob/master/modules/videoio/src/cap_android.cpp#L511. I agree that this behavior is not documented at all and is quite awkward, but it saved a lot of CPU time for me.
If you use <raspicam/raspicam_cv.h> you can do it.
You need to open a device like this:
RaspiCam_Cv m_rapiCamera;
Set any parameters you need using the code below:
m_rapiCamera.set(CV_CAP_PROP_FORMAT, CV_8UC1);
And then open the stream like below:
m_rapiCamera.open();
And you will get only one channel.
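Putting the pieces together, a minimal usage sketch; the grab()/retrieve()/release() calls follow the raspicam library's OpenCV interface:

#include <raspicam/raspicam_cv.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    raspicam::RaspiCam_Cv m_rapiCamera;
    m_rapiCamera.set(CV_CAP_PROP_FORMAT, CV_8UC1);   // ask for single-channel (gray) frames
    if (!m_rapiCamera.open())
        return -1;

    cv::Mat gray;
    m_rapiCamera.grab();
    m_rapiCamera.retrieve(gray);                     // gray is CV_8UC1
    m_rapiCamera.release();
    return 0;
}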
Good luck!