OpenCV How to run two videos at same speed? - opencv

How do I make two videos run at the same time and at the same FPS?
VideoCapture capture("../video/Success/NT 1.1.wmv");
VideoCapture capture2("../video/Success/NT 1.wmv");
capture.set(CV_CAP_PROP_FPS , 30);
capture2.set(CV_CAP_PROP_FPS , 60);
waitKey(30);
For example, I have these two videos and I have already set the FPS for both, but capture.set(CV_CAP_PROP_FPS, 30) doesn't work in my program.

OpenCV is not a playback library, nor was it ever intended to support such functions. Setting the FPS does absolutely nothing there.
The only thing OpenCV does is offer you the possibility to extract frames from a video, one after another.
You'll have to devise your own, complete, timing sequence to control the speed at which images are displayed on screen.
Or, better, use VLC.
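A minimal sketch of such a timing sequence, in plain Python with no OpenCV: map wall-clock time to a source frame index per video, so two clips with different native frame rates stay aligned. The helper name `frame_index` is mine, not from the answer.

```python
def frame_index(elapsed_s, fps):
    """Which source frame should be on screen after elapsed_s seconds."""
    return int(elapsed_s * fps)

# Two clips with different native rates stay wall-clock aligned:
# after 2 seconds, a 30 fps clip is on frame 60, a 60 fps clip on frame 120.
for t in (0.5, 1.0, 2.0):
    print(t, frame_index(t, 30), frame_index(t, 60))
```

A playback loop would then seek to (or skip forward to) `frame_index(now - start, fps)` for each video before displaying, instead of advancing both by one frame per iteration.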

What is the problem with writing very simple code for the same FPS:
// Open videos
VideoCapture capture("../video/Success/NT 1.1.wmv");
VideoCapture capture2("../video/Success/NT 1.wmv");
Mat frame, frame2;
while (true)
{
    capture >> frame;
    capture2 >> frame2;
    if (frame.empty() || frame2.empty()) break;
    // imshow() or do anything with these frames...
    waitKey(30);
}
// Close videos
Am I missing something, or are you?
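One caveat with the fixed waitKey(30) above: it ignores how long decoding and display took, and the videos' actual frame rates. A sketch of computing the wait from the stream's reported FPS and the time already spent on the frame (plain Python arithmetic; reading FPS via CAP_PROP_FPS is an assumption, and some backends report 0 for it):

```python
def wait_ms(fps, elapsed_ms):
    """Milliseconds left in this frame's time slot (at least 1, since waitKey(0) blocks)."""
    if fps <= 0:          # some capture backends report 0 for CAP_PROP_FPS
        fps = 30.0        # assumed fallback rate
    budget = 1000.0 / fps
    return max(1, int(budget - elapsed_ms))

print(wait_ms(25.0, 10.0))  # 40 ms budget, 10 ms already spent -> wait 30
```

In the loop you would time the read/display work per frame and pass that as `elapsed_ms`, so slow frames eat into the wait instead of stretching the playback.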

Related

video capture card (webcam like) with OpenCV

I want to use a video capture card to capture my screen display and process the images with OpenCV/C++.
I have heard there are video capture cards that behave like a webcam (i.e. I could read the screen display via VideoCapture in OpenCV).
Can someone tell me which video capture card I should buy?
Thanks!
I do not know if there is a way to achieve that directly using OpenCV. However, a simple workaround could be like this:
Using this software you can create a new virtual webcam that streams your screen: https://sparkosoft.com/how-to-stream-desktop-as-webcam-video
Using OpenCV you can then start capturing the stream with this code:
cv::VideoCapture cap;
if (!cap.open(0))  // use the new webcam's id instead of 0
    return 0;
while (true) {
    cv::Mat frame;
    cap >> frame;
    if (frame.empty()) break;
    cv::imshow("Screen", frame);
    if (cv::waitKey(10) == 27) break;  // Esc quits
}
return 0;
I don't know if this still helps, but I found a way using OpenCV.
On Linux with Python, we achieve this using the following piece of code:
import cv2
cap = cv2.VideoCapture('/dev/video0')  # the capture/virtual device
ret, frame = cap.read()                # ret is False if the device can't be read

opencv VideoCapture very slow with high resolution videos

I am trying to read a high-resolution video using OpenCV VideoCapture and it seems to be extremely slow. I read somewhere that changing the buffer size might help, but I tried setting all kinds of buffer sizes and it's still slow.
Any help on what settings can improve reading high-resolution videos with Java OpenCV is really appreciated.
I am using VideoCapture to read a video from disk. I am using Mac OSX. Here is a snippet of my code:
while (camera.read(frame))
{
    BufferedImage bufferedImage = MatToBufferedImage(frame);
    BufferedImage scaledImage = (BufferedImage) getScaledImage(bufferedImage, frameWidth, frameHeight);
    ImageIcon icon = new ImageIcon(scaledImage);
    publish(icon);
}
I am using a SwingWorker and doing this in the background thread.
I am not explicitly setting any OpenCV properties; whether I should be setting any is something I am not sure of.
Here is what I observe: my video starts off well, then around the 50th frame or so I see some lag, again at around the 120th frame, and it almost completely stops at around frame 190.
Have you tried resizing the individual frames?
while (camera.read(frame))
{
    resize(frame, frame, Size(640, 360));
    imshow("frame", frame);
}
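Resizing helps because the conversion and display work downstream of read() scales with the pixel count. Quick back-of-the-envelope arithmetic (plain Python) shows how much the suggested 640x360 target saves relative to 1080p:

```python
def pixels(width, height):
    """Total pixels per frame."""
    return width * height

full_hd = pixels(1920, 1080)
small = pixels(640, 360)
print(full_hd // small)  # a 1080p frame carries 9x the pixels of 640x360
```

So every per-pixel step after the resize (BufferedImage conversion, scaling, painting) does roughly a ninth of the work.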

Understanding camera capture rate using opencv

This is probably an open-ended question. I have written an OpenCV application that captures the feed from two external cameras connected to the computer. The capture from both cameras runs in parallel on 2 different threads. This recorder module writes the frames to a video file which is later processed. The following code sits inside each thread function:
CvCapture *capture = cvCaptureFromCAM(indexOfCamera);
if (!capture) return;
CvSize sz = cvGetSize(cvQueryFrame(capture));
cvNamedWindow("src");
CvVideoWriter *writer = cvCreateVideoWriter((char*) p, CV_FOURCC('L','A','G','S'), 20, sz);
if (!writer) {
    cvReleaseCapture(&capture);
    return;
}
IplImage *frame;
int frameCounter = 0;
while (true) {
    QueryPerformanceCounter(&sideCamCounter);
    frame = cvQueryFrame(capture);
    if (!frame) break;
    // Store timestamp of frame somewhere
    cvShowImage("src", frame);
    cvWriteFrame(writer, frame);
    int c = cvWaitKey(1);
    if ((char)c == 27) break;
    ++frameCounter;
}
cvReleaseVideoWriter(&writer);
cvReleaseCapture(&capture);
cvDestroyAllWindows();
The two cameras I am using are: A - Microsoft HD-6000 LifeCam for notebooks, and B - Logitech Sphere AF webcam. Camera A captures at around 16-20 fps (reaching up to 30 fps during a few recordings) and camera B captures at around 10-12 fps.
I need a faster capture rate to be able to capture real-time motion. I understand I will be limited by the cameras' capture speed, but apart from that, what other factors will affect the capture rate - e.g. load on the system (memory and CPU), the APIs used? I am open to exploring options. Thanks.
Try setting different camera properties - http://docs.opencv.org/modules/highgui/doc/reading_and_writing_images_and_video.html#videocapture-set; probably the most interesting one for you will be... FPS :) Note that it doesn't always work ( How to set camera FPS in OpenCV? CV_CAP_PROP_FPS is a fake ), but give it a chance, maybe it will help you. Also, you may try setting a smaller image resolution.
If you don't have to, don't display the image.
You may try grabbing frames in one thread and processing them in another.
Connect the cameras directly to your computer - don't use a USB hub.
the API's used
I don't think it will help, but if you want, you may try using a different API - OpenCV on Mac is not opening USB web camera
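The "grab frames in one thread and process in another" suggestion above can be sketched with a bounded queue (standard-library Python only; the capture here is simulated with integers, not a real camera, so the shape of the pattern is the point, not the I/O):

```python
import queue
import threading

frames = queue.Queue(maxsize=8)    # small buffer between grabber and processor

def grabber(n_frames):
    # Stand-in for the capture loop; a real one would call capture.read().
    for i in range(n_frames):
        frames.put(i)
    frames.put(None)               # sentinel: end of stream

def processor(results):
    while True:
        frame = frames.get()
        if frame is None:
            break
        results.append(frame * 2)  # stand-in for the per-frame work

results = []
t1 = threading.Thread(target=grabber, args=(5,))
t2 = threading.Thread(target=processor, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 2, 4, 6, 8]
```

The bounded queue keeps the grabber from racing ahead of a slow processor; when the queue is full, `put()` blocks, which naturally throttles capture instead of growing memory.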

Reverse video playback in iOS

I want to play a video backward in AVPlayer. I have tried changing the rate property to -1.0, and although it did work, it was not smooth. Is there any way to smoothly play videos backward?
As stated in the comments, the problem is with keyframes and the fact that most codecs are not designed to play backwards. There are two options for re-encoding the video that don't require you to actually reverse the video in editing.
Make every frame a keyframe. I've seen this work well for codecs like H.264 that rely on keyframes. Basically, if every frame is a keyframe, then each frame can be decoded without relying on any previous frames, so it's effectively the same as playing forward.
Use a codec that doesn't distinguish keyframes from non-keyframes (basically, all frames are always keyframes). PhotoJPEG is one such option, although I'm not completely sure it plays back on iOS. I would think so. It works great on a Mac.
Note that either of these options will result in larger file sizes compared to typical "keyframe every x frames" encoded video.
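Both re-encoding routes can be done with ffmpeg; a hedged sketch (file names are illustrative, and these particular commands are my suggestion, not the answer's exact recipe):

```shell
# Option 1: keep H.264 but make every frame a keyframe (GOP size 1).
ffmpeg -i input.mp4 -c:v libx264 -g 1 -keyint_min 1 -c:a copy allkey.mp4

# Option 2: Motion JPEG, where every frame is independently coded.
ffmpeg -i input.mp4 -c:v mjpeg -q:v 3 -c:a copy photojpeg.mov
```

Either output should then scrub and reverse cleanly, at the cost of a noticeably larger file.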
You have to seek to the end of the current item and then set the rate to a negative value. Something like this:
- (void)reversePlay
{
    CMTime durTime = myPlayer.currentItem.asset.duration;
    if (CMTIME_IS_VALID(durTime))
    {
        [myPlayer seekToTime:durTime toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
        [myPlayer setRate:-1.0];
    }
    else
        NSLog(@"Invalid time");
}
source: https://stackoverflow.com/a/16104363/701043

opencv VideoCapture.set greyscale?

I would like to avoid converting each frame taken by the video camera with cvtColor(frame, image, CV_RGB2GRAY);
Is there any way to set VideoCapture to get frames directly in grayscale?
Example:
VideoCapture cap(0);
cap.set(CV_CAP_PROP_FRAME_WIDTH, 420);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 340);
cap.set(CV_CAP_GREYSCALE, 1); //< ???
If your camera supports YUV420 then you could just take the Y channel:
http://en.wikipedia.org/wiki/YUV
How to do that is well explained here:
Access to each separate channel in OpenCV
Warning: the Y channel might not be the first Mat you get from split(), so you should imshow() each of them separately and choose the one that looks like the "real" gray image. The others will be way out of contrast, so it'll be obvious. For me it was the second Mat.
Usually, any camera should be able to do YUV420, since sending frames directly in RGB is slower, so YUV is used by pretty much every camera. :)
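The Y-channel trick above relies on the YUV420 layout: the first width x height bytes of a planar YUV420 buffer are the luma plane, which is already a grayscale image. A standard-library sketch (the buffer here is synthetic, not from a real camera):

```python
def y_plane(yuv420, width, height):
    """Return the luma (grayscale) plane of a planar YUV420 frame buffer."""
    return yuv420[: width * height]

w, h = 4, 2
# A 4x2 frame: 8 luma bytes followed by 8 // 2 = 4 subsampled chroma bytes.
buf = bytes(range(8)) + bytes([128] * 4)
print(list(y_plane(buf, w, h)))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

This is why taking the Y channel costs essentially nothing: it is a slice of the buffer, not a per-pixel conversion.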
This is impossible. Here's the list of the available codes:
CV_CAP_PROP_POS_MSEC - position in milliseconds from the file beginning
CV_CAP_PROP_POS_FRAMES - position in frames (only for video files)
CV_CAP_PROP_POS_AVI_RATIO - position in relative units (0 - start of the file, 1 - end of the file)
CV_CAP_PROP_FRAME_WIDTH - width of frames in the video stream (only for cameras)
CV_CAP_PROP_FRAME_HEIGHT - height of frames in the video stream (only for cameras)
CV_CAP_PROP_FPS - frame rate (only for cameras)
CV_CAP_PROP_FOURCC - 4-character code of codec (only for cameras).
Or (if it's possible, using some utility) you can set up your camera to output only a grayscale image.
To convert a colored image to grayscale, you have to call cvtColor with the code CV_BGR2GRAY. This shouldn't take much time.
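For a sense of what that cvtColor call computes: the grayscale value is a fixed weighted sum of the channels (the BT.601 luma weights, which is the formula OpenCV documents for CV_BGR2GRAY). A plain-Python sketch per pixel:

```python
def bgr_to_gray(b, g, r):
    """BT.601 luma: the per-pixel formula behind OpenCV's CV_BGR2GRAY (rounded)."""
    return int(round(0.114 * b + 0.587 * g + 0.299 * r))

print(bgr_to_gray(255, 255, 255))  # white -> 255
print(bgr_to_gray(0, 0, 255))      # pure red -> 76
```

Three multiplies and two adds per pixel, which is why the conversion is cheap even at camera frame rates.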
This is not possible if you use v4l (the default cv capture method on desktop Linux). The CV_CAP_PROP_FORMAT exists but is simply ignored. You have to convert the images to grayscale manually. If your device supports it, you may want to reimplement cap_v4l.cpp in order to interface v4l to set the format to grayscale.
On Android this is possible with the following native code (for the 0th device):
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/highgui/highgui_c.h>

cv::VideoCapture camera(0);
cv::Mat dest(480, 640, CV_8UC1);
if (camera.grab())
    camera.retrieve(dest, CV_CAP_ANDROID_GREY_FRAME);
Here, passing CV_CAP_ANDROID_GREY_FRAME to the channel parameter of cv::VideoCapture::retrieve(cv::Mat, int) causes the YUV NV21 (a.k.a. yuv420sp) image to be color-converted to grayscale. This is just a mapping of the Y channel to the grayscale image, which does not involve any actual conversion or memcpy, and is therefore very fast. You can check this behavior in https://github.com/Itseez/opencv/blob/master/modules/videoio/src/cap_android.cpp#L407 and the "color conversion" in https://github.com/Itseez/opencv/blob/master/modules/videoio/src/cap_android.cpp#L511. I agree that this behavior is not documented at all and is very awkward, but it saved a lot of CPU time for me.
If you use <raspicam/raspicam_cv.h> you can do it.
You need to open a device like this:
RaspiCam_Cv m_rapiCamera;
Set any parameters that you need using the code below:
m_rapiCamera.set(CV_CAP_PROP_FORMAT, CV_8UC1);
And then open the stream like below:
m_rapiCamera.open();
And you will get only one channel.
Good luck!
