Can I create a virtual webcam and stream data to it? - opencv

I am looking to stream a video from ffmpeg to OpenCV (a video manipulation library) and I am stumped. My idea is to create a virtual webcam device, stream a video from ffmpeg to this device, and have the device in turn stream like a regular webcam. My motivation is OpenCV: it can read a video stream from a webcam and go along its merry way.
But is this possible? I know there is software to create a virtual webcam, but can it accept a video stream (like one from ffmpeg) and stream that video like a normal webcam? (I am working in a Cygwin environment, if that is important.)

You don't need to fool OpenCV into thinking the file is a webcam. You just need to add a delay between each frame. This code will do that:
#include <cstdio>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"

using namespace cv;

int main(int argc, const char * argv[]) {
    VideoCapture cap;
    cap.open("/Users/steve/Development/opencv2/opencv_extra/testdata/python/videos/bmp24.avi");
    if (!cap.isOpened()) {
        printf("Unable to open video file\n");
        return -1;
    }
    Mat frame;
    namedWindow("video", 1);
    for (;;) {
        cap >> frame;
        if (!frame.data)
            break;
        imshow("video", frame);
        if (waitKey(30) >= 0)   // show each frame for 30 ms
            break;
    }
    return 0;
}
Edit: trying to read from a file being created by ffmpeg:
for (;;) {
    cap >> frame;
    if (frame.data)
        imshow("video", frame);   // show the frame if it loaded successfully
    if (waitKey(30) == 27)        // wait 30 ms; quit if the user presses escape
        break;
}
I'm not sure how it will handle getting a partial frame at the end of the file while ffmpeg is still creating it.

Sounds like what you want is VideoCapture::open, which can open both video devices and files.
If you're using C, the equivalents are cvCaptureFromFile and cvCaptureFromCAM.
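For reference, a minimal sketch of both uses of the C++ interface; the file name here is a placeholder and the device index 0 simply means the first camera:
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
using namespace cv;

int main() {
    VideoCapture cap;
    cap.open("example.avi");   // open a video file (placeholder path)
    // cap.open(0);            // ...or open the first webcam instead
    if (!cap.isOpened())
        return -1;
    Mat frame;
    cap >> frame;              // frames are read the same way in both cases
    return frame.empty() ? -1 : 0;
}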

Related

OpenCV: Background subtracting issue when one `VideoCapture` instance is used

Please have a look at the code below.
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    cv::VideoCapture cam1, cam2;
    cam1.open(0);
    //cam2.open(0);
    Mat im, im2;
    cam1 >> im;
    cam1 >> im2;
    while (true)
    {
        cam1 >> im;
        for (int i = 0; i < 15000; i++)
        {
        }
        cam1 >> im2;
        Mat im3 = im2 - im;
        imshow("video", im3);
        if (waitKey(30) >= 0)
        {
            break;
        }
    }
    waitKey(0);
}
I am trying to identify the difference (in other words, motion) by subtracting the images. However, what I get is a 100% blank screen. If I use two VideoCapture instances to capture frames and load them into im and im2, then it works. But I must not use two VideoCapture instances; I must use only one. What have I done wrong here?
If you compare im.data and im2.data you will find that they point to the same buffer.
Change your code to this:
Mat im, im2;
cam1 >> im;
im = im.clone();
cam1 >> im2;
When you read a frame from VideoCapture, it does not copy the data.
If you want to copy the data before it gets overwritten by the next frame you have to do it yourself.
If you have two different VideoCapture instances, you already have separate buffers so the problem does not occur.
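Put together, a minimal sketch of the question's loop with that clone applied (the busy-wait delay is left out for brevity):
#include <opencv2/highgui/highgui.hpp>
using namespace cv;

int main()
{
    VideoCapture cam1(0);
    if (!cam1.isOpened())
        return -1;
    Mat im, im2;
    while (true)
    {
        cam1 >> im;
        im = im.clone();          // take a private copy before the next read reuses the buffer
        cam1 >> im2;
        Mat im3 = im2 - im;
        imshow("video", im3);
        if (waitKey(30) >= 0)
            break;
    }
    return 0;
}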

Capturing through a single multi-head (stereo) camera using OpenCV

I have a single multi-head (stereo) USB camera that can be detected and can stream stereo video using the "Video Capture Sources" filter in GraphEdit.
I'm trying to access both channels using OpenCV 2.4.8 (on a PC with VS2010, Win7 x64) for further stereo image processing. However, I can only detect/stream a single head (channel) of the camera, not both stereo heads. My code is set up according to the documentation notes for VideoCapture::grab/VideoCapture::retrieve and looks like the following:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat Lframe,Rframe;
namedWindow("Lframe",CV_WINDOW_AUTOSIZE);namedWindow("Rframe",CV_WINDOW_AUTOSIZE);
while(char(waitKey(1)) != 'q') {
if(cap.grab())
cap.retrieve(Lframe,0); cap.retrieve(Rframe,1);// get a new frame
imshow("Lframe",Lframe);imshow("Rframe",Rframe);
if(waitKey(30) >= 0) break;
}
return 0;
}
The problem is that the rendered channels (Lframe, Rframe) are identical no matter which channel index is passed. Hence only a certain head is accessed and I can't get stereo streaming.
Is there a way to use the "Video Capture Sources" filter directly with OpenCV?
Waiting for your assistance, and thank you in advance.

Capture frame from ip cam using opencv

I'm trying to connect a CP Plus IP camera to my app using OpenCV. I have tried many ways to capture frames; please help me capture frames using the "rtsp" protocol. The URL of the IP cam is "rtsp://admin:admin#192.168.1.108:554/VideoInput/1/mpeg4/1". I tried this URL in VLC player and it works. If there is a way to capture frames with libvlc and pass them into OpenCV, please mention it.
Try "rtsp://admin:admin#192.168.1.108:554/VideoInput/1/mpeg4/1?.mjpg" opencv looks end of url for video stream type.
You can directly access the URL that gives you the camera's jpg snapshot.
See here for details on how to find it using onvif:
http://me-ol-blog.blogspot.co.il/2017/07/getting-still-image-urluri-of-ipcam-or.html
The first step is to discover your RTSP URL and test it in VLC. You said that you already have that.
If someone needs to discover the RTSP URL, I recommend the software onvif-device-tool (link) or gsoap-onvif (link); both work on Linux. Look at your terminal: the RTSP URL will be there. After discovering the RTSP URL I recommend testing it in VLC player (link); you can test using the menu option "Open Network Stream" or from the command line:
vlc rtsp://your_url
If you already have the RTSP URL and have tested it successfully in VLC, then create a cv::VideoCapture and grab the frames. You can do that like this:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

int main() {
    cv::VideoCapture stream = cv::VideoCapture("rtsp://admin:admin#192.168.1.108:554/VideoInput/1/mpeg4/1");
    if (!stream.isOpened()) return -1; // if not successful, exit the program

    double width = stream.get(CV_CAP_PROP_FRAME_WIDTH);   // get the width of the video frames
    double height = stream.get(CV_CAP_PROP_FRAME_HEIGHT); // get the height of the video frames
    std::cout << "Frame size : " << width << " x " << height << std::endl;

    cv::namedWindow("Onvif", CV_WINDOW_AUTOSIZE); // create a window called "Onvif"
    cv::Mat frame;
    while (1) {
        // read a new frame from the stream
        if (!stream.read(frame)) { // if not successful, wait and try again
            std::cout << "Cannot read a frame from video stream" << std::endl;
            cv::waitKey(30);
            continue;
        }
        cv::imshow("Onvif", frame); // show the frame in the "Onvif" window
        if (cv::waitKey(15) == 27)  // wait for 'esc'
            break;
    }
}
To compile:
g++ main.cpp `pkg-config --cflags --libs opencv`

cvCreateCameraCapture Not Working

I am using OpenCV 2.2 in Ubuntu 11.04, with the Code::Blocks 10.05 IDE. I am testing the webcam with simple OpenCV code that captures video from the webcam, but cvCreateCameraCapture(index) always returns null (showing 0 errors, 0 warnings).
I have checked indices from -5 to +5. The built-in webcam of my Acer Aspire 4736z works fine with Cheese. lsusb shows:
Bus 002 Device 002: ID 04f2:b044 Chicony Electronics Co., Ltd Acer CrystalEye Webcam
which means the driver is installed.
grep -i v4l /var/log/udev returns
ID_V4L_VERSION=2
ID_V4L_PRODUCT=Video WebCam
ID_V4L_CAPABILITIES=:capture:
DEVLINKS=/dev/v4l/by-id/usb-Chicony_Electronics_Co.__Ltd._Video_WebCam_SN0001-video-index0 /dev/v4l/by-path/pci-0000:00:1d.7-usb-0:4:1.0-video-index0
I also followed this: cvCreateCameraCapture returns null
but got nothing. The code is:
#include <cstdio>
#include "opencv2/highgui/highgui.hpp"

int main(int argc, char** argv)
{
    IplImage *img;
    char ch;
    int c;
    CvCapture *capture = cvCreateCameraCapture(0);
    cvNamedWindow("Example1", CV_WINDOW_AUTOSIZE);
    if (!capture) {
        printf("Camera Not Initialized");
        return 0;
    }
    while (capture)
    {
        img = cvQueryFrame(capture);
        cvShowImage("Example1", img);
        ch = cvWaitKey(33);
        if (ch == 32)
            break;
    }
    cvReleaseImage(&img);
    cvDestroyWindow("Example1");
}
Output Window:
Camera Not Initialized
Process returned 0(0X0) execution time:0.155s
press enter to continue.
Please help me: what is the problem? Why is the camera not working?
Try recompiling OpenCV, making sure you meet all the dependencies (see here).
Plus, use the newer
CvCapture* cam = cvCaptureFromCAM(CV_CAP_ANY);
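A minimal sketch of that suggestion dropped into a capture loop like the one in the question (CV_CAP_ANY simply asks OpenCV to pick the first available camera):
#include <cstdio>
#include "opencv2/highgui/highgui.hpp"

int main() {
    CvCapture* cam = cvCaptureFromCAM(CV_CAP_ANY);
    if (!cam) {
        printf("Camera Not Initialized\n");
        return -1;
    }
    cvNamedWindow("Example1", CV_WINDOW_AUTOSIZE);
    while (1) {
        IplImage* img = cvQueryFrame(cam);   // frame is owned by the capture, do not release it
        if (!img)
            break;
        cvShowImage("Example1", img);
        if (cvWaitKey(33) == 27)             // quit on escape
            break;
    }
    cvReleaseCapture(&cam);
    cvDestroyWindow("Example1");
    return 0;
}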

Why does OpenCV give me a black screen?

I'm currently trying to use OpenCV (via the Processing library).
However, when I try to run any examples (either the Processing ones or the C ones included with OpenCV), I see nothing but black instead of input from the camera. The camera's LED indicator does turn on. Has anyone had the same problem? Is my camera somehow incompatible with OpenCV? It's an Acer Crystal Eye...
Thanks,
OpenCV 2.1 still has a few problems with 64-bit OSes. You can read this topic on the subject.
If you're looking for working/compilable source code that shows how to use the webcam, check this out.
Let us know if it helped you.
I recently had the same problem. The OpenCV library on its own just gave me a blank screen; I had to include the videoInput library:
http://muonics.net/school/spring05/videoInput/
An example I followed was:
#include "stdafx.h"
#include "videoInput.h"
#include "cv.h"
#include "highgui.h"
int main()
{
videoInput VI;
int numDevices = VI.listDevices();
int device1= 0;
VI.setupDevice(device1);
int width = VI.getWidth(device1);
int height = VI.getHeight(device1);
IplImage* image= cvCreateImage(cvSize(width, height), 8, 3);
unsigned char* yourBuffer = new unsigned char[VI.getSize(device1)];
cvNamedWindow("test");
while(1)
{
VI.getPixels(device1, yourBuffer, false, false);
image->imageData = (char*)yourBuffer;
cvConvertImage(image, image, CV_CVTIMG_FLIP);
cvShowImage("test", image);
if(cvWaitKey(15)==27) break;
}
VI.stopDevice(device1);
cvDestroyWindow("test");
cvReleaseImage(&image);
return 0;
}
From this source: http://www.aishack.in/2010/03/capturing-images-with-directx/
I had a somewhat similar problem on Ubuntu. I downloaded code from here:
http://www.rainsoft.de/projects/pwc.html
It does an extra step before starting to get frames (I think setting the FPS). Worth a try; the code is easy to read and works with non-Philips cams.
OpenCV only supports a limited number of types of cameras. Most likely your camera is not supported. You can look at either the source code or their web site to see which are supported.
