I'm trying to connect a CP Plus IP camera to my app using OpenCV. I have tried many ways to capture a frame. Please help me capture frames over the RTSP protocol. The URL of the IP cam is "rtsp://admin:admin#192.168.1.108:554/VideoInput/1/mpeg4/1". I tried this URL in VLC player and it works. If there is a way to capture frames with libvlc and pass them into OpenCV, please mention that too.
Try "rtsp://admin:admin#192.168.1.108:554/VideoInput/1/mpeg4/1?.mjpg"; OpenCV looks at the end of the URL to guess the video stream type.
You can directly access the URL that gives you the camera's JPEG snapshot.
See here for details on how to find it using ONVIF:
http://me-ol-blog.blogspot.co.il/2017/07/getting-still-image-urluri-of-ipcam-or.html
The first step is to discover your RTSP URL and test it in VLC. You said that you have already done that.
If someone else needs to discover their RTSP URL, I recommend the software onvif-device-tool (link) or gsoap-onvif (link); both work on Linux. Look at your terminal: the RTSP URL will be printed there. After discovering the RTSP URL, I recommend testing it in VLC player (link), either through the menu option "Open Network Stream" or from the command line:
vlc rtsp://your_url
Once you have the RTSP URL and have tested it successfully in VLC, create a cv::VideoCapture and grab the frames. You can do that like this:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

int main() {
    cv::VideoCapture stream("rtsp://admin:admin#192.168.1.108:554/VideoInput/1/mpeg4/1");
    if (!stream.isOpened()) return -1; // if not successful, exit program

    double width = stream.get(CV_CAP_PROP_FRAME_WIDTH);   // width of the video frames
    double height = stream.get(CV_CAP_PROP_FRAME_HEIGHT); // height of the video frames
    std::cout << "Frame size : " << width << " x " << height << std::endl;

    cv::namedWindow("Onvif", CV_WINDOW_AUTOSIZE); // create a window called "Onvif"
    cv::Mat frame;
    while (true) {
        // read a new frame from the stream
        if (!stream.read(frame)) { // if not successful, wait and try again
            std::cout << "Cannot read a frame from video stream" << std::endl;
            cv::waitKey(30);
            continue;
        }
        cv::imshow("Onvif", frame); // show the frame in the "Onvif" window
        if (cv::waitKey(15) == 27)  // quit on Esc
            break;
    }
}
To compile:
g++ main.cpp `pkg-config --cflags --libs opencv`
Related
I am trying to VideoCapture video files using the following code:
VideoCapture cap("input.avi");
if (!cap.isOpened()) {
    cout << "Cannot open the video" << endl;
    return -1;
}
However, the output is always "Cannot open the video", meaning that I can't read this video. (I am sure I put the video file in the correct directory.)
But I am able to VideoCapture from a real-time camera successfully using
VideoCapture cap(0);
Why does this happen? How can I fix it? I really need to read video files rather than the real-time camera.
Thanks!
I have a single multi-head (stereo) USB camera that can be detected and can stream stereo video using the "Video Capture Sources" filter in GraphEdit.
I'm trying to access both channels using OpenCV 2.4.8 (on a PC with VS2010, Win7 x64) for further stereo image processing. However, I can only detect/stream a single head (channel) of the camera, not both stereo heads. My code is set up according to the documentation notes for VideoCapture::grab/VideoCapture::retrieve and looks like the following:
#include "opencv2/opencv.hpp"
using namespace cv;

int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;
    Mat Lframe, Rframe;
    namedWindow("Lframe", CV_WINDOW_AUTOSIZE);
    namedWindow("Rframe", CV_WINDOW_AUTOSIZE);
    while (char(waitKey(1)) != 'q') {
        if (cap.grab()) {            // grab once, then retrieve each channel
            cap.retrieve(Lframe, 0); // left head
            cap.retrieve(Rframe, 1); // right head
            imshow("Lframe", Lframe);
            imshow("Rframe", Rframe);
        }
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
The problem is that the rendered channels (Lframe, Rframe) are identical no matter which channel index is passed. Hence, only one head is accessed and I can't get stereo streaming.
Is there a way to use "Video Capture Sources" filter directly with OpenCV?
Waiting for your assistance & Thank you in advance,
I am using the following piece of code to capture video from a camera connected to a video capture card.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(2);
    if (!cap.isOpened()) {
        std::cerr << "ERROR: Could not open camera." << std::endl;
        return -1;
    }
    cv::Mat frame;
    while (1) {
        cap >> frame;
        cv::imshow("frame", frame);
        cv::waitKey(10);
    }
}
When I use my USB webcams there is no problem and the code works perfectly. However, with the video capture card, I don't see any video stream, and there is no error either! When I put a breakpoint inside the loop, I can see the video after a couple of iterations. At first I thought this problem was related to the delay and increased the wait time, i.e., waitKey(30), but that didn't help. The only way it works is with a breakpoint! I don't understand what is special about the breakpoint!
Please help! I have to use this video capture card and want to make an executable from this code, which doesn't work without breakpoints! Any comment is appreciated.
NOTE: I am using Windows.
I am using OpenCV and FFMPEG to capture frames from a network camera over RTSP. The point is that OpenCV successfully loads the FFMPEG .dll, but open() returns false in the following code from cap_ffmpeg.cpp:
virtual bool open(const char* filename)
{
    close();
    icvInitFFMPEG();
    if (!icvCreateFileCapture_FFMPEG_p)
        return false;
    ffmpegCapture = icvCreateFileCapture_FFMPEG_p(filename);
    return ffmpegCapture != 0;
}
Probably the stream is not ready, or you have problems with the network address/access.
Check whether you have followed the correct procedure. Try pinging the network resource first to see whether it is reachable. The camera must also allow unauthenticated access, set via its web interface. Sometimes MJPEG works while MPEG4 has problems.
I am looking to stream a video from ffmpeg into OpenCV (a video manipulation library), and I am stumped. My idea is to create a virtual webcam device, stream a video from ffmpeg to this device, and have the device in turn stream like a regular webcam. My motivation is OpenCV: it can read a video stream from a webcam and go along its merry way.
But is this possible? I know there is software to create a virtual webcam, but can it accept a video stream (such as one from ffmpeg) and stream it like a normal webcam? (I am working in a Cygwin environment, if that is important.)
You don't need to fool OpenCV into thinking the file is a webcam. You just need to add a delay between each frame. This code will do that:
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
using namespace cv;

int main(int argc, const char* argv[]) {
    VideoCapture cap;
    cap.open("/Users/steve/Development/opencv2/opencv_extra/testdata/python/videos/bmp24.avi");
    if (!cap.isOpened()) {
        printf("Unable to open video file\n");
        return -1;
    }
    Mat frame;
    namedWindow("video", 1);
    for (;;) {
        cap >> frame;
        if (!frame.data)
            break;
        imshow("video", frame);
        if (waitKey(30) >= 0) // show each frame for 30 ms
            break;
    }
    return 0;
}
Edit: trying to read from a file being created by ffmpeg:
for (;;) {
    cap >> frame;
    if (frame.data)
        imshow("video", frame); // show the frame if it loaded successfully
    if (waitKey(30) == 27)      // wait 30 ms; quit if the user presses Esc
        break;
}
I'm not sure how it will handle getting a partial frame at the end of the file while ffmpeg is still creating it.
Sounds like what you want is VideoCapture::open, which can open both video devices and files.
If you're using C, the equivalents are cvCaptureFromFile and cvCaptureFromCAM.