I am trying to access a live video stream via OpenCV. First, I typed the RTSP URL into VLC and the video plays without any problem.
However, when I put the same RTSP URL into my Python code and into ffmpeg, neither is able to read any data.
The Python code is minimal, as follows, and I always get the "error" print message on the very first read.
import cv2

vcap = cv2.VideoCapture('rtsp://XX.XX.XX.XX/mystream')
while True:
    ret, frame = vcap.read()
    if ret:
        cv2.imshow("test", frame)
        cv2.waitKey(1)
    else:
        print("error")
        break
For the ffmpeg part, I simply ran the following command to test whether there is any stream information I can access, but it is not working either.
ffmpeg -i rtsp://XX.XX.XX.XX/mystream
The codec I got from VLC is shown in the picture below:
Does anyone have the same problem, and how can I fix it?
Thank you.
I'm creating a video sample by combining cropped-face images produced with the Deepface Python library.
I saved the cropped faces in a list called cropped_faces, and all the frames were saved successfully.
The list content of cropped_faces[]:
Finally, I use the cv2.VideoWriter.write() method in a for loop to write the frames to the video. The code is as follows:
height, width, layers = cropped_faces[0].shape
print(height, width, layers)

fourcc = cv2.VideoWriter_fourcc(*'H264')
video = cv2.VideoWriter('/content/gdrive/MyDrive/Kaggle/Data/output.mp4', fourcc, 1, (width, height))

for j in range(0, 5):
    video.write(cropped_faces[j])

video.release()
cv2.destroyAllWindows()
print("The video was successfully saved")
The output:
Although the code raises no errors, the video saved at '/content/gdrive/MyDrive/Kaggle/Data/' is not playable; only a 258-byte file is created.
Can somebody please help me find the reason and solve this issue?
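For what it's worth, a file of only a few hundred bytes usually means every write() call was silently dropped: VideoWriter ignores frames whose size or dtype doesn't match what the writer was opened with, and it writes nothing when the fourcc (here 'H264') isn't available in the build, in which case 'mp4v' is a common fallback to try. A minimal pre-flight check, assuming cropped_faces holds NumPy BGR images (the helper name is mine):

```python
import numpy as np

def check_frames(frames, size):
    """Return indices of frames that cv2.VideoWriter would silently drop.

    `size` is the (width, height) tuple passed to cv2.VideoWriter;
    each frame must be uint8, 3-channel, and exactly height x width.
    """
    width, height = size
    bad = []
    for i, f in enumerate(frames):
        ok = (
            isinstance(f, np.ndarray)
            and f.dtype == np.uint8
            and f.ndim == 3
            and f.shape[:2] == (height, width)
        )
        if not ok:
            bad.append(i)
    return bad

# Example: one matching frame and one with width/height swapped.
good = np.zeros((600, 800, 3), dtype=np.uint8)
swapped = np.zeros((800, 600, 3), dtype=np.uint8)
print(check_frames([good, swapped], (800, 600)))  # → [1]
```

Running this before opening the writer tells you immediately whether the (width, height) order or the frame dtype is the culprit.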
I write a video using cv2.VideoWriter with fps=5; VLC plays it back at 5fps; and ffprobe reports tbr=5.
I write it with fps=4.5; VLC plays it back at 4.5fps; but ffprobe reports tbr=9.
How can I detect 4.5 fps?
EDIT: By the way, the fps shown in the Windows file manager metadata, and returned by cv2's get(cv2.CAP_PROP_FPS), is 600.
EDIT2:
Turns out the issue only occurs on the Raspberry Pi. It looks like cv2 does not write the metadata correctly there, since ffprobe on the rpi works fine on a file created on my laptop. However, even the rpi-created file plays fine, so there must be a way VLC detects the fps.
import cv2

source = "test.avi"
reader = cv2.VideoCapture(source)
writer = cv2.VideoWriter("temp.avi", cv2.VideoWriter_fourcc(*'DIVX'), 4.5, (800, 600))

while True:
    res, img = reader.read()
    if not res:
        break
    writer.write(img)

reader.release()
writer.release()
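Note that ffprobe's tbr is a rounded estimate of the time base, so a non-integer rate like 4.5 can surface as 9. The stream's avg_frame_rate (or r_frame_rate) field keeps the exact rational, e.g. "9/2" for 4.5 fps. A small sketch of parsing it (the ffprobe invocation in the comment is an assumption about your setup):

```python
from fractions import Fraction

def parse_rate(rate):
    """Convert an ffprobe rate string like '9/2' or '5/1' to a float fps."""
    return float(Fraction(rate))

# The rate string would come from something like:
#   ffprobe -v 0 -select_streams v:0 -show_entries stream=avg_frame_rate \
#           -of default=nw=1:nk=1 temp.avi
print(parse_rate("9/2"))  # → 4.5
print(parse_rate("5/1"))  # → 5.0
```

Reading avg_frame_rate instead of tbr should recover the 4.5 regardless of which machine wrote the file.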
For the past two weeks I have been trying to find a proper way to read frames from an .mts video file and process them in OpenCV. When the .mts file is in 25p (25 fps progressive) format, OpenCV's VideoCapture seeks video frames fine, but when it is in 50i (25 fps interlaced) format, VideoCapture cannot properly decode it frame by frame.
(For example, if I read frame #1, then read frame #300, and then read frame #1 again, the second read of frame #1 returns a corrupted image different from the first read.) I am using OpenCV 2.4.6.
I decided to replace the video-decoder part of the program.
I tried FFmpegSource2, but the problem of proper frame seeking for .mts was not resolved (most of the time the FFMS_GetFrame function returns the same output for several consecutive frames of a 50i .mts file).
I also tried DirectShow, but the IsFormatSupported method of IMediaSeeking does not return S_OK for TIME_FORMAT_FRAME on a 50i .mts video file; it only supports TIME_FORMAT_MEDIA_TIME for this kind of file. I have not tried it myself, but a friend said that even using TIME_FORMAT_MEDIA_TIME for frame seeking results in the same problem, so I may not be able to jump back and forth to individual frames and read their data.
Now I am going to try gstreamer. I found a sample method for linking gstreamer and OpenCV at the following link:
Adding opencv processing to gstreamer application
When I try to compile it in gstreamer 1.0, I get the following error:
error C3861: 'gst_app_sink_pull_buffer': identifier not found
I have included gst/gst.h, gst/app/gstappsink.h, gst/app/gstappsrc.h
I looked at the following help link, and the gst_app_sink_pull_buffer function was not there either:
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-libs/html/gst-plugins-base-libs-appsink.html
I am using gstreamer 1.0 (v1.2.0) from gstreamer.freedesktop.org
Maybe the gstreamer SDK from www.gstreamer.com (based on gstreamer 0.1) would work, but I have not tried it yet and prefer to use gstreamer from gstreamer.freedesktop.org.
I don't know where gst_app_sink_pull_buffer is defined. Does anybody know how I can compile the sample method provided for gstreamer 0.1 in
"Adding opencv processing to gstreamer application" against gstreamer 1.0?
Thank you in advance.
UPDATE 1: I am new to gstreamer. I now know that I have to port the sample method from "Adding opencv processing to gstreamer application" from gstreamer 0.1 to gstreamer 1.0. I replaced the gst_app_sink_pull_buffer call with gst_app_sink_pull_sample and gst_sample_get_buffer. I still have to work on the other parts of the code and see whether I can open a desired frame of the 50i .mts video file and process it with OpenCV.
UPDATE 2: I found a very good example at
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/section-data-spoof.html#section-spoof-appsink
I easily replaced the part that saves the snapshot using GTK with functions that load the frame data buffer into an OpenCV Mat. This program works fine for many video file types, and I can grab frames of the video file into an OpenCV Mat. But when the input is a 50i .mts video file, it returns the following errors and I cannot read the frame data:
No accelerated IMDCT transform found
0:00:00.405110839 4632 0B775380 ERROR libav :0:: get_buffer() failed (-1 2 00000000)
0:00:00.405740899 4632 0B775380 ERROR libav :0:: decode_slice_header error
0:00:00.406401077 4632 0B7756A0 ERROR libav :0:: Missing reference picture
0:00:00.406705867 4632 0B7756A0 ERROR libav :0:: Missing reference picture
0:00:00.416044436 4632 0B7759C0 ERROR libav :0:: Cannot combine reference and non-reference fields in the same frame
0:00:00.416813339 4632 0B7759C0 ERROR libav :0:: decode_slice_header error
0:00:00.417725301 4632 0B775CE0 ERROR libav :0:: Missing reference picture
Step-by-step debugging shows that "No accelerated IMDCT transform found" appears after running
ret = gst_element_get_state( pipeline, NULL, NULL, 5 * GST_SECOND );
and a Google search suggests that I can ignore it as a warning.
All of the other errors emerge just after running
g_signal_emit_by_name( sink, "pull-preroll", &sample, NULL );
I have no idea how to resolve this issue. I have already played this .mts file in another example using playbin, and gstreamer plays the file well in that case.
I am using a VIVOTEK IP camera and trying to interface it with OpenCV. Internet Explorer shows the video fine at this URL after I enter the username and password.
The code is given below:
const std::string videoStreamAddress = "http://192.168.100.128/main.html";
// I have also tried "http://username:password@192.168.100.128/main.html", with the same result,
// and also "http://192.168.100.128", i.e. without "main.html".

if (!vcap.open(videoStreamAddress))
{
    std::cout << "Error opening video stream or file" << std::endl;
}
I got the following error:
warning: Error openong file <../../modules/highgui/src/cap_ffmpeg_impl.hpp:529>
Error opening video stream or file
What could be the problem?
The URL you have given is the problem. You can use a URL something like this:
"http://username:password@ipOfCamera/axis-cgi/mjpg/video.cgi?resolution=640x480&req_fps=30&.mjpg"
Another option is to download the iSpy software and use its IP camera wizard, which finds the URL for you and suggests the best choice for the camera you are using. I used this approach.
Here's the code which worked for me, as far as getting the live feed from the IP camera goes.
Here's the list of URLs which can be used to get video from your IP camera:
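Assuming a URL of that MJPEG shape works for the camera, a small helper that builds it keeps the credentials and query string straight (the host, username, password, and the helper name below are all placeholders of mine; note the credentials are separated from the host by '@'):

```python
def mjpeg_url(host, user, pwd, width=640, height=480, fps=30):
    """Build an axis-cgi-style MJPEG URL; credentials go before the '@'."""
    return (f"http://{user}:{pwd}@{host}/axis-cgi/mjpg/video.cgi"
            f"?resolution={width}x{height}&req_fps={fps}&.mjpg")

url = mjpeg_url("192.168.100.128", "admin", "secret")
print(url)

# Opening the stream would then be:
#   vcap = cv2.VideoCapture(url)
#   if not vcap.isOpened():
#       print("Error opening video stream or file")
```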
OK, what I am trying to do is retrieve a frame from an existing video file, do some work on the frame, and then save it to a new file.
What actually happens is that it writes some frames and then crashes, as the code runs quite fast.
If I don't put in cvWaitKey(), I get the same error I get when writing video frames with the AVFoundation library without using
AVAssetWriterInput.readyForMoreMediaData
The OpenCV video writer is implemented using AVFoundation classes, but we lose access to
AVAssetWriterInput.readyForMoreMediaData
Or am I missing something?
Here is code similar to what I'm trying to do:
while (grabResult && frameResult) {
    grabResult = cvGrabFrame(capture); // capture a frame
    if (grabResult) {
        img = cvRetrieveFrame(capture, 0); // retrieve the captured frame
        cvFlip(img, NULL, 0); // edit img
        frameResult = cvWriteFrame(writer, img); // add the frame to the file
        cvWaitKey(-1); // or anything that helps to finish adding the previous frame
    }
}
I am trying to convert a video file using OpenCV (without displaying it) in my iPhone/iPad app. Everything works except the cvWaitKey() function, for which I get this error:
OpenCV Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvWaitKey,
Without this function, frames are dropped, as there's no way to know whether the video writer is ready. Is there an alternative?
I am using OpenCV 2.4.2, and I get the same error with the latest precompiled version of OpenCV.
Repaint the UIImageView:
[[NSRunLoop currentRunLoop]runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.0f]];
followed by your imshow or cvShowImage call.
I don't know anything about iOS development, but did you try calling the C++ interface method waitKey()?