OpenCV 2.4.3 VideoCapture is not working

Recently I migrated to OpenCV 2.4.3 from OpenCV 2.4.1.
My program, which worked well with version 2.4.1, now encounters a problem with 2.4.3.
The problem is with VideoCapture, which cannot open my video file.
I saw a similar problem while searching the internet, but I couldn't find a proper solution. Here is my sample code:
Mat imgFrame;
VideoCapture video(argv[1]);
while(video.grab())
{
    video.retrieve(imgFrame);
    imshow("Video", imgFrame);
    waitKey(1);
}
It's worth mentioning that capturing video from a webcam device works well, but I want to grab a stream from a file.
I'm using Qt Creator 5 and I compiled OpenCV with MinGW, on Windows.
I tried several different video formats and I rebuilt OpenCV with and without ffmpeg, but the problem still persists.
Any idea how to solve the problem?

Try this:
Mat imgFrame;
VideoCapture video(argv[1]);
int delay = 1000.0 / video.get(CV_CAP_PROP_FPS);
while(1)
{
    if (!video.read(imgFrame)) break;
    imshow("Video", imgFrame);
    waitKey(delay);
}

In my experience with OpenCV I struggled with IP cams until my mentor discovered how to get them to work. Don't forget to plug in your own IP address, otherwise it won't work!
import cv2
import numpy as np
import urllib.request

# Connect to the camera's MJPEG stream and initialize the byte buffer
stream = urllib.request.urlopen('http://xx.x.x.xx/mjpg/video.mjpg')
bytes = b''
while True:
    # Read a chunk from the camera into the buffer
    bytes += stream.read(16384)
    # Look for the JPEG start (0xffd8) and end (0xffd9) markers
    a = bytes.find(b'\xff\xd8')
    b = bytes.find(b'\xff\xd9')
    if a != -1 and b != -1:
        jpg = bytes[a:b+2]
        bytes = bytes[b+2:]
        frame = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
        img = frame[0:400, 0:640]  # Crop to camera dimensions [0:HEIGHT, 0:WIDTH]
        # Displays the final product
        cv2.imshow('frame', frame)
        cv2.imshow('img', img)
        # Hit Esc to quit
        if cv2.waitKey(1) == 27:
            exit(0)
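Depending on how OpenCV was built (it needs FFmpeg or GStreamer support for network streams), it may also be possible to hand the MJPEG URL straight to VideoCapture and skip the manual JPEG parsing. A minimal sketch, assuming the same placeholder URL as above:
import cv2

# Assumption: same placeholder camera URL as above, and an OpenCV build whose
# backend (e.g. FFmpeg) can demux MJPEG over HTTP.
cap = cv2.VideoCapture('http://xx.x.x.xx/mjpg/video.mjpg')
if not cap.isOpened():
    raise RuntimeError('Could not open the MJPEG stream')
while True:
    ok, frame = cap.read()
    if not ok:
        break                    # stream ended or a frame could not be decoded
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) == 27:     # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()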

Related

Raspberry Pi Camera and OpenCV: can't open camera by index

I've got a weird problem:
I've installed the OpenCV lib on my Pi. I have a Pi Cam connected to the Pi (I am able to list all video devices and to take a picture with raspistill).
But when I try to grab a video feed from OpenCV with Python:
from flask import Flask, render_template, Response
import cv2
app = Flask(__name__)
cap = cv2.VideoCapture(1)
I get the error:
[ WARN:0] global /tmp/pip-wheel-qd18ncao/opencv-python/opencv/modules/videoio/src/cap_v4l.cpp (893) open VIDEOIO(V4L2:/dev/video0): can't open camera by index
I tried different indices (from -1 up to 13) but nothing works.
Any hints?
I had a similar problem; try specifying the video backend, like:
cap = cv2.VideoCapture(index, cv2.CAP_V4L)
Index can be set to -1 to be automatically detected.
You may also need to enable this module if your Raspberry Pi OS is not recent:
sudo modprobe bcm2835-v4l2
Also take a look here, where a similar problem is described.
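For reference, here is a minimal sketch of opening the camera with the V4L2 backend explicitly and grabbing a single frame; the index 0 and the output file name are assumptions, so adjust them to your setup:
import cv2

# Assumption: the Pi camera shows up as /dev/video0; change the index if needed.
cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
if not cap.isOpened():
    raise RuntimeError("Camera could not be opened via V4L2")
ok, frame = cap.read()
if ok:
    cv2.imwrite("test_frame.jpg", frame)  # save one frame to confirm capture works
cap.release()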

OpenCV no longer opens video files VideoCapture

I have a problem seemingly caused by OpenCV 3.xx; the problem does not manifest in OpenCV 2.xx.
The issue is reading video files. I've set my code up as follows:
#include <opencv2\opencv.hpp>
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\imgproc\imgproc.hpp>
#include <opencv2\features2d\features2d.hpp>

int main()
{
    cv::VideoCapture cap;
    cv::Mat frame;

    if (!cap.open("Myfile.avi"))
        std::cout << "Open failed" << std::endl;
    else
        cap.read(frame);

    cv::imshow("Frame", frame);
    cv::waitKey(5000);
    return 0;
}
Now the problem is that when the code gets to cap.read(frame) I get a "vector subscript is out of range" error with OpenCV 3.4.0, and this does not happen with my build of OpenCV 2.4.9. The file is an AVI, it's not some weird codec, and clearly it works in previous versions of OpenCV.
I've tried other OpenCV 3.xx builds and I get the same or similar problems with simply reading a file in.
My question is twofold:
How do I get OpenCV 3.xx to work with reading video files (or do I need to regress to 2.xx?)
Why has the major revision change completely screwed up video file reading? That doesn't make any sense for a computer vision API.
My guess is it has something to do with the FFMPEG implementation, because various searches have turned up other people having issues with this.
Any help is much appreciated.
Thanks
I've managed to resolve it myself; it turns out that in OpenCV 3.xx I have to force VideoCapture::open to use the FFMPEG backend by doing this:
cap.open("Myfile.avi", cv::CAP_FFMPEG);
where the second parameter is the flag identifying which VideoCapture API backend to use. The full list can be found here for anyone else interested:
https://docs.opencv.org/3.3.0/d4/d15/group__videoio__flags__base.html
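For anyone using the Python bindings, the same backend flag is exposed as cv2.CAP_FFMPEG. A minimal sketch, with "Myfile.avi" kept as a placeholder path:
import cv2

# Force the FFMPEG backend explicitly; "Myfile.avi" is just a placeholder.
cap = cv2.VideoCapture("Myfile.avi", cv2.CAP_FFMPEG)
if not cap.isOpened():
    print("Open failed")
else:
    ok, frame = cap.read()
    if ok:
        cv2.imshow("Frame", frame)
        cv2.waitKey(5000)
cap.release()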

Video file not opening? (OpenCV 3.1.0, Windows)

import cv2
cap = cv2.VideoCapture("StopMoti2001.mpeg")
if cap.isOpened():
    print 'fine'
else:
    print 'not fine'
The output is 'not fine'. I have checked with various videos and I also moved the ffmpeg file to a folder on PATH, but the problem remains the same. Can you please suggest a solution?
Solved my problem. I was actually working in Anaconda, so instead of moving the ffmpeg DLL to the Python27 folder, I had to move it to the Anaconda DLLs folder to make it work.
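If you're unsure whether your OpenCV build can see FFmpeg at all, a quick diagnostic sketch is to print the build information and look for the FFMPEG entry in the "Video I/O" section before retrying the open:
import cv2

# Look for the FFMPEG line in the "Video I/O" section of this output.
print(cv2.getBuildInformation())

# Then retry opening the file and report the result.
cap = cv2.VideoCapture("StopMoti2001.mpeg")
print("opened: " + str(cap.isOpened()))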

OpenCV won't capture frames from an RTMP source, while FFmpeg does

My goal is to capture a frame from an RTMP stream every second and process it using OpenCV. I'm using FFmpeg version N-71899-g6ef3426 and OpenCV 2.4.9 with the Java interface (but I'm first experimenting with Python).
For the moment, I can only take the simple and dirty solution, which is to capture images using FFmpeg, store them on disk, and then read those images from my OpenCV program. This is the FFmpeg command I'm using:
ffmpeg -i "rtmp://antena3fms35livefs.fplive.net:1935/antena3fms35live-live/stream-lasexta_1 live=1" -r 1 capImage%03d.jpg
This is currently working for me, at least with this particular RTMP source. Then I would need to read those images from my OpenCV program in a proper way. I have not actually implemented this part, because I'm trying to find a better solution.
I think the ideal way would be to capture the RTMP frames directly from OpenCV, but I cannot find a way to do it. This is the Python code I'm using:
cv2.namedWindow("camCapture", cv2.CV_WINDOW_AUTOSIZE)
cap = cv2.VideoCapture()
cap.open('"rtmp://antena3fms35livefs.fplive.net:1935/antena3fms35live-live/stream-lasexta_1 live=1"')
if not cap.open:
print "Not open"
while (True):
err,img = cap.read()
if img and img.shape != (0,0):
cv2.imwrite("img1", img)
cv2.imshow("camCapture", img)
if err:
print err
break
cv2.waitKey(30)
Instead of the read() function, I also tried the grab() and retrieve() functions, without any good result. The read() function executes every time, but no "img" or "err" is received.
Is there any other way to do it? Or is there maybe no way to get frames directly with OpenCV 2.4.9 from a stream like this?
I've read that OpenCV uses FFmpeg for this kind of task, but as you can see, in my case FFmpeg is able to get frames from the stream while OpenCV is not.
In case I cannot find a way to get the frames directly from OpenCV, my next idea is to somehow pipe the FFmpeg output to OpenCV, which seems harder to implement.
Any ideas?
Thank you!
UPDATE 1:
I'm on Windows 8.1. I was running the Python script from Eclipse PyDev, so this time I ran it from cmd instead, and I'm getting the following warning:
warning: Error opening file (../../modules/highgui/src/cap_ffmpeg_impl.hpp:545)
As far as I could read, this warning means that either the file path is wrong or the codec is not supported. The question remains the same: is OpenCV not capable of getting frames from this source?
Actually I spent more than one day figuring out how to solve this issue. Finally I solved the problem with the help of this link.
Here is the client-side code.
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/opencv.hpp>

using namespace cv;

int main(int, char**) {
    cv::VideoCapture vcap;
    cv::Mat image;
    const std::string videoStreamAddress = "rtmp://192.168.173.1:1935/live/test.flv";

    if (!vcap.open(videoStreamAddress)) {
        std::cout << "Error opening video stream or file" << std::endl;
        return -1;
    }

    cv::namedWindow("Output Window");
    cv::Mat edges;

    for (;;) {
        if (!vcap.read(image)) {
            std::cout << "No frame" << std::endl;
            cv::waitKey();
        }
        cv::imshow("Output Window", image);
        if (cv::waitKey(1) >= 0) break;
    }
}
Note: In this case I created an Android application to capture real-time video and send it to an RTMP server (Wowza) deployed on a PC. That is why I wrote this C++ implementation for real-time video processing.
python -c "import cv2; print(cv2.getBuildInformation())"
check build opencv with ffmpeg。If it is correct, your code should be fine。
If not, rebuild opencv with ffmpeg。
Under osx
brew install opencv --with-ffmpeg
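Once the build information confirms FFmpeg support, it may be possible to read the RTMP stream directly from Python. A minimal sketch, reusing the stream URL from the question (note: without the extra inner quotes):
import cv2

# Pass the bare RTMP URL; no extra quotes inside the string.
url = "rtmp://antena3fms35livefs.fplive.net:1935/antena3fms35live-live/stream-lasexta_1 live=1"
cap = cv2.VideoCapture(url)
if not cap.isOpened():
    raise RuntimeError("Could not open the RTMP stream")
while True:
    ok, img = cap.read()
    if not ok:
        print("No frame")
        break
    cv2.imshow("camCapture", img)
    if cv2.waitKey(30) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()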

Problems converting video to pictures using python 2.7

I've tried a lot of options and am running out of ideas, so I was hoping someone here could help. I am trying to write some Python code that will extract frames (say every tenth frame) from a video (.avi or .wmv) and save each as a picture (.jpg preferably, but other formats will do). I have had no success and was wondering if someone could provide an alternative to what I have tried.
I have tried PyMedia (the example in their tutorial http://pymedia.org/tut/src/dump_video.py.html does not work; the program bombs out when it looks for the video codecs):
import pymedia.muxer as muxer
import pymedia.video.vcodec as vcodec

inFile = "VideoTest.avi"
#dm= muxer.Demuxer( inFile.split( '.' )[ -1 ] )  # This line does not work
dm= muxer.Demuxer( 'avi' )  # This modified line does seem to work, however
i= 1
f= open( inFile, 'rb' )
s= f.read( 400000 )
r= dm.parse( s )
v= filter( lambda x: x[ 'type' ]== muxer.CODEC_TYPE_VIDEO, dm.streams )
v_id= v[ 0 ][ 'index' ]
print 'Assume video stream at %d index: ' % v_id
c= vcodec.Decoder( dm.streams[ v_id ] )  # This is the point where it crashes.
I have tried OpenCV v2.2 for Python, but that doesn't work either (I can get most of OpenCV to work, except the one function I need, CaptureFromFile). I believe this function doesn't work because on Windows it requires highgui to operate, and for some reason Python and OpenCV cannot find highgui even though it is in the correct directory. I also understand OpenCV has issues with finding and applying the correct video codecs, so I am not sure which is the cause of my problem.
I have looked at pyFFMPEG, but the last build of that was for version 2.6 and I am running Python 2.7.
I am running this on Windows Vista and Windows 7 machines, with Python 2.7 and OpenCV 2.2 installed on C:\ and all other Python packages (pygame, pymedia, numpy, scipy) installed in "C:\python27\Libs\site-packages...". I downloaded and installed from executables; all packages were built for Python 2.7. My Path variable includes Python27 and OpenCV, and I have a PYTHONPATH variable.
Thanks for any ideas or recommendations.
I tried the following to extract each frame as a separate image:
import cv2

file_path = r"some\path\to\the\file.avi"   # raw string so the backslashes are kept literally
video_object = cv2.VideoCapture(file_path)
success = True
i = 0
while success:
    success, frame = video_object.read()
    if success:
        cv2.imwrite(file_path[:-4] + "_" + str(i) + ".jpg", frame)
        i += 1
print("Done!")
This script saves each frame as an image in the same folder as the source file. It may not be the most efficient way to do this, but it works for me! I know my string parsing is not the most recommended, but it works.
video_object.read() returns two objects. The first is a bool indicating whether the read operation was a success, and the second is the image.
There may be limitations with respect to the video codec, of which I am not aware. I'm using Python 2.7 with the most recent versions of OpenCV and NumPy as of 11/7/2013.
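Since the question asks for every tenth frame rather than every frame, here is a small variation of the script above (same hypothetical file path) that writes one image per ten frames:
import cv2

file_path = r"some\path\to\the\file.avi"    # placeholder path, same as above
video_object = cv2.VideoCapture(file_path)
frame_index = 0
saved = 0
while True:
    success, frame = video_object.read()
    if not success:
        break
    if frame_index % 10 == 0:               # keep only every tenth frame
        cv2.imwrite(file_path[:-4] + "_" + str(saved) + ".jpg", frame)
        saved += 1
    frame_index += 1
video_object.release()
print("Done! Saved " + str(saved) + " frames.")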
