I am trying to write a video from a sequence of depth maps.
I have converted the depth image from cv::Mat_ to a single-channel cv::Mat.
But no codec I use is able to open the AVI file I want to write. VideoWriter.open(...) doesn't seem to be able to either create the file or open it.
I think the problem is choosing the right codec, but I might be wrong. I have posted a small code snippet.
cv::VideoWriter outputVideo_;
source_ = "~/Hello.avi";  // note: "~" is a shell convention, not expanded by the C++ runtime
cv::Size S(480, 640);     // cv::Size is (width, height)
outputVideo_.open(source_, CV_FOURCC('D','I','B',' '), 20, S, false);
if (!outputVideo_.isOpened())
{
    std::cout << "Could not open the output video for write: " << source_ << std::endl;
    return;
}
How do I get OpenCV to work correctly in this case? I am using Ubuntu 12.04, ROS (Robot Operating System), and OpenCV 2.4.2.
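Two things in the snippet are worth double-checking before blaming the codec: CV_FOURCC just packs four ASCII characters into one little-endian integer, and "~" is a shell convention that OpenCV passes to the OS unexpanded. A minimal sketch of both in Python (pure stdlib, no OpenCV required):

```python
import os

def fourcc(c1, c2, c3, c4):
    # Same packing as CV_FOURCC: four ASCII chars into one little-endian int.
    return ord(c1) | (ord(c2) << 8) | (ord(c3) << 16) | (ord(c4) << 24)

# 'DIB ' requests uncompressed RGB frames.
code = fourcc('D', 'I', 'B', ' ')   # 0x20424944

# The writer gets the path verbatim, so expand "~" yourself first.
path = os.path.expanduser("~/Hello.avi")
```

If the writer still fails to open, the usual next steps are trying a different codec such as CV_FOURCC('M','J','P','G') or passing an absolute path.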
Try using the open() function with only the file name, because that works for me:
VideoWriter outputVideo_;
source_ = "~/Hello.avi";
// cv::Size S(480, 640);
outputVideo_.open(source_);
if (!outputVideo_.isOpened())
{
    std::cout << "Could not open the output video for write: " << source_ << std::endl;
    return;
}
I'm trying to access runfiles within C++. I'm using Bazel 5.2.0. I tried to access them like this:
std::string error;
std::unique_ptr<Runfiles> runfiles(Runfiles::Create(argv[0], &error));
if (!runfiles) {
std::cerr << error << std::endl;
return 1;
}
std::string path = runfiles->Rlocation("Test/Example.tx");
std::cout << "Example.tx: " << path << std::endl;
std::ifstream in(path);
if (!in.is_open())
{
std::cout << "Example.tx not found" << std::endl;
return -1;
}
(Example.tx is correct; I was just too lazy to rename it.)
The program is finding a path but the path starts from the bazelisk directory and doesn't point to the binary dir.
Example.tx: C:\users\nikla\_bazel_nikla\d47dtf2d\execroot\__main__\bazel-out\x64_windows-fastbuild\bin\Test\Test.exe.runfiles/Test/Example.tx
Example.tx not found
This is the result I'm getting.
Maybe there is a new way to access runfiles, but I'm not finding it.
The answer is that you have to include the workspace name in the path; if no workspace name is set, you have to use __main__.
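Since the runfiles lookup is just string-keyed, the rule can be illustrated with a small Python sketch (the helper name is hypothetical; the C++ call would be runfiles->Rlocation("__main__/Test/Example.tx")):

```python
def rlocation_key(path, workspace=""):
    # Runfiles lookups are keyed as "<workspace>/<path>"; when no
    # workspace name is set in WORKSPACE, Bazel falls back to "__main__".
    return f"{workspace or '__main__'}/{path}"
```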
I'm trying to record from a Logitech Brio at 60fps, preferably at 1080p.
It should work because I can get it working on OBS and many others have achieved the settings.
Here is the code I am using to try to capture at this rate:
// Do some grabbing
cv::VideoCapture video_capture;
video_capture.set(cv::CAP_PROP_FRAME_WIDTH, 1920);
video_capture.set(cv::CAP_PROP_FRAME_HEIGHT, 1080);
video_capture.set(cv::CAP_PROP_FPS, 60);
{
    INFO_STREAM("Attempting to capture from device: " << device);
    video_capture = cv::VideoCapture(device);
    // Read a first frame, which is often empty from a camera
    cv::Mat captured_image;
    video_capture >> captured_image;
}
if (!video_capture.isOpened())
{
    FATAL_STREAM("Failed to open video source");
    return 1;
}
else INFO_STREAM("Device or file opened");
cv::Mat captured_image;
video_capture >> captured_image;
What should I be doing differently for the Brio?
I had the same problem: same camera, couldn't change resolution or fps. After hours of working on this and digging through the internet, I found a solution:
you need to use DSHOW and read from capture device 1 instead of 0. Code below for reference.
fourcc = cv2.VideoWriter_fourcc('M','J','P','G')
cap = cv2.VideoCapture()
cap.open(cameraNumber + 1 + cv2.CAP_DSHOW)
cap.set(cv2.CAP_PROP_FOURCC, fourcc)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv2.CAP_PROP_FPS, 60)
Sorry, I only did this in Python, but I hope the same solution works in C++.
I assume you can do something along the lines of
video_capture = cv::VideoCapture(device + 1 + cv::CAP_DSHOW);
With OpenCV 4.1.0, this achieved 4K video under Windows with my Logitech BRIO. The important thing in the end seemed to be using CAP_DSHOW and setting the resolution after initialising the camera, not before.
cv::VideoCapture capture;
capture = cv::VideoCapture(cv::CAP_DSHOW);
if (!capture.isOpened())
{
    cerr << "ERROR: Can't initialize camera capture" << endl;
    return 1;
}
capture.set(cv::CAP_PROP_FRAME_WIDTH, 3840);
capture.set(cv::CAP_PROP_FRAME_HEIGHT, 2160);
capture.set(cv::CAP_PROP_FPS, 30);
I think the problem has nothing to do with the camera. The code might not work because you open the video capture inside a separate scope. Upon exiting that scope, the destructor of the video_capture instance will be called, and therefore the !isOpened() check will always return true. I can't understand why you use those braces. Instead it should be:
INFO_STREAM("Attempting to capture from device: " << device);
auto video_capture = cv::VideoCapture(device);
if (!video_capture.isOpened())
{
    FATAL_STREAM("Failed to open video source");
    return 1;
}
video_capture.set(cv::CAP_PROP_FRAME_WIDTH, 1920);
video_capture.set(cv::CAP_PROP_FRAME_HEIGHT, 1080);
video_capture.set(cv::CAP_PROP_FPS, 60);
INFO_STREAM("Device or file opened");
cv::Mat captured_image;
video_capture >> captured_image;
After some troubleshooting of my own, I found that ffarhour's solution worked for me:
fourcc = cv2.VideoWriter_fourcc('M','J','P','G')
cap = cv2.VideoCapture()
cap.open(cameraNumber + 1 + cv2.CAP_DSHOW)
cap.set(cv2.CAP_PROP_FOURCC, fourcc)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv2.CAP_PROP_FPS, 60)
But for anyone troubleshooting this in the future, I also want to add that you need a good USB cable and preferably direct access to a USB 3.0 port. I had the camera in my dock at first, and only 1080p worked.
A good check is the built-in Windows 10 Camera app. In my experience, high resolutions will only show up as options there if your cable and USB port support them, so don't bother with OpenCV until this works first.
Additionally, for those just starting out: the code snippet above goes before your OpenCV while loop, and inside the loop you add:
while True:
    ret, frame = cap.read()
    frame = cv2.cvtColor(frame, 1)
    cv2.imshow('original image', frame)
    cv2.waitKey(2)
Finally, I want to note that 30 fps worked best for me (better than 24 fps) and that the camera's maximum resolution is not 3840 x 2160 pixels but 4096 x 2160. How cool is that?
I also strongly advise downloading the Logitech driver for the Brio, called 'Logitech Camera Settings'; it lets you set the FOV, autofocus, and other things you otherwise could never access.
My goal is to capture a frame from an RTMP stream every second and process it using OpenCV. I'm using FFmpeg version N-71899-g6ef3426 and OpenCV 2.4.9 with the Java interface (but I'm first experimenting with Python).
For the moment, I can only use the quick and dirty solution: capture images with FFmpeg, store them on disk, and then read those images from my OpenCV program. This is the FFmpeg command I'm using:
ffmpeg -i "rtmp://antena3fms35livefs.fplive.net:1935/antena3fms35live-live/stream-lasexta_1 live=1" -r 1 capImage%03d.jpg
This currently works for me, at least with this particular RTMP source. I would then need to read those images from my OpenCV program in a proper way. I have not actually implemented this part yet, because I'm trying to find a better solution.
I think the ideal way would be to capture the RTMP frames directly from OpenCV, but I cannot find a way to do it. This is the Python code I'm using:
cv2.namedWindow("camCapture", cv2.CV_WINDOW_AUTOSIZE)
cap = cv2.VideoCapture()
cap.open('rtmp://antena3fms35livefs.fplive.net:1935/antena3fms35live-live/stream-lasexta_1 live=1')
if not cap.isOpened():
    print "Not open"
while True:
    err, img = cap.read()
    if img is not None and img.shape != (0, 0):
        cv2.imwrite("img1.jpg", img)
        cv2.imshow("camCapture", img)
    if err:
        print err
        break
    cv2.waitKey(30)
Instead of the read() function, I'm also trying the grab() and retrieve() functions, without any good result. The read() function executes every time, but no img or err is received.
Is there any other way to do it? Or maybe there is no way to get frames directly in OpenCV 2.4.9 from a stream like this?
I've read that OpenCV uses FFmpeg for this kind of task, but as you can see, in my case FFmpeg is able to get frames from the stream while OpenCV is not.
In case I cannot find a way to get the frames directly from OpenCV, my next idea is to somehow pipe FFmpeg's output into OpenCV, which seems harder to implement.
Any ideas? Thank you!
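On the piping idea, a rough sketch of how it could look, assuming ffmpeg is on PATH and the stream's frame size is known in advance (the URL and dimensions below are placeholders, not taken from the question). A rawvideo/bgr24 stream is just width*height*3 bytes per frame, so the slicing part is plain byte arithmetic:

```python
import subprocess

def frame_chunks(raw, width, height, channels=3):
    # Slice a raw BGR byte stream into whole frames; any trailing
    # partial frame is dropped.
    size = width * height * channels
    return [raw[i:i + size] for i in range(0, len(raw) - size + 1, size)]

def open_stream(url, fps=1):
    # Ask ffmpeg to decode the stream to raw BGR frames on stdout.
    return subprocess.Popen(
        ["ffmpeg", "-i", url, "-f", "rawvideo", "-pix_fmt", "bgr24",
         "-r", str(fps), "-"],
        stdout=subprocess.PIPE)
```

Each returned chunk can then be turned into an OpenCV image with numpy, e.g. np.frombuffer(chunk, dtype=np.uint8).reshape(height, width, 3).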
UPDATE 1:
I'm on Windows 8.1. I was running the Python script from Eclipse PyDev, so this time I ran it from cmd instead, and I'm getting the following warning:
warning: Error opening file (../../modules/highgui/src/cap_ffmpeg_impl.hpp:545)
This warning means, as far as I can tell, that either the file path is wrong or the codec is not supported. The question remains the same: is OpenCV incapable of getting the frames from this source?
Actually, I spent more than one day figuring out how to solve this issue. Finally, I solved the problem with the help of this link.
Here is client side code.
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/opencv.hpp>
using namespace cv;
int main(int, char**) {
    cv::VideoCapture vcap;
    cv::Mat image;
    const std::string videoStreamAddress = "rtmp://192.168.173.1:1935/live/test.flv";
    if (!vcap.open(videoStreamAddress)) {
        std::cout << "Error opening video stream or file" << std::endl;
        return -1;
    }
    cv::namedWindow("Output Window");
    cv::Mat edges;
    for (;;) {
        if (!vcap.read(image)) {
            std::cout << "No frame" << std::endl;
            cv::waitKey();
        }
        cv::imshow("Output Window", image);
        if (cv::waitKey(1) >= 0) break;
    }
}
Note: in this case I created an Android application to capture real-time video and send it to an RTMP server (Wowza) deployed on a PC. That is why I created this C++ implementation for real-time video processing.
python -c "import cv2; print(cv2.getBuildInformation())"
Check whether your OpenCV build includes FFmpeg. If it does, your code should be fine.
If not, rebuild OpenCV with FFmpeg. Under OSX:
brew install opencv --with-ffmpeg
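The check above can also be scripted. A minimal sketch (the exact layout of the getBuildInformation() output varies between OpenCV versions, so treat the parsing as an approximation):

```python
def ffmpeg_enabled(build_info):
    # getBuildInformation() prints a "Video I/O" section with lines
    # like "    FFMPEG:    YES (ver ...)" or "    FFMPEG:    NO".
    for line in build_info.splitlines():
        if "FFMPEG" in line.upper():
            return "YES" in line.upper()
    return False
```

Usage would be something like ffmpeg_enabled(cv2.getBuildInformation()).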
I'm using CMake to build my project with OpenCV. There are two sub-projects, A and B, under the top directory. A uses no OpenCV functions, while B uses VideoCapture to get images from a webcam. There was no problem at first.
However, after I added the code from B to A, B can still capture images from the webcam, but A cannot do the same thing. The error is below:
HIGHGUI ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
VIDIOC_STREAMON: Inappropriate ioctl for device
It is strange, and I find that VideoCapture cannot get an image in A. The code is below:
VideoCapture cam;
cam.open(0);
if (!cam.isOpened()) {
    cout << "Failed to open webcam" << endl;
    return false;
}
Mat Image;
cam >> Image;
if (Image.empty())
    cout << "Image empty" << endl;
"Image empty" is always printed to the console, which means it just cannot capture the image at all!
I followed some suggestions such as installing v4l2ucp, but there is no folder under /usr/lib/ named libv4l, so LD_PRELOAD=/usr/lib/libv4l/v4l1compat.so does not work.
I would very much appreciate it if someone could help me solve the problem in project A.
The file might not necessarily be there. Try find / -name "*v4l1compat.so*" 2>/dev/null or find / -name "*libv4l*" 2>/dev/null. It should succeed, since your project B captures frames just fine. Then try exporting the found file to LD_PRELOAD.
If that doesn't succeed, check your libv4l installation.
And make sure you are trying to open() the correct camera, one that is not already in use.
This is a code snippet from O'Reilly's Learning OpenCV:
cvNamedWindow("Example3", CV_WINDOW_AUTOSIZE);
g_capture = cvCreateFileCapture(argv[1]);
int frames = (int) cvGetCaptureProperty(g_capture, CV_CAP_PROP_FRAME_COUNT);
if (frames != 0) {
    cvCreateTrackbar("Position", "Example3", &g_slider_postion, frames, onTrackbarSlide);
}
But unfortunately, cvGetCaptureProperty always returns 0. I searched the OpenCV group on Yahoo and found the same problem.
Oh, I get it. I found this code snippet among Learning OpenCV's sample code:
/*
OK, you caught us. Video playback under linux is still just bad. Part of this is due to FFMPEG, part of this
is due to lack of standards in video files. But the position slider here will often not work. We tried to at least
find number of frames using the "getAVIFrames" hack below. Terrible. But, this file shows something of how to
put a slider up and play with it. Sorry.
*/
// Hack because sometimes the number of frames in a video is not accessible.
// Probably delete this on Windows.
int getAVIFrames(char *fname) {
    char tempSize[4];
    // Trying to open the video file
    ifstream videoFile(fname, ios::in | ios::binary);
    // Checking the availability of the file
    if (!videoFile) {
        cout << "Couldn't open the input file " << fname << endl;
        exit(1);
    }
    // Get the number of frames (32-bit little-endian int at offset 0x30)
    videoFile.seekg(0x30, ios::beg);
    videoFile.read(tempSize, 4);
    int frames = (unsigned char)tempSize[0] + 0x100 * (unsigned char)tempSize[1]
               + 0x10000 * (unsigned char)tempSize[2] + 0x1000000 * (unsigned char)tempSize[3];
    videoFile.close();
    return frames;
}
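For what it's worth, the hack ports directly to Python, which makes the byte arithmetic easier to see: the sample is reading what is, in typical AVI files, the dwTotalFrames field of the 'avih' main header, a 32-bit little-endian unsigned integer at byte offset 0x30. A sketch:

```python
import struct

def avi_frame_count(data):
    # In a typical AVI file, dwTotalFrames sits at offset 0x30
    # as a 32-bit little-endian unsigned int (the same hack as above,
    # so the same caveats apply).
    return struct.unpack_from("<I", data, 0x30)[0]
```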
I had the same problem. It says it would work on Windows, but it does not. I guess it is because I use Dev-C++, and Dev-C++ uses gcc, though I'm not sure that is the reason.
I don't seem to have this problem in the Linux version (the one installed as part of a ROS installation), but I keep running into it on OSX. I thought it had to do with the OpenCV version I was using (I installed the Linux one rather recently), so I installed OpenCV 2.2 on my Mac, but the problem persists.
Does anyone know if this has been corrected completely in the latest version of the repository?
Worse, I didn't have this problem on Windows 7, and then a few days later I did, with the same video file. No rhyme or reason.