OpenCV: can't set resolution of video capture

I am using OpenCV 2.4.5 on Ubuntu 12.04 64-bit. I would like to be able to set the resolution of the input from my Logitech C310 webcam. The camera supports up to 1280x960 at 30fps, and I am able to view the video at this resolution in guvcview. But OpenCV always gets the video at only 640x480.
Trying to change the resolution with cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280) and cap.set(CV_CAP_PROP_FRAME_HEIGHT, 960) immediately after the VideoCapture cap is created has no effect; trying to set them immediately before getting every frame causes the program to crash immediately. I cannot reduce the resolution with this method either. I am also getting the error "HIGHGUI ERROR: V4L/V4L2: VIDIOC_S_CROP". I think this may be related, because it appears once when the VideoCapture is created, and once when I try to set the width and height (but, oddly, not if I try to set only one of them).
I know I'm not the first to have this problem, but I have yet to find a solution after much Googling and scouring of SO and elsewhere on the internet (among the many things I've already tried to no avail is the answer to this StackOverflow question: Increasing camera capture resolution in OpenCV). Is this a bug in OpenCV? If so, it's a rather glaring one.
Here's an example of code that exhibits the problem (just a modified version of OpenCV's video display code):
#include <cv.h>
#include <highgui.h>

using namespace cv;

int main(int argc, char** argv)
{
    VideoCapture cap(0); // open the default camera
    if(!cap.isOpened())  // check if we succeeded
        return -1;

    cap.set(CV_CAP_PROP_FRAME_WIDTH, 160);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 120);

    Mat image;
    namedWindow("Video", CV_WINDOW_AUTOSIZE);

    while(1)
    {
        // cap.set(CV_CAP_PROP_FRAME_WIDTH, 160);
        // cap.set(CV_CAP_PROP_FRAME_HEIGHT, 120);
        cap >> image;
        imshow("Video", image);
        if(waitKey(10) == 99) break; // 'c' quits
    }
    return 0;
}
As it is, that gets me two "HIGHGUI ERROR"s as described above and I get a 640x480 output. I know that 160x120 is a resolution that my camera supports from running v4l2-ctl --list-formats-ext. If I uncomment the two commented-out lines in the while loop, the program crashes immediately.
These might be related or have possible solutions: http://answers.opencv.org/question/11427/decreasing-capture-resolution-of-webcam/, http://answers.opencv.org/question/30062/error-setting-resolution-of-video-capture-device/

This is a bug in the v4l "version" (build) of OpenCV 2.4 (including 2.4.12), but the bug is not in the libv4l version. For OpenCV 3.1.0, neither the v4l nor the libv4l version has the bug.
(Your error message HIGHGUI ERROR: V4L/V4L2: VIDIOC_S_CROP indicates that you have the v4l version; the message is in cap_v4l.cpp, see the code, but not in cap_libv4l.cpp.)
A workaround to get the v4l version of OpenCV 2.4 to work at a fixed resolution other than 640x480 is to change the values of DEFAULT_V4L_WIDTH and DEFAULT_V4L_HEIGHT in modules/highgui/src/cap_v4l.cpp and rebuild OpenCV, kudos to this answer.
If you want to build the libv4l version instead, all you likely need to do is
install libv4l-dev and rebuild OpenCV; WITH_LIBV4L was enabled by default for me. If it is not, your cmake command should contain
-D WITH_LIBV4L=ON
The cmake output (or version_string.tmp) for a libv4l build contains something like
Video I/O:
...
V4L/V4L2: Using libv4l1 (ver 0.8.6) / libv4l2 (ver 0.8.6)
(For a v4l build, it is just V4L/V4L2: NO/YES.)
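With a libv4l build (or OpenCV 3.1.0), setting the properties right after opening the device should then take effect. A minimal sketch to request a resolution and read it back for verification, assuming the camera actually advertises the requested mode (check with v4l2-ctl --list-formats-ext), could look like this:
#include <cv.h>
#include <highgui.h>
#include <cstdio>

using namespace cv;

int main()
{
    VideoCapture cap(0); // open the default camera
    if(!cap.isOpened())
        return -1;

    // Request a mode the camera supports (1280x960 for the Logitech C310)
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 960);

    // Read the properties back to see whether the driver accepted them
    printf("Capture reports %.0fx%.0f\n",
           cap.get(CV_CAP_PROP_FRAME_WIDTH),
           cap.get(CV_CAP_PROP_FRAME_HEIGHT));

    // Grab one frame and check its actual size as well
    Mat frame;
    cap >> frame;
    printf("Frame is %dx%d\n", frame.cols, frame.rows);
    return 0;
}
If both the reported properties and the grabbed frame still come out as 640x480, the binary you are running is most likely still the plain v4l build.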

Just wanted to add my CMake options for building with Java on the Raspberry Pi 3, based on Ulrich's comprehensive answer, for OpenCV 3.2.0. Make a /build folder in the same folder as OpenCV's CMakeLists.txt and execute this command from the new /build folder:
sudo cmake -D CMAKE_BUILD_TYPE=RELEASE -D WITH_OPENCL=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_SHARED_LIBS=OFF -D JAVA_INCLUDE_PATH=$JAVA_HOME/include -D JAVA_AWT_LIBRARY=$JAVA_HOME/jre/lib/arm/libawt.so -D JAVA_JVM_LIBRARY=$JAVA_HOME/jre/lib/arm/server/libjvm.so -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_TESTS=OFF -D WITH_MATLAB=OFF -D WITH_CUFFT=OFF -D WITH_CUDA=OFF -D WITH_CUBLAS=OFF -D WITH_GTK=OFF -D WITH_WEBP=OFF -D BUILD_opencv_apps=OFF -D BUILD_PACKAGE=OFF -D WITH_LIBV4L=ON ..

You can use v4l2-ctl to set the frame size of the captured video, like below.
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=1
You can find more information at this link

Maybe you can try this, but I am not sure if this is what you want:
#include <X11/Xlib.h>
#include <opencv2/highgui/highgui.hpp>

// Query the screen resolution via Xlib
Display* disp = XOpenDisplay(NULL);
Screen* scrn = DefaultScreenOfDisplay(disp);
int height = scrn->height;
int width  = scrn->width;

// Create a window for the IP-cam video and make it full screen
cv::namedWindow("Front", CV_WINDOW_NORMAL);
cv::setWindowProperty("Front", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);

// Position and size of the window where the video is shown
cv::moveWindow("Front", 0, 0);
cv::resizeWindow("Front", width, height);
This way you get a full-screen window that matches the screen resolution on any display.

Related

OpenCV(3.4.3) !_src.empty() in function 'cvtColor' error

I am new to OpenCV and Google Colab. I have been working on a project that requires me to take real-time image frames from the webcam and process them. But the problem is that with the code below, 'frame' always comes back as None and my webcam does not seem to switch on. Using the example code from Colab to capture images works fine, though:
How to use cap = cv2.VideoCapture(0) in Google Colab
Here is the code that fails:
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)
---> 19 frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)
error: OpenCV(3.4.3) /io/opencv/modules/imgproc/src/color.cpp:181: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
Try replacing the first line with
frame = cv2.imread('your_image.png', 0)
If that works, then there is a high chance the problem is with your camera.
There could be multiple reasons. Try
sudo apt-get install ffmpeg
sudo apt-get install cheese
cheese
to see whether you can get a video feed in Ubuntu. If you can, it's an OpenCV configuration issue. If you cannot, it's either a driver or a hardware issue.
If it's a driver issue, follow https://help.ubuntu.com/community/Webcam to fix the driver.
If the hardware has broken down, there is not much you can do in software.

How to receive images from Jetson TX1 embedded camera?

I flashed my Jetson TX1 with the latest Jetpack (Linux For Tegra R23.2), and the following command works perfectly:
gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvtee ! nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvoverlaysink -e
I tried to use the following Python program to receive images from the webcam:
source: http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
I got the following error:
OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file /hdd/buildbot/slave_jetson_tx_2/35-O4T-L4T-Jetson-L/opencv/modules/imgproc/src/color.cpp, line 3739
Traceback (most recent call last):
File "webcam.py", line 11, in <module>
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cv2.error: /hdd/buildbot/slave_jetson_tx_2/35-O4T-L4T-Jetson-L/opencv/modules/imgproc/src/color.cpp:3739: error: (-215) scn == 3 || scn == 4 in function cvtColor
I know the problem is that it cannot receive images from the webcam. I also changed the code to just display the received image, but it gives an error which means no image is coming from the camera.
I also tried to use C++ with the following code:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int argc, char** argv)
{
VideoCapture cap;
// open the default camera, use something different from 0 otherwise;
// Check VideoCapture documentation.
if(!cap.open(0))
return 0;
for(;;)
{
Mat frame;
cap >> frame;
if( frame.empty() ) break; // end of video stream
imshow("this is you, smile! :)", frame);
if( waitKey(1) == 27 ) break; // stop capturing by pressing ESC
}
// the camera will be closed automatically upon exit
// cap.close();
return 0;
}
and it compiled without any errors using
g++ webcam.cpp -o webcam `pkg-config --cflags --libs opencv`
But again, when I'm running the program I receive this error:
$ ./webcam
Unable to stop the stream.: Device or resource busy
Unable to stop the stream.: Bad file descriptor
VIDIOC_STREAMON: Bad file descriptor
Unable to stop the stream.: Bad file descriptor
What have I missed? Is there any command I should run to activate the webcam before running this program?
According to the NVIDIA forums you need to get the GStreamer pipeline correct, and at the moment OpenCV cannot autodetect the stream for the nvcamera.
The only way I got it to work was with OpenCV 3 and this line of code to grab the video:
cap = cv2.VideoCapture("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)I420 ! videoconvert ! video/x-raw, format=(string)BGR ! appsink")
Thanks for all the information in this thread. Anyway, I just got my Python code to work and grab a frame from the TX1 camera module.
The important thing to get it working is to install OpenCV 3.1.0; you can follow the official build method and replace the Python cv2.so library with the 3.1.0 version. The original L4T OpenCV is 2.4.
The other important thing is to use a correct nvcamerasrc pipeline; try
cap = cv2.VideoCapture("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1 ! nvtee ! nvvidconv flip-method=2 ! video/x-raw(memory:NVMM), format=(string)I420 ! nvoverlaysink -e ! appsink")
It works on my TX1, just for sharing here.

OpenCL ptxas error

I'm launching a program of mine which uses OpenCV/OpenGL and which used to work fine with no errors. Now when it starts I get this:
OpenCL program build log: -D LOCAL_SIZE_X=8 -D LOCAL_SIZE_Y=8 -D SPLIT_STAGE=1 -D N_STAGES=20 -D MAX_FACES=10000 -D LBP
ptxas application ptx input, line 637; error : Instruction '{atom,red}.shared' requires .target sm_12 or higher
ptxas application ptx input, line 884; error : Instruction '{atom,red}.shared' requires .target sm_12 or higher
ptxas fatal : Ptx assembly aborted due to errors
(Mac Os X 10.11)
and then my program continues running normally. I have no idea what might be causing this, whether it is relevant to my code, and I have no idea where to look either. The very same code used to work fine. Is it something related to the OpenGL wrapper libraries I use? How serious is this? Could somebody please explain this error to me?
EDIT
I managed to identify the code which causes this error:
face_cascade.detectMultiScale(frame_gray, faces, 1.1, 2, 0, cv::Size(80, 80));
This is essentially a call to cv::CascadeClassifier::detectMultiScale with a cv::Mat, a std::vector<cv::Rect>, and a cv::Size as arguments.
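If the goal is simply to keep detectMultiScale off the failing OpenCL path, one possible workaround, assuming an OpenCV 3.x build where cv::ocl::setUseOpenCL is available in opencv2/core/ocl.hpp, is to disable OpenCL before the classifier is used; this is only a sketch for suppressing the OpenCL kernel build, not a fix for the ptxas error itself:
#include <opencv2/core/ocl.hpp>
#include <opencv2/objdetect.hpp>
#include <vector>

void detectFaces(cv::CascadeClassifier& face_cascade,
                 const cv::Mat& frame_gray,
                 std::vector<cv::Rect>& faces)
{
    // Force the plain CPU implementation; the OpenCL kernels are never
    // compiled, so the ptxas messages should no longer appear.
    cv::ocl::setUseOpenCL(false);

    face_cascade.detectMultiScale(frame_gray, faces, 1.1, 2, 0, cv::Size(80, 80));
}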

Cannot grab image from Xtion Pro Live with Opencv code

I am using OpenCV 2.4.10 and I want to take images from my Asus Xtion Pro Live. When I try to execute the code below, I get this error: "Can not open capture."
I have tried everything: updating the Sensor driver, compiling OpenCV with OpenNI, and re-installing OpenCV (even version 2.4.6).
OpenNI and the Sensor driver are working properly, since I am able to run examples such as NiViewer. But the example openni_capture.cpp (in opencv-2.4.10/samples/cpp) does not run properly either.
The code:
#include "opencv2/opencv.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
VideoCapture capture;
capture.open(CV_CAP_OPENNI_ASUS);
if ( !capture.isOpened() )
{
cout << "Error opening capture" << endl;
return -1;
}
if( !capture.grab() )
{
cout << "Can not grab image" << endl;
}
return 0;
}
Compilation is done with the following command:
g++ capture.cpp -o capture `pkg-config --cflags opencv --libs opencv`
How can I fix this error? Is there any problem with the OpenCV version that I use?
I did what is suggested in Can not grab image from VideoCapture OpenCV with Asus Xtion Pro Live, but the problem still exists.
Have you checked which device your camera shows up as? On Linux you can list the connected USB devices with:
lsusb
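It can also help to confirm that your OpenCV build actually has OpenNI support compiled in. A small check, using cv::getBuildInformation() (available in OpenCV 2.4), might be:
#include "opencv2/opencv.hpp"
#include <iostream>

int main()
{
    // Print the build configuration and look for the OpenNI entry in the
    // Video I/O section; if it is not enabled, CV_CAP_OPENNI_ASUS cannot work.
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}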

ffmpeg - Continuously stream webcam to single .jpg file (overwrite)

I have installed ffmpeg and mjpeg-streamer. The latter reads a .jpg file from /tmp/stream and outputs it via http onto a website, so I can stream whatever is in that folder through a web browser.
I wrote a bash script that continuously captures a frame from the webcam and puts it in /tmp/stream:
while true
do
ffmpeg -f video4linux2 -i /dev/v4l/by-id/usb-Microsoft_Microsoft_LifeCam_VX-5000-video-index0 -vframes 1 /tmp/stream/pic.jpg
done
This works great, but is very slow (~1 fps). In the hopes of speeding it up, I want to use a single ffmpeg command which continuously updates the .jpg at, let's say 10 fps. What I tried was the following:
ffmpeg -f video4linux2 -r 10 -i /dev/v4l/by-id/usb-Microsoft_Microsoft_LifeCam_VX-5000-video-index0 /tmp/stream/pic.jpg
However this - understandably - results in the error message:
[image2 # 0x1f6c0c0] Could not get frame filename number 2 from pattern '/tmp/stream/pic.jpg'
av_interleaved_write_frame(): Input/output error
...because the output pattern is bad for a continuous stream of images.
Is it possible to stream to just one jpg with ffmpeg?
Thanks...
You can use the -update option:
ffmpeg -y -f v4l2 -i /dev/video0 -update 1 -r 1 output.jpg
From the image2 file muxer documentation:
-update number
If number is nonzero, the filename will always be interpreted as just a
filename, not a pattern, and this file will be continuously overwritten
with new images.
It is possible to achieve what I wanted by using:
./mjpg_streamer -i "input_uvc.so -r 1280x1024 -d /dev/video0 -y" -o "output_http.so -p 8080 -w ./www"
...from within the mjpg_streamer directory. It will do all the nasty work for you by serving the stream to the browser at the address:
http://{IP-OF-THE-SERVER}:8080/
It's also light-weight enough to run on a Raspberry Pi.
Here is a good tutorial for setting it up.
Thanks for the help!
