I've got a V4L2 camera that can grab frames in JPEG, YUV422, or BGR24 format. I'd like to set the camera to BGR24 at 640x480 via OpenCV. To do this, I applied the following settings:
capture = cvCreateCameraCapture(0);
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, 640 );
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, 480 );
cvSetCaptureProperty( capture, CV_CAP_PROP_FOURCC, CV_FOURCC('B', 'G', 'R', '3'));
but OpenCV gives me back the following error message:
HIGHGUI ERROR: V4L: Property <unknown property string>(6) not supported by device
So OpenCV set the JPEG 640x480 format instead of BGR24.
How can I fix it?
NOTE: the BGR24 format was tested with the following GStreamer pipeline, and it works properly:
gst-launch-0.10 v4l2src num-buffers=10 device=/dev/video0 ! 'video/x-raw-rgb,width=640,height=480,bpp=24,depth=24,red_mask=255,green_mask=65280,blue_mask=16711680,endianness=4321' ! filesink location=/tmp/output10.rgb24
Kind regards
I'd first check that you are accessing the correct camera. If you have multiple cameras, varying N in cvCreateCameraCapture(N) should cycle through them.
Other than that, I would check that the webcam itself conforms to the UVC specification; V4L might be having trouble querying the parameters of the cam.
Just because the camera supports capturing a certain format does not mean OpenCV can select it: if the device doesn't strictly comply with the USB Video Class, OpenCV is not guaranteed to detect that the format is available and, to the best of my knowledge, cannot be forced to use it.
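To rule out the wrong-device case, here is a minimal sketch using the same C API as the question (the upper bound of 4 devices is an arbitrary assumption):

#include <stdio.h>
#include <opencv2/highgui/highgui_c.h>

int main(void)
{
    /* Probe the first few device indices to see which ones open. */
    for (int n = 0; n < 4; n++) {
        CvCapture* cap = cvCreateCameraCapture(n);
        if (cap) {
            printf("index %d opened a camera\n", n);
            cvReleaseCapture(&cap);
        }
    }
    return 0;
}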
For a specific purpose I am trying to convert an AVI video to a kind of Motion JPEG format using OpenCV. To do so, I read images from the source video, convert them to JPEG using imEncode, and write these JPEG images to the target video.
After several hundred frames, the size of the resulting JPEG images suddenly nearly doubles. Here's a list of sizes:
68045
68145
68139
67885
67521
67461
67537
67420
67578
67573
67577
67635
67700
67751
127800
127899
127508
127302
126990
126904
Anybody got a clue what's going on here?
By the way: I'm using OpenCV.Net as a wrapper for OpenCV.
Thanks a lot in advance,
Paul
I found the solution. If I explicitly pass the third parameter to imEncode (for JPEG encoding this indicates the quality of the encoding, ranging from 0 to 100) instead of relying on the default (95), the problem disappears. It's likely this is a bug in OpenCV.Net, but it could also be a bug in OpenCV itself.
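For reference, a minimal C++ sketch of the fix (the asker is using the OpenCV.Net wrapper; this shows the underlying OpenCV call with the quality passed explicitly):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

std::vector<uchar> encodeJpeg(const cv::Mat& frame)
{
    std::vector<uchar> buf;
    std::vector<int> params;
    params.push_back(CV_IMWRITE_JPEG_QUALITY); // pass the quality explicitly...
    params.push_back(95);                      // ...instead of relying on the default
    cv::imencode(".jpg", frame, buf, params);
    return buf;
}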
I have a bare h.264 file (from a raspberry pi camera), and I'd like to wrap it as an mp4. I don't need to play it, edit it, add or remove anything, or access the pixels.
Lots of people have asked about compiling ffmpeg for iOS, or streaming live data. But given the lack of easy translation between the ffmpeg command line and its iOS build, it's very difficult for me to figure out how to implement this simple command:
ffmpeg -i input.h264 -vcodec copy out.mp4
I don't specifically care whether this happens via ffmpeg, avconv, or AVFoundation (or something else). It just seems like it should be not-this-hard to do on a device.
It is not hard but requires some work and attention to detail.
Here is my best guess:
1. Read the PPS/SPS from your input.h264.
2. Extract the height and width from the SPS.
3. Generate an avcC header from the PPS/SPS.
4. Create an AVAssetWriter with file type AVFileTypeQuickTimeMovie.
5. Create an AVAssetWriterInput.
6. Add the AVAssetWriterInput as AVMediaTypeVideo with your height and width to the AVAssetWriter.
7. Read from your input.h264 (likely in Annex B format) one NAL at a time.
8. Convert each NAL from start-code prefixed (0 0 1; Annex B) to size prefixed (mp4 format); a sketch of this step follows the list.
9. Drop NALs of type AU (access unit delimiter), PPS, and SPS.
10. Create a CMSampleBuffer for each NAL and attach a CMFormatDescription with the avcC header.
11. Regenerate timestamps starting at zero using the known frame rate (watch out if your frames are reordered).
12. Append your CMSampleBuffer to your AVAssetWriterInput.
13. Go to step 7 until EOF.
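Regarding step 8, here is a minimal C++ sketch of the Annex B to size-prefix conversion (my illustration, not the answer author's code; it assumes the whole stream fits in memory and uses 4-byte big-endian length prefixes):

#include <cstddef>
#include <cstdint>
#include <vector>

// Convert an Annex B H.264 stream (NALs separated by 00 00 01 or
// 00 00 00 01 start codes) to the size-prefixed form mp4 expects.
std::vector<uint8_t> annexBToSizePrefixed(const std::vector<uint8_t>& in)
{
    struct Start { std::size_t pos, len; };   // where each start code begins
    std::vector<Start> starts;
    for (std::size_t i = 0; i + 2 < in.size();) {
        if (in[i] == 0 && in[i + 1] == 0 && in[i + 2] == 1) {
            starts.push_back({i, 3});
            i += 3;
        } else if (i + 3 < in.size() && in[i] == 0 && in[i + 1] == 0 &&
                   in[i + 2] == 0 && in[i + 3] == 1) {
            starts.push_back({i, 4});
            i += 4;
        } else {
            ++i;
        }
    }
    std::vector<uint8_t> out;
    for (std::size_t n = 0; n < starts.size(); ++n) {
        std::size_t begin = starts[n].pos + starts[n].len;  // NAL payload start
        std::size_t end = (n + 1 < starts.size()) ? starts[n + 1].pos : in.size();
        uint32_t sz = static_cast<uint32_t>(end - begin);
        out.push_back(static_cast<uint8_t>(sz >> 24));      // 4-byte big-endian size
        out.push_back(static_cast<uint8_t>(sz >> 16));
        out.push_back(static_cast<uint8_t>(sz >> 8));
        out.push_back(static_cast<uint8_t>(sz));
        out.insert(out.end(), in.begin() + begin, in.begin() + end);
    }
    return out;
}

Emulation-prevention bytes inside each NAL are left untouched, which is what the size-prefixed form expects.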
I am trying to write video using opencv. It is important for me to do this precisely - so it has to be a lossless codec. I am working with OpenCV 2.4.1 on Ubuntu 12.04
Previously, I was using the fourcc code 0. This gave me the exact result I wanted, and I was able to recover the images perfectly.
I am not sure what happened, but as of a recent update (around Jul 20th 2012), something went wrong and I am no longer able to write files with this fourcc code. I really don't remember what it was, but it could have come from doing an update, removing some software from my software center, and some other things I did during general cleaning...
When I check an older file with mediainfo (http://www.fourcc.org/identifier/) I see the following result:
Complete name : oldsample.avi
Format : AVI
Format/Info : Audio Video Interleave
Format profile : OpenDML
File size : 1.07 GiB
Duration : 41s 467ms
Overall bit rate : 221 Mbps
Writing application : Lavf53.5.0
Video
ID : 0
Format : RGB
Codec ID : 0x00000000
Codec ID/Info : Basic Windows bitmap format. 1, 4 and 8 bpp versions are palettised. 16, 24 and 32bpp contain raw RGB samples
Duration : 41s 467ms
Bit rate : 221 Mbps
Width : 640 pixels
Height : 4 294 966 816 pixels
Display aspect ratio : 0.000
Frame rate : 30.000 fps
Bit depth : 8 bits
Stream size : 1.07 GiB (100%)
Now, I see that when I write using the 0 fourcc code, the program actually defaults to the I420 codec. Here is the output from one of the files I try to write now:
Complete name : newsample.avi
Format : AVI
Format/Info : Audio Video Interleave
File size : 73.0 MiB
Duration : 5s 533ms
Overall bit rate : 111 Mbps
Writing application : Lavf54.6.100
Video
ID : 0
Format : YUV
Codec ID : I420
Codec ID/Info : 8 bit Y plane followed by 8 bit 2x2 subsampled U and V planes.
Duration : 5s 533ms
Bit rate : 111 Mbps
Width : 640 pixels
Height : 480 pixels
Display aspect ratio : 4:3
Frame rate : 30.000 fps
Compression mode : Lossless
Bits/(Pixel*Frame) : 12.000
Stream size : 72.9 MiB (100%)
This format, and the other formats I have tried (like HuffYUV, fourcc HFYU), do not work for me because I end up with effects like this: http://imgur.com/a/0OC4y - you can see the bright artifacts coming in, due to what I assume is either lossy compression or, in the case of HFYU (which is supposed to be lossless), chroma subsampling. What you are looking at is the red channel from one of my videos. The perceptual effect is negligible when you look at all three channels simultaneously, but it is essential that I reconstruct the images exactly.
Furthermore, while I am able to play my old files in media players like VLC, I suddenly find them to be completely incompatible with OpenCV. When I try to open the older files with a VideoCapture, the open step works fine, but trying to do a read operation results in a segfault. Furthermore, when I try to write with either:
CV_FOURCC(0,0,0,0)
0
OpenCV defaults to I420 for some reason.
Next, I tried using some alternate codecs. 'DIB ' seems like something that should work for me, and on the opencv website (http://opencv.willowgarage.com/wiki/VideoCodecs) it is listed as a 'recommended' codec. However, trying to use this results in the following message:
OpenCV-2.4.1/modules/highgui/src/cap_gstreamer.cpp:483: error: (-210) Gstreamer Opencv backend doesn't support this codec acutally. in function CvVideoWriter_GStreamer::open
Aborted (core dumped)
I checked the opencv source for this codec, and stumbled across the following:
cd OpenCV-2.4.1/modules
grep -i -r "CV_FOURCC" ./*
...
./highgui/src/cap_qt.cpp: /*if( fourcc == CV_FOURCC( 'D', 'I', 'B', ' ' ))
./highgui/include/opencv2/highgui/highgui_c.h:#define CV_FOURCC_DEFAULT CV_FOURCC('I', 'Y', 'U', 'V') /* Use default codec for specified filename (Linux only) */
I tried installing qt4 and reconfiguring with the WITH_QT flag, but that did not change anything. I also tried uncommenting that part of the code and reinstalling opencv, but that also did not work.
My ultimate goal is to find any way to efficiently store and retrieve a video stream with 16 bits per pixel (a 32-bit float would work fine too, and then it wouldn't need to be bit-exact). Right now I am unpacking the 16 bits into the red and green channels, which is why I need the encoding to be perfect: an error of 1 in the red channel is multiplied by 256 in the final result. I am not having success with any of the fourcc codes available to me.
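For reference, a minimal sketch of the packing scheme described above (my illustration, not the asker's code; high byte in red, low byte in green); it shows why a single off-by-one error in the red channel shifts the recovered value by 256:

#include <cstdint>
#include <opencv2/core/core.hpp>

// Pack a 16-bit single-channel image into the R and G channels of an
// 8-bit BGR image (B unused). High byte -> red, low byte -> green.
cv::Mat pack16(const cv::Mat& src16) // CV_16UC1 -> CV_8UC3
{
    cv::Mat bgr(src16.size(), CV_8UC3);
    for (int y = 0; y < src16.rows; ++y)
        for (int x = 0; x < src16.cols; ++x) {
            uint16_t v = src16.at<uint16_t>(y, x);
            bgr.at<cv::Vec3b>(y, x) =
                cv::Vec3b(0, (uchar)(v & 0xFF), (uchar)(v >> 8)); // B, G=low, R=high
        }
    return bgr;
}

// Reverse the packing; any codec error in R is multiplied by 256 here.
cv::Mat unpack16(const cv::Mat& bgr) // CV_8UC3 -> CV_16UC1
{
    cv::Mat out(bgr.size(), CV_16UC1);
    for (int y = 0; y < bgr.rows; ++y)
        for (int x = 0; x < bgr.cols; ++x) {
            cv::Vec3b p = bgr.at<cv::Vec3b>(y, x);
            out.at<uint16_t>(y, x) = (uint16_t)((p[2] << 8) | p[1]);
        }
    return out;
}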
Most probably you uninstalled or updated a codec. Try installing a new codec pack, or update your ffmpeg or GStreamer.
I ended up figuring this out a little while ago, and finally got a chance to write it up for everyone. You can see my (rather hacky) solution here:
http://denislantsman.com/?p=111
Edit: as the website is down, the following summarizes what can be recovered from the Wayback Machine:
Save frames as individual PNG images
Run ffmpeg to generate a file which can be opened by OpenCV:
ffmpeg -i ./outimg/depth%d.png -vcodec png depth.mov
The following C++ snippet may be useful for saving the individual frames:
std::ostringstream out_depth;
...
// Expand the raw depth map into a writable image (helper from the original post).
expand_depth(playback.pDepthMap, expanded_depth, playback.rows, playback.cols);
// Build the per-frame filename, e.g. <root>/outimg/depth42.png, and save the frame.
out_depth << root << "/outimg/depth" << framecount << ".png";
cv::imwrite(out_depth.str(), expanded_depth);
framecount++;
...
Why not use FFV1? Its compression rate is way better than raw DIB frames, and it is widely available.
VideoWriter video("lossless.mkv", VideoWriter::fourcc('F','F','V','1'),FPS, Size(WIDTH,HEIGHT));
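A fuller sketch around that one-liner (WIDTH, HEIGHT, and FPS are placeholders; OpenCV 3 API as in the line above), including a check that the writer actually opened:

#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>

int main()
{
    const int WIDTH = 640, HEIGHT = 480; // placeholders for your frame size
    const double FPS = 30.0;
    cv::VideoWriter video("lossless.mkv",
                          cv::VideoWriter::fourcc('F', 'F', 'V', '1'),
                          FPS, cv::Size(WIDTH, HEIGHT), true);
    if (!video.isOpened())
        return 1; // FFV1 not available in this build's back end
    cv::Mat frame(HEIGHT, WIDTH, CV_8UC3, cv::Scalar::all(0));
    for (int i = 0; i < 100; ++i)
        video.write(frame); // frames must match the declared size exactly
    return 0;
}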
I have a problem loading a raw YUV video in OpenCV. I can play it in mplayer with the following command:
mplayer myvideo.raw -rawvideo w=1280:h=1024:fps=30:y8 -demuxer rawvideo
My code for loading it in OpenCV is:
CvCapture* capture=cvCaptureFromFile("C:\\myvideo.raw");
cvCaptureFromFile always returns NULL. But if I try a normal AVI file, the code runs normally (capture is not NULL).
I'm working with the latest version of OpenCV under Windows 7.
EDIT: Output messages are
[IMGUTILS # 0036f724] Picture size 0x0 is invalid
[image2 # 009f3300] Could not find codec parameters (Video: rawvideo, yuv420p)
Thanks
OpenCV uses ffmpeg as its back end; however, it includes only a subset of ffmpeg's functionality. What you can try is to install some codecs (the K-Lite pack helped me some time ago).
But if your aim is to obtain raw YUV in OpenCV, the answer is "not possible".
OpenCV is hardcoded to convert every input format to BGR, so even if you were able to open the raw input, it would automatically be converted to BGR before being passed to you. There is no way around that; the only options are to use a different capture library or to hack into OpenCV.
What you can do (to simulate YUV input) is to capture the AVI, convert each frame to YUV
cvtColor(..., CV_BGR2YCrCb /* or CV_BGR2YUV */ );
and then process it.
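A minimal sketch of that workaround (the filename is a placeholder; OpenCV 2.x C++ API assumed):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int main()
{
    cv::VideoCapture cap("myvideo.avi"); // a container OpenCV can open
    if (!cap.isOpened())
        return 1;
    cv::Mat bgr, yuv;
    while (cap.read(bgr)) {
        cv::cvtColor(bgr, yuv, CV_BGR2YCrCb); // or CV_BGR2YUV
        // ... process yuv here ...
    }
    return 0;
}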
I need to use OpenCV with a GigE Vision Ethernet camera, but I couldn't find much useful information on how to do this. Any pointers, documents, or example code?
I need to read frames from the camera.
GigE is a communication standard for a wide range of cameras. OpenCV now contains a wrapper for the Prosilica GigE-based cameras (see CV_CAP_PVAPI).
But in general it's better to use the camera's native API to get the data and then use OpenCV to convert the returned data into an image; OpenCV contains a number of Bayer-pattern-to-RGB routines (a sketch follows below).
The CvCapture module is convenient for testing, because it can seamlessly read from a camera or a file, but it's not really suitable for high-speed real-time vision.
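As an illustration of that approach, a minimal sketch (buffer, width, and height are placeholders for whatever your camera's native API returns; pick the Bayer pattern that matches your sensor):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

cv::Mat bayerToBgr(unsigned char* buffer, int width, int height)
{
    // Wrap the SDK's raw 8-bit Bayer buffer in a Mat header (no copy).
    cv::Mat bayer(height, width, CV_8UC1, buffer);
    cv::Mat bgr;
    cv::cvtColor(bayer, bgr, CV_BayerBG2BGR); // or CV_BayerGB2BGR etc.
    return bgr; // bgr owns its data, safe after the SDK buffer is released
}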
You can do this! I used the Baumer GAPI SDK, which is a GenTL consumer. GenTL is a generic transport layer, a module within GenICam. You can read up on GenTL HERE. Using a GenTL consumer like Baumer's GAPI or Basler's API makes things a lot easier. They should work with any GigE camera.
I wrote up a more comprehensive guide to using Baumer's GAPI SDK in another answer HERE, so I will give a summary of what you need.
Visual Studio
OpenCV 3 for C++ (HERE is a YouTube tutorial on how to set it up)
Baumer GAPI SDK HERE
(optional) Test your camera and network interface card using Baumer's Camera Explorer program. You need to enable jumbo packets. You may also need to configure the camera and card IP addresses using Baumer's IPconfig program.
Set up your system variables. Refer to the programmer's guide in the Baumer GAPI SDK docs folder (should be in C:\Program Files\Baumer\Baumer GAPI SDK\Docs\Programmers_Guide), section 4.3.1.
Create a new C++ project in Visual Studio and configure the properties. Refer to section 4.4.1.
Go to the examples folder and look for 005_PixelTransformation example. It should be in (C:\Program Files\Baumer\Baumer GAPI SDK\Components\Examples\C++\src\0_Common\005_PixelTransformation). Copy the C++ file and paste it into the source directory of your new project.
Verify that you can build and compile. NOTE: you may find a problem with the part that adjusts camera parameters (exposure time, for example). You should see pixel values written to the screen for the first 6 pixels in the first 6 rows, for 8 images.
Add these #include statements to the top of the .cpp source file:
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\video\video.hpp>
Add these variable declarations at the beginning of the main() function
// OPENCV VARIABLE DECLARATIONS
cv::VideoWriter cvVideoCreator; // Create OpenCV video creator
cv::Mat openCvImage; // create an OpenCV image
cv::String videoFileName = "openCvVideo.avi"; // Define video filename
cv::Size frameSize = cv::Size(2048, 1088); // Define video frame size
cvVideoCreator.open(videoFileName, CV_FOURCC('D', 'I', 'V', 'X'), 20, frameSize, true); // set the codec type and frame rate
In the original 005_PixelTransformation.cpp file, line 569 has a for loop that loops over 8 images, which says for(int i = 0; i < 8; i++). We want to change this to run continuously. I did this by changing it to a while loop that says
while (pDataStream->GetIsGrabbing())
Within the while loop there's an if and else statement to check the image pixel format. After the else statement closing brace and before the pImage->Release(); statement, add the following lines
// OPEN CV STUFF
openCvImage = cv::Mat(pTransformImage->GetHeight(), pTransformImage->GetWidth(), CV_8U, (int *)pTransformImage->GetBuffer());
// create OpenCV window ----
cv::namedWindow("OpenCV window: Cam", CV_WINDOW_NORMAL);
//display the current image in the window ----
cv::imshow("OpenCV window : Cam", openCvImage);
cv::waitKey(1);
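As a minimal addition (my sketch, not part of the original answer): the cvVideoCreator declared earlier is opened but never written to. Inside the same while loop, after the imshow call, you could append each frame. Note that cv::cvtColor needs #include <opencv2\imgproc\imgproc.hpp>, and the writer was opened with isColor=true, so a mono image must be converted first.

// APPEND FRAME TO VIDEO (assumes a mono 8-bit openCvImage, as above)
cv::Mat bgrFrame;
cv::cvtColor(openCvImage, bgrFrame, CV_GRAY2BGR); // writer expects 3-channel frames
cvVideoCreator.write(bgrFrame);                   // frame must match frameSize exactly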
Make sure you choose the correct pixel format for your openCvImage object. I chose CV_8U because my camera is mono 8-bit.
When you build and compile, you should get an OpenCV window displaying the live feed from your camera!
Like I said, it can be done, because I've done it. If you run into problems, refer to the programmer's guide.
I use a uEye GigE camera (5240) with OpenCV. It works as a cv::VideoCapture out of the box. Nevertheless, using the native API allows for much more control over the camera's parameters.
You don't mention the camera model or your platform. On Windows, according to the OpenCV documentation:
Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL)
It is unlikely that your GigE camera driver supports VFW, and for MIL you need the MIL library, which is not free AFAIK.
Most GigE cameras will have an API that you can use to capture images. In most cases the API will be based on GenICam. Probably your best approach is to use the API that came with your camera, and then convert the captured image to an IplImage structure (C) or Mat class (C++).