CvVideoWriter WriteFrame for iOS not working - ios

I'm using a pre-compiled version of OpenCV I got from here: http://aptogo.co.uk/2011/09/opencv-framework-for-ios/
I'm working on an Ogre3D application, and I'm trying to grab screens from the app to stitch together into a video. I've successfully gotten this to work on Windows using OpenCV's CvVideoWriter.
Unfortunately, I'm running into some problems with the iOS version. My code is as follows:
IplImage *mCvCaptureImage, *mCvConvertImage;
mCvCaptureImage = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 3);
mCvConvertImage = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 3);

// copy the Ogre render window contents into the capture image's buffer
PixelFormat pf = win->suggestPixelFormat();
PixelBox pb(width, height, 1, pf, mCvCaptureImage->imageData);
win->copyContentsToMemory(pb);

// swap the R and B channels before handing the frame to the writer
cvConvertImage(mCvCaptureImage, mCvConvertImage, CV_CVTIMG_SWAP_RB);
cvWriteFrame(writer, mCvConvertImage);

cvReleaseImage(&mCvConvertImage);
cvReleaseImage(&mCvCaptureImage);
The cvWriteFrame call is failing OpenCV's internal writer status check. I checked the source code, and this seems to be the problem:
// writer status check
if (![mMovieWriterInput isReadyForMoreMediaData] || mMovieWriter.status != AVAssetWriterStatusWriting) {
    NSLog(@"[mMovieWriterInput isReadyForMoreMediaData] Not ready for media data or ...");
    NSLog(@"mMovieWriter.status: %d. Error: %@", mMovieWriter.status, [mMovieWriter.error localizedDescription]);
    [localpool drain];
    return false;
}
Does anyone have any ideas why this is happening and how it can be fixed?
Thanks!

Change the file extension to .m4v and delete any existing file before calling cvCreateVideoWriter().
(OpenCV chooses the file type by looking at the file extension.)
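The delete-before-writing part of that suggestion can be sketched in Python. This is a minimal sketch: `prepare_output_path` is a hypothetical helper name, and the writer call in the comment is only a placeholder for your own setup.

```python
import os

def prepare_output_path(path):
    """Delete a stale output file so the video writer starts fresh.

    OpenCV picks the container format from the extension, so on iOS the
    path should end in .m4v (per the answer above).
    """
    if os.path.exists(path):
        os.remove(path)
    return path

# usage sketch (the writer line requires OpenCV and is illustrative only):
# path = prepare_output_path("capture.m4v")
# writer = cv2.VideoWriter(path, fourcc, fps, frame_size)
```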

Related

Compare Images From Pylon GigE Camera Video Feed using absdiff

I'm trying to grab two pictures from a Basler GigE camera video feed with Pylon and OpenCV. The two pictures are converted to grayscale and compared using absdiff. The pixels that have changed should be the object I'm looking for.
Here is the while-loop of the code.
while (camera.IsGrabbing()) {
    camera.RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
    if (ptrGrabResult->GrabSucceeded()) {
        formatConverter.Convert(pylonImage, ptrGrabResult);
        openCVImage1 = cv::Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(), CV_8UC3, (uint8_t *) pylonImage.GetBuffer());
    }
    else {
        cout << "ERROR OpenCVImage1:" << ptrGrabResult->GetErrorCode() << ptrGrabResult->GetErrorDescription() << endl;
    }
    camera.RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
    if (ptrGrabResult->GrabSucceeded()) {
        formatConverter.Convert(pylonImage, ptrGrabResult);
        openCVImage2 = cv::Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(), CV_8UC3, (uint8_t *) pylonImage.GetBuffer());
    }
    else {
        cout << "ERROR OpenCVImage2:" << ptrGrabResult->GetErrorCode() << ptrGrabResult->GetErrorDescription() << endl;
    }
    cvtColor(openCVImage1, grayImage1, CV_BGR2GRAY);
    cvtColor(openCVImage2, grayImage2, CV_BGR2GRAY);
    absdiff(grayImage1, grayImage2, differenceImage);
    imshow("DiffImage", differenceImage);
    threshold(differenceImage, thresholdImage, sensitivity, 255, THRESH_BINARY);
    switch (waitKey(10)) {
    case 27:  // ESC
        return 0;
    case 116: // 't'
        trackingEnabled = !trackingEnabled;
        if (trackingEnabled == false) cout << "tracking disabled" << endl;
        else cout << "tracking enabled" << endl;
        break;
    case 112: // 'p'
        debugMode = !debugMode;
        if (debugMode == false) {
            cout << "Code paused, press 'p' again to resume" << endl;
            while (pause == true) {
                switch (waitKey(0)) {
                case 112: // 'p'
                    pause = false;
                    cout << "Code Resumed" << endl;
                    break;
                }
            }
        }
        break;
    }
    // do stuff
}
I have three problems with this.
First: the differenceImage is always black, although I'm moving in front of the camera. Probably I'm not grabbing the pictures correctly. Does anyone know what I'm doing wrong? I already tried adding a waitKey(10) before the second RetrieveResult.
Second: the switch(waitKey(10)) is not working. No matter what I press on the keyboard, there is no output on the screen. I already tried it using if and else if, but that didn't solve the problem either.
Third: if I stop debugging and start debugging again, I get an exception caught by my catch block. Probably this is because the camera didn't stop grabbing after I stopped debugging; if I unplug the camera and plug it back in, the script runs. I tried using StopGrabbing() and Close(), but it didn't work.
I'm using Windows 7 with Visual Studio 2010.
Thanks a lot in advance!
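The grayscale absdiff/threshold pipeline from the question can be sketched on plain Python lists (hypothetical single-channel "frames", no camera needed), which also shows why two identical frames give an all-black difference image:

```python
def absdiff(frame1, frame2):
    """Per-pixel absolute difference of two equal-sized grayscale frames."""
    return [[abs(a - b) for a, b in zip(r1, r2)] for r1, r2 in zip(frame1, frame2)]

def threshold_binary(frame, sensitivity, maxval=255):
    """THRESH_BINARY: pixels strictly above sensitivity become maxval, the rest 0."""
    return [[maxval if px > sensitivity else 0 for px in row] for row in frame]

f1 = [[10, 10], [10, 10]]
f2 = [[10, 90], [10, 10]]          # one pixel changed between the two grabs
diff = absdiff(f1, f2)             # nonzero only where the frames differ
mask = threshold_binary(diff, 30)  # binary mask of the moving object
```

One thing worth noting about the loop in the question: the cv::Mat constructor used there wraps pylonImage.GetBuffer() without copying, so after the second Convert both openCVImage1 and openCVImage2 may point at the same (latest) pixel data, which would make absdiff return an all-black image. Cloning the first Mat before the second grab would rule that out.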

HOG Detection : Webcam unable to grab new frame

I have the following code to get a video feed from a network cam and subsequently perform a HOG detection.
However, when I add the HOG detection code, the program only grabs the first image after starting and doesn't refresh on subsequent iterations. I have checked, using a breakpoint, that the program is continuously running.
Please advise. Thank you.
void doDetection(void) {
    VideoCapture cap("http://user:password@10.0.0.1/mjpg/video.mjpg");
    if (!cap.isOpened())
        return;
    HOGDescriptor hog;
    hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());
    while (true)
    {
        Mat img, mono_img;
        bool status = cap.read(img);
        if (status == false) break;
        vector<Rect> found, found_filtered;
        cvtColor(img, mono_img, CV_BGR2GRAY);
        equalizeHist(mono_img, mono_img);
        // this is the problem statement; when removed, the program refreshes fine
        hog.detectMultiScale(mono_img, found, 0, Size(8, 8), Size(50, 50), 1.05, 2);
        imshow("Show", img);
        if (waitKey(20) >= 0)
            break;
    }
}
I have found this to happen even when you pause execution of the code for a little while.
In my case, MJPEG OVERREAD would appear in the console, after which no more images would be grabbed from the stream, even though the stream would still be `open`.
I built this little check around the problem, which resets the capture. Just place this in your while loop:
if (cap.isOpened())
{
    cap.read(img);
    if (img.rows == 0)
    {
        cap.release();
        cap.open(""); // Insert own url
        cap.read(img);
    }
}
else
{
    cap.open(""); // Insert own url
    cap.read(img);
}
If this "solves" the problem by keeping your app running, you should capture the frames on a separate thread to prevent the stream from dying like this.
If this does not "solve" the problem, I'm not sure what else could be wrong.
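The "capture the frames on a separate thread" suggestion can be sketched with a producer thread and a bounded queue. This is a pure-Python sketch: `grab_frame` is a hypothetical stand-in for `cap.read()`, and the drop-oldest policy is one possible design choice for keeping latency low.

```python
import queue
import threading

def start_capture_thread(grab_frame, frames, stop):
    """Producer thread: keep grabbing frames so the stream never stalls.

    grab_frame -- callable standing in for cap.read(); returns None when the
                  stream dies (placeholder for your capture source)
    frames     -- bounded queue.Queue holding the most recent frames
    stop       -- threading.Event used to shut the producer down
    """
    def worker():
        while not stop.is_set():
            frame = grab_frame()
            if frame is None:          # stream died; let the consumer reopen it
                break
            if frames.full():          # drop the oldest frame to stay current
                try:
                    frames.get_nowait()
                except queue.Empty:
                    pass
            frames.put(frame)
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

The consumer (the HOG loop) then reads from the queue instead of calling cap.read() directly, so a slow detectMultiScale no longer starves the MJPEG stream.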

OpenCV 2.4.3 and videoInput into Mat

I am trying to capture video into a Mat type from two or more MSFT LifeCam HD-3000s using the videoInput library, OpenCV 2.4.3, and VS2010 Express.
I followed the example at: Most efficient way to capture and send images from a webcam in a network and it worked great.
Now I want to replace the IplImage type with a C++ Mat type. I tried to follow the example at: opencv create mat from camera data
That gave me the following:
VI = new videoInput;
int CurrentCam = 0;
VI->setupDevice(CurrentCam, WIDTH, HEIGHT);
int width = VI->getWidth(CurrentCam);
int height = VI->getHeight(CurrentCam);
unsigned char* yourBuffer = new unsigned char[VI->getSize(CurrentCam)];
cvNamedWindow("test", 1);
while (1)
{
    VI->getPixels(CurrentCam, yourBuffer, false, true);
    cv::Mat image(width, height, CV_8UC3, yourBuffer, Mat::AUTO_STEP);
    imshow("test", image);
    if (cvWaitKey(15) == 27) break;
}
The output is a lined image (i.e., the first line looks correct, the second seems off, the third correct, the fourth off, etc.). That suggests that either the step is wrong or there is some difference between the IplImage and Mat types that I am not getting. I have tried looking at and altering all the parameters, but I can't find anything.
Hopefully an answer will help those facing what appears to be a fairly common issue with loading an image from the videoInput library into the Mat type.
Thanks in advance!
Try
cv::Mat image(height, width, CV_8UC3, yourBuffer, Mat::AUTO_STEP);
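This works because the Mat constructor takes (rows, cols), i.e. (height, width). A pure-Python sketch of the packed-buffer offset arithmetic shows why swapping them shears every other row on a non-square frame:

```python
def pixel_offset(row, col, cols, channels=3):
    """Byte offset of pixel (row, col) in a packed rows x cols image buffer."""
    return (row * cols + col) * channels

# A 640x480 BGR frame: 480 rows, each 640 pixels wide.
width, height = 640, 480

# Correct: cv::Mat(height, width, ...) steps `width` pixels per row.
right = pixel_offset(1, 0, cols=width)   # true start of the second image row

# Wrong: cv::Mat(width, height, ...) steps `height` pixels per row, so
# "row 2" starts partway through the real first row -> shifted/lined image.
wrong = pixel_offset(1, 0, cols=height)
```

Since 1920 != 1440 here, every successive row is read with a growing offset, which is exactly the "every other line looks off" symptom.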

OpenCV VideoWriter: I don't know why it doesn't work

I've just written my first program using VideoCapture and VideoWriter. I copied the source from the wiki and changed only the video file name, but it produces an error.
Here is the source from the wiki.
OpenCV is 2.1 and the compiler is Visual C++ 2008 Express.
#include "cv.h"
#include "highgui.h"

using namespace cv;

int main(int, char**)
{
    VideoCapture capture(1); // open the default camera
    if (!capture.isOpened()) {
        printf("Camera failed to open!\n");
        return -1;
    }
    Mat frame;
    capture >> frame; // get first frame for size

    // record video
    VideoWriter record("RobotVideo.avi", CV_FOURCC('D','I','V','X'), 30, frame.size(), true);
    if (!record.isOpened()) {
        printf("VideoWriter failed to open!\n");
        return -1;
    }

    namedWindow("video", 1);
    for (;;)
    {
        // get a new frame from camera
        capture >> frame;
        // show frame on screen
        imshow("video", frame);
        // add frame to recorded video
        record << frame;
        if (waitKey(30) >= 0) break;
    }
    // the camera will be deinitialized automatically in the VideoCapture destructor
    // the recorded video will be closed automatically in the VideoWriter destructor
    return 0;
}
With that source, I changed two parts. One is the VideoCapture (I don't have a tuner card or camera). The original is
VideoCapture capture(1); // open the default camera
and changed to
VideoCapture capture("C:/Users/Public/Videos/Sample Videos/WildlifeTest.wmv");
And the other is for VideoWriter:
// record video
VideoWriter record("RobotVideo.avi", CV_FOURCC('D','I','V','X'), 30, frame.size(), true);
and changed to
VideoWriter record("C:/Users/Public/Videos/Sample Videos/WildlifeRec.wmv",
CV_FOURCC('W','M','V','1'), 30,frame.size(), true);
and the part where the error occurs is:
// add frame to recorded video
record << frame;
Please show me what my mistake is!
P.S.
When I delete the line record << frame; it works well, so I think the error is caused by that line.
I also found that the wiki source program makes the same error even without my changes.
The first error that I see is the file paths. You have to give them like this: C:\\Users\\...
Please make sure opencv_ffmpegXXX.dll is working right.
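As an aside, the CV_FOURCC('D','I','V','X') and CV_FOURCC('W','M','V','1') values used above are just four ASCII bytes packed little-endian into an int. A pure-Python sketch of the packing, mirroring OpenCV's CV_FOURCC macro:

```python
def fourcc(c1, c2, c3, c4):
    """Pack four characters into a FOURCC integer, as CV_FOURCC does."""
    return ord(c1) | (ord(c2) << 8) | (ord(c3) << 16) | (ord(c4) << 24)

divx = fourcc('D', 'I', 'V', 'X')   # codec id for the DivX writer above
```

A mismatch between this codec id and the container implied by the file extension (e.g. a WMV1 stream in a .avi file, or vice versa) is a common reason record << frame fails, which is why the extension and FOURCC should be chosen together.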

How to set camera FPS in OpenCV? CV_CAP_PROP_FPS is a fake

How do I set the camera FPS? Maybe
cvSetCaptureProperty(cameraCapture, CV_CAP_PROP_FPS, 30);
? But it returns
HIGHGUI ERROR: V4L2: Unable to get property (5) - Invalid argument
because there is no implementation in highgui/cap_v4l.cpp:
static int icvSetPropertyCAM_V4L( CvCaptureCAM_V4L* capture,
                                  int property_id, double value ){
    static int width = 0, height = 0;
    int retval;

    /* initialization */
    retval = 0;

    /* two subsequent calls setting WIDTH and HEIGHT will change
       the video size */
    /* the first one will return an error, though. */
    switch (property_id) {
    case CV_CAP_PROP_FRAME_WIDTH:
        width = cvRound(value);
        if(width != 0 && height != 0) {
            retval = icvSetVideoSize( capture, width, height);
            width = height = 0;
        }
        break;
    case CV_CAP_PROP_FRAME_HEIGHT:
        height = cvRound(value);
        if(width != 0 && height != 0) {
            retval = icvSetVideoSize( capture, width, height);
            width = height = 0;
        }
        break;
    case CV_CAP_PROP_BRIGHTNESS:
    case CV_CAP_PROP_CONTRAST:
    case CV_CAP_PROP_SATURATION:
    case CV_CAP_PROP_HUE:
    case CV_CAP_PROP_GAIN:
    case CV_CAP_PROP_EXPOSURE:
        retval = icvSetControl(capture, property_id, value);
        break;
    default:
        fprintf(stderr,
                "HIGHGUI ERROR: V4L: setting property #%d is not supported\n",
                property_id);
    }

    /* return the status */
    return retval;
}
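The two-call width/height protocol in the snippet above (each dimension is buffered until both have been set, and only then is the resize applied) can be sketched in pure Python. This is an illustrative model, not the real driver code; `apply_size` stands in for icvSetVideoSize:

```python
class SizeSetter:
    """Models icvSetPropertyCAM_V4L's buffering of WIDTH and HEIGHT.

    The first call only stores a value (and, in the C code, reports an
    error); the actual resize happens once both dimensions are known.
    """
    def __init__(self, apply_size):
        self.width = self.height = 0
        self.apply_size = apply_size   # stand-in for icvSetVideoSize

    def _try_apply(self):
        if self.width and self.height:
            ok = self.apply_size(self.width, self.height)
            self.width = self.height = 0   # clear the buffer for the next pair
            return ok
        return False                        # mirrors the first call's error return

    def set_width(self, value):
        self.width = int(round(value))
        return self._try_apply()

    def set_height(self, value):
        self.height = int(round(value))
        return self._try_apply()
```

This also makes it clear why a lone cvSetCaptureProperty(..., CV_CAP_PROP_FRAME_WIDTH, ...) call appears to fail: nothing is applied until the matching height call arrives.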
How to solve it?
Using the Python wrappers for OpenCV, it worked for me to set the property as:
cap = cv2.VideoCapture(1)
cap.set(cv2.cv.CV_CAP_PROP_FPS, 60)
I am using Python 2.7.3 and OpenCV 2.4.8. The camera is the PS3 Eye.
I don't know if this is still valid, but some time ago, about a year and a half, I encountered exactly that problem. I contacted an OpenCV developer, and he told me that access to, and the ability to change, some capture properties weren't implemented yet, and some others worked only for certain kinds of camera. I eventually took a look at libdc1394 (working in Linux) and wrote some functions that converted the data retrieved by libdc1394 into OpenCV IplImages. It wasn't such a tough task.
CV_CAP_PROP_FPS is NOT a fake. See cap_libv4l.cpp (1) in the OpenCV GitHub repo. The key is to make sure you use libv4l rather than plain v4l when configuring OpenCV. For that, before running cmake, install libv4l-dev:
sudo apt-get install libv4l-dev
Now, while configuring OpenCV with cmake, enable the WITH_LIBV4L option. If all goes well, you will see something similar to the following in the configuration status:
V4L/V4L2: Using libv4l1 (ver ) / libv4l2 (ver )
Then, when building your OpenCV code, link against libv4l1/libv4l2/libv4lconvert.
Arbitrary FPS values at the resolution you choose needn't be supported by your webcam. You can check the supported resolutions/FPS with a graphical tool like cheese or commands like lsusb (2).
Check out the OpenCV 2.4 handbook; the video capture interface is much better than before.
->set(CV_CAP_PROP_FPS, 30); works for me most of the time, though with somewhat low efficiency.
In case you don't like the new OpenCV 2.4 and still want to control your camera, check out the videoInput lib here. It works well and uses DirectShow features.
http://www.aishack.in/2010/03/capturing-images-with-directx/
http://www.muonics.net/school/spring05/videoInput/
