Compare Images From Pylon GigE Camera Video Feed using absdiff - opencv

I'm trying to grab two pictures from a Basler GigE camera video feed with Pylon and OpenCV. The two pictures are converted to grayscale and compared using absdiff. The pixels that have changed should be the object I'm looking for.
Here is the while-loop of the code.
while (camera.IsGrabbing()) {
    camera.RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
    if (ptrGrabResult->GrabSucceeded()) {
        formatConverter.Convert(pylonImage, ptrGrabResult);
        openCVImage1 = cv::Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(), CV_8UC3, (uint8_t *) pylonImage.GetBuffer());
    }
    else {
        cout << "ERROR OpenCVImage1:" << ptrGrabResult->GetErrorCode() << ptrGrabResult->GetErrorDescription() << endl;
    }

    camera.RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
    if (ptrGrabResult->GrabSucceeded()) {
        formatConverter.Convert(pylonImage, ptrGrabResult);
        openCVImage2 = cv::Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(), CV_8UC3, (uint8_t *) pylonImage.GetBuffer());
    }
    else {
        cout << "ERROR OpenCVImage2:" << ptrGrabResult->GetErrorCode() << ptrGrabResult->GetErrorDescription() << endl;
    }

    cvtColor(openCVImage1, grayImage1, CV_BGR2GRAY);
    cvtColor(openCVImage2, grayImage2, CV_BGR2GRAY);
    absdiff(grayImage1, grayImage2, differenceImage);
    imshow("DiffImage", differenceImage);
    threshold(differenceImage, thresholdImage, sensitivity, 255, THRESH_BINARY);

    switch (waitKey(10)) {
    case 27: // ESC
        return 0;
    case 116: // 't'
        trackingEnabled = !trackingEnabled;
        if (trackingEnabled == false) cout << "tracking disabled" << endl;
        else cout << "tracking enabled" << endl;
        break;
    case 112: // 'p'
        debugMode = !debugMode;
        if (debugMode == false) {
            cout << "Code paused, press 'p' again to resume" << endl;
            while (pause == true) {
                switch (waitKey(0)) {
                case 112: // 'p'
                    pause = false;
                    cout << "Code Resumed" << endl;
                    break;
                }
            }
        }
    }
    // do stuff
}
I have three problems with this.
First: the differenceImage is always black, although I'm moving in front of the camera. Probably I'm not grabbing the pictures correctly. Does anyone know what I'm doing wrong? I already tried adding a waitKey(10) before the second RetrieveResult.
Second: the switch(waitKey(10)) is not working. No matter what I press on the keyboard, there is no output on the screen. I already tried it using if and else if, but that didn't solve the problem either.
Third: if I stop debugging and start debugging again, I get an exception caught by my catch block. Probably this is because the camera didn't stop grabbing after I stopped debugging. If I unplug the camera and plug it back in, the script runs. I tried using stopGrabbing() and close(), but it didn't work.
I'm using Windows 7 with Visual Studio 2010.
Thanks a lot in advance!
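One possible cause of the first problem, offered as an assumption rather than a confirmed diagnosis: the cv::Mat constructor that takes an external data pointer does not copy the pixels, so both openCVImage1 and openCVImage2 wrap the same pylonImage buffer, and the second Convert overwrites the first frame's data, making absdiff return all zeros. A minimal sketch that deep-copies each grab with clone(), keeping the rest of the loop as above:

// Sketch only: wrap the Pylon buffer, then clone() so each Mat keeps its own
// copy of the pixels instead of sharing pylonImage's buffer.
camera.RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
if (ptrGrabResult->GrabSucceeded()) {
    formatConverter.Convert(pylonImage, ptrGrabResult);
    openCVImage1 = cv::Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(),
                           CV_8UC3, (uint8_t *) pylonImage.GetBuffer()).clone();
}

camera.RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
if (ptrGrabResult->GrabSucceeded()) {
    formatConverter.Convert(pylonImage, ptrGrabResult);
    openCVImage2 = cv::Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(),
                           CV_8UC3, (uint8_t *) pylonImage.GetBuffer()).clone();
}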

Related

How to get proper saturated frame from see3cam cu135M using OpenCV

I was trying to get 8 single-shot frames from 5 x CU135M E-con cameras using OpenCV (4.5.5-dev).
Every ~20 seconds each camera takes a single shot (one camera at a time, so the stream should not be overloaded).
I use a powered USB hub, so all of them are connected to a single USB 3.2 port on my computer.
My problem is that the received frames are sometimes (20-30% of them) over-saturated with pink or yellow.
Example of a correctly recorded pic:
Example of a yellow over-saturated pic:
The code for recording frames is quite simple:
cv::VideoCapture cap;
cap.open("SOME_CAMERA_ID", CAP_V4L2);
cap.set(CAP_PROP_FRAME_WIDTH, 4208);
cap.set(CAP_PROP_FRAME_HEIGHT, 3120);

Mat frame;
try {
    for (int i = 0; i < 4; i++)  // need this so I don't receive pure green-screen frames
        cap.read(frame);
    while (frame.empty()) {
        if (cap.read(frame))
            break;
    }
} catch (const Exception& e) {
    // some error handling
}
imwrite("someFileName.png", frame);
cap.release();
I was trying to set denoise and default settings using hidraw, however without results.
I'd be glad for any help.
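One guess, not confirmed by anything in the question: the pink/yellow frames could be the camera's auto exposure or auto white balance still converging when the shot is taken. A small sketch that extends the existing warm-up loop and only saves the last frame after the sensor has had time to settle (WARMUP_FRAMES is a made-up tuning constant):

// Sketch under the assumption that the color casts come from auto exposure /
// white balance still converging on the first few frames.
const int WARMUP_FRAMES = 10;      // hypothetical value, tune per camera
cv::Mat frame;
for (int i = 0; i < WARMUP_FRAMES; i++) {
    cap.read(frame);               // decode and discard warm-up frames
}
if (cap.read(frame) && !frame.empty()) {
    cv::imwrite("someFileName.png", frame);
}
cap.release();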

I want to convert video to jpg image

I'm going to analyze a video.
I want to convert the video to one image per second.
For example, if the video I want to analyze is 1 hour long, the program should output 3600 image files.
How can I do that?
Is there any solution for it?
The worst case is that I have to play the video and take a snapshot every second.
I need your help. Thank you.
All you need is to load the video and save the individual frames in a loop. This is not exactly a snapshot; you are simply saving each and every frame.
Please note that the resolution of the video can also affect the speed at which the frames are processed.
I am going to assume you are using C++. For this, the code will look like this:
VideoCapture cap("Your_Video.mp4");
// Check if camera opened successfully
if(!cap.isOpened())
{
cout << "Error opening the video << endl;
return -1;
}
while(1)
{
Mat frame;
cap.read(frame);
if (frame.empty())
{
break;
}
// This is where you save the frame
imwrite( "Give your File Path", frame );
}
cap.release();
destroyAllWindows();
return 0;
}
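Since the original question asks for one image per second rather than every frame, a small variation (a sketch, assuming the container reports a usable frame rate via CAP_PROP_FPS) is to save only every fps-th frame:

// Sketch: save roughly one frame per second by skipping fps-1 frames each time.
// Assumes cap is the opened VideoCapture from the snippet above.
double fps = cap.get(CAP_PROP_FPS);
if (fps <= 0) fps = 25;            // fall back to a guess if FPS is not reported
int step = (int)(fps + 0.5);

int frameIndex = 0, savedCount = 0;
Mat frame;
while (cap.read(frame) && !frame.empty())
{
    if (frameIndex % step == 0)
    {
        imwrite("your_path/second_" + to_string(savedCount) + ".jpg", frame);
        savedCount++;
    }
    frameIndex++;
}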

HOG Detection : Webcam unable to grab new frame

I have the following code to get a video feed from a network camera and subsequently perform HOG detection.
However, when I add the HOG detection code, the program only grabs the first image upon running and doesn't refresh on subsequent iterations. I have checked, using breakpoints, that the program is continuously running.
Please advise. Thank you.
void doDetection(void) {
    VideoCapture cap("http://user:password#10.0.0.1/mjpg/video.mjpg");
    if (!cap.isOpened())
        return;

    HOGDescriptor hog;
    hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());

    while (true)
    {
        Mat img, mono_img;
        bool status = cap.read(img);
        if (status == false) break;

        vector<Rect> found, found_filtered;
        cvtColor(img, mono_img, CV_BGR2GRAY);
        equalizeHist(mono_img, mono_img);

        // this is the problem statement; when this line is removed the program refreshes fine
        hog.detectMultiScale(mono_img, found, 0, Size(8, 8), Size(50, 50), 1.05, 2);

        imshow("Show", img);
        if (waitKey(20) >= 0)
            break;
    }
}
I have found this to happen even when you pause the execution of the code for a little while.
In my case, MJPEG OVERREAD would appear in the console, after which no more images would get grabbed from the stream, but the stream would still be "open".
I built this little check around the problem, which resets the capture. Just place this inside your while loop:
if (cap.isOpened())
{
    cap.read(img);
    if (img.rows == 0)
    {
        cap.release();
        cap.open(""); // Insert own url
        cap.read(img);
    }
}
else
{
    cap.open(""); // Insert own url
    cap.read(img);
}
if this "solves" the problem by keeping you app running, you should capture the frames on a separate thread to prevent the stream from dying like this.
If this does not "solve" the problem, I'm not sure what else could be wrong
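A minimal sketch of the separate-thread idea mentioned above (an assumption about how it could be structured, not code from the original answer): one thread keeps reading frames so the MJPEG stream never stalls while detectMultiScale() is busy, and the main loop always works on the latest frame.

#include <opencv2/opencv.hpp>
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

using namespace cv;

std::mutex frameMutex;
Mat latestFrame;
std::atomic<bool> running(true);

// Grabber thread: keeps draining the stream so it does not die
// while the detector is busy.
void grabLoop(VideoCapture& cap) {
    Mat img;
    while (running && cap.read(img)) {
        std::lock_guard<std::mutex> lock(frameMutex);
        img.copyTo(latestFrame);
    }
    running = false;
}

int main() {
    VideoCapture cap(""); // Insert own url
    if (!cap.isOpened()) return -1;

    HOGDescriptor hog;
    hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());

    std::thread grabber(grabLoop, std::ref(cap));

    while (running) {
        Mat img;
        {
            std::lock_guard<std::mutex> lock(frameMutex);
            latestFrame.copyTo(img);
        }
        if (img.empty()) {
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
            continue;
        }

        Mat mono_img;
        std::vector<Rect> found;
        cvtColor(img, mono_img, COLOR_BGR2GRAY);
        equalizeHist(mono_img, mono_img);
        hog.detectMultiScale(mono_img, found, 0, Size(8, 8), Size(50, 50), 1.05, 2);

        imshow("Show", img);
        if (waitKey(20) >= 0) running = false;
    }
    grabber.join();
    return 0;
}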

Real time circle detection using OpenCV

I have written the following program to detect a circle in real time, but it doesn't work.
The compiler doesn't show any error, but the program doesn't even detect a circle. How can I fix it?
Here is my code:
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
    VideoCapture cap(0);
    namedWindow("main", CV_WINDOW_AUTOSIZE);
    namedWindow("blur", CV_WINDOW_AUTOSIZE);

    Mat img;
    Mat img2;
    int c;
    float radius;

    while (1)
    {
        cap >> img;
        imshow("main", img);

        cvtColor(img, img2, CV_BGR2GRAY);
        GaussianBlur(img2, img2, Size(9, 9), 2, 2);
        imshow("blur", img2);

        vector<Vec3f> circles;
        HoughCircles(img2, circles, CV_HOUGH_GRADIENT, 1, img2.rows/8, 200, 100, 0, 0);

        for (size_t i = 0; i < circles.size(); i++)
        {
            Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
            radius = cvRound(circles[i][2]);
            circle(img, center, 3, Scalar(0, 255, 0), -1, 8, 0);
            circle(img, center, radius, Scalar(0, 0, 255), 3, 8, 0);
        }

        c = waitKey(33);
        if (c == 27)
            break;
    }
    destroyAllWindows();
    return 0;
}
I checked your program; it seems you just forgot to visualize the result using imshow() after the detection. You only displayed the image before the detection, so you could not see the circles (which may have mistakenly made you think no circles were detected) even if some were detected.
Try adding
imshow("main", img);
right before c = waitKey(33);.
You will see the circles if it does detect some.
Edit: to answer your comment about real-time circle detection:
Doing it in a while loop makes it work for video frames. However, whether it is real time or not depends on how fast HoughCircles() runs, as well as the other work inside the loop, regardless of the delay you set in waitKey().
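If HoughCircles() turns out to be the bottleneck, one common speed-up (a sketch, not part of the original answer) is to run the detection on a downscaled copy of the blurred frame and map the centers and radii back to the original resolution:

// Sketch: detect on a half-size image to speed up HoughCircles(), then scale
// the circle parameters back up before drawing. The factor 0.5 is an assumption.
const double scale = 0.5;
Mat small;
resize(img2, small, Size(), scale, scale);

vector<Vec3f> circles;
HoughCircles(small, circles, CV_HOUGH_GRADIENT, 1, small.rows / 8, 200, 100, 0, 0);

for (size_t i = 0; i < circles.size(); i++)
{
    Point center(cvRound(circles[i][0] / scale), cvRound(circles[i][1] / scale));
    int radius = cvRound(circles[i][2] / scale);
    circle(img, center, 3, Scalar(0, 255, 0), -1, 8, 0);
    circle(img, center, radius, Scalar(0, 0, 255), 3, 8, 0);
}
imshow("main", img);   // show the image after drawing, as suggested above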

Capture video from specific frame opencv

I have a problem capturing video from a specific frame, even though the code is very simple. The image looks all noisy and only the moving objects are clear. It works for me, but only for about 3-4 seconds, after which I get a crash. Some screenshots:
Errors plus the noisy and clear parts of the image - i.stack.imgur.com/wIh9p.png
Errors - i.stack.imgur.com/5xFHy.png
And here is my code:
int main(int, char**)
{
    namedWindow("edges", 1);

    cap.get(CV_CAP_PROP_FRAME_COUNT);
    cap.set(CV_CAP_PROP_POS_FRAMES, 50);
    cap.read(frame);
    resize(frame, frame, Size(1920/2, 1080/2));
    imshow("edges", frame);

    for (;;)
    {
        cap.read(frame);
        resize(frame, frame, Size(1920/2, 1080/2));
        imshow("edges", frame);
        if (waitKey(1) >= 0) break;
    }
    waitKey();
    return 0;
}
Is there anyone who could help me?
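A noisy image where only moving objects are clear is what compressed video often looks like when decoding starts away from a keyframe, so it may be that the CAP_PROP_POS_FRAMES seek lands mid-GOP; that is only a guess, not confirmed by the question. A sketch of a workaround that decodes sequentially from the start and simply discards frames until the desired one (slower, but decoding always begins at a keyframe; the file name and startFrame value are placeholders):

// Sketch: instead of seeking with CAP_PROP_POS_FRAMES, decode from the start
// and throw frames away until the target frame is reached.
VideoCapture cap("your_video.mp4");
if (!cap.isOpened()) return -1;

const int startFrame = 50;
Mat frame;
for (int i = 0; i < startFrame; i++)
    cap.grab();                        // decode and discard without retrieving

for (;;)
{
    if (!cap.read(frame) || frame.empty()) break;
    resize(frame, frame, Size(1920/2, 1080/2));
    imshow("edges", frame);
    if (waitKey(1) >= 0) break;
}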
