How to get a properly saturated frame from a See3CAM CU135M using OpenCV

I am trying to capture 8 single-shot frames from 5 See3CAM CU135M e-con cameras using OpenCV (4.5.5-dev).
Every ~20 seconds each camera takes a single shot (one camera at a time, so the stream should not be overloaded).
I use a powered USB hub, so all of the cameras are connected to a single USB 3.2 port on my computer.
My problem is that the received frames are sometimes (20-30% of them) over-saturated with pink or yellow.
Example of correctly recorded pic:
Example of yellow over-saturated pic:
The code for recording frames is quite simple:
cv::VideoCapture cap;
cap.open("SOME_CAMERA_ID", cv::CAP_V4L2);
cap.set(cv::CAP_PROP_FRAME_WIDTH, 4208);
cap.set(cv::CAP_PROP_FRAME_HEIGHT, 3120);

cv::Mat frame;
try {
    // Discard the first few frames, otherwise I receive pure green frames.
    for (int i = 0; i < 4; i++)
        cap.read(frame);

    // Keep reading until a non-empty frame arrives.
    while (frame.empty()) {
        if (cap.read(frame))
            break;
    }
} catch (const cv::Exception& e) {
    // some error handling
}
cv::imwrite("someFileName.png", frame);
cap.release();
I have tried setting denoise and the default settings using hidraw, however without results.
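A possible direction still to try (an assumption, not verified on this camera): lock the automatic white balance and exposure through OpenCV's standard V4L2 properties before grabbing, so the colour processing cannot drift between shots. Whether the driver honours these properties depends on the camera; the values below are only examples:
// Sketch only: these are standard OpenCV property IDs; support and value ranges
// depend on the camera / V4L2 driver.
cap.set(cv::CAP_PROP_AUTO_WB, 0);            // turn off automatic white balance
cap.set(cv::CAP_PROP_WB_TEMPERATURE, 4600);  // fix a colour temperature (example value)
cap.set(cv::CAP_PROP_AUTO_EXPOSURE, 1);      // 1 = manual mode on many V4L2 drivers
cap.set(cv::CAP_PROP_EXPOSURE, 300);         // fixed exposure (example value)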
I'd be glad for any help.

Related

I want to convert a video to jpg images

I'm going to analyze a video.
I want to convert the video to one image per second.
I mean, if the video I want to analyze is 1 hour long, the program I want to make should output 3600 image files.
How can I do this?
Is there any solution for it?
The worst case is that I have to play the video and take a snapshot every second.
I need your help. Thank you.
All you need to do is load the video and save the individual frames in a loop. This is not exactly a snapshot per second, but you are simply saving each and every frame.
Keep in mind that the resolution of the video also affects the speed at which the frames are processed.
I am going to assume you are using C++. For this, the code will look like this:
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    VideoCapture cap("Your_Video.mp4");

    // Check if the video opened successfully
    if (!cap.isOpened())
    {
        cout << "Error opening the video" << endl;
        return -1;
    }

    while (true)
    {
        Mat frame;
        cap.read(frame);
        if (frame.empty())
        {
            break;
        }

        // This is where you save the frame (use a unique file path per frame)
        imwrite("Give your File Path", frame);
    }

    cap.release();
    destroyAllWindows();
    return 0;
}
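Since the goal is one image per second rather than every frame, a small variation of the same loop can skip frames based on the reported frame rate. This is only a sketch: it assumes the container reports a usable FPS through CAP_PROP_FPS, and it reuses the includes and namespaces from the snippet above.
VideoCapture cap("Your_Video.mp4");
double fps = cap.get(CAP_PROP_FPS);                        // frames per second reported by the file
int framesPerSecond = (fps > 0) ? (int)(fps + 0.5) : 30;   // fall back to 30 if the FPS is unknown

Mat frame;
int frameIndex = 0;
int savedCount = 0;
while (cap.read(frame))
{
    if (frameIndex % framesPerSecond == 0)                 // keep roughly one frame per second
    {
        imwrite("frame_" + to_string(savedCount) + ".jpg", frame);
        savedCount++;
    }
    frameIndex++;
}
cap.release();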

A method to resize a frame after decoding on the client side

I've just joined a project to build a real-time video streaming application using FFmpeg/OpenCV/C++ over a UDP socket. On the server side, they want to transmit video at 640x480 to the client; to reduce data transmitted through the network, I resize the video to 320x240 and send the frames. On the client side, after receiving a frame, we upscale it back to 640x480. We use H.265 for encoding/decoding.
As I am just a beginner with video encoding, I would like to understand how to down-sample and up-sample the frame on the server and client side so that it works with the video encoder/decoder.
A simple idea that came to my mind is that after decoding (AVFrame -> Mat frame), I will upsample this frame and then display it.
I am not sure whether my idea is right or wrong. I would like to seek advice from anyone who has experience in this area. Thank you very much!
static void updateFrameCallback(AVFrame *avframe, void* userdata) {
    VideoStreamUDPClient* streamer = static_cast<VideoStreamUDPClient*>(userdata);
    TinyClient* client = static_cast<TinyClient*>(streamer->userdata);

    // Update frame
    pthread_mutex_lock(&client->mtx_updateFrame);
    if (streamer->irect.width == client->frameSize.width
        && streamer->irect.height == client->frameSize.height) {
        cvtAVFrameYUV4202Frame(&avframe, client->frame);
        printf("TinyClient: Received Full Frame\n");
    } else {
        Mat block;
        cvtAVFrameYUV4202Frame(&avframe, block);
        block.copyTo(client->frame(streamer->irect));
    }
    // How do I resize the frame before displaying it?
    imshow("Frame", client->frame);
    waitKey(1);
    pthread_mutex_unlock(&client->mtx_updateFrame);
}
From what I understand, you just want to resize the frame after decoding. In OpenCV you can do it like this:
if (...)
{
    ...
}
else
{
    Mat block;
    cvtAVFrameYUV4202Frame(&avframe, block);
    Mat temp;
    resize(block, temp, Size(640, 480));
    temp.copyTo(client->frame(streamer->irect));
}
// client->frame will always be 640x480
imshow("Frame", client->frame);
I don't have much experience in video decoding, but from what I know you cannot resize a video (or a single frame) without decoding it first.
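For completeness, the server-side downscale before encoding would use the same call. The snippet below is only a sketch, and fullFrame / decodedFrame are placeholder names rather than variables from the question; INTER_AREA is the usual choice for shrinking, INTER_LINEAR or INTER_CUBIC for enlarging:
// Server: shrink 640x480 -> 320x240 before handing the frame to the encoder
Mat small;
resize(fullFrame, small, Size(320, 240), 0, 0, INTER_AREA);

// Client: enlarge 320x240 -> 640x480 after decoding
Mat restored;
resize(decodedFrame, restored, Size(640, 480), 0, 0, INTER_CUBIC);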

Compare Images From Pylon GigE Camera Video Feed using absdiff

I'm trying to grab two pictures from a Basler GigE camera video feed with Pylon and OpenCV. The two pictures are converted to grayscale and compared using absdiff. The pixels that have changed should correspond to the object I'm looking for.
Here is the while-loop of the code.
while (camera.IsGrabbing()) {
    camera.RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
    if (ptrGrabResult->GrabSucceeded()) {
        formatConverter.Convert(pylonImage, ptrGrabResult);
        openCVImage1 = cv::Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(), CV_8UC3, (uint8_t *) pylonImage.GetBuffer());
    }
    else {
        cout << "ERROR OpenCVImage1:" << ptrGrabResult->GetErrorCode() << ptrGrabResult->GetErrorDescription() << endl;
    }

    camera.RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
    if (ptrGrabResult->GrabSucceeded()) {
        formatConverter.Convert(pylonImage, ptrGrabResult);
        openCVImage2 = cv::Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(), CV_8UC3, (uint8_t *) pylonImage.GetBuffer());
    }
    else {
        cout << "ERROR OpenCVImage2:" << ptrGrabResult->GetErrorCode() << ptrGrabResult->GetErrorDescription() << endl;
    }

    cvtColor(openCVImage1, grayImage1, CV_BGR2GRAY);
    cvtColor(openCVImage2, grayImage2, CV_BGR2GRAY);
    absdiff(grayImage1, grayImage2, differenceImage);
    imshow("DiffImage", differenceImage);
    threshold(differenceImage, thresholdImage, sensitivity, 255, THRESH_BINARY);

    switch (waitKey(10)) {
    case 27: // ESC
        return 0;
    case 116: // t
        trackingEnabled = !trackingEnabled;
        if (trackingEnabled = false) cout << "tracking disabled" << endl;
        else cout << "tracking enabled" << endl;
        break;
    case 112: // d
        debugMode = !debugMode;
        if (debugMode == false) {
            cout << "Code paused, press 'p' again to resume" << endl;
            while (pause == true) {
                switch (waitKey(0)) {
                case 112: // p
                    pause = false;
                    cout << "Code Resumed" << endl;
                    break;
                }
            }
        }
    }
    // do stuff
}
I have three problems with this.
First: the differenceImage is always black, although I'm moving in front of the camera. Probably I'm not grabbing the pictures correctly. Does anyone know what I'm doing wrong? I already tried adding a waitKey(10) before the second RetrieveResult.
Second: the switch(waitKey(10)) is not working. No matter what I press on the keyboard, there is no output on the screen. I already tried it using if and else if, but that didn't solve the problem either.
Third: if I stop debugging and start debugging again, I get an exception caught by my catch block. Probably this is because the camera didn't stop grabbing after I stopped debugging. If I unplug the camera and plug it back in, the script runs. I tried using stopGrabbing() and close() but it didn't work.
I'm using Windows 7 with Visual Studio 2010.
Thanks a lot in advance!
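A possible cause of the black difference image (an assumption, not verified with this hardware): both cv::Mat objects wrap the same pylonImage.GetBuffer() pointer, so the second Convert() overwrites the pixels that openCVImage1 points to, and the two images end up identical. Taking a deep copy with clone() after each conversion would avoid that, for example:
camera.RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
if (ptrGrabResult->GrabSucceeded()) {
    formatConverter.Convert(pylonImage, ptrGrabResult);
    // clone() copies the pixels, so the next Convert() cannot overwrite this image
    openCVImage1 = cv::Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(),
                           CV_8UC3, (uint8_t *) pylonImage.GetBuffer()).clone();
}

camera.RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
if (ptrGrabResult->GrabSucceeded()) {
    formatConverter.Convert(pylonImage, ptrGrabResult);
    openCVImage2 = cv::Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(),
                           CV_8UC3, (uint8_t *) pylonImage.GetBuffer()).clone();
}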

EmguCV 3.1 Capture.QueryFrame returning errors intermittently

I am using EmguCV to create a capture from a video file stored on disk. I set the capture property for the frame position and then perform a QueryFrame. On certain frames from the video, when I go to process the Mat further, I get the error '{"OpenCV: Unrecognized or unsupported array type"}'. This doesn't happen on all frames, but when I rerun it on the same video it happens on the same frames. If I save the Mat to disk, the image looks perfectly fine and saves without error. Here is the code for loading and processing the image:
Capture cap = new Capture(movieLocation);
int framePos = 0;
while (reading)
{
    cap.SetCaptureProperty(CapProp.PosFrames, framePos);
    using (var frame = cap.QueryFrame())
    {
        if (frame != null)
        {
            try
            {
                var fm = Rotate(frame); // Works fine
                // Other processing, including classifier.DetectMultiScale -- error occurs here
                frameMap.Add(framePos, r);
            }
            catch (Exception ex)
            {
                var s = ""; // Done just to see the error
            }
            framePos = framePos + 2;
        }
        else
        {
            reading = false;
        }
    }
}
The line of code that throws the exception in further processing:
var r = _classifier.DetectMultiScale(matIn, 1.1, 2, new Size(200, 200), new Size(375, 375));
As I said, this does not fail for every frame of the video.
I'm trying to solve this because sometimes it skips one frame, but at other times it skips whole blocks of frames, which causes me to miss important events in the video.
After working on it a bit more, I figured out that the Mat had an ROI set on it before going to the cascade classifier. In the instances where the Mat was failing, the ROI was set to 0 height and 0 width. This caused the issue.

channels() always returns 1, even in the case of a color video image

I'm coding an OpenCV 2.1 program with Visual C++ 2008 Express. I want to get the color data of each pixel and modify it pixel by pixel.
I understand that "frmSource.channels();" returns the number of color channels of the Mat frmSource, but it always returns 1, not 3 or 4, even though it is definitely a color video image.
Am I wrong?
If I'm wrong, please guide me on how to get each color component of each pixel.
Also, the total frame count from get(CV_CAP_PROP_FRAME_COUNT) is much larger than the frame count I expected, so I divide get(CV_CAP_PROP_FRAME_COUNT) by get(CV_CAP_PROP_FPS) (the frame rate) and then I get the result I expected.
I understand that a frame is like a single still of a movie, and there are about 30 frames per second. Is that right?
My code is as follows:
void fEditMain()
{
    VideoCapture vdoCap("C:/Users/Public/Videos/Sample Videos/WildlifeTest.wmv");
    // this video file is provided with Windows 7
    if (!vdoCap.isOpened())
    {
        printf("failed to open!\n");
        return;
    }

    Mat frmSource;
    vdoCap >> frmSource;
    if (!frmSource.data) return;

    VideoWriter vdoRec(vRecFIleName, CV_FOURCC('W','M','V','1'), 30, frmSource.size(), true);
    namedWindow("video", 1);

    // record video
    int vFrmCntNo = 1;
    for (;;)
    {
        int vDepth = frmSource.depth();
        int vChannel = frmSource.channels();
        // here vChannel is always 1; I expect 3 or 4 because it is a color image
        imshow("video", frmSource); // show frmSource
        vdoRec << frmSource;
        vdoCap >> frmSource;
        if (!frmSource.data)
            return;
    }
    return;
}
I am not sure if this will answer your question, but if you use IplImage it is very easy to get the correct number of channels as well as manipulate the image. Try using:
IplImage *frm = cvQueryFrame(cap);
int numOfChannels = frm->nChannels;
A video is composed of frames, and you can find out how many frames pass in a second by using get(CV_CAP_PROP_FPS). If you divide the frame count by the FPS, you get the number of seconds in the clip.
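On the original question of reading each pixel's color components: assuming the frame really is a 3-channel BGR image (CV_8UC3), the cv::Mat interface available since OpenCV 2.x can also do this directly, without going back to IplImage. A minimal sketch:
// Read and modify the B, G, R components of every pixel of frmSource
for (int y = 0; y < frmSource.rows; y++)
{
    for (int x = 0; x < frmSource.cols; x++)
    {
        Vec3b& pixel = frmSource.at<Vec3b>(y, x);
        uchar blue  = pixel[0];   // OpenCV stores color in B, G, R order
        uchar green = pixel[1];
        uchar red   = pixel[2];
        // example modification: write back a slightly brighter pixel
        pixel = Vec3b(saturate_cast<uchar>(blue + 10),
                      saturate_cast<uchar>(green + 10),
                      saturate_cast<uchar>(red + 10));
    }
}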
