A method to resize frame after decoding on client side - opencv

I've just joined a project to build a real-time video streaming application using ffmpeg/opencv/c++ over a UDP socket. On the server side, we want to transmit video at 640x480 to the client; to reduce the amount of data sent over the network, I resize each frame to 320x240 before sending it. On the client side, after receiving a frame, we upscale it back to 640x480. We use H265 for encoding/decoding.
As I am just a beginner with video encoding, I would like to understand how to downsample and upsample the frame on the server and client side in a way that works together with the video encoder/decoder.
A simple idea that came to my mind: after decoding avframe -> Mat frame, I will upsample this frame and then display it.
I am not sure whether my idea is right or wrong. I would appreciate advice from anyone with experience in this area. Thank you very much!
static void updateFrameCallback(AVFrame *avframe, void *userdata) {
    VideoStreamUDPClient *streamer = static_cast<VideoStreamUDPClient *>(userdata);
    TinyClient *client = static_cast<TinyClient *>(streamer->userdata);
    // Update frame
    pthread_mutex_lock(&client->mtx_updateFrame);
    if (streamer->irect.width == client->frameSize.width
            && streamer->irect.height == client->frameSize.height) {
        cvtAVFrameYUV4202Frame(&avframe, client->frame);
        printf("TinyClient: Received Full Frame\n");
    } else {
        Mat block;
        cvtAVFrameYUV4202Frame(&avframe, block);
        block.copyTo(client->frame(streamer->irect));
    }
    // How to resize the frame before displaying it!!!
    imshow("Frame", client->frame);
    waitKey(1);
    pthread_mutex_unlock(&client->mtx_updateFrame);
}

From what I understand, you just want to resize the frame after decoding. In OpenCV you can do it like this:
if (...)
{
    ...
}
else
{
    Mat block;
    cvtAVFrameYUV4202Frame(&avframe, block);
    Mat temp;
    resize(block, temp, Size(640, 480));
    temp.copyTo(client->frame(streamer->irect));
}
// client->frame will always be 640x480
imshow("Frame", client->frame);
I don't have much experience in video decoding, but from what I know you cannot resize a video (or a single frame) without decoding it first.
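To make the idea concrete, here is a minimal self-contained sketch of both halves of that pipeline (the sizes come from the question; the dummy input frame is my own stand-in for the decoded Mat). INTER_AREA is the usual choice when shrinking, and INTER_LINEAR or INTER_CUBIC when enlarging:
#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    // Stand-in for a decoded frame; in the real pipeline this would come
    // from cvtAVFrameYUV4202Frame after H.265 decoding.
    Mat full(480, 640, CV_8UC3, Scalar(0, 128, 255));

    // Server side: downscale before encoding. INTER_AREA avoids aliasing
    // artifacts when shrinking.
    Mat smallFrame;
    resize(full, smallFrame, Size(320, 240), 0, 0, INTER_AREA);

    // Client side: upscale after decoding, back to the display size.
    // INTER_LINEAR is fast; INTER_CUBIC is slower but slightly sharper.
    Mat restored;
    resize(smallFrame, restored, Size(640, 480), 0, 0, INTER_LINEAR);

    imshow("Restored", restored);
    waitKey(0);
    return 0;
}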

Related

How to get proper saturated frame from see3cam cu135M using OpenCV

I was trying to get 8 single-shot frames from 5 x CU135M E-con cameras using OpenCV (4.5.5-dev).
Every ~20 seconds each camera takes a single shot (one camera at a time, so the stream should not be overloaded).
I use a powered USB hub, so all of them are connected to a single USB 3.2 port on my computer.
My problem is that the received frames are sometimes (20-30% of them) over-saturated with pink or yellow.
Example of correctly recorded pic:
Example of yellow over-saturated pic:
The code for recording frames is quite simple:
cv::VideoCapture cap;
cap.open("SOME_CAMERA_ID", CAP_V4L2);
cap.set(CAP_PROP_FRAME_WIDTH, 4208);
cap.set(CAP_PROP_FRAME_HEIGHT, 3120);
Mat frame;
try {
    for (int i = 0; i < 4; i++) // need this so I don't receive pure green-screen frames
        cap.read(frame);
    while (frame.empty()) {
        if (cap.read(frame))
            break;
    }
} catch (const Exception &e) {
    // some error handling
}
imwrite("someFileName.png", frame);
cap.release();
I tried setting denoise and the default settings using hidraw, however without results.
I'd be glad for any help.

I want to convert video to jpg image

I'm going to analyze a video.
I want to convert the video to one image per second.
I mean, if the video I want to analyze is 1 hour long, the program I want to make should output 3600 image files.
How can I make it?
Is there any solution for it?
The worst case is that I have to play the video and take a snapshot every second.
I need your help. Thank you.
All you need is to load the video and save individual frames in a loop. This is not exactly a snapshot; you are just saving each and every frame.
Please mind that the resolution of the video can also affect the speed at which the frames are processed.
I am going to assume you are using C++. For this, the code will look like this:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main()
{
    VideoCapture cap("Your_Video.mp4");
    // Check if the video opened successfully
    if (!cap.isOpened())
    {
        cout << "Error opening the video" << endl;
        return -1;
    }
    while (1)
    {
        Mat frame;
        cap.read(frame);
        if (frame.empty())
        {
            break;
        }
        // This is where you save the frame; use a different path per frame,
        // otherwise each save will overwrite the previous one
        imwrite("Give your File Path", frame);
    }
    cap.release();
    destroyAllWindows();
    return 0;
}
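Since the question asks for one image per second rather than every frame, here is a hedged sketch of how you might skip frames using the reported frame rate (CAP_PROP_FPS is not always reliable for every container, and the frame_%05d.jpg naming is my own choice):
#include <opencv2/opencv.hpp>
#include <cstdio>
using namespace cv;

int main()
{
    VideoCapture cap("Your_Video.mp4");
    if (!cap.isOpened())
        return -1;
    double fps = cap.get(CAP_PROP_FPS);
    if (fps <= 0)
        fps = 30; // fall back if the container does not report a rate
    int step = (int)(fps + 0.5); // frames between saves, rounded
    Mat frame;
    int frameIndex = 0, saved = 0;
    while (cap.read(frame))
    {
        if (frameIndex % step == 0) // one frame per second of video
        {
            char name[64];
            snprintf(name, sizeof(name), "frame_%05d.jpg", saved++);
            imwrite(name, frame);
        }
        frameIndex++;
    }
    cap.release();
    return 0;
}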

HOG Detection : Webcam unable to grab new frame

I have the following code to get a video feed from a network camera and subsequently perform a HOG detection.
However, when I add the HOG detection code, the program only grabs the first image upon running and does not refresh it on subsequent iterations. I have checked, using a breakpoint, that the program is continuously running.
Please advise. Thank you.
void doDetection(void) {
    VideoCapture cap("http://user:password#10.0.0.1/mjpg/video.mjpg");
    if (!cap.isOpened())
        return;
    HOGDescriptor hog;
    hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());
    while (true)
    {
        Mat img, mono_img;
        bool status = cap.read(img);
        if (status == false) break;
        vector<Rect> found, found_filtered;
        cvtColor(img, mono_img, CV_BGR2GRAY);
        equalizeHist(mono_img, mono_img);
        // this is the problem statement; when removed, the program refreshes fine
        hog.detectMultiScale(mono_img, found, 0, Size(8, 8), Size(50, 50), 1.05, 2);
        imshow("Show", img);
        if (waitKey(20) >= 0)
            break;
    }
}
I have found this to happen even when you pause the execution of the code for a little while.
In my case, MJPEG OVERREAD would appear in the console, after which no more images would get grabbed from the stream, even though the stream would still be `open`.
I built this little check around the problem, which resets the capture. Just place this in your while loop:
if (cap.isOpened())
{
    cap.read(img);
    if (img.rows == 0)
    {
        cap.release();
        cap.open(""); // Insert own url
        cap.read(img);
    }
}
else
{
    cap.open(""); // Insert own url
    cap.read(img);
}
if this "solves" the problem by keeping you app running, you should capture the frames on a separate thread to prevent the stream from dying like this.
If this does not "solve" the problem, I'm not sure what else could be wrong
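Regarding the separate-thread suggestion, here is a rough sketch of what that might look like (the structure and names are mine, not from the original answer): one thread keeps the stream drained while the main loop processes the most recent frame.
#include <opencv2/opencv.hpp>
#include <thread>
#include <mutex>
#include <atomic>
using namespace cv;

int main()
{
    VideoCapture cap(""); // Insert own url
    if (!cap.isOpened()) return -1;

    Mat latest;
    std::mutex mtx;
    std::atomic<bool> running(true);

    // Grab thread: keeps reading so the MJPEG stream never stalls,
    // no matter how long the detection in the main loop takes.
    std::thread grabber([&]() {
        Mat img;
        while (running && cap.read(img)) {
            std::lock_guard<std::mutex> lock(mtx);
            img.copyTo(latest);
        }
        running = false;
    });

    // Main loop: work on a copy of the newest frame (e.g. run HOG here).
    while (running) {
        Mat img;
        {
            std::lock_guard<std::mutex> lock(mtx);
            latest.copyTo(img);
        }
        if (!img.empty())
            imshow("Show", img);
        if (waitKey(20) >= 0)
            running = false;
    }

    grabber.join();
    cap.release();
    return 0;
}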

opencv cvGrabFrame frame rate on iOS?

I'm using OpenCV to split a video into frames. For that I need the FPS and duration. Both of these values return 1 when I ask for them via cvGetCaptureProperty.
I've made a hack where I use AVURLAsset to get the fps and duration, but when I combine that with OpenCV I only get part of the video. It seems like it's missing frames.
This is my code right now:
while (cvGrabFrame(capture)) {
    frameCounter++;
    if (frameCounter % (int)(videoFPS / MyDesiredFramesPerSecond) == 0) {
        IplImage *frame = cvCloneImage(cvRetrieveFrame(capture));
        // Do Stuff
    }
    if (frameCounter > duration * fps)
        break; // this is here because the loop never stops on its own
}
How can I get all the frames of a video using openCV on iOS? (opencv 2.3.2)
According to the documentation, you should check the value returned by cvRetrieveFrame(): if a null pointer is returned, you're at the end of the video sequence. Break the loop when that happens, instead of relying on the accuracy of FPS * frame_number.
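Applied to the loop above, that could look like the following sketch against the old C API (capture, the video FPS and the desired rate are passed in; the guard against a zero step is mine):
#include <opencv/cv.h>
#include <opencv/highgui.h>

void extractFrames(CvCapture *capture, double videoFPS, double desiredFPS)
{
    int step = (int)(videoFPS / desiredFPS);
    if (step < 1)
        step = 1;
    int frameCounter = 0;
    while (cvGrabFrame(capture)) {
        // Retrieve first; a null pointer means the end of the sequence,
        // so there is no need to rely on duration * fps to stop.
        IplImage *grabbed = cvRetrieveFrame(capture);
        if (!grabbed)
            break;
        if (frameCounter % step == 0) {
            IplImage *frame = cvCloneImage(grabbed);
            // Do stuff with the frame, then release the clone.
            cvReleaseImage(&frame);
        }
        frameCounter++;
    }
}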

the nchannel() returns always 1 even in the case of color video image

I'm coding an OpenCV 2.1 program with Visual C++ 2008 Express. I want to get the color data of each pixel and modify it pixel by pixel.
I understand that the call frmSource.channels() returns the number of color channels of the Mat frmSource, but it always returns 1, even though the input is definitely a color video image; I expected 3 or 4.
Am I wrong?
If I'm wrong, please guide me on how to get each color component of each pixel.
Also, the total frame count from get(CV_CAP_PROP_FRAME_COUNT) is much larger than the frame count I expected, but if I divide get(CV_CAP_PROP_FRAME_COUNT) by get(CV_CAP_PROP_FPS) (the frame rate), I get the result I expected.
I understand that a frame is like a single cut of a movie, at, say, 30 frames per second. Is that right?
My coding is as follows:
void fEditMain()
{
    VideoCapture vdoCap("C:/Users/Public/Videos/Sample Videos/WildlifeTest.wmv");
    // this video file is provided with Windows 7
    if (!vdoCap.isOpened())
    {
        printf("failed to open!\n");
        return;
    }
    Mat frmSource;
    vdoCap >> frmSource;
    if (!frmSource.data) return;
    VideoWriter vdoRec(vRecFIleName, CV_FOURCC('W','M','V','1'), 30, frmSource.size(), true);
    namedWindow("video", 1);
    // record video
    int vFrmCntNo = 1;
    for (;;)
    {
        int vDepth = frmSource.depth();
        int vChannel = frmSource.channels();
        // here! vChannel is always 1; I expect 3 or 4 because it is a color image
        imshow("video", frmSource); // show frmSource
        vdoRec << frmSource;
        vdoCap >> frmSource;
        if (!frmSource.data)
            return;
    }
    return;
}
I am not sure if this will answer your question, but if you use IplImage it will be very easy to get the correct number of channels as well as manipulate the image. Try using:
IplImage *frm = cvQueryFrame(cap);
int numOfChannels = frm->nChannels;
A video is composed of frames, and you can know how many frames pass in a second using get(CV_CAP_PROP_FPS). If you divide the frame count by the FPS, you'll get the length of the clip in seconds.
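For the per-pixel part of the question, here is a small sketch of reading and modifying individual color components through IplImage, in the spirit of the answer above (the video path is a placeholder):
#include <opencv/cv.h>
#include <opencv/highgui.h>

int main()
{
    CvCapture *cap = cvCaptureFromFile("Your_Video.wmv"); // placeholder path
    IplImage *frm = cvQueryFrame(cap);
    if (frm && frm->nChannels == 3)
    {
        // Pixels are stored row by row as B, G, R bytes.
        for (int y = 0; y < frm->height; y++)
        {
            uchar *row = (uchar *)(frm->imageData + y * frm->widthStep);
            for (int x = 0; x < frm->width; x++)
            {
                uchar b = row[x * 3 + 0];
                uchar g = row[x * 3 + 1];
                uchar r = row[x * 3 + 2];
                // e.g. zero out the blue component:
                row[x * 3 + 0] = 0;
                (void)b; (void)g; (void)r;
            }
        }
    }
    cvReleaseCapture(&cap);
    return 0;
}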
