Processing: Number of Frames of Video - opencv

1) Is there a way to know the number of frames of a video after we load it, but before playing it?
2) I also want to take the first column from each frame. My idea is to read the whole video, store every frame I read in an ArrayList, and then parse the whole ArrayList again to take the first column from each frame. Is there a more efficient way to do this?
Is there any function in OpenCV that can help?

Take a look at the VideoCapture class in OpenCV, specifically its get() function for retrieving video properties such as the frame count.
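For example, a minimal sketch of reading the frame count before playing anything (note that for some containers the reported value is an estimate taken from the file headers, not an exact count):
cv::VideoCapture cap("filename");
// CV_CAP_PROP_FRAME_COUNT may be approximate, depending on the container
double frameCount = cap.get(CV_CAP_PROP_FRAME_COUNT);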
You can load each frame and store the first column like this:
// Video capture object
cv::VideoCapture cap;
cap.open("filename");
// Storage for video frames and columns
cv::Mat frame;
std::vector<cv::Mat> cols;
// Get each frame
while (true) {
    // Load next frame
    cap >> frame;
    // If no frame, end of video
    if (!frame.data) break;
    // Store a deep copy of the first column
    cv::Mat col;
    frame.col(0).copyTo(col);
    cols.push_back(col);
}

Related

I want to convert video to jpg image

I'm going to analyze a video, and I want to convert it to one image per second.
For example, if the video I want to analyze is 1 hour long, the program should output 3600 image files.
How can I do that? Is there any solution for it?
The worst case is that I have to play the video and take a snapshot every second.
I need your help. Thank you.
All you need is to load the video and save the individual frames in a loop. This is not exactly a snapshot; you are just saving each and every frame.
Keep in mind that the resolution of the video also affects the speed at which the frames are processed.
I am going to assume you are using C++. The code will look like this:
VideoCapture cap("Your_Video.mp4");
// Check if camera opened successfully
if(!cap.isOpened())
{
cout << "Error opening the video << endl;
return -1;
}
while(1)
{
Mat frame;
cap.read(frame);
if (frame.empty())
{
break;
}
// This is where you save the frame
imwrite( "Give your File Path", frame );
}
cap.release();
destroyAllWindows();
return 0;
}
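If you really want one image per second rather than every frame, one option is to skip frames based on the reported frame rate. A minimal sketch of that loop, assuming cap is opened as above and the file reports a valid FPS (some containers do not):
double fps = cap.get(CV_CAP_PROP_FPS); // frames per second reported by the container
int frameNo = 0;
Mat frame;
while (cap.read(frame) && !frame.empty())
{
    // Keep only one frame per second of video
    if (fps > 0 && frameNo % (int)fps == 0)
    {
        stringstream fname;
        fname << "Your_File_Path/sec" << (int)(frameNo / fps) << ".jpg";
        imwrite(fname.str(), frame);
    }
    frameNo++;
}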

cvResizeWindow() flicker reaction

I have an OpenCV window that I would like to resize to fill my screen, but when I use the resize function the window flickers. The output is my webcam and I guess the flicker is because my camera does not have those dimensions. Is there any other way to make the output from the camera larger?
cvNamedWindow("video", CV_WINDOW_AUTOSIZE);
IplImage *frame=0;
frame=cvQueryFrame(capture);
cvShowImage("video", frame);
cvResizeWindow("video", 1920,1080);
Here is an example of using cvResize() to resize the image or frame:
IplImage *frame;
CvCapture *capture = cvCaptureFromCAM(0);
cvNamedWindow("capture", CV_WINDOW_AUTOSIZE);
while (1) {
    frame = cvQueryFrame(capture);
    IplImage *frame_resize = cvCreateImage(cvSize(1366, 768), frame->depth, frame->nChannels);
    cvResize(frame, frame_resize, CV_INTER_LINEAR);
    // Show the resized frame, not the original one
    cvShowImage("capture", frame_resize);
    cvWaitKey(25);
    // Release the resized image to avoid leaking memory on every iteration
    cvReleaseImage(&frame_resize);
}
One possibility is to use the cvResize() function to change the size of the frame.
However, an easier way is to get rid of the CV_WINDOW_AUTOSIZE flag -- without that the video will be displayed at the size of the window.
Something like this:
cvNamedWindow("video", 0);
cvResizeWindow("video", 1920,1080);
IplImage *frame=0;
while(true)
{
frame=cvQueryFrame(capture);
cvShowImage("video", frame);
int c = waitKey(10);
...
}
I am not sure of the cause of the flickering, as I could not replicate that issue on my system.
Therefore I cannot guarantee that the flickering will disappear for you (but at least the video should be the correct size).

OpenCV VideoCapture reading issue

This will probably be a dumb question, but I really can't figure it out.
First of all: sorry for the vague title; I'm not really sure how to describe my problem in a couple of words.
I'm using OpenCV 2.4.3 in MS Visual Studio, C++. I'm using the VideoCapture interface for capturing frames from my laptop webcam.
What my program should do is loop over different poses of the user, and for each pose:
wait for the user to be in position (a getchar() waits for an input that says "I'm in position" by simply hitting Enter)
read the current frame
extract a region of interest from that frame
save the image in the ROI and then label it
Here is the code:
int main() {
    Mat img, face_img, img_start;
    Rect *face;
    VideoCapture cam(0);
    ofstream fout("dataset/dataset.txt");
    if (!fout) {
        cout << "Cannot open dataset file! Aborting" << endl;
        return 1;
    }
    int count = 0; // Number of the (last + 1) image in the dataset
    // Orientations are: 0°, +/- 30°, +/- 60°, +/- 90°
    // Distances are just two, for now
    // So it is 7x2 images
    cam.read(img_start);
    IplImage image = img_start;
    face = face_detector(image);
    if (!face) {
        cout << "No face detected..? Aborting." << endl;
        return 2;
    }
    // Double ROI dimensions
    face->x = face->x - face->width / 2;
    face->y = face->y - face->height / 2;
    face->width *= 2;
    face->height *= 2;
    for (unsigned i = 0; i < 14; ++i) {
        // Wait for the user to get in position
        getchar();
        // Get the face ROI
        cam.read(img);
        face_img = Mat(img, *face);
        // Save it
        stringstream sstm;
        string fname;
        sstm << "dataset/image" << (count + i) << ".jpeg";
        fname = sstm.str();
        imwrite(fname, face_img);
        // do some other things..
What I expect from it:
I stand in front of the camera when the program starts and it gets the ROI rectangle using the face_detector() function
when I'm ready, say in pose0, I hit Enter and a picture is taken
from that picture a subimage is extracted and saved as image0.jpeg
loop this 7 times
What it does:
I stand in front of the camera when the program starts; nothing special here
I hit Enter
the ROI is extracted not from the picture taken at that moment, but from the first one
At first, I used img in every cam.read(), then I changed the first one to cam.read(img_start), but that didn't help.
The second iteration of my code saves the image that should have been saved in the 1st, the 3rd iteration the one that should have been saved in the 2nd, and so on.
I'm probably missing something important about VideoCapture, but I really can't figure it out, so here I am.
Thanks for any help, I really appreciate it.
The problem with your implementation is that the camera is not running freely and capturing images in real time. When you start up the camera, the videocapture buffer is filled up while waiting for you to read in the frames. Once the buffer is full, it doesn't drop old frames for new ones until you read and free up space in it.
The solution would be to have a separate capture thread, in addition to your "process" thread. The capture thread keeps reading in frames from the buffer whenever a new frame comes in and stores it in a "recent frame" image object. When the process thread needs the most recent frame (i.e. when you hit Enter), it locks a mutex for thread safety, copies the most recent frame into another object and frees the mutex so that the capture thread continues reading in new frames.
#include <iostream>
#include <stdio.h>
#include <thread>
#include <mutex>
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using namespace std;
using namespace cv;
mutex mtx; // protects the shared "most recent frame" object
void camCapture(VideoCapture cap, Mat* frame, bool* Capture){
    while (*Capture == true) {
        Mat tmp;
        cap >> tmp; // keep draining the buffer so the frame stays fresh
        mtx.lock();
        tmp.copyTo(*frame); // publish the most recent frame under the lock
        mtx.unlock();
    }
    cout << "camCapture finished\n";
    return;
}
int main() {
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;
    Mat *frame = new Mat;
    bool *Capture = new bool;
    *Capture = true;
    // your capture thread has started
    thread captureThread(camCapture, cap, frame, Capture);
    // ... when the process thread needs the most recent frame:
    mtx.lock();
    Mat current_frame = frame->clone(); // take a private copy
    mtx.unlock();
    if (!current_frame.empty())
        imshow("current frame", current_frame);
    waitKey(0);
    // Terminate the thread
    mtx.lock();
    *Capture = false;
    mtx.unlock();
    captureThread.join();
    return 0;
}
This is the code that I wrote from the above advice. I hope someone can get help from this.
When you capture images continuously, frames do not pile up in the OpenCV buffer, so there is no lag in the stream.
If you capture an image after some time gap, the frames produced in between are stored in the OpenCV buffer first, and your read then retrieves an old image from that buffer.
When the buffer is full and you call captureObject >> matObject, the oldest buffered frame is returned, not the current frame from the capture card/webcam.
That is why you see a lag in your code. This issue can be resolved by deciding, from the frames-per-second (fps) value of the webcam and the time taken to grab a frame, whether the frame came from the buffer.
Reading a frame from the buffer takes very little time, so measure the time taken to grab a frame: if it is much shorter than the frame period, you can assume it was read from the buffer; otherwise it was captured live from the webcam.
Sample code for capturing a recent screenshot from the webcam:
#include <opencv2/opencv.hpp>
#include <time.h>
using namespace std;
using namespace cv;
int main()
{
    struct timespec start, end;
    VideoCapture cap(-1); // first available webcam
    Mat screenshot;
    double diff = 1000;
    // Frame period in seconds; assumes the camera reports a valid FPS
    double period = 1.0 / cap.get(CV_CAP_PROP_FPS);
    while (true)
    {
        clock_gettime(CLOCK_MONOTONIC, &start);
        cap.grab(); // can also use cap >> screenshot;
        clock_gettime(CLOCK_MONOTONIC, &end);
        diff = (end.tv_sec - start.tv_sec) * 1e9;
        diff = (diff + (end.tv_nsec - start.tv_nsec)) * 1e-9;
        std::cout << "\n diff time " << diff << '\n';
        // Heuristic: a grab that takes a significant fraction of the frame
        // period came from the camera, not the buffer, so the buffer is drained
        if (diff > period * 0.5)
        {
            break;
        }
    }
    cap >> screenshot; // gets a recent frame, can also use cap.retrieve(screenshot);
    // process(screenshot)
    cap.release();
    screenshot.release();
    return 0;
}

opencv cvGrabFrame frame rate on iOS?

I'm using OpenCV to split a video into frames. For that I need the fps and duration. Both of these values return 1 when queried via cvGetCaptureProperty.
I've made a hack where I use AVURLAsset to get the fps and duration, but when I combine that with OpenCV I get only a partial video. It seems like it's missing frames.
This is my code right now:
while (cvGrabFrame(capture)) {
    frameCounter++;
    if (frameCounter % (int)(videoFPS / MyDesiredFramesPerSecond) == 0) {
        IplImage *frame = cvCloneImage(cvRetrieveFrame(capture));
        // Do Stuff
    }
    if (frameCounter > duration*fps)
        break; // this is here because the loop never stops on its own
}
How can I get all the frames of a video using openCV on iOS? (opencv 2.3.2)
According to the documentation you should check the value returned by cvRetrieveFrame(), if a null pointer is returned you're at the end of the video sequence. Then you break the loop when that happens, instead of relying on the accuracy of FPS*frame_number.
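A minimal sketch of that check, keeping your cvCloneImage() call (frameCounter and your frame-skipping test are assumed to stay as in your code):
while (cvGrabFrame(capture)) {
    IplImage *grabbed = cvRetrieveFrame(capture);
    if (!grabbed)
        break; // null pointer means the end of the video sequence
    frameCounter++;
    IplImage *frame = cvCloneImage(grabbed);
    // Do Stuff
}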

the nchannel() returns always 1 even in the case of color video image

I'm writing an OpenCV 2.1 program with Visual C++ 2008 Express. I want to get the color data of each pixel and modify it pixel by pixel.
I understand that the code "frmSource.channels();" returns the number of color channels of the Mat frmSource, but it always returns 1, even though the video image is definitely color, not 3 or 4.
Am I wrong?
If I'm wrong, please guide me on how to get each color component of each pixel.
Also, the total frame count from "get(CV_CAP_PROP_FRAME_COUNT)" is much larger than the frame count I expected, but when I divide get(CV_CAP_PROP_FRAME_COUNT) by get(CV_CAP_PROP_FPS) I get the result I expected.
I understand that a frame is like a cut of a movie, at around 30 frames per second. Is that right?
My code is as follows:
void fEditMain()
{
    VideoCapture vdoCap("C:/Users/Public/Videos/Sample Videos/WildlifeTest.wmv");
    // this video file is provided in Windows 7
    if (!vdoCap.isOpened())
    {
        printf("failed to open!\n");
        return;
    }
    Mat frmSource;
    vdoCap >> frmSource;
    if (!frmSource.data) return;
    VideoWriter vdoRec(vRecFIleName, CV_FOURCC('W','M','V','1'), 30, frmSource.size(), true);
    namedWindow("video", 1);
    // record video
    int vFrmCntNo = 1;
    for (;;)
    {
        int vDepth = frmSource.depth();
        int vChannel = frmSource.channels();
        // here! vChannel is always 1, I expect 3 or 4 because it is a color image
        imshow("video", frmSource); // show frmSource
        vdoRec << frmSource;
        vdoCap >> frmSource;
        if (!frmSource.data)
            return;
    }
    return;
}
I am not sure if this will answer your question, but if you use IplImage it is easy to get the correct number of channels as well as manipulate the image. Try using:
IplImage *frm = cvQueryFrame(cap);
int numOfChannels = frm->nChannels;
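If you prefer to stay with cv::Mat, here is a minimal sketch of per-pixel access, assuming the frame really is a 3-channel 8-bit BGR image (check frmSource.type() == CV_8UC3 first):
for (int y = 0; y < frmSource.rows; ++y) {
    for (int x = 0; x < frmSource.cols; ++x) {
        Vec3b &px = frmSource.at<Vec3b>(y, x); // px[0]=B, px[1]=G, px[2]=R
        px[2] = 255 - px[2]; // e.g. invert the red component
    }
}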
A video is composed of frames and you can know how many frames pass in a second by using get(CV_CAP_PROP_FPS). If you divide the frame count by the FPS you'll get the number of seconds for the clip.
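For example, a quick duration check along those lines (assuming the container reports both properties accurately):
double frames = vdoCap.get(CV_CAP_PROP_FRAME_COUNT);
double fps = vdoCap.get(CV_CAP_PROP_FPS);
double seconds = frames / fps; // approximate clip length in seconds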
