channels() always returns 1, even for a color video image - OpenCV

I'm writing an OpenCV 2.1 program with Visual C++ 2008 Express. I want to get the color data of each pixel and modify it pixel by pixel.
I understand that the code "frmSource.channels();" returns the number of color channels of the Mat frmSource, but it always returns 1, not 3 or 4, even though the source is clearly a color video.
Am I wrong?
If I'm wrong, please guide me on how to get the color component data of each pixel.
Also, the total frame count from "get(CV_CAP_PROP_FRAME_COUNT)" is much larger than the frame count I expected, but if I divide get(CV_CAP_PROP_FRAME_COUNT) by get(CV_CAP_PROP_FPS) I get the result I expected.
I understand that a frame is like a single cut of a movie, at about 30 frames per second. Is that right?
My code is as follows:
void fEditMain()
{
    VideoCapture vdoCap("C:/Users/Public/Videos/Sample Videos/WildlifeTest.wmv");
    // this video file is provided with Windows 7
    if( !vdoCap.isOpened() )
    {
        printf("failed to open!\n");
        return;
    }
    Mat frmSource;
    vdoCap >> frmSource;
    if( !frmSource.data ) return;

    // vRecFIleName: output file name, defined elsewhere
    VideoWriter vdoRec(vRecFIleName, CV_FOURCC('W','M','V','1'), 30, frmSource.size(), true);
    namedWindow("video", 1);

    // record video
    int vFrmCntNo = 1;
    for(;;)
    {
        int vDepth   = frmSource.depth();
        int vChannel = frmSource.channels();
        // here! vChannel is always 1; I expect 3 or 4 because it is a color image
        imshow("video", frmSource);   // show frmSource
        vdoRec << frmSource;
        vdoCap >> frmSource;
        if( !frmSource.data )
            return;
    }
}

I am not sure if this will answer your question, but if you use IplImage it is very easy to get the correct number of channels as well as manipulate the image. Try using:
IplImage *frm = cvQueryFrame(cap);
int numOfChannels = frm->nChannels;
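For example, to then read the individual blue, green, and red bytes of one pixel with the same C API (a rough sketch assuming an 8-bit, 3-channel frame; x and y stand for whatever pixel coordinates you are interested in):
uchar *row = (uchar *)(frm->imageData + y * frm->widthStep); // start of row y
uchar b = row[x * frm->nChannels + 0]; // OpenCV stores color pixels in BGR order
uchar g = row[x * frm->nChannels + 1];
uchar r = row[x * frm->nChannels + 2];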
A video is composed of frames, and you can find out how many frames pass in a second with get(CV_CAP_PROP_FPS). If you divide the frame count by the FPS you'll get the length of the clip in seconds.
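A minimal sketch of that calculation, using the vdoCap object from the question (the values are only as accurate as the container's metadata):
double totalFrames = vdoCap.get(CV_CAP_PROP_FRAME_COUNT); // total frame count
double fps         = vdoCap.get(CV_CAP_PROP_FPS);         // frames per second
double durationSec = totalFrames / fps;                   // e.g. 900 / 30 = 30 seconds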

Related

How to get a properly saturated frame from a see3cam cu135M using OpenCV

I was trying to get 8 single-shot frames from 5 x CU135M E-con cameras using OpenCV (4.5.5-dev).
Every ~20 seconds each camera takes a single shot (one camera at a time, so the stream should not be overloaded).
I use a powered USB hub, so all of them are connected to a single USB 3.2 port on my computer.
My problem is that the received frames are sometimes (20-30% of them) over-saturated with pink or yellow.
Example of a correctly recorded pic:
Example of a yellow over-saturated pic:
The code for recording frames is quite simple -
cv::VideoCapture cap;
cap.open("SOME_CAMERA_ID", CAP_V4L2);
cap.set(CAP_PROP_FRAME_WIDTH, 4208);
cap.set(CAP_PROP_FRAME_HEIGHT, 3120);
Mat frame;
try {
    for(int i = 0; i < 4; i++) // need this so I don't receive pure green screen frames
        cap.read(frame);
    while(frame.empty()){
        if( cap.read(frame) )
            break;
    }
} catch (Exception e) {
    // some error handling
}
imwrite("someFileName.png", frame);
cap.release();
I tried setting denoise and the default settings using hidraw, however without results.
I'd be glad for any help.

FFmpeg iOS Get UIImage from AVFrame

I am having trouble getting a UIImage out of the frames I am reading into my iOS FFmpeg project. I need to be able to read a frame in, and then convert this to a UIImage in order to display the frame in a UIImageView. My code appears to be reading in the frames, but I am lost as to how to convert them as there is little documentation on how to do this. Can anyone help?
while (!finished) {
    if (av_read_frame(_formatContext, &packet) >= 0) {
        if (packet.stream_index == _videoStream) {
            int ret = avcodec_send_packet(_codecContext, &packet);
            if (ret < 0 || ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                printf("av_codec_send_packet error ");
            }
            while (ret >= 0) {
                ret = avcodec_receive_frame(_codecContext, _frame);
                if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                    printf("avcodec_receive_frame error ");
                }
                finished = true;
            }
        }
        av_packet_unref(&packet);
    }
}
You should know about pixel formats like RGB and YUV. Videos almost always use YUV formats like yuv420p. Then study the AVFrame structure; here is some info:
AVFrame.format : the current frame's pixel format, e.g. AV_PIX_FMT_YUV420P
AVFrame.width : horizontal size of the current frame (hence width), unit: pixels
AVFrame.height : vertical size of the current frame (hence height), unit: pixels
Now where is the actual frame buffer, you might ask? It is in AVFrame.data[n].
n can be 0-3. Depending on the format, just the first one may contain the whole frame, or all 4 of them may be used. E.g. yuv420p uses 0, 1, and 2. Their linesizes (aka strides) can be obtained by reading the corresponding AVFrame.linesize[n] value.
As for yuv420p:
data[0] is the Y plane
data[1] is the U plane
data[2] is the V plane
If you multiply linesize[0] by AVFrame.height, you'll get the size of that plane (Y) in bytes.
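As a rough illustration of those sizes for a yuv420p frame (using the _frame variable from the question; with 4:2:0 subsampling the chroma planes have half the height):
int ySize = _frame->linesize[0] * _frame->height;     // Y plane, full height
int uSize = _frame->linesize[1] * _frame->height / 2; // U plane, half height
int vSize = _frame->linesize[2] * _frame->height / 2; // V plane, half height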
I don't know about the UIImage structure (or whatever it is), but if it requires a specific format like RGB, you need to convert your AVFrame to that format using swscale.
Here are some examples: https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/scaling_video.c
In libav (FFmpeg), scaling (resizing) and pixel format conversion are done via the same function.
Hope this helps.
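For reference, a minimal sketch of that conversion step with swscale (unverified; it converts the decoded _frame from the question to packed RGBA, which is a convenient starting point for building a CGImage/UIImage, and it omits error handling):
struct SwsContext *sws = sws_getContext(_frame->width, _frame->height,
                                        (enum AVPixelFormat)_frame->format,
                                        _frame->width, _frame->height, AV_PIX_FMT_RGBA,
                                        SWS_BILINEAR, NULL, NULL, NULL);
AVFrame *rgba = av_frame_alloc();
rgba->format = AV_PIX_FMT_RGBA;
rgba->width  = _frame->width;
rgba->height = _frame->height;
av_frame_get_buffer(rgba, 0);                        // allocates rgba->data and rgba->linesize
sws_scale(sws, (const uint8_t * const *)_frame->data, _frame->linesize, 0, _frame->height,
          rgba->data, rgba->linesize);               // rgba->data[0] now holds packed RGBA rows
sws_freeContext(sws);                                // free rgba with av_frame_free when done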

OpenCV frame blending only results in blue

I'm trying to average every 30 frames of a video to create a blurred timelapse. I got the video reading and video writing working, but something is wrong, because I'm only seeing the blue channel! (or one channel that is being written to blue).
Any ideas? Or better ways to do this? I'm new to OpenCV. The code is in Kotlin, but I think it would be the same issue if this were Java or Python or whatever.
val videoCapture = VideoCapture(parsedArgs.inputFile)
val frameSize = Size(
    videoCapture.get(Videoio.CV_CAP_PROP_FRAME_WIDTH),
    videoCapture.get(Videoio.CV_CAP_PROP_FRAME_HEIGHT))
val fps = videoCapture.get(Videoio.CAP_PROP_FPS)
val videoWriter = VideoWriter(parsedArgs.outputFile, VideoWriter.fourcc('M', 'J', 'P', 'G'), fps, frameSize)
val image = Mat(frameSize, CV_8UC3)
val blended = Mat(frameSize, CV_64FC3)
println("Size: $frameSize fps:$fps over $frameCount frames")
try {
    while (videoCapture.read(image)) {
        val frameNumber = videoCapture.get(Videoio.CAP_PROP_POS_FRAMES).toInt()
        Core.flip(image, image, -1) // I shot the video upside down
        Imgproc.accumulate(image, blended)
        if (frameNumber > 0 && frameNumber % parsedArgs.windowSize == 0) {
            Core.multiply(blended, Scalar(1.0 / parsedArgs.windowSize), blended)
            blended.convertTo(image, CV_8UC3)
            videoWriter.write(image)
            blended.setTo(Scalar(0.0, 0.0, 0.0))
            println(frameNumber.toDouble() / frameCount)
        }
    }
} finally {
    videoCapture.release()
    videoWriter.release()
}
Martin Beckett led me to the right answer (thank you!). I was multiplying by a Scalar(double), which should have been my hint, because I wasn't multiplying by a plain double.
Core.multiply expects a Scalar with a value for each channel, so it was happily multiplying my first channel by the double and the rest by 0.
Imgproc.accumulate(image, blended64)
if (frameNumber > 0 && frameNumber % parsedArgs.windowSize == 0) {
    val blendDivisor = 1.0 / parsedArgs.windowSize
    Core.multiply(blended64, Scalar(blendDivisor, blendDivisor, blendDivisor), blended64)
My guess would be the use of different types in Imgproc.accumulate(image, blended); try converting image to match blended before combining them.
If it was writing the entire 8-bit x 3 pixel data into one float, the first field in an OpenCV image is blue (it uses BGR order), which would explain why only blue shows up.

Processing: Number of Frames of Video

1) Is there a way to know the number of frames of a video after we load it, but before playing it?
2) Also, I want to take the first column from each frame. What I have thought of is to read the whole video, store every frame I read into an ArrayList, and then parse the whole ArrayList again and take the first column from each frame. Is there a more optimal way to do this?
Is there any function in OpenCV that can help?
Take a look at the VideoCapture class in OpenCV, specifically the get function for retrieving video properties.
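For the first question, a minimal sketch (the count is read from the container's metadata, so it can be approximate for some formats):
cv::VideoCapture cap("filename");
double frameCount = cap.get(CV_CAP_PROP_FRAME_COUNT); // available right after opening, before playback
double fps        = cap.get(CV_CAP_PROP_FPS);         // frames per second, if you also need it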
You can load each frame and store the first column like this:
// Video capture object
cv::VideoCapture cap;
cap.open("filename");

// Storage for video frames and columns
cv::Mat frame;
std::vector<cv::Mat> cols;

// Get each frame
while(true){
    // Load next frame
    cap >> frame;
    // If no frame, end of video
    if(!frame.data) break;
    // Store first column
    cv::Mat col;
    frame.col(0).copyTo(col);
    cols.push_back(col);
}

Flicker removal using OpenCV?

I am a newbie to OpenCV. I have installed the OpenCV library on an Ubuntu system, compiled it, and am looking into some image/video processing apps in OpenCV to understand more.
I am interested to know if the OpenCV library has any algorithm/class for removing flicker from captured videos. If yes, which document or code should I look into more deeply?
If OpenCV does not have it, are there any standard implementations in some other video processing library/SDK/Matlab, etc., which provide algorithms for flicker removal from video sequences?
Any pointers would be useful.
Thank you.
-AD.
I don't know of any standard way to deflicker a video.
But VirtualDub is video processing software which has a filter for deflickering video. You can find its filter source and documents (probably including an algorithm description) here.
I wrote my own deflicker C++ function; here it is. You can cut and paste this code as-is - no headers needed other than the usual OpenCV ones.
Mat deflicker(Mat, int);
Mat prevdeflicker;

// deflicker - compares each pixel of the frame to a previously stored frame,
// and throttles small changes in pixels (flicker)
Mat deflicker(Mat Mat1, int strengthcutoff = 20){
    if (prevdeflicker.rows){ // check if we stored a previous frame; if not, there's nothing we can do - clone and exit
        int i, j;
        uchar* p;
        uchar* prevp;
        for( i = 0; i < Mat1.rows; ++i)
        {
            p = Mat1.ptr<uchar>(i);
            prevp = prevdeflicker.ptr<uchar>(i);
            for( j = 0; j < Mat1.cols; ++j){
                Scalar previntensity = prevp[j];
                Scalar intensity = p[j];
                int strength = abs(intensity.val[0] - previntensity.val[0]);
                if(strength < strengthcutoff){
                    // the strength of the stimulus must be greater than a certain point, else we do not want to allow the change.
                    // a value of 25 works well for medium+ light; anything higher creates too much blur around moving objects.
                    // in low light, however, this makes it worse, since low light seems to increase contrast in flicker - some flickers go from 0 to 255 and back. :(
                    // I need to write a way to track large group movements vs small pixels, and only filter out the small pixel stuff. maybe blur first?
                    if(intensity.val[0] > previntensity.val[0]){
                        // use the previous frame's value, changed by +1 - slow enough to not be noticeable flicker
                        p[j] = previntensity.val[0] + 1;
                    }else{
                        p[j] = previntensity.val[0] - 1;
                    }
                }
            }
        } // end for
    }
    prevdeflicker = Mat1.clone(); // clone the current frame as the old one
    return Mat1;
}
Call it as: Mat = deflicker(Mat). It needs a loop and a greyscale image, like so:
for(;;){
    cap >> frame; // get a new frame from camera
    cvtColor( frame, src_grey, CV_RGB2GRAY ); // convert to greyscale - simplifies everything
    src_grey = deflicker(src_grey); // this is the function call
    imshow("grey video", src_grey);
    if(waitKey(30) >= 0) break;
}
