How to detect a moving object in a live camera video using OpenCV - iOS

I am using OpenCV in my iOS application to detect moving objects in the live camera feed, but I am not familiar with OpenCV. Please help me; any other way to do this is also welcome.
- (void)viewDidLoad {
    [super viewDidLoad];
    self.videoCamera = [[CvVideoCamera alloc] initWithParentView:self.view];
    self.videoCamera.delegate = self; // set the delegate after the camera exists, not before
    self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack;
    self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset352x288;
    self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
    self.videoCamera.defaultFPS = 30;
    self.videoCamera.grayscaleMode = NO;
    [self.videoCamera start];
    // cv::BackgroundSubtractorMOG2 bg;
}
- (void)processImage:(cv::Mat&)image
{
    // process here
    cv::cvtColor(image, img, cv::COLOR_BGRA2RGB);
    int fixedWidth = 270;
    cv::resize(img, img, cv::Size(fixedWidth, (int)((fixedWidth * 1.0f) * (image.rows / (image.cols * 1.0f)))), cv::INTER_NEAREST);
    // update the model
    bg_model->operator()(img, fgmask, update_bg_model ? -1 : 0);
    GaussianBlur(fgmask, fgmask, cv::Size(7, 7), 2.5, 2.5);
    threshold(fgmask, fgmask, 10, 255, cv::THRESH_BINARY);
    image = cv::Scalar::all(0);
    img.copyTo(image, fgmask);
}
I am new to OpenCV, so I don't know what to do.

Add this code to the processImage delegate:
- (void)processImage:(cv::Mat&)image
{
    // process here
    cv::cvtColor(image, img, cv::COLOR_BGRA2RGB);
    int fixedWidth = 270;
    // note: the interpolation flag is the sixth argument of cv::resize, after fx and fy
    cv::resize(img, img, cv::Size(fixedWidth, (int)((fixedWidth * 1.0f) * (image.rows / (image.cols * 1.0f)))), 0, 0, cv::INTER_NEAREST);
    // update the model
    bg_model->apply(img, fgmask, update_bg_model ? -1 : 0);
    GaussianBlur(fgmask, fgmask, cv::Size(7, 7), 2.5, 2.5);
    threshold(fgmask, fgmask, 10, 255, cv::THRESH_BINARY);
    image = cv::Scalar::all(0);
    img.copyTo(image, fgmask);
}
You'll need to declare the following as global variables:
cv::Mat img, fgmask;
cv::Ptr<cv::BackgroundSubtractor> bg_model;
bool update_bg_model;
Here img is the downscaled frame, fgmask is the foreground mask marking where motion is happening, and update_bg_model controls whether the model keeps adapting (set it to false if you want a fixed background).
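Note that bg_model starts out as an empty cv::Ptr; it must be created before the first frame arrives or apply() will crash. A minimal sketch, assuming OpenCV 3.x (the parameter values are illustrative defaults, not tuned):
// e.g. at the end of viewDidLoad, before [self.videoCamera start]:
// history = 500 frames, varThreshold = 16, shadow detection off (illustrative values)
bg_model = cv::createBackgroundSubtractorMOG2(500, 16.0, false);
update_bg_model = true; // keep adapting the background until you want to freeze it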
Please refer to this URL for further information.

I am assuming that you want something similar in nature to this demonstration: https://www.youtube.com/watch?v=UFIVCDDnrmM
This gives you motion and removes the background of an image.
Anytime you want to detect motion, you usually must start by determining what is moving and what is still. This task is accomplished by taking several frames and comparing them to determine whether sections are different enough to be regarded as not part of the background (generally using some threshold of consistency).
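As a concrete illustration of that frame-comparison idea, here is a minimal OpenCV C++ sketch (the function and variable names are mine, purely for illustration) that thresholds the absolute difference between two consecutive grayscale frames:
#include <opencv2/opencv.hpp>

// Returns a binary mask of the pixels that changed "enough" between two frames.
cv::Mat motionMask(const cv::Mat& prevGray, const cv::Mat& currGray, double thresh = 25.0)
{
    cv::Mat diff, mask;
    cv::absdiff(prevGray, currGray, diff);                      // per-pixel difference
    cv::threshold(diff, mask, thresh, 255, cv::THRESH_BINARY);  // keep only big changes
    return mask;
}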
For finding the background in an image, check out: http://docs.opencv.org/master/d1/dc5/tutorial_background_subtraction.html
Afterwards you will need an implementation (potentially an edge detector like Canny) to recognize the object you want and track its motion (determine velocity, direction, etc.). This will be specific to your use case, and there isn't enough information in the original question to say exactly what you need here.
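For instance, once you have a foreground mask (like the fgmask in the first answer), a hedged sketch of picking out the moving blobs and their centroids, which you could then match frame to frame to estimate velocity, might look like this:
std::vector<std::vector<cv::Point>> contours;
cv::findContours(fgmask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
for (const auto& c : contours) {
    if (cv::contourArea(c) < 100) continue;  // skip tiny blobs; 100 is an arbitrary cutoff
    cv::Rect box = cv::boundingRect(c);
    cv::Point centroid(box.x + box.width / 2, box.y + box.height / 2);
    // compare this centroid against the previous frame's centroids
    // to estimate the object's velocity and direction
}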
Another really good resource to look at would be: http://www.intorobotics.com/how-to-detect-and-track-object-with-opencv/
This even has a section dedicated to using mobile devices with OpenCV. (The scenario they use is robotics, one of the largest applications of computer vision, but you should be able to use it for what you need, albeit with a few modifications.)
Please let me know if this helps, and if I can do anything else to assist you.

Related

Improving Tesseract OCR Quality Fails

I am currently using Tesseract to scan receipts. The quality wasn't good, so I read this article on how to improve it: https://github.com/tesseract-ocr/tesseract/wiki/ImproveQuality#noise-removal. I implemented resizing, deskewing (aligning), and Gaussian blur, but none of them seem to have a positive effect on the accuracy of the OCR except the deskewing. Here is my code for resizing and Gaussian blur. Am I doing anything wrong? If not, what else can I do to help?
Code:
+ (UIImage *)prepareImage:(UIImage *)image {
    // converts UIImage to Mat format
    Mat im = cvMatWithImage(image);
    // grayscale image
    Mat gray;
    cvtColor(im, gray, CV_BGR2GRAY);
    // deskews text
    // did not provide code because I know it works
    Mat preprocessed = preprocess2(gray);
    double skew = hough_transform(preprocessed, im);
    Mat rotated = rot(im, skew * CV_PI / 180);
    // resize image
    Mat scaledImage = scaleImage(rotated, 2);
    // Gaussian blur (note: a 1x1 kernel with sigma 0 is effectively a no-op)
    GaussianBlur(scaledImage, scaledImage, cv::Size(1, 1), 0, 0);
    return UIImageFromCVMat(scaledImage);
}
// Organization -> Resizing
Mat scaleImage(Mat mat, double factor) {
    Mat resizedMat;
    double width = mat.cols;
    double height = mat.rows;
    // scaling both sides by the same factor already preserves the aspect ratio;
    // multiplying by width/height as well would scale by the wrong amount
    resize(mat, resizedMat, cv::Size(width * factor, height * factor));
    return resizedMat;
}
Receipt: (image omitted)
If you read the Tesseract documentation you will see that the Tesseract engine works best with text in a single line in a square. Passing it the whole receipt image reduces the engine's accuracy. What you need to do is use the iOS Core Image CITextFeature detector to split the text in your receipt into multiple image blocks. Only then can you pass those images to Tesseract for processing.
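If you would rather stay inside OpenCV instead, a rough sketch of the same idea is to merge characters into line-sized blobs with a morphological close and then OCR each line crop separately. This is a generic technique sketched under my own assumptions, not the CITextFeature API:
// Assumes 'gray' is the deskewed grayscale receipt image (CV_8UC1).
cv::Mat bw, connected;
cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);
// Smear the characters horizontally so each text line becomes a single blob.
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(20, 3));
cv::morphologyEx(bw, connected, cv::MORPH_CLOSE, kernel);
std::vector<std::vector<cv::Point>> contours;
cv::findContours(connected, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
for (const auto& c : contours) {
    cv::Rect line = cv::boundingRect(c);
    // crop gray(line) and feed each line image to Tesseract on its own
}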

Face detection of different sizes of faces using the OpenCV haarcascade_frontalface cascade

I'm trying to figure out the best face detection algorithm for me. I've already tried different methods, but the detection isn't working very well. I'm using OpenCV Haar cascades (trying out the different kinds).
My question is: how do I set the size of the face detection so it will detect big faces (close up on the person) and also small faces (zoomed out) with the same code?
When I'm using the following command I'm getting true for face detection even when there aren't faces in the image:
faceDetector.detectMultiScale(image, faceDetections, 1.1, 3, 0 | Objdetect.CASCADE_SCALE_IMAGE,
        new Size(50, 50), new Size());
But if I'm using it with new Size(200, 200), for example, I'm unable to detect faces in pictures where the faces are "small".
Does anyone have an idea how I can make the detection work for both small and big faces without "inventing" faces that aren't there?
The detectMultiScale function has min and max face size parameters. Don't forget to consider your image size before choosing the face size. Try something like this:
double max_face_size_percent = 1;
double min_face_size_percent = 0.15;
double scale_factor = 1.1; // pyramid scale step; must be > 1.0
cv::Size min_face_size = cv::Size(grayImage.size().width * min_face_size_percent, grayImage.size().height * min_face_size_percent);
cv::Size max_face_size = cv::Size(grayImage.size().width * max_face_size_percent, grayImage.size().height * max_face_size_percent);
detector.detectMultiScale(grayImage, detected_faces, scale_factor, 3, 0, min_face_size, max_face_size);
You can choose the biggest face found after that with code like this:
double best_face_metric = 0;
std::vector<cv::Rect>::iterator bestFaceIterator = detected_faces.begin();
for (std::vector<cv::Rect>::iterator it = detected_faces.begin(); it != detected_faces.end(); ++it) {
    double curr_face_metric = it->width + it->height;
    if (curr_face_metric > best_face_metric) {
        best_face_metric = curr_face_metric;
        bestFaceIterator = it;
    }
}
// make sure detected_faces is not empty before dereferencing the iterator
cv::Rect_<double> bestFace(bestFaceIterator->x, bestFaceIterator->y,
                           bestFaceIterator->width, bestFaceIterator->height);
Or implement a solution of your own to choose the correct faces yourself.

How can I detect any object from the video camera in iOS?

I am developing an application to identify real-world objects. Is it possible to identify any object from the video camera? Right now I am using the OpenCV library to identify objects, but it just records video and does not do anything else. I am very weak on AR. Please suggest what I have to do to identify a real-world object, if that is possible.
Step 1: follow this to set up your OpenCV camera.
Step 2: add this code to the processImage delegate:
- (void)processImage:(cv::Mat&)image
{
    // process here
    cv::cvtColor(image, img, cv::COLOR_BGRA2RGB);
    int fixedWidth = 270;
    // note: the interpolation flag is the sixth argument of cv::resize, after fx and fy
    cv::resize(img, img, cv::Size(fixedWidth, (int)((fixedWidth * 1.0f) * (image.rows / (image.cols * 1.0f)))), 0, 0, cv::INTER_NEAREST);
    // update the model
    bg_model->apply(img, fgmask, update_bg_model ? -1 : 0);
    GaussianBlur(fgmask, fgmask, cv::Size(7, 7), 2.5, 2.5);
    threshold(fgmask, fgmask, 10, 255, cv::THRESH_BINARY);
    image = cv::Scalar::all(0);
    img.copyTo(image, fgmask);
}
You'll need to declare the following as global variables:
cv::Mat img, fgmask;
cv::Ptr<cv::BackgroundSubtractor> bg_model;
bool update_bg_model;
Here img is the downscaled frame, fgmask is the foreground mask marking where motion is happening, and update_bg_model should be set to false if you want to keep the background fixed.
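As with the similar answer above, bg_model must be created before the first frame is processed; a one-line sketch, assuming OpenCV 3.x:
// e.g. in viewDidLoad, before starting the camera (default MOG2 parameters)
bg_model = cv::createBackgroundSubtractorMOG2();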
For more detail, go through this link.

How to determine the distance between upper lip and lower lip by using webcam in Processing?

Where should I start? I can see plenty of face recognition and analysis done using Python and JavaScript, but what about Processing?
I want to determine the distance between the upper and lower lip, using two points at their highest and lowest positions via webcam, to use in a further project.
Any help would be appreciated.
If you want to do it in Processing alone you can use Greg Borenstein's OpenCV for Processing library:
You can start with the Face Detection example
Once you detect a face, you can detect a mouth within the face rectangle using OpenCV.CASCADE_MOUTH.
Once you have the mouth detected, maybe you can get away with using the mouth bounding box height alone. For more detail you can use OpenCV to threshold that rectangle. Hopefully the open mouth will segment nicely from the rest of the skin, and finding contours will give you lists of points you can work with.
For something a lot more exact, you can use Jason Saragih's CLM FaceTracker, which is available as an OpenFrameworks addon. OpenFrameworks has similarities to Processing. If you do need this sort of accuracy in Processing, you can run FaceOSC in the background and read the mouth coordinates in Processing using oscP5.
Update
For the first option, using Haar cascade classifiers, it turns out there are a couple of issues:
The OpenCV Processing library can load only one cascade, and a second instance will override the first.
The OpenCV.CASCADE_MOUTH cascade seems to work better for closed mouths, but not very well for open mouths.
To get past the first issue, you can use the OpenCV Java API directly, bypassing OpenCV Processing, for multiple cascade detection.
There are a couple of parameters that can help the detection, such as having an idea of the bounding box of the mouth beforehand to pass as a hint to the classifier.
I've done a basic test using the webcam on my laptop and measured the bounding boxes for the face and mouth at various distances. Here's an example:
import gab.opencv.*;
import org.opencv.core.*;
import org.opencv.objdetect.*;
import processing.video.*;
Capture video;
OpenCV opencv;
CascadeClassifier faceDetector,mouthDetector;
MatOfRect faceDetections,mouthDetections;
//cascade detections parameters - explanations from Mastering OpenCV with Practical Computer Vision Projects
int flags = Objdetect.CASCADE_FIND_BIGGEST_OBJECT;
// Smallest object size.
Size minFeatureSizeFace = new Size(50,60);
Size maxFeatureSizeFace = new Size(125,150);
Size minFeatureSizeMouth = new Size(30,10);
Size maxFeatureSizeMouth = new Size(120,60);
// How detailed should the search be. Must be larger than 1.0.
float searchScaleFactor = 1.1f;
// How much the detections should be filtered out. This should depend on how bad false detections are to your system.
// minNeighbors=2 means lots of good+bad detections, and minNeighbors=6 means only good detections are given but some are missed.
int minNeighbors = 4;
//laptop webcam face rectangle
//far, small scale, ~50,60px
//typing distance, ~83,91px
//really close, ~125,150
//laptop webcam mouth rectangle
//far, small scale, ~30,10
//typing distance, ~50,25px
//really close, ~120,60
int mouthHeightHistory = 30;
int[] mouthHeights = new int[mouthHeightHistory];
void setup() {
  opencv = new OpenCV(this, 320, 240);
  size(opencv.width, opencv.height);
  noFill();
  frameRate(30);
  video = new Capture(this, width, height);
  video.start();
  faceDetector = new CascadeClassifier(dataPath("haarcascade_frontalface_alt2.xml"));
  mouthDetector = new CascadeClassifier(dataPath("haarcascade_mcs_mouth.xml"));
}
void draw() {
  //feed cam image to OpenCV, it turns it to grayscale
  opencv.loadImage(video);
  opencv.equalizeHistogram();
  image(opencv.getOutput(), 0, 0);
  //detect face using raw Java OpenCV API
  Mat equalizedImg = opencv.getGray();
  faceDetections = new MatOfRect();
  faceDetector.detectMultiScale(equalizedImg, faceDetections, searchScaleFactor, minNeighbors, flags, minFeatureSizeFace, maxFeatureSizeFace);
  Rect[] faceDetectionResults = faceDetections.toArray();
  int faces = faceDetectionResults.length;
  text("detected faces: " + faces, 5, 15);
  if (faces >= 1) {
    Rect face = faceDetectionResults[0];
    stroke(0, 192, 0);
    rect(face.x, face.y, face.width, face.height);
    //detect mouth - only within face rectangle, not the whole frame
    Rect faceLower = face.clone();
    faceLower.height = (int) (face.height * 0.65);
    faceLower.y = face.y + faceLower.height;
    Mat faceROI = equalizedImg.submat(faceLower);
    //debug view of ROI
    PImage faceImg = createImage(faceLower.width, faceLower.height, RGB);
    opencv.toPImage(faceROI, faceImg);
    image(faceImg, width - faceImg.width, 0);
    mouthDetections = new MatOfRect();
    mouthDetector.detectMultiScale(faceROI, mouthDetections, searchScaleFactor, minNeighbors, flags, minFeatureSizeMouth, maxFeatureSizeMouth);
    Rect[] mouthDetectionResults = mouthDetections.toArray();
    int mouths = mouthDetectionResults.length;
    text("detected mouths: " + mouths, 5, 25);
    if (mouths >= 1) {
      Rect mouth = mouthDetectionResults[0];
      stroke(192, 0, 0);
      rect(faceLower.x + mouth.x, faceLower.y + mouth.y, mouth.width, mouth.height);
      text("mouth height:" + mouth.height + "~px", 5, 35);
      updateAndPlotMouthHistory(mouth.height);
    }
  }
}
void updateAndPlotMouthHistory(int newHeight) {
  //shift older values by 1
  for (int i = mouthHeightHistory - 1; i > 0; i--) {
    mouthHeights[i] = mouthHeights[i - 1];
  }
  //add new value at the front
  mouthHeights[0] = newHeight;
  //plot
  float graphWidth = 100.0;
  float elementWidth = graphWidth / mouthHeightHistory;
  for (int i = 0; i < mouthHeightHistory; i++) {
    rect(elementWidth * i, 45, elementWidth, mouthHeights[i]);
  }
}
void captureEvent(Capture c) {
  c.read();
}
One very important note to make: I've copied the cascade xml files from the OpenCV Processing library folder (~/Documents/Processing/libraries/opencv_processing/library/cascade-files) to the sketch's data folder. My sketch is OpenCVMouthOpen, so the folder structure looks like this:
OpenCVMouthOpen
├── OpenCVMouthOpen.pde
└── data
├── haarcascade_frontalface_alt.xml
├── haarcascade_frontalface_alt2.xml
├── haarcascade_frontalface_alt_tree.xml
├── haarcascade_frontalface_default.xml
├── haarcascade_mcs_mouth.xml
└── lbpcascade_frontalface.xml
If you don't copy the cascade files and use the code as it is, you won't get any errors, but the detection simply won't work. If you want to check, you can call
println(faceDetector.empty())
at the end of the setup() function: if you get false, the cascade has been loaded, and if you get true, it hasn't.
You may need to play with the minFeatureSize and maxFeatureSize values for face and mouth for your setup. The second issue, the cascade not detecting a wide open mouth very well, is tricky. There might be an already trained cascade for open mouths, but you'd need to find it. Otherwise, with this method you may need to train one yourself, and that can be a bit tedious.
Nevertheless, notice that there is an upside-down plot drawn on the left when a mouth is detected. In my tests I noticed that the height isn't super accurate, but there are noticeable changes in the graph. You may not be able to get a steady mouth height, but by comparing the current height to averaged previous values you should see some peaks (differences going from positive to negative or vice versa) which give you an idea of a mouth open/close change.
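One hedged way to turn that noisy height signal into open/close events, sketched in C++ for brevity (the same few lines port directly into the Processing sketch above; the names and cutoff values are mine):
#include <cmath>

// Signals a state change when the new height deviates from a running average.
bool mouthStateChanged(int newHeight, double& runningAvg, double alpha = 0.2, int delta = 8)
{
    bool changed = std::abs(newHeight - runningAvg) > delta;     // delta is an arbitrary cutoff
    runningAvg = alpha * newHeight + (1.0 - alpha) * runningAvg; // exponential moving average
    return changed;
}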
Although searching the whole image for a mouth, as opposed to a face only, can be a bit slower and less accurate, it's a simpler setup. If you can get away with less accuracy and more false positives in your project, this could be simpler:
import gab.opencv.*;
import java.awt.Rectangle;
import org.opencv.objdetect.Objdetect;
import processing.video.*;
Capture video;
OpenCV opencv;
Rectangle[] faces,mouths;
//cascade detections parameters - explanations from Mastering OpenCV with Practical Computer Vision Projects
int flags = Objdetect.CASCADE_FIND_BIGGEST_OBJECT;
// Smallest object size.
int minFeatureSize = 20;
int maxFeatureSize = 150;
// How detailed should the search be. Must be larger than 1.0.
float searchScaleFactor = 1.1f;
// How much the detections should be filtered out. This should depend on how bad false detections are to your system.
// minNeighbors=2 means lots of good+bad detections, and minNeighbors=6 means only good detections are given but some are missed.
int minNeighbors = 6;
void setup() {
  size(320, 240);
  noFill();
  stroke(0, 192, 0);
  strokeWeight(3);
  video = new Capture(this, width, height);
  video.start();
  opencv = new OpenCV(this, 320, 240);
  opencv.loadCascade(OpenCV.CASCADE_MOUTH);
}
void draw() {
  //feed cam image to OpenCV, it turns it to grayscale
  opencv.loadImage(video);
  opencv.equalizeHistogram();
  image(opencv.getOutput(), 0, 0);
  Rectangle[] mouths = opencv.detect(searchScaleFactor, minNeighbors, flags, minFeatureSize, maxFeatureSize);
  for (int i = 0; i < mouths.length; i++) {
    text(mouths[i].x + "," + mouths[i].y + "," + mouths[i].width + "," + mouths[i].height, mouths[i].x, mouths[i].y);
    rect(mouths[i].x, mouths[i].y, mouths[i].width, mouths[i].height);
  }
}
void captureEvent(Capture c) {
  c.read();
}
I was mentioning segmenting/thresholding as well. Here's a rough example using the lower part of a detected face: just a basic threshold, then some basic morphological filters (erode/dilate) to clean up the thresholded image a bit:
import gab.opencv.*;
import org.opencv.core.*;
import org.opencv.objdetect.*;
import org.opencv.imgproc.Imgproc;
import java.awt.Rectangle;
import java.util.*;
import processing.video.*;
Capture video;
OpenCV opencv;
CascadeClassifier faceDetector,mouthDetector;
MatOfRect faceDetections,mouthDetections;
//cascade detections parameters - explanations from Mastering OpenCV with Practical Computer Vision Projects
int flags = Objdetect.CASCADE_FIND_BIGGEST_OBJECT;
// Smallest object size.
Size minFeatureSizeFace = new Size(50,60);
Size maxFeatureSizeFace = new Size(125,150);
// How detailed should the search be. Must be larger than 1.0.
float searchScaleFactor = 1.1f;
// How much the detections should be filtered out. This should depend on how bad false detections are to your system.
// minNeighbors=2 means lots of good+bad detections, and minNeighbors=6 means only good detections are given but some are missed.
int minNeighbors = 4;
//laptop webcam face rectangle
//far, small scale, ~50,60px
//typing distance, ~83,91px
//really close, ~125,150
float threshold = 160;
int erodeAmt = 1;
int dilateAmt = 5;
void setup() {
  opencv = new OpenCV(this, 320, 240);
  size(opencv.width, opencv.height);
  noFill();
  video = new Capture(this, width, height);
  video.start();
  faceDetector = new CascadeClassifier(dataPath("haarcascade_frontalface_alt2.xml"));
  mouthDetector = new CascadeClassifier(dataPath("haarcascade_mcs_mouth.xml"));
}
void draw() {
  //feed cam image to OpenCV, it turns it to grayscale
  opencv.loadImage(video);
  opencv.equalizeHistogram();
  image(opencv.getOutput(), 0, 0);
  //detect face using raw Java OpenCV API
  Mat equalizedImg = opencv.getGray();
  faceDetections = new MatOfRect();
  faceDetector.detectMultiScale(equalizedImg, faceDetections, searchScaleFactor, minNeighbors, flags, minFeatureSizeFace, maxFeatureSizeFace);
  Rect[] faceDetectionResults = faceDetections.toArray();
  int faces = faceDetectionResults.length;
  text("detected faces: " + faces, 5, 15);
  if (faces > 0) {
    Rect face = faceDetectionResults[0];
    stroke(0, 192, 0);
    rect(face.x, face.y, face.width, face.height);
    //detect mouth - only within face rectangle, not the whole frame
    Rect faceLower = face.clone();
    faceLower.height = (int) (face.height * 0.55);
    faceLower.y = face.y + faceLower.height;
    //submat grabs a portion of the image (submatrix) = our region of interest (ROI)
    Mat faceROI = equalizedImg.submat(faceLower);
    Mat faceROIThresh = faceROI.clone();
    //threshold - 255 is the value written to pixels that pass THRESH_BINARY_INV
    Imgproc.threshold(faceROI, faceROIThresh, threshold, 255, Imgproc.THRESH_BINARY_INV);
    Imgproc.erode(faceROIThresh, faceROIThresh, new Mat(), new Point(-1, -1), erodeAmt);
    Imgproc.dilate(faceROIThresh, faceROIThresh, new Mat(), new Point(-1, -1), dilateAmt);
    //find contours
    Mat faceContours = faceROIThresh.clone();
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Imgproc.findContours(faceContours, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
    //draw contours
    for (int i = 0; i < contours.size(); i++) {
      MatOfPoint contour = contours.get(i);
      Point[] points = contour.toArray();
      stroke(map(i, 0, contours.size() - 1, 32, 255), 0, 0);
      beginShape();
      for (Point p : points) {
        vertex((float) p.x, (float) p.y);
      }
      endShape();
    }
    //debug view of ROI
    PImage faceImg = createImage(faceLower.width, faceLower.height, RGB);
    opencv.toPImage(faceROIThresh, faceImg);
    image(faceImg, width - faceImg.width, 0);
  }
  text("Drag mouseX to control threshold: " + threshold +
    "\nHold 'e' and drag mouseX to control erodeAmt: " + erodeAmt +
    "\nHold 'd' and drag mouseX to control dilateAmt: " + dilateAmt, 5, 210);
}
void mouseDragged() {
  if (keyPressed) {
    if (key == 'e') erodeAmt = (int) map(mouseX, 0, width, 1, 6);
    if (key == 'd') dilateAmt = (int) map(mouseX, 0, width, 1, 10);
  } else {
    threshold = mouseX;
  }
}
void captureEvent(Capture c) {
  c.read();
}
This could be improved a bit by using the YCrCb colour space to segment skin better, but overall you'll notice that there are quite a few variables to get right, which doesn't make this a very flexible setup.
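For reference, the YCrCb idea is just a range threshold on the Cr and Cb channels; a minimal C++ sketch (frameBGR stands for the current colour frame; the bounds below are commonly quoted rules of thumb, not tuned values):
cv::Mat ycrcb, skinMask;
cv::cvtColor(frameBGR, ycrcb, cv::COLOR_BGR2YCrCb);
// A commonly cited skin range: Cr in [133,173], Cb in [77,127].
cv::inRange(ycrcb, cv::Scalar(0, 133, 77), cv::Scalar(255, 173, 127), skinMask);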
You will get much better results using FaceOSC and reading the values you need in Processing via oscP5. Here is a slightly simplified version of the FaceOSCReceiver Processing example, focusing mainly on the mouth:
import oscP5.*;
OscP5 oscP5;
// num faces found
int found;
// pose
float poseScale;
PVector posePosition = new PVector();
// gesture
float mouthHeight;
float mouthWidth;
void setup() {
  size(640, 480);
  frameRate(30);
  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "poseScale", "/pose/scale");
  oscP5.plug(this, "posePosition", "/pose/position");
  oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
}
void draw() {
  background(255);
  stroke(0);
  if (found > 0) {
    translate(posePosition.x, posePosition.y);
    scale(poseScale);
    noFill();
    ellipse(0, 20, mouthWidth * 3, mouthHeight * 3);
  }
}
// OSC CALLBACK FUNCTIONS
public void found(int i) {
  println("found: " + i);
  found = i;
}
public void poseScale(float s) {
  println("scale: " + s);
  poseScale = s;
}
public void posePosition(float x, float y) {
  println("pose position\tX: " + x + " Y: " + y);
  posePosition.set(x, y, 0);
}
public void mouthWidthReceived(float w) {
  println("mouth Width: " + w);
  mouthWidth = w;
}
public void mouthHeightReceived(float h) {
  println("mouth height: " + h);
  mouthHeight = h;
}
// all other OSC messages end up here
void oscEvent(OscMessage m) {
  if (m.isPlugged() == false) {
    println("UNPLUGGED: " + m);
  }
}
On OSX you can simply download the compiled FaceOSC app.
On other operating systems you may need to setup OpenFrameworks, download ofxFaceTracker and compile FaceOSC yourself.
It's really hard to answer general "how do I do this" type questions. Stack Overflow is designed for specific "I tried X, expected Y, but got Z instead" type questions. But I'll try to answer in a general sense:
You need to break your problem down into smaller pieces.
Step 1: Can you get a webcam feed showing in your sketch? Don't worry about the computer vision stuff for a second. Just get the camera connected. Do some research and try something out.
Step 2: Can you detect facial features in that video? You might try doing it yourself, or you might use one of the many libraries listed in the Videos and Vision section of the Processing libraries page.
Step 3: Read the documentation on those libraries. Try them out. You might have to make a bunch of little example sketches using each library until you find one you like. We can't do this for you, as which one is right for you depends on you. If you're confused about something specific we can try to help you, but we can't really help you with picking out a library.
Step 4: Once you've done a bunch of example programs and picked out a library, start working towards your goal. Can you detect facial features using the library? Get just that part working. Once you have that working, can you detect changes like opening or closing a mouth?
Work on one small step at a time. If you get stuck, post an MCVE along with a specific technical question, and we'll go from there. Good luck.

Flicker removal using OpenCV?

I am a newbie to OpenCV. I have installed the OpenCV library on an Ubuntu system, compiled it, and am trying to look into some image/video processing apps in OpenCV to understand more.
I am interested to know if the OpenCV library has any algorithm/class for removing flicker in captured videos. If yes, what documentation or code should I look deeper into?
If OpenCV does not have it, are there any standard implementations in some other video processing library/SDK/Matlab, etc. which provide algorithms for flicker removal from video sequences?
Any pointers would be useful.
Thank you.
-AD.
I don't know of any standard way to deflicker a video.
But VirtualDub is video processing software that has a filter for deflickering video. You can find its filter source and documentation (probably including an algorithm description) here.
I wrote my own deflicker C++ function; here it is. You can cut and paste this code as is, with no headers needed other than the usual OpenCV ones.
Mat deflicker(Mat, int strengthcutoff = 20);
Mat prevdeflicker;

// deflicker - compares each pixel of the frame to the previously stored frame,
// and throttles small changes in pixels (flicker)
Mat deflicker(Mat Mat1, int strengthcutoff) {
    if (prevdeflicker.rows) { // check if we stored a previous frame; if not, there's nothing we can do: clone and exit
        int i, j;
        uchar* p;
        uchar* prevp;
        for (i = 0; i < Mat1.rows; ++i) {
            p = Mat1.ptr<uchar>(i);
            prevp = prevdeflicker.ptr<uchar>(i);
            for (j = 0; j < Mat1.cols; ++j) {
                Scalar previntensity = prevp[j];
                Scalar intensity = p[j];
                int strength = abs(intensity.val[0] - previntensity.val[0]);
                if (strength < strengthcutoff) {
                    // the strength of the stimulus must be greater than a certain point,
                    // else we do not want to allow the change.
                    // A value of 25 works well for medium+ light; anything higher creates too much blur around moving objects.
                    // In low light, however, this makes it worse, since low light seems to increase contrast in flicker -
                    // some flickers go from 0 to 255 and back. :(
                    // I need to write a way to track large group movements vs small pixels, and only filter out the small pixel stuff. Maybe blur first?
                    if (intensity.val[0] > previntensity.val[0]) {
                        // use the previous frame's value, changed by +1 - slow enough to not be noticeable flicker
                        p[j] = previntensity.val[0] + 1;
                    } else {
                        p[j] = previntensity.val[0] - 1;
                    }
                }
            }
        }
    }
    prevdeflicker = Mat1.clone(); // clone the current frame as the old one
    return Mat1;
}
Call it as src_grey = deflicker(src_grey). It needs a loop and a greyscale image, like so:
for (;;) {
    cap >> frame; // get a new frame from the camera
    cvtColor(frame, src_grey, CV_RGB2GRAY); // convert to greyscale - simplifies everything
    src_grey = deflicker(src_grey); // this is the function call
    imshow("grey video", src_grey);
    if (waitKey(30) >= 0) break;
}
