How Can I Detect Any Object from a Video Camera in iOS? - ios

I am developing an application to identify real-world objects. Is it possible to identify any object from the video camera? At the moment I am using the OpenCV library to identify objects, but it just records video and does not do anything else. I know very little about AR. Please suggest what I have to do to identify a real-world object, if that is possible.

Step 1: follow this to set up your OpenCV camera
Step 2: add this code to the processImage delegate
- (void)processImage:(cv::Mat&)image
{
    // Convert the BGRA camera frame to RGB
    cv::cvtColor(image, img, cv::COLOR_BGRA2RGB);

    // Downscale to a fixed width, preserving the aspect ratio
    int fixedWidth = 270;
    cv::resize(img, img, cv::Size(fixedWidth, (int)((fixedWidth * 1.0f) * (image.rows / (image.cols * 1.0f)))), 0, 0, cv::INTER_NEAREST);

    // Update the background model and get the foreground mask
    bg_model->apply(img, fgmask, update_bg_model ? -1 : 0);

    // Smooth and binarize the mask to suppress noise
    GaussianBlur(fgmask, fgmask, cv::Size(7, 7), 2.5, 2.5);
    threshold(fgmask, fgmask, 10, 255, cv::THRESH_BINARY);

    // Black out the output frame, then copy in only the moving regions
    image = cv::Scalar::all(0);
    img.copyTo(image, fgmask);
}
You'll need to declare the following as global variables:
cv::Mat img, fgmask;
cv::Ptr<cv::BackgroundSubtractor> bg_model;
bool update_bg_model;
Here, img is the downscaled image, fgmask is the mask marking where motion is happening, and update_bg_model controls whether the background model keeps updating (set it to false if you want to keep your background fixed).
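One piece the snippet leaves implicit is creating bg_model itself. A minimal sketch, assuming the OpenCV 3.x API (where apply() and createBackgroundSubtractorMOG2() exist); the parameter values are only illustrative, and this would go somewhere like the end of viewDidLoad, before starting the camera:

// Assumption: OpenCV 3.x. history, varThreshold and detectShadows are the
// documented MOG2 parameters; these values are illustrative defaults.
bg_model = cv::createBackgroundSubtractorMOG2(500, 16.0, false);
update_bg_model = true; // keep refining the background every frame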
For more detail, go through this link.

Related

Improving Tesseract OCR Quality Fails

I am currently using Tesseract to scan receipts. The quality wasn't good, so I read this article on how to improve it: https://github.com/tesseract-ocr/tesseract/wiki/ImproveQuality#noise-removal. I implemented resizing, deskewing (aligning), and Gaussian blur, but none of them seem to have a positive effect on the accuracy of the OCR except the deskewing. Here is my code for resizing and Gaussian blur. Am I doing anything wrong? If not, what else can I do to help?
Code:
+ (UIImage *)prepareImage:(UIImage *)image {
    // Converts UIImage to Mat format
    Mat im = cvMatWithImage(image);

    // Grayscale image
    Mat gray;
    cvtColor(im, gray, CV_BGR2GRAY);

    // Deskews text
    // (did not provide code because I know it works)
    Mat preprocessed = preprocess2(gray);
    double skew = hough_transform(preprocessed, im);
    Mat rotated = rot(im, skew * CV_PI / 180);

    // Resize image
    Mat scaledImage = scaleImage(rotated, 2);

    // Gaussian blur
    GaussianBlur(scaledImage, scaledImage, cv::Size(1, 1), 0, 0);

    return UIImageFromCVMat(scaledImage);
}

// Organization -> Resizing
Mat scaleImage(Mat mat, double factor) {
    Mat resizedMat;
    double width = mat.cols;
    double height = mat.rows;
    double aspectRatio = width / height;
    resize(mat, resizedMat, cv::Size(width * factor * aspectRatio, height * factor * aspectRatio));
    return resizedMat;
}
Receipt:
If you read the Tesseract documentation you will see that the Tesseract engine works best with text in a single line inside a rectangular block. Passing it the whole receipt image reduces the engine's accuracy. What you need to do is use the iOS Core Image text detector (CIDetector with CITextFeature) to split the text in your receipt into multiple image blocks. Only then can you pass those images to Tesseract for processing.
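As a rough sketch of that detection step (the helper name textBlocksInImage: is mine; this assumes iOS 9+, where CIDetectorTypeText and CITextFeature are available in Core Image):

// Hypothetical helper, not from the answer above. Requires
// <CoreImage/CoreImage.h> and <UIKit/UIKit.h>. Returns one cropped CIImage
// per detected text block, each of which can be handed to Tesseract separately.
+ (NSArray<CIImage *> *)textBlocksInImage:(UIImage *)image
{
    CIImage *ciImage = [[CIImage alloc] initWithCGImage:image.CGImage];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeText
                                              context:nil
                                              options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
    NSMutableArray<CIImage *> *blocks = [NSMutableArray array];
    for (CITextFeature *feature in [detector featuresInImage:ciImage]) {
        // feature.bounds is in Core Image coordinates (origin at bottom-left)
        [blocks addObject:[ciImage imageByCroppingToRect:feature.bounds]];
    }
    return blocks;
}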

OpenCV HSV is so noisy

Good day,
I am trying to filter video by thresholding some colors in a specified range, but even while the recorded image is still and unchanged, the HSV-filtered image looks shaky and unstable.
This shaking or instability causes a lot of problems in my processing.
Is there any way I can filter the image in a stable way?
This is a sample of my filter code (part of the code):
while (1)
{
    // First frame read
    cap.read(origonal1);
    morphOps(origonal1);
    cvtColor(origonal1, HSV1, COLOR_BGR2HSV);
    inRange(HSV1, Scalar(0, 129, 173), Scalar(26, 212, 255), thresholdImage1);
    waitKey(36);

    // Second frame read, converted to HSV
    cap.read(origonal2);
    morphOps(origonal2);
    cvtColor(origonal2, HSV2, COLOR_BGR2HSV);
    inRange(HSV2, Scalar(28, 89, 87), Scalar(93, 255, 255), thresholdImage2);
    morphOps(thresholdImage1);
    morphOps(thresholdImage2);

    // Create a mask so that I only detect motion in certain color ranges
    // and don't care about motion in other colors
    maskImage = thresholdImage1 | thresholdImage2;

    // Take the difference between the images
    absdiff(thresholdImage1, thresholdImage2, imageDifference);
    imageDifference = imageDifference & maskImage;
    morphOps(imageDifference);
    imshow("threshold Image", imageDifference);

    // Search for movement, then update the original image
    searchForMovement(thresholdImage1, origonal1);
    imshow("origonal", origonal1);
    imshow("HSV", HSV1);
    imshow("threshold1", thresholdImage1);
    imshow("threshold2", thresholdImage2);

    // Wait for a while to give the processor a break
    //waitKey(1000);
}
Thanks in advance.
Try this function:
fastNlMeansDenoisingColored( frame, frame_result, 3, 3, 7, 21 );
It's quite slow, but good for experimenting.
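To see the effect in isolation, here is a minimal, self-contained sketch (the camera index, colour range, and waitKey delay are illustrative, not from the answer) that denoises each frame before the HSV threshold, so inRange() sees smoother pixel values and the mask flickers less:

#include <opencv2/opencv.hpp>
#include <opencv2/photo.hpp> // fastNlMeansDenoisingColored

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat frame, denoised, hsv, mask;
    while (cap.read(frame))
    {
        // Denoise before converting to HSV; this is the slow step.
        cv::fastNlMeansDenoisingColored(frame, denoised, 3, 3, 7, 21);
        cv::cvtColor(denoised, hsv, cv::COLOR_BGR2HSV);
        // Range taken from the question's first filter; adjust as needed.
        cv::inRange(hsv, cv::Scalar(0, 129, 173), cv::Scalar(26, 212, 255), mask);
        cv::imshow("mask", mask);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}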

How to detect a moving object in a live video camera using OpenCV

I am using OpenCV in my iOS application to detect moving objects in the live camera feed, but I am not familiar with OpenCV. Please help me; any other way to do this is also welcome.
- (void)viewDidLoad {
    [super viewDidLoad];
    // Create the camera first, then set its delegate
    self.videoCamera = [[CvVideoCamera alloc] initWithParentView:self.view];
    self.videoCamera.delegate = self;
    self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack;
    self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset352x288;
    self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
    self.videoCamera.defaultFPS = 30;
    self.videoCamera.grayscaleMode = NO;
    [self.videoCamera start];
    // cv::BackgroundSubtractorMOG2 bg;
}
- (void)processImage:(cv::Mat&)image
{
    //process here
    cv::cvtColor(image, img, cv::COLOR_BGRA2RGB);
    int fixedWidth = 270;
    cv::resize(img, img, cv::Size(fixedWidth, (int)((fixedWidth * 1.0f) * (image.rows / (image.cols * 1.0f)))), cv::INTER_NEAREST);
    //update the model
    bg_model->operator()(img, fgmask, update_bg_model ? -1 : 0);
    GaussianBlur(fgmask, fgmask, cv::Size(7, 7), 2.5, 2.5);
    threshold(fgmask, fgmask, 10, 255, cv::THRESH_BINARY);
    image = cv::Scalar::all(0);
    img.copyTo(image, fgmask);
}
I am new to OpenCV, so I don't know what to do.
Add this code to the processImage delegate:
- (void)processImage:(cv::Mat&)image
{
    // Convert the BGRA camera frame to RGB
    cv::cvtColor(image, img, cv::COLOR_BGRA2RGB);

    // Downscale to a fixed width, preserving the aspect ratio
    int fixedWidth = 270;
    cv::resize(img, img, cv::Size(fixedWidth, (int)((fixedWidth * 1.0f) * (image.rows / (image.cols * 1.0f)))), 0, 0, cv::INTER_NEAREST);

    // Update the background model and get the foreground mask
    bg_model->apply(img, fgmask, update_bg_model ? -1 : 0);

    // Smooth and binarize the mask to suppress noise
    GaussianBlur(fgmask, fgmask, cv::Size(7, 7), 2.5, 2.5);
    threshold(fgmask, fgmask, 10, 255, cv::THRESH_BINARY);

    // Black out the output frame, then copy in only the moving regions
    image = cv::Scalar::all(0);
    img.copyTo(image, fgmask);
}
You'll need to declare the following as global variables:
cv::Mat img, fgmask;
cv::Ptr<cv::BackgroundSubtractor> bg_model;
bool update_bg_model;
Here, img is the downscaled image, fgmask is the mask marking where motion is happening, and update_bg_model controls whether the background model keeps updating (set it to false if you want to keep your background fixed).
Please refer to this URL for further information.
I am assuming that you want something similar in nature to this demonstration: https://www.youtube.com/watch?v=UFIVCDDnrmM
This detects motion and removes the background of an image.
Any time you want to detect motion, you usually must start by determining what is moving and what is still. This task is completed by taking several frames and comparing them to determine whether sections are different enough to be regarded as not being in the background (generally above a certain consistency threshold).
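As a rough illustration of that comparison step, here is a minimal sketch using plain frame differencing (not the linked demo's code; the threshold of 25 is arbitrary):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat frame, gray, prev, diff, mask;
    cap.read(frame);
    cv::cvtColor(frame, prev, cv::COLOR_BGR2GRAY);
    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        // Pixels that changed noticeably between consecutive frames
        // are treated as motion; everything else is background.
        cv::absdiff(gray, prev, diff);
        cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);
        cv::imshow("motion mask", mask);
        gray.copyTo(prev);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}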
For finding the background in an image, check out: http://docs.opencv.org/master/d1/dc5/tutorial_background_subtraction.html
Afterwards you will need an implementation (potentially an edge detector like Canny) to be able to recognize the object you want and track its motion (determining velocity, direction, etc.). This will be specific to your use case, and there isn't enough information in the original question to say specifically what you need here.
Another really good resource to look at would be: http://www.intorobotics.com/how-to-detect-and-track-object-with-opencv/
This even has a section dedicated to utilizing mobile devices with OpenCV. (The scenario they target is robotics, one of the largest applications of computer vision, but you should be able to use it for what you need, albeit with a few modifications.)
Please let me know if this helps, and if I can do anything else to assist you.

Multiple Face Detection

I have code in OpenCV (in C++) which uses "haarcascade_mcs_upperbody.xml" to detect the upper body.
It detects a single upper body. How can I make it detect multiple upper bodies?
I think CV_HAAR_FIND_BIGGEST_OBJECT is causing only the biggest object to be detected, but I do not know how to solve this issue.
The code goes like this:
int main(int argc, const char** argv)
{
    CascadeClassifier body_cascade;
    body_cascade.load("haarcascade_mcs_upperbody.xml");

    VideoCapture captureDevice;
    captureDevice.open(0);

    Mat captureFrame;
    Mat grayscaleFrame;

    namedWindow("outputCapture", 1);

    //create a loop to capture and find faces
    while(true)
    {
        //capture a new image frame
        captureDevice >> captureFrame;

        //convert captured image to gray scale and equalize
        cvtColor(captureFrame, grayscaleFrame, CV_BGR2GRAY);
        equalizeHist(grayscaleFrame, grayscaleFrame);

        //create a vector array to store the faces found
        std::vector<Rect> bodies;

        //find faces and store them in the vector array
        body_cascade.detectMultiScale(grayscaleFrame, faces, 1.1, 3,
            CV_HAAR_FIND_BIGGEST_OBJECT|CV_HAAR_SCALE_IMAGE, Size(30,30));

        //draw a rectangle for all found faces in the vector array on the original image
        for(int i = 0; i < faces.size(); i++)
        {
            Point pt1(bodies[i].x + bodies[i].width, bodies[i].y + bodies[i].height);
            Point pt2(bodies[i].x, bodies[i].y);
            rectangle(captureFrame, pt1, pt2, cvScalar(0, 255, 0, 0), 1, 8, 0);
        }

        //print the output
        imshow("outputCapture", captureFrame);

        //pause for 33ms
        waitKey(33);
    }
    return 0;
}
It seems there is some inconsistency in your code, since the faces vector you pass to detectMultiScale is not defined anywhere; I assume it is meant to be the std::vector<Rect> bodies you declared.
detectMultiScale stores all detected objects in the faces vector. Are you sure it contains only one object?
Try removing the CV_HAAR_FIND_BIGGEST_OBJECT flag, because you want all objects to be detected, and not only the biggest one.
Also, make sure you set the minSize and maxSize parameters correctly (see documentation), since those parameters determine the minimal and maximal detectable object sizes.
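Putting those points together, the detection call might look like this (a sketch against the OpenCV 2.x C++ API used in the question; the helper function name is mine):

#include <opencv2/opencv.hpp>
using namespace cv;

// Detect every upper body in a frame, not just the biggest one:
// CV_HAAR_FIND_BIGGEST_OBJECT is dropped from the flags, and minSize
// (here 30x30) sets the smallest detection that will be reported.
void detectAllUpperBodies(CascadeClassifier& body_cascade,
                          Mat& grayscaleFrame, Mat& captureFrame)
{
    std::vector<Rect> bodies;
    body_cascade.detectMultiScale(grayscaleFrame, bodies, 1.1, 3,
                                  CV_HAAR_SCALE_IMAGE, Size(30, 30));
    for (size_t i = 0; i < bodies.size(); i++)
        rectangle(captureFrame, bodies[i], Scalar(0, 255, 0), 1, 8, 0);
}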

OpenCV errors for iOS / detecting Hough Circles

I have been trying for hours to run an Xcode project with OpenCV. I have built the source, imported it into the project, and included
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
in the .pch file.
I followed the instructions from http://docs.opencv.org/trunk/doc/tutorials/introduction/ios_install/ios_install.html
Still I am getting many Apple Mach-O linker errors when I compile.
Undefined symbols for architecture i386:
"std::__1::__vector_base_common<true>::__throw_length_error() const", referenced from:
Please help me, I am really lost.
UPDATE:
The errors are all fixed, and now I am trying to detect circles.
Mat src, src_gray;
cvtColor( image, src_gray, CV_BGR2GRAY );

vector<Vec3f> circles;

/// Apply the Hough Transform to find the circles
HoughCircles( src_gray, circles, CV_HOUGH_GRADIENT, 1, image.rows/8, 200, 100, 0, 0 );

/// Draw the circles detected
for( size_t i = 0; i < circles.size(); i++ )
{
    Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    int radius = cvRound(circles[i][2]);
    // circle center
    circle( src, center, 3, Scalar(0,255,0), -1, 8, 0 );
    // circle outline
    circle( src, center, radius, Scalar(0,0,255), 3, 8, 0 );
}
I am using the code above; however, no circles are being drawn on the image. Is there something obvious that I am doing wrong?
Try the solution in my answer to this question...
How to resolve iOS Link errors with OpenCV
Also on GitHub I have a couple of simple working samples, with a recently built OpenCV framework.
NB - OpenCVSquares is simpler than OpenCVSquaresSL. The latter was adapted for Snow Leopard backwards compatibility - it contains two builds of the OpenCV framework and three targets, so you are better off using the simpler OpenCVSquares if it will run on your system.
To adapt OpenCVSquares to detect circles, I suggest that you start with the Hough Circles C++ sample from the OpenCV distro and use it to adapt/replace CVSquares.cpp and CVSquares.h with, say, CVCircles.cpp and CVCircles.h.
The principles are exactly the same:
remove the UI code from the C++; the UI is provided on the Obj-C side
transform the main() function into a static member function of the class declared in the header file. This should mirror in form an Objective-C message to the wrapper (which translates the Obj-C method into a C++ function call).
From the Objective-C side, you pass a UIImage to the wrapper object, which:
converts the UIImage to a cv::Mat image
passes the Mat to a C++ class for processing
converts the result from Mat back to UIImage
returns the processed UIImage back to the Objective-C calling object
update
The adapted houghcircles.cpp should look something like this at its most basic (I've replaced the CVSquares class with a CVCircles class):
cv::Mat CVCircles::detectedCirclesInImage(cv::Mat img)
{
    //expects a grayscale image on input
    //returns a colour image on output
    Mat cimg;
    medianBlur(img, img, 5);
    cvtColor(img, cimg, CV_GRAY2RGB);

    vector<Vec3f> circles;
    HoughCircles(img, circles, CV_HOUGH_GRADIENT, 1, 10,
                 100, 30, 1, 60 // change the last two parameters
                                // (min_radius & max_radius) to detect larger circles
                 );

    for( size_t i = 0; i < circles.size(); i++ )
    {
        Vec3i c = circles[i];
        circle( cimg, Point(c[0], c[1]), c[2], Scalar(255,0,0), 3, CV_AA);
        circle( cimg, Point(c[0], c[1]), 2, Scalar(0,255,0), 3, CV_AA);
    }
    return cimg;
}
Note that the input parameters are reduced to one - the input image - for simplicity. Shortly I will post a sample on GitHub which will include some parameters tied to slider controls in the iOS UI, but you should get this version working first.
As the function signature has changed, you should follow it up the chain...
Alter the houghcircles.h class definition:
static cv::Mat detectedCirclesInImage (const cv::Mat image);
Modify the CVWrapper class to accept a similarly-structured method which calls detectedCirclesInImage
+ (UIImage*) detectedCirclesInImage:(UIImage*) image
{
    UIImage* result = nil;
    cv::Mat matImage = [image CVGrayscaleMat];
    matImage = CVCircles::detectedCirclesInImage(matImage);
    result = [UIImage imageWithCVMat:matImage];
    return result;
}
Note that we are converting the input UIImage to grayscale, as the houghcircles function expects a grayscale image on input. Take care to pull the latest version of my GitHub project; I found an error in the CVGrayscaleMat category which is now fixed. The output image is colour (colour applied to the grayscale input image to pick out the found circles).
If you want your input and output images in colour, you just need to ensure that you make a grayscale conversion of your input image for sending to HoughCircles() - e.g. cvtColor(input_image, gray_image, CV_RGB2GRAY); - and apply your found circles to the colour input image (which becomes your return image).
Finally in your CVViewController, change your messages to CVWrapper to conform to this new signature:
UIImage* image = [CVWrapper detectedCirclesInImage:self.image];
If you follow all of these details your project will produce circle-detected results.
update 2
OpenCVCircles is now on GitHub, with sliders to adjust the HoughCircles() parameters.
