Image Processing for Augmented Reality

I need some help with Augmented Reality.
I have developed a small application. Now I want to add a shape detection algorithm, specifically circle detection. Once the camera opens, it should detect only circles, and each detected circle should be replaced with a corresponding image.
I hope you understand what I want to do.

To add shape detection for circles, you can consider using circle detection with the Hough Transform from OpenCV. Taken from the OpenCV tutorial website, here is a snippet:
// Load an image
cv::Mat src = cv::imread( filename, cv::IMREAD_COLOR );
cv::Mat gray;
cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);
cv::medianBlur(gray, gray, 5);
std::vector<cv::Vec3f> circles;
cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT, 1,
                 gray.rows/16, // change this value to detect circles with different distances to each other
                 100, 30, 1, 30 // change the last two parameters (min_radius & max_radius) to detect larger circles
);
for( size_t i = 0; i < circles.size(); i++ )
{
    cv::Vec3i c = circles[i];
    cv::Point center = cv::Point(c[0], c[1]);
    // circle center
    cv::circle( src, center, 1, cv::Scalar(0,100,100), 3, cv::LINE_AA);
    // circle outline
    int radius = c[2];
    cv::circle( src, center, radius, cv::Scalar(255,0,255), 3, cv::LINE_AA);
}
OpenCV can do the task you describe and works well in AR applications.
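The question also asks for each detected circle to be replaced with a corresponding image. The snippet above only detects and draws the circles, so here is a minimal sketch of one way the replacement step could look, continuing from that snippet; overlay.png is a placeholder file name, and the masked-copy approach is just an illustration, not the only option:
cv::Mat overlay = cv::imread("overlay.png", cv::IMREAD_COLOR); // placeholder overlay image
for( size_t i = 0; i < circles.size(); i++ )
{
    cv::Vec3i c = circles[i];
    int radius = c[2];
    cv::Rect box(c[0] - radius, c[1] - radius, 2 * radius, 2 * radius);
    // skip circles whose bounding box leaves the frame
    if ((box & cv::Rect(0, 0, src.cols, src.rows)) != box)
        continue;
    // scale the overlay to the circle's bounding box and copy it in through
    // a circular mask, so only the round region is replaced
    cv::Mat scaled, mask = cv::Mat::zeros(box.size(), CV_8U);
    cv::resize(overlay, scaled, box.size());
    cv::circle(mask, cv::Point(radius, radius), radius, cv::Scalar(255), cv::FILLED);
    scaled.copyTo(src(box), mask);
}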

Related

How to improve Hough Circle Transform to detect a circle made up of scattered points

I have some very basic code that uses the standard HoughCircles command in OpenCV to detect a circle. However, my problem is that my data (images) are generated by an algorithm (for the purpose of data simulation) that, for all 360 degrees, plots points using the equation of a circle at a random offset of ±15% of r, where r is the circle's radius, itself randomly generated as a real number between 5 and 10. (A sample image is attached.)
http://imgur.com/a/iIZ1N
Now, using the Hough circle command, I was able to detect a circle of approximately the same radius by manually playing around with the parameters (by setting up trackbars, inspired by a GitHub project of the same nature), but I want to automate the process, as I have over 1000 images to run this on. Is there a better way to do that? I would highly appreciate any suggestions, as I am a beginner in the field of image processing and have a physics background rather than a CS one.
A rough sample of my code (without trackbars etc.) is below:
Mat img = imread("C:\\Users\\walee\\Documents\\MATLAB\\plot_2 .jpeg", 0);
Mat cimg,copy;
copy = img;
medianBlur(img, img, 5);
GaussianBlur(img, img, Size(1, 5), 1.1, 0);
cvtColor(img, cimg, COLOR_GRAY2BGR);
vector<Vec3f> circles;
HoughCircles(img, circles, HOUGH_GRADIENT,1, 10, 94, 57, 120, 250);
for (size_t i = 0; i < circles.size(); i++)
{
    Vec3i c = circles[i];
    circle(cimg, Point(c[0], c[1]), c[2], Scalar(0, 0, 255), 1, LINE_AA);
    circle(cimg, Point(c[0], c[1]), 2, Scalar(0, 255, 0), 1, LINE_AA);
}
imshow("detected circles", cimg);
waitKey();
return 0;
If all images have the same nature (black axes and points forming circles), I would suggest the following (a rough sketch of these steps is given after the list):
1) remove the axes by finding the black elements and replacing them with the background
2) invert the colours so you have a black background
3) perform morphological closing to fill the circles and create more solid points
4) (optional) if the density of the points is high, you can apply another morphological operation, namely erosion, to make the data circle thinner
5) apply the Hough circle transform
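A minimal C++ sketch of steps 1–3 and 5 (my own illustration, not part of the original answer), assuming the plots are dark points and axes on a light background; the file name, threshold, and kernel sizes are placeholder values you would need to tune, and the axis removal here is done by suppressing long straight runs, which is just one way to find the black axis elements:
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Assumed input: dark points and axes on a light background ("plot.jpeg" is a placeholder)
    cv::Mat img = cv::imread("plot.jpeg", cv::IMREAD_GRAYSCALE);

    // 1) + 2) mark near-black pixels (points and axes) and invert in one step:
    //         dark pixels become white on a black background
    cv::Mat bin;
    cv::threshold(img, bin, 100, 255, cv::THRESH_BINARY_INV);

    // one way to remove the axes: subtract long horizontal and vertical runs
    cv::Mat horiz, vert;
    cv::morphologyEx(bin, horiz, cv::MORPH_OPEN,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(25, 1)));
    cv::morphologyEx(bin, vert, cv::MORPH_OPEN,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(1, 25)));
    bin -= (horiz | vert);

    // 3) morphological closing merges the scattered points into a more solid ring
    cv::morphologyEx(bin, bin, cv::MORPH_CLOSE,
                     cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(7, 7)));

    // 5) Hough circle transform on the cleaned-up image (parameters are guesses)
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(bin, circles, cv::HOUGH_GRADIENT, 1, bin.rows / 4.0,
                     100, 20, 0, 0);
    return 0;
}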

Detect caps on bottles using opencv and python

I know that there are a hundred topics about my question all over the web, but I would like to ask specifically about my problem, because I have tried almost all the solutions without any success.
I am trying to count circles in an image (yes, I have already tried Hough circles, but due to light reflections on my object, I think, it is not very robust).
Then I tried to create a classifier (no success; I think there are not enough features, so the detection is not good).
I have also tried HSV conversion and tried to find my object by color (again I had some problems because of the light and the variations in color).
As you can see in the image, there are 8 caps and I would like to be able to count them.
Using all of these methods I was able to detect the objects in an image (because I was optimizing all the function parameters for that specific image), but as soon as I load a new, similar image the results were disappointing.
Please follow this link to see the image.
Below you can find parts of everything I have tried:
1. Hough circles
img = cv2.imread('frame71.jpg', 0)   # read as grayscale: HoughCircles needs a single-channel image
if img is None:
    print "There is no image file. Quiting..."
    quit()
img = cv2.medianBlur(img, 5)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
circles = cv2.HoughCircles(img, cv.CV_HOUGH_GRADIENT, 3, 50,
                           param1=55, param2=125, minRadius=25, maxRadius=45)
circles = np.uint16(np.around(circles))
for i in circles[0,:]:
    # draw the outer circle
    cv2.circle(cimg, (i[0],i[1]), i[2], (0,255,0), 2)
    # draw the center of the circle
    cv2.circle(cimg, (i[0],i[1]), 2, (0,0,255), 3)
print len(circles[0,:])
cv2.imshow('detected circles', cimg)
cv2.waitKey(0)
cv2.destroyAllWindows()
2. HSV Transform, color detection
def image_process(frame, h_low, s_low, v_low, h_up, s_up, v_up, ksize):
    temp = ksize
    if temp % 2 == 1:
        ksize = temp
    else:
        ksize = temp + 1
    #TODO: optimize this part of the code as much as possible
    try:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        lower = np.array([h_low, s_low, v_low], np.uint8)
        upper = np.array([h_up, s_up, v_up], np.uint8)
        mask = cv2.inRange(hsv, lower, upper)
        res = cv2.bitwise_and(hsv, hsv, mask=mask)
        thresh = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
        thresh = cv2.threshold(thresh, 50, 255, cv2.THRESH_BINARY)[1]
        thresh = cv2.medianBlur(thresh, ksize)
    except Exception as inst:
        print type(inst)
    #cv2.imshow('thresh', thresh)
    return thresh
3. Cascade classifier
img = cv2.imread('frame405.jpg', 1)
cap_cascade = cv2.CascadeClassifier('haar_30_17_16_stage.xml')
caps = cap_cascade.detectMultiScale(img, 1.3, 5)
#print caps
for (x, y, w, h) in caps:
    cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
    #cv2.rectangle(img, (10,10), (100,100), (0,255,255), 4)
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
quit()
For training the classifier I used a lot of variations of images, samples, negatives and positives, numbers of stages, and window width and height, but the results were not very accurate.
Finally, I would like to know from your experience which method I should follow, and I will stick with that in order to optimize my detection. Keep in mind that all images are similar but NOT identical; there are some differences due to light, movement, etc.
Thank you in advance.
I did some experiments with the sample image. I'm posting my results, and if you find them useful you can improve and optimize them further. Here are the steps:
downsample the image
perform morphological opening
find Hough circles
cluster the circles by radii (bottle circles should get the same label)
filter the circles by a radius threshold
you can also cluster circles by their center x and y coordinates (I haven't done this)
prepare a mask from the filtered circles and extract the possible bottles region
cluster this region by color
The code is in C++. I'm attaching my results.
Mat im = imread(INPUT_FOLDER_PATH + string("frame71.jpg"));
Mat small;
int kernelSize = 9; // try with different kernel sizes. 5 onwards gives good results
pyrDown(im, small); // downsample the image
Mat morph;
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(kernelSize, kernelSize));
morphologyEx(small, morph, MORPH_OPEN, kernel); // open
Mat gray;
cvtColor(morph, gray, CV_BGR2GRAY);
vector<Vec3f> circles;
HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 2, gray.rows/8.0); // find circles
// -------------------------------------------------------
// cluster the circles by radii. similarly you can cluster them by center x and y for further filtering
Mat circ = Mat(circles);
Mat data[3];
split(circ, data);
Mat labels, centers;
kmeans(data[2], 2, labels, TermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 10, 1.0), 2, KMEANS_PP_CENTERS, centers);
// -------------------------------------------------------
Mat rgb;
small.copyTo(rgb);
//cvtColor(gray, rgb, CV_GRAY2BGR);
Mat mask = Mat::zeros(Size(gray.cols, gray.rows), CV_8U);
for(size_t i = 0; i < circles.size(); i++)
{
    Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    int radius = cvRound(circles[i][2]);
    float r = centers.at<float>(labels.at<int>(i));
    if (r > 30.0f && r < 45.0f) // filter circles by radius (values are based on the sample image)
    {
        // just for display
        circle(rgb, center, 3, Scalar(0,255,0), -1, 8, 0);
        circle(rgb, center, radius, Scalar(0,0,255), 3, 8, 0);
        // prepare a mask
        circle(mask, center, radius, Scalar(255,255,255), -1, 8, 0);
    }
}
// use each filtered circle as a mask and extract the region from original downsampled image
Mat rgb2;
small.copyTo(rgb2, mask);
// cluster the masked region by color
Mat rgb32fc3, lbl;
rgb2.convertTo(rgb32fc3, CV_32FC3);
int imsize[] = {rgb32fc3.rows, rgb32fc3.cols};
Mat color = rgb32fc3.reshape(1, rgb32fc3.rows*rgb32fc3.cols);
kmeans(color, 4, lbl, TermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 10, 1.0), 2, KMEANS_PP_CENTERS);
Mat lbl2d = lbl.reshape(1, 2, imsize);
Mat lbldisp;
lbl2d.convertTo(lbldisp, CV_8U, 50);
Mat lblColor;
applyColorMap(lbldisp, lblColor, COLORMAP_JET);
Results (the result images are not included here): filtered circles, masked region, segmented output.
Hello, finally I think I found a way to count the caps on the bottles (a rough code sketch of this pipeline is given below):
Read the image
Teach (find the correct values for the HSV upper/lower limits)
Select the desired color (using HSV and a mask)
Find contours in the masked image
Find the minimum enclosing circles for the contours
Reject all circles beyond the thresholds
I have also ordered a polarizing filter, which I think will reduce glare a lot. I am open to suggestions for further improvement (robustness and speed); both are crucial for my application.
Thank you.
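For anyone who wants to try the same approach, here is a minimal C++ sketch of that contour-based pipeline (my own illustration of the steps above, not the poster's code); the HSV limits and radius thresholds are placeholder values that would have to come from the teaching step:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("frame71.jpg"), hsv, mask;
    cv::cvtColor(img, hsv, cv::COLOR_BGR2HSV);

    // placeholder HSV limits; in practice they come from the "teach" step
    cv::inRange(hsv, cv::Scalar(20, 80, 80), cv::Scalar(35, 255, 255), mask);
    cv::medianBlur(mask, mask, 5);

    // contours of the masked image, one per candidate cap
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    int count = 0;
    for (size_t i = 0; i < contours.size(); i++)
    {
        cv::Point2f center;
        float radius;
        cv::minEnclosingCircle(contours[i], center, radius);

        // reject circles outside the expected cap size (thresholds are guesses)
        if (radius > 25 && radius < 45)
        {
            cv::circle(img, cv::Point(cvRound(center.x), cvRound(center.y)),
                       cvRound(radius), cv::Scalar(0, 255, 0), 2);
            count++;
        }
    }
    std::cout << "caps counted: " << count << std::endl;
    return 0;
}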

Face detection after background subtraction using OpenCV

I'm trying to improve face detection from a camera capture, so I thought it would be better if, before the face detection step, I removed the background from the image.
I'm using BackgroundSubtractorMOG and CascadeClassifier with lbpcascade_frontalface for the face detection.
My question is: how can I grab the foreground image in order to use it as the input to face detection? This is what I have so far:
while (true) {
    capture.retrieve(image);
    mog.apply(image, fgMaskMOG, training ? LEARNING_RATE : 0);
    if (counter++ > LEARNING_LIMIT) {
        training = false;
    }
    // I think something should be done HERE to 'apply' the foreground mask
    // to the original image before passing it to the classifier..
    MatOfRect faces = new MatOfRect();
    classifier.detectMultiScale(image, faces);
    // draw faces rect
    for (Rect rect : faces.toArray()) {
        Core.rectangle(image, new Point(rect.x, rect.y), new Point(rect.x + rect.width, rect.y + rect.height), new Scalar(255, 0, 0));
    }
    // show capture in JFrame
    frame.update(image);
    frameFg.update(fgMaskMOG);
    Thread.sleep(1000 / FPS);
}
Thanks
I can answer in C++ using the BackgroundSubtractorMOG2:
You can either use erosion or pass a higher threshold value to the MOG background subtractor to remove the noise. In order to completely get rid of the noise and false positives, you can also blur the mask image and then apply a threshold:
// Blur the mask image
blur(fgMaskMOG2, fgMaskMOG2, Size(5,5), Point(-1,-1));
// Remove the shadow parts and the noise
threshold(fgMaskMOG2, fgMaskMOG2, 128, 255, 0);
Now you can easily find the rectangle bounding the foreground region and pass this area to the cascade classifier:
// Find the foreground bounding rectangle
Mat fgPoints;
findNonZero(fgMaskMOG2, fgPoints);
Rect fgBoundRect = boundingRect(fgPoints);
// Crop the foreground ROI
Mat fgROI = image(fgBoundRect);
// Detect the faces
vector<Rect> faces;
face_cascade.detectMultiScale(fgROI, faces, 1.3, 3, 0|CV_HAAR_SCALE_IMAGE, Size(32, 32));
// Display the face ROIs
for(size_t i = 0; i < faces.size(); ++i)
{
    Point center(fgBoundRect.x + faces[i].x + faces[i].width*0.5, fgBoundRect.y + faces[i].y + faces[i].height*0.5);
    circle(image, center, faces[i].width*0.5, Scalar(255, 255, 0), 4, 8, 0);
}
In this way, you will reduce the search area for the cascade classifier, which not only makes it faster but also reduces the false positive faces.
If you have the input image and the foreground mask, this is straightforward.
In C++, I would simply add (just where you put your comment): image.copyTo(fgimage,fgMaskMOG);
I'm not familiar with the java interface, but this should be quite similar. Just don't forget to correctly initialize fgimage and reset it each frame.
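To make that concrete, here is a minimal C++ sketch of the masked-copy idea (my own illustration, using BackgroundSubtractorMOG2 from OpenCV 3+ as in the first answer; the cascade file name is a placeholder):
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture capture(0);
    cv::Ptr<cv::BackgroundSubtractorMOG2> mog = cv::createBackgroundSubtractorMOG2();
    cv::CascadeClassifier classifier("lbpcascade_frontalface.xml"); // placeholder path

    cv::Mat image, fgMaskMOG, fgimage;
    while (capture.read(image))
    {
        mog->apply(image, fgMaskMOG);

        // reset fgimage each frame, then copy only the foreground pixels into it
        fgimage = cv::Mat::zeros(image.size(), image.type());
        image.copyTo(fgimage, fgMaskMOG);

        // run the cascade on the foreground-only image
        std::vector<cv::Rect> faces;
        classifier.detectMultiScale(fgimage, faces);
        for (size_t i = 0; i < faces.size(); ++i)
            cv::rectangle(image, faces[i], cv::Scalar(255, 0, 0));

        cv::imshow("faces", image);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}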

OpenCV warping from one triangle to another

I would like to map one triangle inside an OpenCV Mat to another one, pretty much like warpAffine does (check it here), but for triangles instead of quads, in order to use it in a Delaunay triangulation.
I know one is able to use a mask, but I'd like to know if there's a better solution.
I have copied the above image and the following C++ code from my post Warp one triangle to another using OpenCV ( C++ / Python ). The comments in the code below should provide a good idea what is going on. For more details and for python code you can visit the above link. All the pixels inside triangle tri1 in img1 are transformed to triangle tri2 in img2. Hope this helps.
void warpTriangle(Mat &img1, Mat &img2, vector<Point2f> tri1, vector<Point2f> tri2)
{
    // Find bounding rectangle for each triangle
    Rect r1 = boundingRect(tri1);
    Rect r2 = boundingRect(tri2);

    // Offset points by left top corner of the respective rectangles
    vector<Point2f> tri1Cropped, tri2Cropped;
    vector<Point> tri2CroppedInt;
    for(int i = 0; i < 3; i++)
    {
        tri1Cropped.push_back( Point2f( tri1[i].x - r1.x, tri1[i].y - r1.y) );
        tri2Cropped.push_back( Point2f( tri2[i].x - r2.x, tri2[i].y - r2.y) );

        // fillConvexPoly needs a vector of Point and not Point2f
        tri2CroppedInt.push_back( Point((int)(tri2[i].x - r2.x), (int)(tri2[i].y - r2.y)) );
    }

    // Apply warpImage to small rectangular patches
    Mat img1Cropped;
    img1(r1).copyTo(img1Cropped);

    // Given a pair of triangles, find the affine transform.
    Mat warpMat = getAffineTransform( tri1Cropped, tri2Cropped );

    // Apply the Affine Transform just found to the src image
    Mat img2Cropped = Mat::zeros(r2.height, r2.width, img1Cropped.type());
    warpAffine( img1Cropped, img2Cropped, warpMat, img2Cropped.size(), INTER_LINEAR, BORDER_REFLECT_101);

    // Get mask by filling triangle
    Mat mask = Mat::zeros(r2.height, r2.width, CV_32FC3);
    fillConvexPoly(mask, tri2CroppedInt, Scalar(1.0, 1.0, 1.0), 16, 0);

    // Copy triangular region of the rectangular patch to the output image
    multiply(img2Cropped, mask, img2Cropped);
    multiply(img2(r2), Scalar(1.0,1.0,1.0) - mask, img2(r2));
    img2(r2) = img2(r2) + img2Cropped;
}
You should use getAffineTransform to find the transform, and warpAffine to apply it.
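As a minimal sketch of that two-call approach (my own illustration; the input file and the two triangles are placeholders):
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat src = cv::imread("input.jpg");   // placeholder input image

    // placeholder triangles: three corresponding points in source and destination
    std::vector<cv::Point2f> srcTri = { {50, 50}, {200, 60}, {120, 220} };
    std::vector<cv::Point2f> dstTri = { {60, 70}, {210, 80}, {100, 230} };

    // affine transform mapping the source triangle onto the destination triangle
    cv::Mat warpMat = cv::getAffineTransform(srcTri, dstTri);

    // warp the whole image; mask with fillConvexPoly (as in the function above)
    // if you only want the pixels inside the destination triangle
    cv::Mat dst;
    cv::warpAffine(src, dst, warpMat, src.size());
    return 0;
}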

Hough transformation for iris detection in OpenCV

I wrote the code for the Hough transformation and it works well. I can also crop the eye region of a face. Now I want to detect the iris in the cropped image by applying the Hough transform (cvHoughCircle). However, when I try this procedure, the system is not able to find any circle in the image.
Maybe the reason is that there is noise in the image, but I don't think that is it.
So, how can I detect the iris? I have the code for binary thresholding; maybe I can use it, but I don't know how.
If anyone can help I would really appreciate it. Thx :)
You say that with binary thresholding you get an iris that is pure white: that is not what you want. Use something like cvCanny in order to get only the edge of the iris.
Are you detecting the edges correctly?
Can you display the binary image and see the iris clearly?
Circular Hough transforms normally have a radius window (otherwise you are searching a 3D solution space); are you setting the window to a reasonable value?
void houghcircle()
{
    //cvSmooth( graybin, graybin, CV_GAUSSIAN, 5, 5 );
    CvMemStorage* storage = cvCreateMemStorage(0);
    // smooth it, otherwise a lot of false circles may be detected
    CvSeq* circles = cvHoughCircles( edge, storage, CV_HOUGH_GRADIENT, 5, edge->height/4, 1, 1, 2, 50, 70 );
    int i;
    for( i = 0; i < circles->total; i++ )
    {
        float* p = (float*)cvGetSeqElem( circles, i );
        cvCircle( img, cvPoint(cvRound(p[0]),cvRound(p[1])), 2, CV_RGB(0,255,0), -1, 2, 0 );
        cvCircle( img, cvPoint(cvRound(p[0]),cvRound(p[1])), cvRound(p[2]), CV_RGB(255,0,0), 1, 2, 0 );
        cvNamedWindow( "circles", 1 );
        cvShowImage( "circles", img );
        cvWaitKey();
    }
}
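For reference, here is a minimal sketch of the same idea using the C++ API (my own illustration, not code from the answers above): check the Canny edges visually as suggested, then run HoughCircles with an explicit radius window. The file name, blur, thresholds, and radius values are placeholders to tune on the cropped eye image.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Assumed input: the cropped eye region, loaded as grayscale ("eye_crop.png" is a placeholder)
    cv::Mat eye = cv::imread("eye_crop.png", cv::IMREAD_GRAYSCALE);
    cv::GaussianBlur(eye, eye, cv::Size(5, 5), 1.5);

    // visual check suggested above: can you see the iris edge clearly?
    cv::Mat edges;
    cv::Canny(eye, edges, 50, 150);
    cv::imshow("edges", edges);

    // HoughCircles on the grayscale crop, with an explicit radius window so
    // only iris-sized circles are searched (all parameter values are placeholders)
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(eye, circles, cv::HOUGH_GRADIENT, 2, eye.rows,
                     150, 30, 20, 60);

    for (size_t i = 0; i < circles.size(); i++)
        cv::circle(eye, cv::Point(cvRound(circles[i][0]), cvRound(circles[i][1])),
                   cvRound(circles[i][2]), cv::Scalar(255), 2);

    cv::imshow("iris", eye);
    cv::waitKey();
    return 0;
}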

Resources