Is there a way to detect the main ROI? - image-processing

I want to detect the most important ROI element in a picture (i.e., get its position).
I've tested an inverted seam-carving method, hoping that the most important area in a picture would have the highest energy level. I generated one vertical and one horizontal seam and took their intersection. But this method doesn't seem to be reliable.
Some examples:
Good detection:
http://img713.imageshack.us/img713/2928/seamcastle.jpg
http://img39.imageshack.us/img39/9584/seamente.jpg
http://img193.imageshack.us/img193/2693/seamwuffi.jpg
Nearly acceptable:
http://img440.imageshack.us/img440/7459/seamflower.jpg
Worse detection:
http://img836.imageshack.us/img836/5766/seamsun.jpg (maybe this point is a good result; it is the point with the maximum energy level in this picture)
http://img507.imageshack.us/img507/2750/seambluesky1.jpg
Does anyone have an idea how to detect ROIs better?
Greetings,
desire

I think the key terms you are looking for are: saliency detection, salient object detection, etc.
Perhaps these papers will point you in the right direction:
A Model of Saliency-Based Visual Attention for Rapid Scene Analysis (PDF)
Simulating Human Saccadic Scanpaths on Natural Images (PDF)
Salient Object Detection by Composition (PDF)
Saliency Filters: Contrast Based Filtering for Salient Region Detection (Web)

Related

How to detect hand palm and its orientation (like facing outwards)?

I am working on a hand detection project. There are many good projects on the web that do this, but what I need is a specific hand pose detection: a fully open palm with the whole palm facing outwards, like the image below:
The first hand faces inwards, so it should not be detected, while the right one faces outwards and should be detected. I can already detect the hand with OpenCV, but how do I tell the hand's orientation?
Telling the palm from the back of the hand is a texture classification task, a classic pattern recognition problem. I suggest you try one of the following methods:
Gabor filters: good for detecting orientation and pixel intensities (the back of the hand has different features). OpenCV has a getGaborKernel function whose most important parameters are theta (the orientation) and lambd (the wavelength/frequency). To keep it simple, you can apply this to a cropped zone of the palm (since you have already detected the hand, it is easy to crop, for example, the thumb or a rectangular zone around the center of gravity). Then you can convolve it with a small database of images of the same zone to get a matching rate, or use an SVM classifier trained on a set of such images by constructing the training matrix SVM needs (check this question and this paper). A minimal sketch is just below.
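A minimal sketch of a small Gabor filter bank whose mean responses become a feature vector, assuming the palm region has already been cropped (the file name is a placeholder):

import cv2
import numpy as np

# Load an already-cropped palm region (placeholder path)
patch = cv2.imread("palm_crop.png", cv2.IMREAD_GRAYSCALE)

features = []
for theta in np.arange(0, np.pi, np.pi / 4):   # 4 orientations
    for lambd in (4.0, 8.0):                   # 2 wavelengths
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0,
                                    theta=theta, lambd=lambd,
                                    gamma=0.5, psi=0)
        response = cv2.filter2D(patch, cv2.CV_32F, kernel)
        # Mean absolute response summarizes how strongly this
        # orientation/frequency is present in the patch
        features.append(np.abs(response).mean())

feature_vector = np.array(features, dtype=np.float32)  # feed this to an SVM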
Local Binary Patterns (LBP): an important feature descriptor used for texture matching. You can apply it to the whole palm image, or to a cropped zone or a finger. It is easy to use with OpenCV, and many tutorials with code are available for this method. I recommend reading the paper on invariant texture classification with Local Binary Patterns; here is a good tutorial, and a sketch follows below.
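A minimal LBP sketch, assuming scikit-image is available (its local_binary_pattern helper is a stand-in for a hand-rolled implementation; the file name is a placeholder):

import cv2
import numpy as np
from skimage.feature import local_binary_pattern

patch = cv2.imread("palm_crop.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Uniform LBP with 8 neighbors at radius 1
P, R = 8, 1
lbp = local_binary_pattern(patch, P, R, method="uniform")

# A normalized histogram of LBP codes is the texture descriptor
n_bins = P + 2  # uniform patterns produce P + 2 distinct codes
hist, _ = np.histogram(lbp.ravel(), bins=n_bins, range=(0, n_bins))
descriptor = hist.astype(np.float32) / hist.sum()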
Haralick texture features: I've read that they work well when a set of features quantifies the entire image (global feature descriptors). They are not implemented in OpenCV but are easy to implement or to get from another library; check this useful tutorial. A sketch is below.
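A minimal sketch using the mahotas library (an assumption on my part; the original only says Haralick features are easy to implement), which ships a haralick() helper:

import cv2
import mahotas

img = cv2.imread("palm_crop.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# mahotas computes 13 Haralick features for each of 4 directions;
# averaging over the directions gives one global 13-dim descriptor
haralick = mahotas.features.haralick(img).mean(axis=0)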
Training models: I've already suggested an SVM classifier; coupled with one of the descriptors above, it can work very well.
OpenCV also has an interesting FaceRecognizer class for face recognition. It could be worthwhile to use it with palm images instead of face images (resize and rotate them to get a consistent palm pose). This class offers three models; one of them, Local Binary Patterns Histograms, is recommended for texture recognition, and you could also try the other two (Eigenfaces and Fisherfaces). Check this tutorial, and see the sketch below.
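A minimal sketch of the LBPH recognizer repurposed for palm orientation, assuming the opencv-contrib-python package (which provides cv2.face) and a small set of labeled, equally sized grayscale crops (all file names and the label convention are placeholders):

import cv2
import numpy as np

# Label 0 = palm facing outwards, 1 = back of hand (assumed convention)
train_images = [cv2.imread(p, cv2.IMREAD_GRAYSCALE)
                for p in ("palm_out_1.png", "palm_out_2.png", "back_1.png")]
train_labels = np.array([0, 0, 1])

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(train_images, train_labels)

test = cv2.imread("unknown_hand.png", cv2.IMREAD_GRAYSCALE)
label, distance = recognizer.predict(test)  # lower distance = closer match
print("predicted class:", label, "distance:", distance)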
Well, if you go for a MacGyver approach, you can notice that the left hand has bones sticking out in a certain direction, while the right one shows the finger lines and a few lines in the palm.
These lines are always roughly the same, so you could try to detect them with OpenCV edge detection or Hough lines (see the sketch below). Due to the dark color of the lines, you might even be able to threshold them out. Then gather information from those lines, like angles and regressions, see which features you can collect, and train a simple decision tree.
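A minimal sketch of the edge-plus-Hough-lines idea (thresholds and the file name are assumptions to tune):

import cv2
import numpy as np

img = cv2.imread("hand.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=40, minLineLength=20, maxLineGap=5)

angles = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        # Line angles are simple features to feed a decision tree
        angles.append(np.degrees(np.arctan2(y2 - y1, x2 - x1)))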
That was assuming you do not have enough data. If you do, go for deep learning: take a basic InceptionV3 model and retrain the last dense layer to classify between the two classes with a softmax, or to predict the probability of the hand being up/down with a sigmoid (a sketch is below). Check this link; TensorFlow has your back on the training of this one, with ready-to-run code.
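A minimal transfer-learning sketch with tf.keras, assuming 299x299 RGB inputs and the sigmoid variant mentioned above (train_ds is your own labeled dataset):

import tensorflow as tf

# Pretrained InceptionV3 as a frozen feature extractor
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet",
    input_shape=(299, 299, 3), pooling="avg")
base.trainable = False

# Single sigmoid unit: probability that the palm faces outwards
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, epochs=5)  # retrains only the new dense layer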
Questions? Ask away
Take a look at what Leap Motion has done with the Oculus Rift. I'm not sure what they're using internally to segment hand poses, but there is a paper that produces hand poses effectively. If you have a stereo camera setup, you can use the methods from this paper: https://arxiv.org/pdf/1610.07214.pdf.
The only promising solutions I've seen for a mono camera train on large datasets.
Use a Haar cascade classifier:
you can get the classifier model file and then use it as shown here.
Just search Google for a Haar cascade for palm detection, or use the code below.
import cv2

cam = cv2.VideoCapture(0)
ccfr2 = cv2.CascadeClassifier('haar-cascade-files-master/palm.xml')

while True:
    retval, image = cam.read()
    grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Detect palms over a range of slightly increasing scales
    palm = ccfr2.detectMultiScale(grey, scaleFactor=1.05, minNeighbors=3)
    for x, y, w, h in palm:
        image = cv2.rectangle(image, (x, y), (x + w, y + h),
                              (255, 255, 255), 2)
    cv2.imshow("Window", image)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        cv2.destroyAllWindows()
        break

cam.release()
Best of luck with your experiments using Haar cascades.

best algorithm for face detection and pose estimation

I am looking for algorithms/publications on face detection. There are plenty on the web, but my scenario is somewhat specialized. I want to accurately detect faces in images taken by wearable devices (e.g., the Narrative Clip), so there will be motion blur, and the image quality will not be that good. I want to accurately detect faces within 15 feet of the camera. The next goal is to estimate the pose, primarily to find out whether the person is looking toward the camera (or, better, at the camera's owner).
Any suggestion?
My go-to for this would be either a deep-learning framework using convolutional layers for pixel classification, or the K-means / K-nearest-neighbour algorithms.
This does depend on your data, however. From your post I am assuming that your data isn't labelled, meaning you are unable to feed the 'truth' to the algorithm for classification.
You could perhaps use a CNN (convolutional neural network) for pixel classification (image segmentation), which should identify the location of a person. Given that, you could run a 'local' CNN on a region close to the identified face to classify the pose of the body in that region.
This would probably be my first take on the problem, but it would depend on the exact structure of your data and of your labels (if you have any).
I have to say it does sound like a fun project!
I found OpenCV's Haar cascades for face detection pretty accurate and robust to motion blur, even for "live" face recognition.
I'm saying that because I used them for implementing an Eye-Tracker in C++ with a laptop webcam (whose resolution was not excellent and motion blur was naturally always present).
They work in multiresolution and are therefore able to detect faces of any size, but you can easily tune them for your distance of interest.
They might not be your final optimal solution, but since they are already implemented and come with the OpenCV package, they could constitute a good starting point.
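A minimal sketch of that starting point, assuming the opencv-python package (which bundles the cascade files under cv2.data.haarcascades); minSize is a placeholder that roughly encodes your distance of interest:

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("frame.jpg")  # placeholder path
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# minSize filters out faces smaller than ~40x40 px, i.e. too far away
faces = face_cascade.detectMultiScale(grey, scaleFactor=1.1,
                                      minNeighbors=5, minSize=(40, 40))
for x, y, w, h in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)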

Object Recognition by Outlines vs Features

Context:
I have an RGB-D video from a Kinect, aimed straight down at a table. There is a library of around 12 objects I need to identify, alone or several at a time. I have been working with SURF extraction and detection on the RGB image, preprocessing by downscaling to 320x240, converting to grayscale, stretching the contrast, and equalizing the histogram before applying SURF. I built a lasso tool to choose among detected keypoints in a still of the video image. Those keypoints are then used to build object descriptors, which are used to identify objects in the live video feed.
Problem:
SURF examples show successful identification of objects with a decent amount of text-like feature detail, e.g. logos and patterns. The objects I need to identify are relatively plain but have distinctive geometry. The SURF features found in my stills are sometimes consistent but mostly unimportant surface features. For instance, say I have a wooden cube: SURF detects a few bits of grain on one face, then fails on the other faces. I need to detect (something like) the fact that there are four corners at equal distances and right angles. None of my objects has much of a pattern, but all have distinctive symmetric geometry and color. Think cellphone, lollipop, knife, bowling pin.

My thought was that I could build object descriptors for each significantly different-looking orientation of the object, e.g. two descriptors for a bowling pin (one standing up and one lying down) and two for a cellphone (one lying on the front and one on the back). My recognizer needs rotational invariance and some degree of scale invariance in case objects are stacked. The ability to deal with some occlusion is preferable (SURF behaves well enough) but not the most important characteristic. Skew invariance would be preferable, and SURF does well with paper printouts of my objects held by hand at a skew.
Questions:
Am I using the wrong SURF parameters to find features at the wrong scale? Is there a better algorithm for this kind of object identification? Is there something as readily usable as SURF that uses the depth data from the Kinect along with or instead of the RGB data?
I was doing something similar for a project and ended up using a super simple method for object recognition: OpenCV blob detection, recognizing objects based on their areas. Obviously, there needs to be enough variance in area for this method to work.
You can see my results here: http://portfolio.jackkalish.com/Secondhand-Stories
I know there are other methods out there, one possible solution for you could be approxPolyDP, which is described here:
How to detect simple geometric shapes using OpenCV
Would love to hear about your progress on this!
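A minimal sketch of the approxPolyDP approach from the linked question, classifying a contour by its vertex count (the epsilon factor, area cutoff, and file name are assumptions to tune):

import cv2

img = cv2.imread("objects.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    if cv2.contourArea(cnt) < 100:      # skip noise blobs
        continue
    # Approximate the outline; epsilon = 2% of the perimeter
    approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
    if len(approx) == 4:
        print("quadrilateral (e.g. a cube face)")
    else:
        print(len(approx), "corners")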

Which edge detection algorithm in OpenCV suitable for detecting objects?

I have to build a bot that must overcome obstacles autonomously in an arena filled with rocks. The bot has to find its way through this area and reach the end point. I am thinking of using edge detection operators like Canny and Sobel for this problem.
I want to know whether those are suitable options. If so, then after detecting the edges, how can I make the bot find a path around the rock obstacles?
I am using the Qt IDE and the OpenCV library.
Since you will be analyzing frames of video, and the robot will be moving most of the time, image differencing and optical flow will also be helpful. Edge detection alone might not help much unless the surroundings and obstacles are simple and have known properties. Posting a photo of the scene would help those who want to answer the question.
Yes, Canny is a very good edge detector; in fact, the OpenCV implementation uses Sobel internally to get the gradient estimate. You may need to apply a Gaussian filter to the image before edge detection (see the sketch below). Edges are good features for finding rocks, but depending on the background, other features such as color may also be useful. It would probably be easier if you gathered 3D scene information via stereo, a laser scanner, or a Kinect-like sensor. Also consider detecting when you bump into rocks and building up a map of where they are.
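A minimal sketch of the smoothing-then-Canny step (the kernel size and hysteresis thresholds are assumptions to tune per scene):

import cv2

frame = cv2.imread("arena.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Smooth first so Canny does not fire on sensor noise
blurred = cv2.GaussianBlur(frame, (5, 5), 0)
# Canny applies Sobel internally; tune the two hysteresis thresholds
edges = cv2.Canny(blurred, 50, 150)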
You can use contours to detect the objects, estimate their size from the contour area, and then use image moments to find each object's center, as sketched below.
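A minimal sketch of the contour-area-plus-moments idea (binarization method and file name are assumptions):

import cv2

img = cv2.imread("arena.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    area = cv2.contourArea(cnt)        # size estimate
    m = cv2.moments(cnt)
    if m["m00"] == 0:
        continue
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # centroid
    print(f"object at ({cx:.0f}, {cy:.0f}), area {area:.0f}")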

OpenCV intrusion detection

For a project of mine, I'm required to process image differences with OpenCV. The goal is to detect an intrusion in a zone.
To be a little more clear, here are the inputs and outputs:
Inputs:
A reference image
A second image from approximately the same point of view (a margin of error is acceptable)
Outputs:
Detection of new objects in the scene.
Bonus:
Recognition of those objects.
For me, the most difficult part is filtering out small differences (luminosity, camera position error margin, movement of trees...).
I have already read a lot about OpenCV image processing (subtraction, erosion, thresholding, SIFT, SURF...) and have some good results.
What I would like is a list of the steps you think are best for getting good detection (humans, cars...), and the algorithms for each step.
Many thanks for your help.
Track-by-detect human tracker:
Apply the HOG detector to detect humans.
Draw the corresponding rectangle as a foreground area on the foreground mask.
Pass this mask to the OpenCV Video Surveillance / Blob Tracker Facility.
You can now group the passing humans into public/restricted areas based on their blob {x, y} values. A sketch of the HOG detection step is below.
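A minimal sketch of the HOG step using OpenCV's built-in people detector (the file name and detectMultiScale parameters are assumptions to tune):

import cv2

# OpenCV's built-in HOG + linear SVM people detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.jpg")  # placeholder path
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)
for x, y, w, h in rects:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)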
I had to deal with this problem last year.
I suggest an adaptive background/foreground estimation algorithm that produces a foreground mask.
On top of that, you add a blob detector and tracker, and then calculate whether an intersection takes place between the blobs and your intrusion area.
OpenCV has samples of all of these within its legacy code. Of course, you can also use your own or other versions of these. A minimal sketch follows the links below.
Links:
http://opencv.willowgarage.com/wiki/VideoSurveillance
http://experienceopencv.blogspot.gr/2011/03/blob-tracking-video-surveillance-demo.html
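A minimal sketch of this pipeline with a current OpenCV API (MOG2 is a stand-in for whatever adaptive estimator you pick; the video source and intrusion zone are placeholders):

import cv2

cam = cv2.VideoCapture("surveillance.mp4")  # placeholder source
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
zone = (100, 100, 200, 150)  # x, y, w, h of the restricted area (assumed)

while True:
    ok, frame = cam.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:
        if cv2.contourArea(cnt) < 500:   # ignore small noise blobs
            continue
        x, y, w, h = cv2.boundingRect(cnt)
        # Axis-aligned rectangle intersection test against the zone
        zx, zy, zw, zh = zone
        if x < zx + zw and zx < x + w and y < zy + zh and zy < y + h:
            print("intrusion detected")
cam.release()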
I would definitely start with a running-average background subtraction if the camera is static (see the sketch below). Then you can use findContours() to find the intruding object's location and size. If you want to detect humans walking around in the scene, I would recommend using the built-in Haar classifier:
http://docs.opencv.org/doc/tutorials/objdetect/cascade_classifier/cascade_classifier.html#cascade-classifier
where you would just replace the XML with the upper-body classifier.
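A minimal sketch of the running-average approach with accumulateWeighted (the alpha and threshold values are assumptions to tune):

import cv2
import numpy as np

cam = cv2.VideoCapture(0)  # placeholder source
ok, frame = cam.read()
# The running average is kept in float32 so it can be updated incrementally
background = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))

while True:
    ok, frame = cam.read()
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # alpha controls how fast the model adapts to slow scene changes
    cv2.accumulateWeighted(grey, background, 0.05)
    diff = cv2.absdiff(grey, cv2.convertScaleAbs(background))
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)  # intruder location and size
cam.release()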
