What would be the best approach for detecting and removing a person's hair in a simple portrait image? Any useful libraries or algorithms? I have been looking at OpenCV, which looks like it could be of some use.
You're dealing with two different problems here:
detecting if a face in a portrait has hair
"removing" the hair
The first is solvable fairly easily:
Separate the face from the background (as you've mentioned a "simple portrait image", this shouldn't be too hard).
Convert your image to the Y'CbCr color space
Human skin has a fairly narrow range of chrominance values, regardless of race. Check out this paper for the details.
The approach above will help you separate skin areas of the face from non-skin areas
Assume that the non-skin areas consist of hair (a rough sketch of this thresholding step follows after this list). Note that facial hair will get picked up as a non-skin area, too.
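A minimal sketch of the chrominance-thresholding step in Python/OpenCV. The Cr/Cb bounds below are commonly quoted approximations, not values taken from the paper above, so treat them as a starting point to tune; the file name is a placeholder:

```python
import cv2
import numpy as np

# Load the portrait and convert to Y'CrCb (OpenCV orders the channels
# Y, Cr, Cb -- note the swap relative to the Y'CbCr naming above).
img = cv2.imread("portrait.jpg")
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)

# Commonly used chrominance bounds for skin; Y is left unconstrained.
lower = np.array([0, 133, 77], dtype=np.uint8)
upper = np.array([255, 173, 127], dtype=np.uint8)
skin_mask = cv2.inRange(ycrcb, lower, upper)

# Clean up speckle with a small morphological open/close.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_OPEN, kernel)
skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_CLOSE, kernel)

# Once the face is separated from the background (step 1 above),
# the non-skin pixels inside it are candidate hair.
hair_mask = cv2.bitwise_not(skin_mask)
```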
As far as the second problem goes, you need to clarify exactly what you mean by "removing":
Are you simply cutting out the part of the portrait that has hair? In this case, the solution follows directly from the detection method above.
Are you trying to make it look like the person has no hair at all (e.g., is bald or clean-shaven)? In this case, things will be a lot harder -- there's a reason why professional photo manipulators get paid well.
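For the first interpretation, a hedged sketch of how the hair mask from the detection steps above could be used to cut the region out or paper over it. `portrait.jpg` and `hair_mask.png` are placeholders (the mask being the output of the previous sketch); note that inpainting only gives a plausible fill for small regions and will not synthesize a bald head:

```python
import cv2

img = cv2.imread("portrait.jpg")
# 8-bit single-channel mask, e.g. the hair_mask from the sketch above.
hair_mask = cv2.imread("hair_mask.png", cv2.IMREAD_GRAYSCALE)

# Option A: cut the hair region out by flattening it to white.
cutout = img.copy()
cutout[hair_mask > 0] = (255, 255, 255)

# Option B: let OpenCV inpaint over the masked region.
inpainted = cv2.inpaint(img, hair_mask, 5, cv2.INPAINT_TELEA)

cv2.imwrite("cutout.jpg", cutout)
cv2.imwrite("inpainted.jpg", inpainted)
```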
I think this is a hard problem, considering how much hair can vary in a portrait.
I found several papers that may help you:
Detection and analysis of hair
DullRazor: A software approach to hair removal from images
Research on the Expression of Hair in Computer Animation
Cheers!
Try this: http://betaface.com/demo.html. It gives color and hair info and more: smile, age, eyes, etc.
Related
I am looking for a sun-flare kind of lighting effect on an image. In my case the light source is a bulb instead of the sun, and I understand that changing the light source changes the ray effect; even different types of bulbs produce different types of light beams.
Here is an example of a focus-light effect: http://www.photoshopessentials.com/photo-effects/focus-light/
I have looked into standard photo filters and OpenCV but didn't find anything obvious. I am looking for an approach and direction to achieve it.
My knowledge is limited to iOS business apps only. I have heard only the names of frameworks like OpenGL, SceneKit, and MetalKit, so I would prefer a solution that fits my knowledge stack, but I would love to know all possible solutions.
Let me know if you want me to explain more.
Any help would be appreciated.
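I can't speak to SceneKit or Metal, but as a rough illustration of the underlying idea: the focus-light effect in the linked tutorial is essentially a bright radial gradient blended additively over the photo. A minimal Python/OpenCV sketch, where the bulb position, radius, colour, and intensity are all made-up parameters to tune:

```python
import cv2
import numpy as np

img = cv2.imread("room.jpg").astype(np.float32)
h, w = img.shape[:2]

# Radial falloff centred on a hypothetical bulb position.
cx, cy, radius = w // 2, h // 3, min(h, w) // 3
ys, xs = np.mgrid[0:h, 0:w]
dist = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
glow = np.clip(1.0 - dist / radius, 0.0, 1.0) ** 2  # soft edge

# Additively blend a slightly warm white (BGR) scaled by the falloff.
light = np.zeros_like(img)
light[..., :] = (180, 230, 255)
out = np.clip(img + light * glow[..., None] * 0.8, 0, 255)
cv2.imwrite("room_flare.jpg", out.astype(np.uint8))
```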
I'm new to Stack Overflow and I'd appreciate some help. I'm working on an algorithm for detecting "aguajes", a particular kind of palm. So far, I've got good results using a texture feature extractor and other characteristics, but the main problem is that I can't differentiate between an "aguaje" and other palms (mainly pinnated palms) because the texture is very similar. You can see two images below:
1. Aguaje
2. Pinnated palm
I see that the main difference in the second one is the presence of a central axis (also called a petiole or rachis). The question is: how do I detect it? If I'm able to detect the presence of a petiole, I'd be able to distinguish between these two types of palm. I think color is not an important issue, because even without color (a gray image) we can discern where that axis is located.
Thanks in advance.
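Not a complete answer, but one way to probe for a central axis is to look for a single long, roughly straight line in the edge map, e.g. with a probabilistic Hough transform. A hedged Python/OpenCV sketch; the Canny and Hough thresholds and the 40% length fraction are guesses to tune on your palm photos:

```python
import cv2
import numpy as np

gray = cv2.imread("palm.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 50, 150)

# A pinnated palm's rachis should yield at least one straight segment
# spanning a large fraction of the crown; an aguaje should not.
min_len = int(0.4 * max(gray.shape))
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=min_len,
                        maxLineGap=20)

print("central-axis candidate found:", lines is not None)
```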
I have a face-detection algorithm that works great and fast (iOS's default CIDetector) when I use it to detect real human faces.
Now I want to detect a test image that I put on a screen somewhere in a room, so I have total control over the image to be detected; the only thing I need to be very sure about is its size on the screen.
I tested it with a simple smiley painted on an HTML5 canvas, which is recognized as a face, but not as fast or as often as a photo of an actual face would be.
As you can imagine, this is really hard to google: is there some sort of go-to schematic picture with really pronounced facial features that people use?
EDIT: If someone knows a good algorithm to reverse-engineer the perfect image for an existing black-box face detector, that would also solve the problem ;)
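I can't speak to CIDetector's internals, but one practical way to compare candidate test images offline is to run them through an open face detector under small perturbations (scale and blur) and count detections as a robustness proxy. A rough Python/OpenCV sketch using the stock Haar cascade shipped with OpenCV; the image paths are placeholders:

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detection_score(path):
    """Count detections across a few scales/blurs of the same image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    hits = 0
    for scale in (0.5, 0.75, 1.0, 1.5):
        resized = cv2.resize(img, None, fx=scale, fy=scale)
        for k in (1, 3, 5):  # Gaussian kernel sizes (must be odd)
            blurred = cv2.GaussianBlur(resized, (k, k), 0)
            hits += len(cascade.detectMultiScale(blurred, 1.1, 5))
    return hits

# A higher score suggests a more robustly detectable test image.
print(detection_score("smiley.png"), detection_score("real_face.png"))
```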
I am working on a project in C#/Emgu CV, but an answer in any language with OpenCV should be OK.
I have the following image: http://i42.tinypic.com/2z89h5g.jpg
Or it might look like this: http://i43.tinypic.com/122iwsk.jpg
I am trying to do automatic calibration and I would like to know how to find the corners of the field. They are marked by LEDs, but I would prefer to find them by the color tags. If needed, I can replace all tags with same-color tags. (Note that the light in the room changes, so the colors might be a bit different next time.)
Edge detection might be OK too, but I am afraid that I would not find the corners correctly.
Please help.
Thank you.
Edit:
Thanks aardvarkk for the advice, but I think I need to give you a little more info.
I am already able to detect and identify robots on the field and get their position and rotation, but for that I have to set the corners of the field manually first. So I was looking for an automatic way, but I was worried I would not be able to distinguish the color tags from the background, because the light in the room changes quite often.
As for the camera angle: the point is that the camera can be at a different (reasonable) angle each time.
I would start by searching for the colours. The LEDs won't be much help to you, as they're not much brighter than anything else in the scene. I would look for the rectangular pieces of coloured tape, and try segmenting the image based on colour. That may allow you to retrieve the corner tape pieces directly without needing to know their exact colour in advance. After that, you could look for pairs of same-coloured blobs that are close to each other to mark the corners. Knowing what kinds of camera angles you have to handle is also very important: if you need this to work when viewing from the side, this approach certainly won't work, but if the view is almost top-down, it would probably be a good start. Nobody will be able to provide you with a start-to-finish solution, but this might be a good base to begin with.
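A minimal sketch of the colour-segmentation idea in Python/OpenCV; the HSV bounds and minimum blob area are placeholders you would calibrate for your tape under your lighting:

```python
import cv2
import numpy as np

img = cv2.imread("field.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Placeholder range for a blue-ish tape; a generous S/V range helps
# tolerate the changing room lighting mentioned in the question.
lower = np.array([100, 80, 80])
upper = np.array([130, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Keep blobs big enough to be tape pieces and take their centroids.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
corners = []
for c in contours:
    if cv2.contourArea(c) > 100:  # minimum blob area: tune
        m = cv2.moments(c)
        corners.append((int(m["m10"] / m["m00"]),
                        int(m["m01"] / m["m00"])))
print("candidate corner tags:", corners)
```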
I am trying to subtract two images using the function cvAbsDiff(img1, img2, dest);
It works, but sometimes when I bring my hand in front of my head or body, the hand is not clear and the background comes into the picture: the background image (head) overlays my foreground (hand).
It works correctly on plain surfaces, i.e., when the background is even, like a wall.
Please check out my image so that you can better understand my problem:
http://www.2shared.com/photo/hJghiq4b/bg_overlays_foreground.html
If you have any solution or hint, please help.
There's nothing wrong with your code. Background subtraction is not a preferred way for motion detection or silhouette detection because it's not very robust. The problem arises because the background and the foreground are similar in colour in many regions, which on subtraction pushes the foreground into the background. You might try:
- optical flow for motion detection (a minimal sketch follows after this list)
- if your task is just detecting a silhouette or hand, training a HOG classifier on it
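For the optical-flow option, a minimal dense-flow sketch in Python/OpenCV, assuming a webcam at index 0; motion shows up as flow magnitude regardless of colour similarity, which is exactly where plain subtraction fails here. The 1 px/frame cutoff is a guess to tune:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    motion_mask = (mag > 1.0).astype(np.uint8) * 255
    cv2.imshow("motion", motion_mask)
    prev_gray = gray
    if cv2.waitKey(1) == 27:  # Esc quits
        break
```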
In case you do not want to try a new approach, you may try playing with the threshold value (in your case 30). When you subtract two similar-coloured images, the difference is less than 30, and when you then threshold at 30 it just blacks out. You may also try HSV or some other colour space.
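A sketch of that tweak (file names are placeholders): take the absolute difference, then either hand-tune the threshold or let Otsu's method pick it from the histogram:

```python
import cv2

bg = cv2.imread("background.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(frame, bg)

# Fixed threshold: skin-on-skin differences below 30 black out,
# which is what produces the "overlay" effect described above.
_, fixed = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

# Otsu derives the threshold from the difference histogram instead.
_, otsu = cv2.threshold(diff, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```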
Putting in the relevant code would help, as would knowing what you're actually trying to achieve.
Which two images are you subtracting? I've done subtraction of subsequent images (images taken with a delay of a fraction of a second), and the background subtraction generally yields the edges of moving objects, for example the edges of a hand, not the entire silhouette of the hand. I'm guessing you're taking the difference of the current frame and a static startup frame. It's possible that parts aren't different enough (skin + skin).
I've got some computer problems tonight; I'll test it out tomorrow (please put up at least the steps you actually carry through, though) and let you know.
I'm still not sure what your ultimate goal is, although I'm guessing you want to do some gesture-recognition (since you have a vector called "fingers").
As Manpreet said, your biggest problem is robustness, and that comes from the subjects having similar colours.
I reproduced your image by having my face in the static comparison image, then moving it. If I started with only the background, it was already much more robust and in any case didn't display any "overlaying".
The quick fix is to make sure you have a clean, subject-free static image.
Otherwise, you'll want a dynamic comparison image; the simplest would be comparing frame_n with frame_n-1. This will generally give you just the moving edges, though, so if you want the entire silhouette you can either:
1) Use a different segmenting algorithm (what I recommend: background subtraction is fast, and you can use it to determine a much smaller ROI in which to search, then use a different algorithm for more robust segmentation.)
2) Try a compromise between the static and dynamic comparison image, for example an average of the past 10 frames or something like that. I don't know how well this works, but it would be quite simple to implement, worth a try :) (a rough sketch follows below).
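For option 2, a rough sketch using OpenCV's exponential running average, so the comparison image slowly tracks the scene; the 0.05 learning rate is a made-up starting point:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
avg = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Exponential running average: roughly "the last ~20 frames".
    cv2.accumulateWeighted(gray, avg, 0.05)
    diff = cv2.absdiff(gray, cv2.convertScaleAbs(avg))
    _, mask = cv2.threshold(diff, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cv2.imshow("mask", mask)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
```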
Also, try CV_THRESH_OTSU instead of 30 for your threshold value and see if you like that better.
Also, I noticed the output often flares (regions which haven't changed switch from black to white). Checking with the live stream, I'm quite certain it's because of the webcam autofocusing/adjusting white balance etc. If you're getting that too, turning off the autofocus should help (which, by the way, isn't done through OpenCV but depends on the camera; possibly check this: How to programmatically disable the auto-focus of a webcam?)