Trying to change a person's skin colour - iOS

I am creating an app which takes a picture of a person's face, and I want to change the colour of their skin (just for fun!).
I have a piece of code that runs through the image pixel by pixel, finds the skin colour and changes it to a new colour. This kind of works, but even though I am allowing for differences in tone, and adjusting the new colour in the same way, it is still very hit and miss.
Can anyone point me in the right direction? Is it even possible? I don't really want to use a filter, as I don't think it would give the right effect.
Thanks

You should look at some computer vision techniques such as segmentation and feature extraction.
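To make that concrete, here is a minimal sketch of the segmentation idea in Python with OpenCV. The question is about iOS, so treat this as pseudocode for the approach rather than a drop-in solution; the HSV skin range and the function name are rough assumptions that would need tuning per subject and lighting.

```python
import cv2
import numpy as np

def shift_skin_hue(image_bgr, hue_shift=20):
    # Segment likely skin pixels in HSV space, then shift only their hue.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)

    # Very approximate skin-tone range (OpenCV hue runs 0-179).
    # These bounds are assumptions and will need tuning.
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)

    # Clean up the mask so isolated stray pixels don't get recoloured.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Shift hue only where the mask says "skin"; leave saturation and
    # value alone so the shading of the face is preserved.
    h, s, v = cv2.split(hsv)
    shifted = ((h.astype(np.int16) + hue_shift) % 180).astype(np.uint8)
    h = np.where(mask > 0, shifted, h)

    return cv2.cvtColor(cv2.merge([h, s, v]), cv2.COLOR_HSV2BGR)
```

Keeping saturation and value untouched is what preserves the tonal variation across the face; replacing raw RGB values pixel by pixel is usually why this kind of thing looks hit and miss.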

Related

Known algorithms to change the colour of a product

My question might be off topic, but I couldn't find a better forum to ask.
I need to change the colour of a product on an eCommerce website. We have many styles and many colours, so taking a picture of every combination is out of the question (about 100 styles and 100 colours, which would result in 10,000 pictures; we just don't have time to take that many pictures or process them manually). However, I could take a picture of every product, plus a picture of one style in every colour, and then write a program which generates all the missing pictures. I was thinking of using something like OpenCV (and probably Python), which provides lots of classic computer vision algorithms off the shelf. Before doing it: I'm sure this is a classic image processing problem. Does it have a name, or are there any algorithms or resources on the topic?
In other words, there are apps and programs which allow you to change the colour of your dress or clothes. Does anybody know how they work, or have useful resources related to this problem?
You separate intensity from colour information, then you change the colour information and merge the two back together.
This gives you an image with changed colours but unchanged brightness, so shadows, highlights and so on stay untouched.
To do this you have to convert your RGB tuples to a colour space that has separate coordinates for intensity and colour.
https://en.wikipedia.org/wiki/Lab_color_space as one example
Of course you may restrict these operations to your "product", so that everything else remains unchanged.
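As a minimal sketch of that split in Python with OpenCV, assuming you already have a binary mask for the product from a prior segmentation step (the mask, function name and target-colour handling are illustrative):

```python
import cv2
import numpy as np

def recolour_product(image_bgr, mask, target_bgr):
    # Work in Lab: L carries intensity, a/b carry the colour.
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)

    # Convert the flat target colour to Lab to get its a/b values.
    target = np.uint8([[list(target_bgr)]])
    _, ta, tb = cv2.cvtColor(target, cv2.COLOR_BGR2LAB)[0, 0]

    L, a, b = cv2.split(lab)
    # Replace the colour channels inside the mask only; L is untouched,
    # so shadows and highlights on the product survive the recolouring.
    a = np.where(mask > 0, np.uint8(ta), a)
    b = np.where(mask > 0, np.uint8(tb), b)

    return cv2.cvtColor(cv2.merge([L, a, b]), cv2.COLOR_LAB2BGR)
```

Because only the a/b channels change inside the mask, the lightness channel carries all the shading, which is exactly why the highlights and shadows come through unchanged.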

Is it possible to change texture hue programmatically in a SpriteKit project?

I am interested in how to change the hue of a texture in an efficient way. I am experimenting with creating space dust which will change its colour every few seconds, with a nice, smooth transition from one colour to another.
I see a few possible ways to do this:
Using Core Image, as in this example. But I don't know how this will work in combination with SpriteKit...
Using particle emitters to create the space dust and changing the colour of the particles over time using the particleColorSequence property.
And an easy one that came to mind while playing with Photoshop: using two identical but differently coloured images, one over the other, and changing the opacity of the topmost one.
This gives me the effect I want, and actually looks fabulous, but is there a better way? Maybe using SKTexture? In this particular case I just need to change from one colour to another, but what would be an efficient way to do this when multiple changes are required one after another? My third approach would require additional images for each change...
Here is the link which most closely describes what I am trying to accomplish. Just look at how the space dust changes its colour over time (from dark blue to purple, and later to green or orange). I suppose this is done programmatically... I would ask the moderators to remove the link if it is not suitable to post here. Thanks!
It is kind of a hard question to answer, and rather subjective, however...
I personally would go with the emitter node approach, because it seems like it is built for the type of use you are looking for, and it could give you some cool trailing effects.
With that being said, you specifically asked about changing the hue, and colorBlendFactor might be what you are really looking for. I don't have a great link for it, but this might get you pointed in the right direction: you can see how they blend colours to get the desired result.
Your solution of changing the opacity of two differently coloured copies doesn't sound like a bad approach either.
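Whichever option you pick, the transition itself is just a linear interpolation between two colours over time, which is what a colorBlendFactor animation or a crossfade effectively evaluates each frame. A framework-agnostic sketch in Python (the helper and colour values are purely illustrative, not SpriteKit API):

```python
def lerp_colour(c1, c2, t):
    # Linearly interpolate between two RGB colours; t runs from 0 to 1.
    return tuple(a + (b - a) * t for a, b in zip(c1, c2))

# Example: fade dark blue to purple over 3 seconds at 60 fps.
dark_blue, purple = (20, 30, 120), (130, 30, 160)
frames = [lerp_colour(dark_blue, purple, i / 180) for i in range(181)]
```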
Hopefully that helps and good luck =)

Project ideation using image processing

I am in the final year of a BS in Computer Science. I have chosen a project in the image processing domain, but I really don't know where to start! Here is a rough draft of my project idea:
Project Description:
Often people are faced with the problem of deciding which colours to choose to paint their walls, doors and ceilings. They want to know what their rooms will look like after applying a certain colour. We want to design a mobile application that gives people the opportunity to preview their rooms/walls/ceilings, etc. in a certain colour before applying it. Through our application the user can take photos of their rooms/walls/ceilings, change their colours virtually and preview them. This will give them a good estimate of the final look of their house.
Development will be in Java, using OpenCV libraries.
Can anyone provide some help?
For getting started with OpenCV on Android, you can follow the tutorial here.
Going by your description above, I think you need to do the following:
Filter out the colour of the room's wall or ceiling.
Replace it with your preview colour.
But as your room's colour is not unique, you may need to mark the region manually and segment it. The watershed algorithm might be helpful here.
One more thing: there is likely to be lighting variation, so you should work in the HSV colour space instead of RGB.
Finally, this is not the full solution, but it should give you some idea of how to start your project.
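As a very rough sketch of those two steps in Python with OpenCV (the seed point, flood-fill tolerances and function name are illustrative assumptions; in a real app the seed would come from a user tap, and you would likely refine the mask with watershed as suggested above):

```python
import cv2
import numpy as np

def preview_wall_colour(image_bgr, seed_point, new_hue, new_sat):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)

    # Flood fill from the tapped point to select the wall region.
    # The fill mask must be 2 pixels larger than the image.
    h, w = image_bgr.shape[:2]
    mask = np.zeros((h + 2, w + 2), np.uint8)
    cv2.floodFill(image_bgr.copy(), mask, seed_point, (255, 255, 255),
                  loDiff=(10, 10, 10), upDiff=(10, 10, 10),
                  flags=4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8))
    wall = mask[1:-1, 1:-1]  # trim the border back off

    H, S, V = cv2.split(hsv)
    H = np.where(wall > 0, np.uint8(new_hue), H)
    S = np.where(wall > 0, np.uint8(new_sat), S)
    # V (brightness) is untouched, so the lighting variation on the
    # wall, i.e. its shadows and highlights, remains visible.
    return cv2.cvtColor(cv2.merge([H, S, V]), cv2.COLOR_HSV2BGR)
```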
ImageMagick is a famous image processing library; you might look at that one too. It can perform numerous operations on images.
Thanks

Find corners of a field

I am working on a project in C#/Emgu CV, but an answer in any language with OpenCV should be fine.
I have following image: http://i42.tinypic.com/2z89h5g.jpg
Or it might look like this: http://i43.tinypic.com/122iwsk.jpg
I am trying to do automatic calibration, and I would like to know how to find the corners of the field. They are marked by LEDs, but I would prefer to find them via the colour tags. If needed, I can replace all the tags with tags of the same colour. (Note that the light in the room changes, so the colours might be a bit different next time.)
Edge detection might be OK too, but I am afraid I would not find the corners correctly.
Please help.
Thank you.
Edit:
Thanks aardvarkk for the advice, but I think I need to give you a little more info.
I am already able to detect and identify the robots on the field and get their position and rotation, but for that I first have to set the corners of the field manually. So I was looking for an automatic way, but I was worried I would not be able to distinguish the colour tags from the background, because the light in the room changes quite often.
As for the camera angle: the point is that the camera can be at a different (reasonable) angle each time.
I would start by searching for the colours. The LEDs won't be much help to you, as they're not much brighter than anything else in the scene. I would look for the rectangular pieces of coloured tape. Try segmenting the image based on colour; that may allow you to retrieve the corner tape pieces directly, without needing to know their exact colour in advance. After that, you can look for pairs of same-coloured blobs that are close to each other to define the corners.
Knowing what kinds of camera angles you have to handle is also very important: if you need this to work when viewing from the side, this approach certainly won't work, but if it's almost top-down it would probably be a good start. Nobody will be able to provide you with a start-to-finish solution, but this might be a good base to begin with.
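As a starting point, here is a minimal sketch of the colour-segmentation step in Python with OpenCV; Emgu CV exposes equivalent calls. The HSV bounds and minimum blob area are assumptions you would calibrate per tag colour and per lighting condition:

```python
import cv2
import numpy as np

def find_tag_centres(image_bgr, lower_hsv, upper_hsv, min_area=50):
    # Threshold one tag colour in HSV and return the blob centres.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))

    # OpenCV 4 signature; OpenCV 3 returns three values here.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centres = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue  # ignore noise blobs
        m = cv2.moments(c)
        centres.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centres
```

Running this once per tag colour and then pairing same-coloured centres that lie close together gives you the corner positions, as described above.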

OpenCv Issue of Image Subtraction?

I am trying to subtract two images using the function cvAbsDiff(img1, img2, dest);
It is working, but sometimes, when I bring my hand in front of my head or body, the hand is not clear and the background comes into the picture: the background image (head) overlays my foreground (hand).
It works correctly on plain surfaces, i.e. when the background is even, like a wall.
Please check out my image so that you can better understand my problem:
http://www.2shared.com/photo/hJghiq4b/bg_overlays_foreground.html
If you have any solution or hint, please help.
There's nothing wrong with your code. Background subtraction is not a preferred way of doing motion detection or silhouette detection, because it's not very robust. The problem arises because the background and the foreground are similar in colour in many regions, which on subtraction pushes the foreground into the background. You might try:
- optical flow for motion detection
- training a HOG classifier, if your task is just detecting a silhouette or a hand
In case you do not want to try a new approach, you can play with the threshold value (in your case 30): when you subtract two similarly coloured images, the difference is less than 30, so thresholding at 30 just blacks it out. You may also try HSV or some other colour space.
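For reference, the questioner's pipeline looks like this in modern OpenCV Python, with the threshold exposed so you can experiment with it as suggested (the function name is illustrative):

```python
import cv2

def motion_mask(frame_bgr, background_bgr, thresh=30):
    # Absolute difference against the background frame, then threshold.
    diff = cv2.absdiff(frame_bgr, background_bgr)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    # Lowering `thresh` keeps the small skin-on-skin differences that
    # a fixed value of 30 would black out.
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    return mask
```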
Putting in the relevant code would help, as would knowing what you're actually trying to achieve.
Which two images are you subtracting? I've done subtraction of subsequent images (images taken a fraction of a second apart), and the background subtraction generally yields the edges of moving objects, for example the edges of a hand, not the entire silhouette of the hand. I'm guessing you're taking the difference between the current frame and a static startup frame; it's possible that parts aren't different enough (skin against skin).
I've got some computer problems tonight; I'll test it out tomorrow (please post at least the steps you actually carry out, though) and let you know.
I'm still not sure what your ultimate goal is, although I'm guessing you want to do some gesture recognition (since you have a vector called "fingers").
As Manpreet said, your biggest problem is robustness, and that comes from the subjects having similar colours.
I reproduced your image by having my face in the static comparison image and then moving it. If I started with only the background, it was already much more robust and in any case didn't display any "overlaying".
The quick fix is to make sure you have a clean, subject-free static image.
Otherwise, you'll want a dynamic comparison image; the simplest would be comparing frame_n with frame_n-1. This will generally give you just the moving edges though, so if you want the entire silhouette you can either:
1) Use a different segmentation algorithm (this is what I recommend: background subtraction is fast, and you can use it to determine a much smaller ROI in which to search, then use a different algorithm for more robust segmentation).
2) Try a compromise between the static and dynamic comparison images, for example an average of the past 10 frames or something like that. I don't know how well this works, but it would be quite simple to implement and worth a try :).
Also, try CV_THRESH_OTSU instead of 30 for your threshold value, and see if you like that better.
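A sketch of option 2 combined with the Otsu suggestion, in Python with OpenCV (the function name and alpha value are illustrative assumptions; cv2.accumulateWeighted maintains the running average of recent frames):

```python
import cv2
import numpy as np

def running_average_masks(frames, alpha=0.1):
    # Compare each frame against a running average of recent frames
    # instead of a single static image; alpha controls how quickly
    # the background model adapts.
    avg = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if avg is None:
            avg = gray.copy()
        cv2.accumulateWeighted(gray, avg, alpha)

        diff = cv2.absdiff(gray, avg).astype(np.uint8)
        # Let Otsu pick the threshold instead of hard-coding 30.
        _, mask = cv2.threshold(diff, 0, 255,
                                cv2.THRESH_BINARY | cv2.THRESH_OTSU)
        yield mask
```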
Also, I noticed the output often flares (regions which haven't changed switch from black to white). Checking against the live stream, I'm quite certain it's because the webcam is autofocusing/adjusting its white balance, etc. If you're getting that too, turning off the autofocus should help (which, by the way, isn't done through OpenCV but depends on the camera; possibly check this: How to programmatically disable the auto-focus of a webcam?)
