I want to detect the color in an image which contains two objects: an alphanumeric character embedded in a shape (a square, rectangle, etc.). Can someone tell me how to achieve this?
PS - I have the code to identify colors from an image like this one:
Currently, I am doing an image colour filtering operation, then median filtering, then the Canny edge detection algorithm.
Then I read the pixels in a for loop and draw lines from them, but I am not getting a proper result for scanning a palm and showing the lines on a human palm.
If anybody has any ideas regarding this, please let me know.
Currently I am getting this type of result:
but I need this type of output:
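For reference, a minimal sketch of the kind of preprocessing described above (median filtering followed by Canny edge detection), assuming OpenCV from Python; the file name and thresholds are just placeholders:

```python
import cv2

# Placeholder input path; the colour filtering step mentioned above is omitted here.
img = cv2.imread("palm.jpg", cv2.IMREAD_GRAYSCALE)

# Median filtering to suppress noise before edge detection.
blurred = cv2.medianBlur(img, 5)

# Canny edge detection; the two thresholds usually need tuning per image.
edges = cv2.Canny(blurred, 50, 150)

cv2.imwrite("palm_edges.png", edges)
```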
Oh, I see your problem. You can do this with the following steps.
1. Process your hand image with the Canny edge detection algorithm; let's name the result cannyImage.
2. Now create a bitmap of cannyImage and replace its black pixels with transparent pixels. Black only, because the Canny output is filled with black, with the object's lines in white. You have now extracted an image with the palm lines in white; let's name that palmLineImage.
3. Now the main part is masking: you need to mask the palmLineImage onto the original image.
These three steps will give you your desired output.
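The steps above are library-agnostic; as a rough illustration (not using GPUImage), here is what they might look like in Python with OpenCV, with file names and thresholds as placeholders:

```python
import cv2

# Step 1: Canny edge image (lines in white, background in black).
original = cv2.imread("palm.jpg")                     # placeholder path
gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
canny_image = cv2.Canny(cv2.medianBlur(gray, 5), 50, 150)

# Step 2: make the black background transparent, keeping only the white lines.
palm_line_image = cv2.cvtColor(canny_image, cv2.COLOR_GRAY2BGRA)
palm_line_image[:, :, 3] = canny_image                # alpha = 0 where black, 255 where a line is

# Step 3: mask the line image onto the original, e.g. by painting the
# line pixels in a chosen colour over the original photo.
result = original.copy()
result[canny_image > 0] = (255, 255, 255)             # white palm lines over the photo

cv2.imwrite("palm_lines_overlay.png", result)
```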
Tools: you can use the awesome GPUImage library by BradLarson for this: https://github.com/BradLarson/GPUImage2
For separating the palm from the background, which I'm sure you'll need in the future, you can use the GrabCut algorithm.
LINK - https://github.com/naver/grabcutios
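If you end up doing this with OpenCV rather than the iOS port linked above, a minimal GrabCut sketch looks roughly like this (the rectangle and file name are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread("palm.jpg")                      # placeholder path
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Rough rectangle around the palm (x, y, width, height) - adjust per image.
rect = (50, 50, img.shape[1] - 100, img.shape[0] - 100)

cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep pixels GrabCut marked as definite or probable foreground.
fg_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype("uint8")
palm_only = cv2.bitwise_and(img, img, mask=fg_mask)
cv2.imwrite("palm_cut.png", palm_only)
```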
Also, Apple has now launched this: photos captured in Portrait Mode on iOS 12 contain an embedded person segmentation matte, which makes it easy to create creative visual effects like background replacement.
Links - https://developer.apple.com/videos/play/wwdc2019/260/ , https://developer.apple.com/videos/play/wwdc2019/225/
Looks like you need to use something like the Ramer-Douglas-Peucker algorithm to reduce the number of data points and smooth the lines. Link: https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm
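OpenCV exposes this algorithm as approxPolyDP; a small sketch of simplifying detected contours with it (the epsilon value controls how aggressively points are dropped and is only a guess here):

```python
import cv2

# Placeholder: a binary edge image such as the Canny output from earlier.
edges = cv2.imread("palm_edges.png", cv2.IMREAD_GRAYSCALE)

# Note: this is the OpenCV 4.x return signature (3.x also returns the input image).
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

simplified = []
for contour in contours:
    # Tolerance proportional to the curve length; larger values drop more points.
    epsilon = 0.01 * cv2.arcLength(contour, False)
    simplified.append(cv2.approxPolyDP(contour, epsilon, False))
```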
I've just started exploring the computer-vision field and I'm trying to create something like this (this image is what I'm trying to achieve, not what I've already achieved):
My approach is (just a logical solution, I haven't tried it yet; a rough sketch follows below):
Color detection.
First, get the pixel positions of the lines with red and green color, then add all those values to arrayRed and arrayGreen.
Segmentation
Get the base image from cache, then take all pixels with values close to arrayRed and label them as background. Do the same for arrayGreen.
Convert the color space to RGBA and set the alpha of the background-labelled pixels to 0.
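A rough OpenCV/Python sketch of this idea, using inRange instead of per-pixel arrays (the color bounds, file names, and thresholds are placeholders and would need tuning):

```python
import cv2

img = cv2.imread("base.png")  # placeholder base image

# Color detection: mark pixels close to the red and green line colors.
# These BGR bounds are guesses; tune them to the actual line colors.
red_mask = cv2.inRange(img, (0, 0, 150), (80, 80, 255))
green_mask = cv2.inRange(img, (0, 150, 0), (80, 255, 80))
background_mask = cv2.bitwise_or(red_mask, green_mask)

# Segmentation: add an alpha channel and make the labelled background transparent.
result = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
result[background_mask > 0, 3] = 0

cv2.imwrite("segmented.png", result)
```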
My questions:
Am I on the right path?
Is this possible to achieve with OpenCV library?
If my approach is wrong, what is an efficient and correct approach (in pseudocode or Python) to achieve the goal?
How can I change the hue of a UIImage programmatically, but only in a few parts? I have followed this link:
How to programmatically change the hue of UIImage?
and used the same code in my application. It works fine, but the hue of the complete image gets changed. For my requirement, I want to change only the tree's color in the above snap. How can I do that?
This is a specific case of a more general problem of using masking. I assume you have some way of knowing what pixels are in the "tree" part, and which ones are not. (If not, that's a whole other question/problem).
If so, first draw the original to the result context, then create a mask (see here: http://mobiledevelopertips.com/cocoa/how-to-mask-an-image.html), and draw the changed-hue version with the mask representing the tree active.
I recommend you take a look at the CoreImage API and the CIColorCube or CIColorMap filter in particular. Now how to define the color cube or color map is where the real magic lies. You'll need to transform tree tones (browns, etc), though this will obviously transform all browns, not just your tree.
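Both suggestions above use the iOS APIs; purely to illustrate the mask-then-recolor idea in a runnable form, here is a sketch with OpenCV in Python (the file names, the hue shift, and especially the tree mask are placeholder assumptions; producing that mask is the separate problem noted above):

```python
import cv2
import numpy as np

img = cv2.imread("photo.jpg")                                   # placeholder path
tree_mask = cv2.imread("tree_mask.png", cv2.IMREAD_GRAYSCALE)   # hypothetical mask, white where the tree is

# Shift the hue of the whole image first...
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
new_hue = (hsv[:, :, 0].astype(np.int32) + 30) % 180            # OpenCV hue range is 0-179
hsv[:, :, 0] = new_hue.astype(np.uint8)
shifted = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

# ...then composite: keep the original everywhere except under the mask.
result = img.copy()
result[tree_mask > 0] = shifted[tree_mask > 0]

cv2.imwrite("photo_recolored.png", result)
```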
I have a project to customize clothes, let's say a t-shirt, that has the following features:
change colors.
add a few lines of text (<= 4) and change the font from a list.
add image or photo to the t-shirt.
rotate the t-shirt to customize the back side.
rotate the image and zoom in/out.
save the result as a project locally and send it to a web service (I think I'll use NSDictionary/JSON).
save as an image.
So my question is:
Should I use multiple images to simulate color changes? Or should I use QuartzCore (I am not an expert in QuartzCore, but if I have to use it I'll learn)? Or is there a better approach for this?
Thank you.
The simple way to do this is to render the T-Shirt image into a CGContext, then walk the rows and columns and change pixels showing a "strong" primary color to the desired tint. You would take a photo of a person wearing a bright red (or other primary color) t-shirt, then in your code only change pixels where the red color has a high luminance and saturation (i.e. the "r" value is over some threshold and the b and g components are low).
The modified image is then going to look a bit flat, as when you change the pixels to one value (the new tint) there will be no variation in luminance. To make this more realistic, you would want to make each pixel have the same luminance as it did before. You can do this by converting back and forth from RGB to a color space like HCL. Apple has a great doc on color (in the Mac section) that explains color spaces (google 'site:developer.apple.com "Color Spaces"')
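The answer above is in terms of Core Graphics; as a language-neutral illustration of the same idea (threshold on strongly red pixels, then re-tint while keeping each pixel's lightness), here is a sketch using OpenCV's HLS color space in Python, with all thresholds and file names as guesses:

```python
import cv2

img = cv2.imread("red_tshirt.jpg")      # placeholder: photo of a bright red shirt

# Find "strong red" pixels: high red, low green and blue (thresholds are guesses).
b, g, r = cv2.split(img)
shirt_mask = (r > 150) & (g < 100) & (b < 100)

# Work in HLS so hue/saturation can change while each pixel keeps its lightness.
hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
target_hue, target_sat = 120, 200       # e.g. re-tint the shirt towards blue
hls[shirt_mask, 0] = target_hue         # channel 0 = hue (0-179 in OpenCV)
hls[shirt_mask, 2] = target_sat         # channel 2 = saturation; channel 1 (lightness) untouched

result = cv2.cvtColor(hls, cv2.COLOR_HLS2BGR)
cv2.imwrite("recolored_tshirt.jpg", result)
```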
To reach your goal, you will have to tackle these technologies:
create a CGContext and render an image into it using Quartz
figure out how to read each pixel (pixels can have alpha and different orderings)
figure out a good way to identify the proper pixels (test by making these black or white)
for each pixel you want to change, convert the RGB to HCL to get its luminance
replace the pixel with a pixel of a different color and hue but the same luminance
use the CGContext to make a new image
If all this seems too difficult, then you'll have to have different images for every color you want.
I have two images that I want to display on top of each other: one is a single-channel image and the other is an RGB image, but with most of its area transparent.
These two images are generated in different functions. I know that to display them on top of each other I can use the same window name when calling cvShowImage(), but this doesn't work when they are drawn from different functions. When trying this, I used cvCvtColor() to convert the binary image from single channel to RGB and then displayed the second image from another function, but that didn't work. Both images have the same dimensions, depth, and number of channels (after conversion).
I want to avoid passing one image into the second function and drawing them there, so I'm looking for a quick and dirty trick to display these two images overlapped.
Thank you
I don't think that's possible. You'll have to create a new image or modify an existing one. Here's an article that shows how to do this: Transparent image overlays in OpenCV
There is no way to "overlay" images. cvShowImage() displays a single image from memory. You'll need to blend/combine them together. There are several ways to do this.
You can copy one into one or two channels of the other; you can use logical operations like AND, OR, or XOR; or you can use arithmetic operations like Add, Multiply, and MultiplyScale (these operations will saturate values larger than 255). All of these can also be done with an optional mask image, such as your blob image.
Naturally, you may want to do this into a third buffer so as not to overwrite your originals.
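The question uses the old C API; just to make the blend/combine idea concrete, here is a sketch with the modern Python bindings (file names are placeholders, and both images must already have the same size and type):

```python
import cv2

base = cv2.imread("blob_image.png")      # the single-channel image, loaded as 3 channels
overlay = cv2.imread("lines_image.png")  # the RGB drawing, black where it should be "transparent"

# Option 1: arithmetic blend of the two images into a third buffer.
blended = cv2.addWeighted(base, 0.7, overlay, 0.3, 0)

# Option 2: copy the overlay onto the base only where the overlay has content,
# so neither original is modified.
overlay_gray = cv2.cvtColor(overlay, cv2.COLOR_BGR2GRAY)
combined = base.copy()
combined[overlay_gray > 0] = overlay[overlay_gray > 0]

cv2.imshow("overlay result", combined)
cv2.waitKey(0)
```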
Apparently it can now be done using OpenCV version 2.1:
http://opencv.willowgarage.com/documentation/cpp/highgui_qt_new_functions.html#cv-displayoverlay