I have a project to customize clothes, let's say a t-shirt, with the following features:
change colors.
add a few lines of text (<= 4) and change the font from a list.
add an image or photo to the t-shirt.
rotate the t-shirt to customize the back side.
rotate the image and zoom in/out.
save the result as a project locally and send it to a web service (I'm thinking of using NSDictionary/JSON).
save as an image.
So my question is:
Should I use multiple images to simulate color changes, or should I use QuartzCore (I am not an expert in QuartzCore, but if I have to use it I'll learn)? Or is there a better approach for this?
Thank you.
The simple way to do this is to render the T-Shirt image into a CGContext, then walk the rows and columns and change pixels showing a "strong" primary color to the desired tint. You would take a photo of a person wearing a bright red (or other primary color) t-shirt, then in your code only change pixels where the red color has a high luminance and saturation (i.e. the "r" value is over some threshold and the b and g components are low).
The modified image is then going to look a bit flat, because when you change the pixels to one value (the new tint) there will be no variation in luminance. To make this more realistic, you would want each pixel to keep the same luminance it had before. You can do this by converting back and forth between RGB and a color space like HCL. Apple has a great doc on color (in the Mac section) that explains color spaces (google 'site:developer.apple.com "Color Spaces"').
To reach your goal, you will have to tackle these technologies:
create a CGContext and render an image into it using Quartz
figure out how to read each pixel (pixels can have alpha and different orderings)
figure out a good way to identify the proper pixels (test by making these black or white)
for each pixel you want to change, convert the RGB to HCL to get its luminance
replace the pixel with a pixel of a different color and hue but the same luminance
use the CGContext to make a new image
If all this seems too difficult, then you'll have to have a different image for every color you want.
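For illustration, here is a minimal sketch of that pixel walk using UIKit/Core Graphics. The function name, the "strong red" thresholds, and the use of HSB via UIColor (instead of HCL) are my assumptions; a real implementation would do the color math inline rather than allocating a UIColor per pixel.

#import <UIKit/UIKit.h>

// Sketch: redraw the shirt photo into an RGBA bitmap, then retint every
// "strongly red" pixel while keeping its original brightness.
UIImage *RetintShirtImage(UIImage *source, UIColor *tint) {
    CGImageRef cgImage = source.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                             rgb, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    uint8_t *pixels = (uint8_t *)CGBitmapContextGetData(ctx);

    CGFloat tintHue, tintSat, unusedBrightness, alpha;
    [tint getHue:&tintHue saturation:&tintSat brightness:&unusedBrightness alpha:&alpha];

    for (size_t i = 0; i < width * height; i++) {
        uint8_t *p = pixels + i * 4;                    // R, G, B, A for this context
        CGFloat r = p[0] / 255.0, g = p[1] / 255.0, b = p[2] / 255.0;

        // "Strong" red: red is high while green and blue are low (assumed thresholds).
        if (r > 0.4 && g < 0.3 && b < 0.3) {
            CGFloat h, s, brightness;
            [[UIColor colorWithRed:r green:g blue:b alpha:1.0]
                getHue:&h saturation:&s brightness:&brightness alpha:&alpha];
            // New pixel: hue/saturation from the tint, brightness from the original pixel.
            CGFloat nr, ng, nb;
            [[UIColor colorWithHue:tintHue saturation:tintSat brightness:brightness alpha:1.0]
                getRed:&nr green:&ng blue:&nb alpha:&alpha];
            p[0] = (uint8_t)(nr * 255.0);
            p[1] = (uint8_t)(ng * 255.0);
            p[2] = (uint8_t)(nb * 255.0);
        }
    }

    CGImageRef resultRef = CGBitmapContextCreateImage(ctx);
    UIImage *result = [UIImage imageWithCGImage:resultRef];
    CGImageRelease(resultRef);
    CGContextRelease(ctx);
    CGColorSpaceRelease(rgb);
    return result;
}

The same loop is where you would test your pixel-identification step first, e.g. by painting the matched pixels black or white before doing the real tinting.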
Related
I just started exploring the computer-vision field, and I'm trying to create something like this (this image is what I'm trying to achieve, not what I've already achieved).
My approach is (just a logical solution, I haven't tried it yet):
Color detection.
First, get the pixel positions of the lines with red and green color, then add all those values to arrayRed and arrayGreen.
Segmentation
Get the base image from the cache, then find all pixels with values close to those in arrayRed and label them as background. Do the same for arrayGreen.
Convert the color space to RGBA and set the alpha of the background-labeled pixels to 0.
My questions:
Am I on the right path?
Is this possible to achieve with OpenCV library?
If my approach is wrong, what is an efficient and actually correct approach (in pseudo-code or Python) to achieve the goal?
I am struggling with the fuzzy selection tool in GIMP. I'm trying to make the dark part of the picture black while retaining the dark parts below the light border. Any advice on how to achieve that?
Not with the fuzzy selection or the color selection :)
Very often the best way is to use the image itself (or a copy). If I understand you, you want to select the top dark part (which is many shades of grey), but not the bottom one. In practice we want an image where the selected parts are white and the unselected ones are black, so:
duplicate the layer
Colors>Desaturate (if the image has colors)
Colors>Invert
Filters>Blur>Gaussian Blur (around 12px in the image you show; in GIMP 2.10, the Median Blur can also give interesting results)
Use Threshold to make the white very white and the black very black. Use the threshold value that keeps a continuous black line across the picture.
Bucket-fill the lower white part with black (in GIMP 2.10.10 you can use the new "Fill by line art detection" option of the bucket fill).
Open the Channels list, right-click on any of the R, G, or B channels, and choose Channel to Selection.
Back to the Layers list, hide or delete the work layer, and select the initial layer to continue.
Then use the selection with the Curves tool to set the black and white points on the top part:
I'm developing an OCR app that reads digits and copies them to the clipboard automatically instead of requiring manual typing...
I'm using (TesseractOCR)... but before recognizing, while manipulating the image, I'm improving it for better recognition.
I used the ImageMagick library, and the filtered image looks like this:
But the output of the recognition is:
446929231986789 //The first and last numbers (4 & 9) were added
So I want to detect only the white box, in order to crop to it...
I know that OpenCV would do the trick, but unfortunately it's a C++ library and I don't speak that language :(
And I know that iOS 8 has a new CIDetector of type rectangle, but I don't want to neglect previous versions of iOS.
MY IMAGEMAGICK FILTER CODE:
// Starting
MagickWandGenesis();
magick_wand = NewMagickWand();
// Reading the image
NSString *tempFilePath = //Path of image
MagickReadImage(magick_wand,
                [tempFilePath cStringUsingEncoding:NSASCIIStringEncoding]);
// Monochrome image: quantize down to 2 colors in the gray colorspace
MagickQuantizeImage(magick_wand, 2, GRAYColorspace, 1, MagickFalse, MagickFalse);
// Write to temporary file
MagickWriteImage(magick_wand,
                 [tempFilePath cStringUsingEncoding:NSASCIIStringEncoding]);
DestroyMagickWand(magick_wand); // Free up memory
// Load UIImage from temporary file
UIImage *imgObj = [UIImage imageWithContentsOfFile:tempFilePath];
// Display on device
Many thanks ..
I would go with a simple pixel search. Since you want to crop the white area with the digits, all you need to do is find the left, right, top and bottom borders of the rectangle. Provided that the rectangle is axis-aligned and has enough white space around the digits, you should look for the first row or column that has a continuous run of white pixels. For example, to find the left border (which I guess would be around the 78th column), start searching from column 0 and go right. For each column, count continuous white pixels (a single for-loop from top to bottom). By continuous I mean a series that is not interrupted by a black one. If the count reaches, say, 80% of the height, you have your left border. Do the rest accordingly, starting from the right side, top or bottom and moving in the opposite direction. I guess there are some fancy procedures to detect the rectangle, but your input has quite distinguishable characteristics, so instead of linking to some lib I suggest DIY. To speed things up you could step through rows or columns by 2 or more, or scale your image down and threshold it to 2 colors.
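A rough sketch of that left-border scan, assuming the filtered image has already been thresholded and copied into an 8-bit grayscale buffer (the helper name and the 200 "white" cutoff are placeholders; the 80% figure is the one mentioned above):

// Scan columns left-to-right; the left border is the first column whose
// longest uninterrupted run of white pixels covers ~80% of the image height.
static NSInteger FindLeftBorder(const uint8_t *pixels, NSInteger width, NSInteger height) {
    for (NSInteger x = 0; x < width; x++) {
        NSInteger longestRun = 0, currentRun = 0;
        for (NSInteger y = 0; y < height; y++) {
            if (pixels[y * width + x] > 200) {       // pixel is "white enough"
                currentRun++;
                if (currentRun > longestRun) longestRun = currentRun;
            } else {
                currentRun = 0;                      // run interrupted by a dark pixel
            }
        }
        if (longestRun >= (height * 8) / 10) {       // continuous for ~80% of the height
            return x;
        }
    }
    return -1;                                       // no border found
}

Repeat the same idea from the right, top, and bottom, then crop with CGImageCreateWithImageInRect using the four borders before handing the result to Tesseract.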
There is also one more way to do this. Flood-fill with white starting from one of the corners.
Basically, I want to apply a chroma key filter to the iOS live camera feed, but I want the user to pick the color that will be replaced by another color.
I found some examples using a green screen, but I don't know how to replace a color dynamically instead of just green.
Any idea how I can achieve that with the best performance?
You've previously asked about my GPUImage framework, so I assume that you're familiar with it. Within that framework are two filters, a GPUImageChromaKeyFilter and a GPUImageChromaKeyBlendFilter. Both will key off of whatever color you specify via the -setColorToReplaceRed:green:blue: method, with a threshold set using the thresholdSensitivity property.
The former filter merely turns areas matching the color within the threshold to an alpha of 0, while the latter actually blends in another image or video source based on the areas of the input image or video that match. The FilterShowcase example application shows how to do this for green, but you can set the keying color to anything you want.
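A minimal sketch of wiring that up for the live camera, assuming GPUImage's standard camera and view classes; pickedRed/pickedGreen/pickedBlue and background.jpg are placeholders for whatever color and backdrop the user selects:

// Live camera -> chroma key blend -> on-screen view.
GPUImageVideoCamera *camera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
camera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageChromaKeyBlendFilter *chromaKey = [[GPUImageChromaKeyBlendFilter alloc] init];
// Key off whatever color the user picked, not just green.
[chromaKey setColorToReplaceRed:pickedRed green:pickedGreen blue:pickedBlue];
chromaKey.thresholdSensitivity = 0.4; // how close a pixel must be to the keyed color

// The image (or other video source) that shows through the keyed-out areas.
GPUImagePicture *background =
    [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"background.jpg"]];

[camera addTarget:chromaKey];
[background addTarget:chromaKey];
[background processImage];

GPUImageView *filteredView = (GPUImageView *)self.view; // assumes the controller's view is a GPUImageView
[chromaKey addTarget:filteredView];
[camera startCameraCapture];

To re-key on a new color at runtime, call -setColorToReplaceRed:green:blue: again with the user's latest pick.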
Please be kind, I'm new to this....
I have an application that I'm developing where I need to take a PNG image, which has a transparency layer, and treat another color (I'm thinking of using RGB(1, 1, 1), since it's so close to pure black that I can hard-code it) as a separate transparency layer. The reason for this is that I have a background image sitting behind the PNG image that I would like to keep visible as my sprite gets filled (by adding a progress bar to the sprite), and I only want the portions of the sprite that aren't the given color to reflect the color fill of the progress bar. This way, I can avoid dealing with vector computations for the outline of the image within the sprite, flood the area outside of the discernible image with my new "transparent" color, and be on my merry way.
I've tried using shaders, but they seem to be less than helpful.
There is no way that I know of, because OpenGL doesn't let you do it directly. You will either have to modify the pixel data manually or write a shader (which you have already done).
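If you take the "modify the pixel data manually" route, a rough sketch would be to redraw the PNG into an RGBA bitmap and zero the alpha of every pixel matching the hard-coded RGB(1, 1, 1) key color before the texture is created (the helper name and the exact-match test are assumptions):

// Sketch: make every RGB(1,1,1) pixel fully transparent, keeping the
// PNG's existing alpha everywhere else.
UIImage *ImageByKeyingOutNearBlack(UIImage *source) {
    CGImageRef cgImage = source.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                             rgb, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGContextClearRect(ctx, CGRectMake(0, 0, width, height)); // keep existing transparency
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    uint8_t *p = (uint8_t *)CGBitmapContextGetData(ctx);
    for (size_t i = 0; i < width * height; i++, p += 4) {
        if (p[0] == 1 && p[1] == 1 && p[2] == 1) {   // the hard-coded key color
            p[0] = p[1] = p[2] = p[3] = 0;           // fully transparent (premultiplied)
        }
    }

    CGImageRef resultRef = CGBitmapContextCreateImage(ctx);
    UIImage *result = [UIImage imageWithCGImage:resultRef];
    CGImageRelease(resultRef);
    CGContextRelease(ctx);
    CGColorSpaceRelease(rgb);
    return result;
}

The resulting image can then be used to build the sprite's texture, and the background behind the sprite will show through the keyed-out area.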