OCR: Image to text? - iOS

Before marking this as a copy or duplicate question, please read the whole question first.
What I am able to do at present is as follows:
Get the image and crop out the desired part for OCR.
Process the image using Tesseract and Leptonica.
When the supplied document is cropped into chunks, i.e. one character per image, it gives 96% accuracy.
If I don't do that, and the document background is white with black text, it gives almost the same accuracy.
For example, if the input is this photo:
[photo: black text on a white background]
What I want is to be able to get the same accuracy for the number plate photo, without generating blocks.
The code I use to initialize Tesseract and extract text from an image is as below.
For the Tesseract init:
in .h file
tesseract::TessBaseAPI *tesseract;
uint32_t *pixels;
in .m file
tesseract = new tesseract::TessBaseAPI();
tesseract->Init([dataPath cStringUsingEncoding:NSUTF8StringEncoding], "eng");
tesseract->SetPageSegMode(tesseract::PSM_SINGLE_LINE);
tesseract->SetVariable("tessedit_char_whitelist", "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ");
tesseract->SetVariable("language_model_penalty_non_freq_dict_word", "1");
tesseract->SetVariable("language_model_penalty_non_dict_word ", "1");
tesseract->SetVariable("tessedit_flip_0O", "1");
tesseract->SetVariable("tessedit_single_match", "0");
tesseract->SetVariable("textord_noise_normratio", "5");
tesseract->SetVariable("matcher_avg_noise_size", "22");
tesseract->SetVariable("image_default_resolution", "450");
tesseract->SetVariable("editor_image_text_color", "40");
tesseract->SetVariable("textord_projection_scale", "0.25");
tesseract->SetVariable("tessedit_minimal_rejection", "1");
tesseract->SetVariable("tessedit_zero_kelvin_rejection", "1");
For getting text from the image:
- (void)processOcrAt:(UIImage *)image
{
[self setTesseractImage:image];
tesseract->Recognize(NULL);
char* utf8Text = tesseract->GetUTF8Text();
int conf = tesseract->MeanTextConf();
NSArray *arr = [[NSArray alloc] initWithObjects:[NSString stringWithUTF8String:utf8Text], [NSString stringWithFormat:@"%d%@", conf, @"%"], nil];
[self performSelectorOnMainThread:@selector(ocrProcessingFinished:)
withObject:arr
waitUntilDone:YES];
free(utf8Text);
}
- (void)ocrProcessingFinished:(NSArray *)result
{
UIAlertView *alt = [[UIAlertView alloc] initWithTitle:@"Data" message:[result objectAtIndex:0] delegate:self cancelButtonTitle:nil otherButtonTitles:@"OK", nil];
[alt show];
}
But I don't get proper output for the number plate image: it is either null or some garbage data.
If I use the first kind of image, i.e. white background with black text, then the output is 89 to 95% accurate.
Please help me out.
Any suggestion will be appreciated.
Update
Thanks to @jcesar for providing the link, and also to @konstantin pribluda for the valuable information and guidance.
I am able to convert images into a proper black and white form (almost), so recognition is better for all images :)
I still need help with proper binarization of the images. Any idea will be appreciated.

Hi all, thanks for your replies. From all of those replies I have drawn the following conclusion:
I need to get the single cropped image block containing the number plate.
From that plate, find out the portion with the number using the data obtained via the method provided here.
Convert the image data to almost black and white using the RGB data found through the above method.
Convert that data back to an image using the method provided here.
The above four steps are combined into one method, as below:
-(void)getRGBAsFromImage:(UIImage*)image
{
// First get the image into your data buffer
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
// Use the CGImage's pixel dimensions for the count; image.size is in points and differs on retina screens
NSInteger count = (NSInteger)(width * height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char*) calloc(height * width * 4, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
// Now your rawData contains the image data in the RGBA8888 pixel format.
int byteIndex = 0;
for (int ii = 0 ; ii < count ; ++ii)
{
CGFloat red = (rawData[byteIndex] * 1.0) ;
CGFloat green = (rawData[byteIndex + 1] * 1.0) ;
CGFloat blue = (rawData[byteIndex + 2] * 1.0) ;
CGFloat alpha = (rawData[byteIndex + 3] * 1.0) ;
NSLog(#"red %f \t green %f \t blue %f \t alpha %f rawData [%d] %d",red,green,blue,alpha,ii,rawData[ii]);
if(red > Required_Value_of_red || green > Required_Value_of_green || blue > Required_Value_of_blue)//all values are between 0 to 255
{
red = 255.0;
green = 255.0;
blue = 255.0;
alpha = 255.0;
// all value set to 255 to get white background.
}
rawData[byteIndex] = red;
rawData[byteIndex + 1] = green;
rawData[byteIndex + 2] = blue;
rawData[byteIndex + 3] = alpha;
byteIndex += 4;
}
colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext = CGBitmapContextCreate(
rawData,
width,
height,
8, // bitsPerComponent
4*width, // bytesPerRow
colorSpace,
kCGImageAlphaNoneSkipLast);
CFRelease(colorSpace);
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
CGContextRelease(bitmapContext);
UIImage *img = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
//use img for the further OCR steps
free(rawData);
}
Note:
The only drawbacks of this method are the time it takes and choosing the RGB threshold values that decide which pixels become white and which become black.
UPDATE :
CGImageRef imageRef = [plate CGImage];
CIContext *context = [CIContext contextWithOptions:nil]; // 1
CIImage *ciImage = [CIImage imageWithCGImage:imageRef]; // 2
CIFilter *filter = [CIFilter filterWithName:#"CIColorMonochrome" keysAndValues:#"inputImage", ciImage, #"inputColor", [CIColor colorWithRed:1.f green:1.f blue:1.f alpha:1.0f], #"inputIntensity", [NSNumber numberWithFloat:1.f], nil]; // 3
CIImage *ciResult = [filter valueForKey:kCIOutputImageKey]; // 4
CGImageRef cgImage = [context createCGImage:ciResult fromRect:[ciResult extent]];
UIImage *img = [UIImage imageWithCGImage:cgImage];
Just replace the above method's (getRGBAsFromImage:) code with this one; the result is the same, but it takes only 0.1 to 0.3 seconds.

I was able to achieve near-instant results using the demo photo provided, and it generated the correct letters.
I pre-processed the image using GPUImage:
// Pre-processing for OCR
GPUImageLuminanceThresholdFilter * adaptiveThreshold = [[GPUImageLuminanceThresholdFilter alloc] init];
[adaptiveThreshold setThreshold:0.3f];
[self setProcessedImage:[adaptiveThreshold imageByFilteringImage:_image]];
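If a single global luminance threshold struggles on unevenly lit plates, GPUImage also ships a locally adaptive variant; a sketch, assuming the same GPUImage version and its GPUImageAdaptiveThresholdFilter:
GPUImageAdaptiveThresholdFilter *localThreshold = [[GPUImageAdaptiveThresholdFilter alloc] init];
localThreshold.blurRadiusInPixels = 4.0; // size of the neighbourhood each pixel is compared against
[self setProcessedImage:[localThreshold imageByFilteringImage:_image]];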
And then sent the processed image to Tesseract:
- (NSArray *)processOcrAt:(UIImage *)image {
[self setTesseractImage:image];
_tesseract->Recognize(NULL);
char* utf8Text = _tesseract->GetUTF8Text();
NSString *text = [NSString stringWithUTF8String:utf8Text];
delete[] utf8Text; // GetUTF8Text() allocates the buffer; release it before returning
return [self ocrProcessingFinished:text];
}
- (NSArray *)ocrProcessingFinished:(NSString *)result {
// Strip extra characters, whitespace/newlines
NSString * results_noNewLine = [result stringByReplacingOccurrencesOfString:@"\n" withString:@""];
NSArray * results_noWhitespace = [results_noNewLine componentsSeparatedByCharactersInSet:[NSCharacterSet whitespaceCharacterSet]];
NSString * results_final = [results_noWhitespace componentsJoinedByString:@""];
results_final = [results_final lowercaseString];
// Separate out individual letters
NSMutableArray * letters = [[NSMutableArray alloc] initWithCapacity:results_final.length];
for (int i = 0; i < [results_final length]; i++) {
NSString * newTile = [results_final substringWithRange:NSMakeRange(i, 1)];
[letters addObject:newTile];
}
return [NSArray arrayWithArray:letters];
}
- (void)setTesseractImage:(UIImage *)image {
free(_pixels);
CGSize size = [image size];
int width = size.width;
int height = size.height;
if (width <= 0 || height <= 0)
return;
// the pixels will be painted to this array
_pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));
// clear the pixels so any transparency is preserved
memset(_pixels, 0, width * height * sizeof(uint32_t));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// create a context with RGBA pixels
CGContextRef context = CGBitmapContextCreate(_pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
// paint the bitmap to our context which will fill in the pixels array
CGContextDrawImage(context, CGRectMake(0, 0, width, height), [image CGImage]);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
_tesseract->SetImage((const unsigned char *) _pixels, width, height, sizeof(uint32_t), width * sizeof(uint32_t));
}
This left ' marks in place of the - characters, but those are easy to strip out as well. Depending on the image set you have, you may need to fine-tune it a bit, but it should get you moving in the right direction.
Let me know if you have problems using it; it's from a project I'm using, and I didn't want to strip everything out or create a project from scratch for it.
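For the stray marks, something like this is enough (a minimal sketch; the exact character set to strip is an assumption based on your images, applied to the results_final string from the code above):
NSCharacterSet *junk = [NSCharacterSet characterSetWithCharactersInString:@"'`"];
NSString *cleaned = [[results_final componentsSeparatedByCharactersInSet:junk] componentsJoinedByString:@""];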

I daresay that Tesseract will be overkill for your purpose. You do not need dictionary matching to improve recognition quality (you do not have such a dictionary, but you may have a means to compute a checksum on the license number), and you have a font optimised for OCR.
And best of all, you have markers (the orange and blue colour areas nearby are good) to find the region in the image.
In my OCR apps I use human-assisted area-of-interest retrieval (just an aiming-help overlay over the camera preview). Usually one uses something like a Haar cascade to locate interesting features such as faces. You may also calculate the centroid of the orange area, or just the bounding box of the orange pixels, simply by traversing the whole image and storing the leftmost / rightmost / topmost / bottommost pixels of a suitable colour, as sketched below.
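A minimal sketch of that traversal, assuming the RGBA8888 rawData buffer from the earlier snippets; the isOrange test is a hypothetical threshold you would tune to your plates:
NSInteger minX = width, minY = height, maxX = -1, maxY = -1;
for (NSInteger y = 0; y < height; y++) {
    for (NSInteger x = 0; x < width; x++) {
        const uint8_t *p = rawData + (y * width + x) * 4;
        BOOL isOrange = p[0] > 200 && p[1] > 80 && p[1] < 180 && p[2] < 80; // rough orange test
        if (!isOrange) continue;
        if (x < minX) minX = x;
        if (x > maxX) maxX = x;
        if (y < minY) minY = y;
        if (y > maxY) maxY = y;
    }
}
if (maxX >= minX) {
    CGRect plateRegion = CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
    // crop to plateRegion before handing the image to the recogniser
}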
As for the recognition itself, I would recommend using invariant moments (I am not sure whether they are implemented in Tesseract, but you can easily port them from our Java project: http://sourceforge.net/projects/javaocr/ ).
I tried my demo app on a monitor image and it recognised the digits on the spot (it is not trained for characters).
As for binarisation (separating black from white), I would recommend the Sauvola method, as it gives the best tolerance to luminance changes (also implemented in our OCR project). A naive sketch follows below.
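For reference, a naive, unoptimised sketch of Sauvola thresholding over an 8-bit grayscale buffer; the window size and k are assumptions (the usual formula is T = mean * (1 + k * (stddev / R - 1)) with R = 128), and production code would use integral images to make this fast:
#include <math.h>
static void sauvolaBinarize(uint8_t *gray, NSInteger width, NSInteger height) {
    const NSInteger w = 7;            // half window size
    const double k = 0.34, R = 128.0; // typical parameter choices
    uint8_t *out = (uint8_t *) malloc(width * height);
    for (NSInteger y = 0; y < height; y++) {
        for (NSInteger x = 0; x < width; x++) {
            double sum = 0, sumSq = 0;
            NSInteger n = 0;
            // local mean and standard deviation over the window around (x, y)
            for (NSInteger j = MAX(0, y - w); j <= MIN(height - 1, y + w); j++) {
                for (NSInteger i = MAX(0, x - w); i <= MIN(width - 1, x + w); i++) {
                    double v = gray[j * width + i];
                    sum += v; sumSq += v * v; n++;
                }
            }
            double mean = sum / n;
            double stddev = sqrt(sumSq / n - mean * mean);
            double t = mean * (1.0 + k * (stddev / R - 1.0));
            out[y * width + x] = gray[y * width + x] > t ? 255 : 0;
        }
    }
    memcpy(gray, out, width * height);
    free(out);
}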

Related

Fastest and most efficient way to find out non-transparent pixel of UIImage on iOS

I want to ask about an image processing mechanism. I am developing an iOS app which uses OpenGL ES for hand-writing on a view. I have a save function that converts the view, with all its drawing, to an image and saves it to the Photo Library.
I can properly convert the content of the view to an image using the code below.
(Note: the following code is not the problem. Its purpose is just to convert the content of the view to an image, and it works perfectly; I show it here for reference.)
// Get the size of the backing CAEAGLLayer
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != &UIGraphicsBeginImageContextWithOptions) {
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = self.contentScaleFactor;
widthInPoints = width / scale;
heightInPoints = height / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
} else {
// On iOS prior to 4, fall back to UIGraphicsBeginImageContext
widthInPoints = width;
heightInPoints = height;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
The problem is I want to determine whether the view has any drawing on it or not. If there is no drawing, saving a blank image is useless, so my idea is to check whether the image has any non-transparent pixel.
My solution:
Convert my drawing view to an image (its pixels have an alpha channel).
Check if the image has any pixel with a non-zero alpha channel.
If yes, the user actually drew something -> save.
If no, the user drew nothing, or erased everything -> don't save.
I know the brute-force algorithm that goes through all pixels, but it seems like the worst way, to be used only if there is no more efficient approach.
So, is there any efficient way to check this?
I found that the brute-force algorithm is not as slow as I thought. It takes less than about 200 milliseconds to go through all the pixel data of a screen-sized image on both an iPad Pro and an iPad mini 2,
so I think using brute force is acceptable.
Following is the code to check:
CGImageRef imageRef = [selfImage CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSUInteger total = width * height * 4;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char*) calloc(total, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef tempContext = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(tempContext, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(tempContext);
// Now your rawData contains the image data in the RGBA8888 pixel format
BOOL empty = YES;
for (int i = 0 ; i < total ;) {
CGFloat alpha = ((CGFloat) rawData[i + 3] ) / 255.0f;
// CGFloat red = ((CGFloat) rawData[i] ) / alpha;
// CGFloat green = ((CGFloat) rawData[i + 1] ) / alpha;
// CGFloat blue = ((CGFloat) rawData[i + 2] ) / alpha;
i += bytesPerPixel;
if (alpha != 0) {
empty = NO;
break;
}
}
free(rawData); // don't leak the pixel buffer
if (empty) {
//Do something
} else {
//Do other thing
}
If there is any improvement or another efficient algorithm, please post it here; I would really appreciate it.
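One lighter-weight variant, as a sketch under the assumption that the image actually carries an alpha channel: render only the alpha channel with kCGImageAlphaOnly, one byte per pixel instead of four, so the scan touches a quarter of the memory.
CGImageRef imageRef = [selfImage CGImage];
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
uint8_t *alphaData = (uint8_t *) calloc(width * height, sizeof(uint8_t));
// alpha-only context: 8 bits per component, 1 byte per pixel, no color space needed
CGContextRef alphaContext = CGBitmapContextCreate(alphaData, width, height, 8, width, NULL, (CGBitmapInfo) kCGImageAlphaOnly);
CGContextDrawImage(alphaContext, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(alphaContext);
BOOL empty = YES;
for (size_t i = 0; i < width * height; i++) {
    if (alphaData[i] != 0) { empty = NO; break; }
}
free(alphaData);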

How to get the pixel color values of custom image inside imageview ios?

I know similar questions have been asked before.
What I want is to get the RGB pixel value of the image inside the image view, so it can be any image whose pixel values we want to get.
This is what I have used to get the point where the image was tapped:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
if ([touch tapCount] == 2) {
//self.imageView.image = nil;
return;
}
CGPoint lastPoint = [touch locationInView:self.imageViewGallery];
NSLog(#"%f",lastPoint.x);
NSLog(#"%f",lastPoint.y);
}
And to read the image data I have pasted this code:
+ (NSArray*)getRGBAsFromImage:(UIImage *)image atX:(int)xx andY:(int)yy count:(int)count
{
NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];
/** It requires to get the image into your data buffer, so how can we get the `ImageViewImage` **/
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char*) calloc(height * width * 4, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
// Now your rawData contains the image data in the RGBA8888 pixel format.
int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
for (int ii = 0 ; ii < count ; ++ii)
{
CGFloat red = (rawData[byteIndex] * 1.0) / 255.0;
CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
CGFloat blue = (rawData[byteIndex + 2] * 1.0) / 255.0;
CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
byteIndex += 4;
UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
[result addObject:acolor];
}
free(rawData);
return result;
}
I am new to iOS, so please explain; suggesting some tutorials would also be great.
Use this example article. It talks about a color picker using images. You can understand the required info very easily from it. It helped me in my app. Let me know if any help/suggestion is needed .. :)
EDIT:
update your getPixelColorAtLocation: like this. It will give you the correct color then.
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
UIColor* color = nil;
CGImageRef inImage = self.image.CGImage;
// Create off screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpa, Red, Green, Blue
CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
if (cgctx == NULL) { return nil; /* error */ }
size_t w = CGImageGetWidth(inImage);
size_t h = CGImageGetHeight(inImage);
CGRect rect = {{0,0},{w,h}};
/** Extra Added code for Resized Images ****/
float xscale = w / self.frame.size.width;
float yscale = h / self.frame.size.height;
point.x = point.x * xscale;
point.y = point.y * yscale;
/** ****************************************/
/** Extra Code Added for Resolution ***********/
CGFloat x = 1.0;
if ([self.image respondsToSelector:@selector(scale)]) x = self.image.scale;
/*********************************************/
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(cgctx, rect, inImage);
// Now we can get a pointer to the image data associated with the bitmap
// context.
unsigned char* data = CGBitmapContextGetData (cgctx);
if (data != NULL) {
//offset locates the pixel in the data from x,y.
//4 for 4 bytes of data per pixel, w is width of one row of data.
// int offset = 4*((w*round(point.y))+round(point.x));
int offset = 4*((w*round(point.y))+round(point.x))*x; //Replacement for Resolution
int alpha = data[offset];
int red = data[offset+1];
int green = data[offset+2];
int blue = data[offset+3];
NSLog(#"offset: %i colors: RGB A %i %i %i %i",offset,red,green,blue,alpha);
color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
}
// When finished, release the context
CGContextRelease(cgctx);
// Free image data memory for the context
if (data) { free(data); }
return color;
}
Let me know if this fix does not work .. :)
Here is the GitHub repo for my code. Use it to implement the image picker. Let me know if more info is needed.
Just use this method, it works for me:
- (UIColor*) getPixelColorAtLocation:(CGPoint)point
{
UIColor* color = nil;
CGImageRef inImage;
inImage = imgZoneWheel.image.CGImage;
// Create off screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpha, Red, Green, Blue
CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
if (cgctx == NULL) { return nil; /* error */ }
size_t w = CGImageGetWidth(inImage);
size_t h = CGImageGetHeight(inImage);
CGRect rect = {{0,0},{w,h}};
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(cgctx, rect, inImage);
// Now we can get a pointer to the image data associated with the bitmap
// context.
unsigned char* data = CGBitmapContextGetData (cgctx);
if (data != NULL) {
//offset locates the pixel in the data from x,y.
//4 for 4 bytes of data per pixel, w is width of one row of data.
int offset = 4*((w*round(point.y))+round(point.x));
int alpha = data[offset];
int red = data[offset+1];
int green = data[offset+2];
int blue = data[offset+3];
color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
}
// When finished, release the context (leaving this commented out leaks it)
CGContextRelease(cgctx);
// Free image data memory for the context
if (data) { free(data); }
return color;
}
Just use like this:
UIColor *color = [self getPixelColorAtLocation:lastPoint];
If you try to get a color from an image through CGBitmapContextGetData and set it, for example, as the background of a view, you will get different colors on iPhone 6 and later; on iPhone 5 everything will be OK. More information about this: Getting the right colors in your iOS app.
This solution, going through UIImage, gives you the right color:
- (UIColor *)getColorFromImage:(UIImage *)image pixelPoint:(CGPoint)pixelPoint {
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], CGRectMake(pixelPoint.x, pixelPoint.y, 1.f, 1.f));
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return [UIColor colorWithPatternImage:croppedImage];
}
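Hypothetical usage with the lastPoint from the touch handler above; note that colorWithPatternImage: returns a pattern color, which works for filling views but cannot be decomposed with getRed:green:blue:alpha::
UIColor *picked = [self getColorFromImage:self.imageViewGallery.image pixelPoint:lastPoint];
self.view.backgroundColor = picked;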
You want to know how to get the image from an image view?
UIImageView has a property, image. Simply use that property.
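For example, with the image view named in the question:
UIImage *imageInView = self.imageViewGallery.image;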

I am trying to create a partial Grayscale image

I am trying to create a partial grayscale image in which I read every pixel in the image and replace its data with gray, except that if the pixel's color matches the desired color I skip it, so that that specific pixel's color doesn't change. I don't know where I am going wrong: it changes the whole image to grayscale and rotates the image 90 degrees. Can someone help me out with this issue? Thanks in advance.
-(UIImage *) toPartialGrayscale{
const int RED = 1;
const int GREEN = 2;
const int BLUE = 3;
initialR=255.0;
initialG=0.0;
initialB=0.0;//218-112-214//0-191-255
float r;
float g;
float b;
tollerance=50;
// Create image rectangle with current image width/height
CGRect imageRect = CGRectMake(0, 0, originalImageView.image.size.width * scale, originalImageView.image.size.height * scale);
int width = imageRect.size.width;
int height = imageRect.size.height;
// the pixels will be painted to this array
uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));
// clear the pixels so any transparency is preserved
memset(pixels, 0, width * height * sizeof(uint32_t));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// create a context with RGBA pixels
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
// paint the bitmap to our context which will fill in the pixels array
CGContextDrawImage(context, CGRectMake(0, 0, width, height), [originalImageView.image CGImage]);
for(int y = 0; y < height; y++)
{
for(int x = 0; x < width; x++)
{
uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];
// convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
uint8_t gray = (uint8_t) ((30 * rgbaPixel[RED] + 59 * rgbaPixel[GREEN] + 11 * rgbaPixel[BLUE]) / 100);
// set the pixels to gray
r= initialR-rgbaPixel[RED];
g= initialG-rgbaPixel[GREEN];
b= initialB-rgbaPixel[BLUE];
if ((r<tollerance&&r>-tollerance)&&(g<tollerance&&g>-tollerance)&&(b<tollerance&&b>-tollerance))
{
rgbaPixel[RED] = (uint8_t)r;
rgbaPixel[GREEN] = (uint8_t)g;
rgbaPixel[BLUE] = (uint8_t)b;
}
else
{
rgbaPixel[RED] = gray;
rgbaPixel[GREEN] = gray;
rgbaPixel[BLUE] = gray;
}
}
}
// create a new CGImageRef from our context with the modified pixels
CGImageRef image = CGBitmapContextCreateImage(context);
// we're done with the context, color space, and pixels
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
free(pixels);
// make a new UIImage to return
UIImage *resultUIImage = [UIImage imageWithCGImage:image
scale:scale
orientation:UIImageOrientationUp];
// we're done with image now too
CGImageRelease(image);
return resultUIImage;
}
This is the code I am using; any kind of help will be appreciated. Thanks again in advance.
Hopefully the orientation piece is easy enough to resolve by playing with the UIImageOrientationUp constant that you're passing in when you create the final image. Try left or right until you get what you need.
As for the threshold not working, can you verify that your "tollerance" really is behaving as you expect? Change it to 255 and see if the entire image retains its color (it should). If it's still gray, then you know that your conditional statement is where the problem lies.

Can CGContextClipToMask mask all non-transparent pixels with alpha=1?

Is it possible to make CGContextClipToMask ignore the grayscale values of the mask image and work as if it were plain black and white?
I have a grayscale image, and when I use it as a mask the gray shades are interpreted as an alpha channel. This is fine except at one point, where I need to completely mask those pixels that are not transparent.
Short example:
UIImage *mask = [self prepareMaskImage];
UIGraphicsBeginImageContextWithOptions(mask.size, NO, mask.scale); {
// Custom code
CGContextClipToMask(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, mask.size.width, mask.size.height), mask.CGImage);
// Custom code
}
Is it possible to adapt this code to achieve my goal?
Long story short: I need to make a transparent grayscale image become transparent where it originally was and completely black where it's solid-colored.
Interesting problem! Here's code that does what I think you want, in a simple sample project. It is similar to the above but handles scale properly, and has an option to retain the alpha in the mask image if you want. A quickly hacked-together test that seems to work.
My rough idea would be the following:
You convert your input image to readable byte data in grey + alpha format. Maybe you have to do RGBA instead due to iOS limitations.
You iterate over the byte data modifying the values in place.
To simplify access, use a
typedef struct RGBA {
UInt8 red;
UInt8 green;
UInt8 blue;
UInt8 alpha;
} RGBA;
Let's assume image is your input mask.
// First step, using RGBA (because I know it works and does not harm, just writes/consumes twice the amount of memory)
CGImageRef imageRef = image.CGImage;
NSInteger rawWidth = CGImageGetWidth(imageRef);
NSInteger rawHeight = CGImageGetHeight(imageRef);
NSInteger rawBitsPerComponent = 8;
NSInteger rawBytesPerPixel = 4;
NSInteger rawBytesPerRow = rawBytesPerPixel * rawWidth;
CGRect rawRect = CGRectMake(0, 0, rawWidth, rawHeight);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
UInt8 *rawImage = (UInt8 *)malloc(rawHeight * rawWidth * rawBytesPerPixel);
CGContextRef rawContext = CGBitmapContextCreate(rawImage,
rawWidth,
rawHeight,
rawBitsPerComponent,
rawBytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
// At this point, rawContext is ready for drawing, everything drawn will be in rawImage's byte array.
CGContextDrawImage(rawContext, rawRect, imageRef);
// Second step, crawl the byte array and do the evil work:
for (NSInteger y = 0; y < rawHeight; ++y) {
for (NSInteger x = 0; x < rawWidth; ++x) {
UInt8 *address = rawImage + x * rawBytesPerPixel + y * rawBytesPerRow;
RGBA *pixel = (RGBA *)address;
// If it is a grey input image, it does not matter what RGB channel to use - they shall all be the same
if (0 != pixel->red) {
pixel->alpha = 0;
} else {
pixel->alpha = UINT8_MAX;
}
pixel->red = 0;
pixel->green = 0;
pixel->blue = 0;
// I am still not sure if this is the transformation you are searching for, but it may give you the idea.
}
}
// Third: rawContext is ready, transformation is done. Get the image out of it
CGImageRef outputImage1 = CGBitmapContextCreateImage(rawContext);
UIImage *outputImage2 = [UIImage imageWithCGImage:outputImage1];
CGImageRelease(outputImage1);
CGContextRelease(rawContext); // done with the context and the byte buffer
free(rawImage);
Okay... the output is RGBA, but you can create a greyscale + alpha context and just blit your image there for conversion.
This piece of code helped me apply a hue to the non-transparent area of an image.
- (UIImage*)imageWithImage:(UIImageView*)source colorValue:(CGFloat)hue {
CGSize imageSize = [source.image size];
CGRect imageExtent = CGRectMake(0,0,imageSize.width,imageSize.height);
// Create a context containing the image.
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
[source.image drawAtPoint:CGPointMake(0, 0)];
// Setup a clip region using the image
CGContextSaveGState(context);
CGContextClipToMask(context, source.bounds, source.image.CGImage);
self.imageColor = [UIColor colorWithHue:hue saturation:1.0 brightness:1 alpha:1.0];
[self.imageColor set];
CGContextFillRect(context, source.bounds);
// Draw the hue on top of the image.
CGContextSetBlendMode(context, kCGBlendModeHue);
[self.imageColor set];
UIBezierPath *imagePath = [UIBezierPath bezierPathWithRect:imageExtent];
[imagePath fill];
CGContextRestoreGState(context); // remove clip region
// Retrieve the new image.
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return result;
}

iOS - CoreImage - Add an effect to partial of image

I just had a look at the CoreImage framework on iOS 5 and found that it's easy to add an effect to a whole image.
I wonder if it is possible to add an effect to a specific part of an image (a rectangle), for example a grayscale effect on part of the image.
I look forward to your help.
Thanks,
Huy
Watch session 510 from the WWDC 2012 videos. It presents a technique for applying a mask to a CIImage. You need to learn how to chain the filters together. In particular take a look at:
CICrop, CILinearGradient, CIRadialGradient (could be used to create the mask)
CISourceOverCompositing (put mask images together)
CIBlendWithMask (create final image)
The filters are documented here:
https://developer.apple.com/library/archive/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html
Your best bet would be to copy the CIImage (so you now have two), crop the copied CIImage to the rect you want to affect, perform the effect on that cropped version, and then use a compositing filter to create a new CIImage based on the two older CIImages.
It seems like a lot of effort, but when you understand that all of this is being set up as a bunch of GPU shaders it makes a lot more sense.
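A minimal sketch of that crop-filter-composite chain; the sourceImage variable, the rect, and the choice of CIColorControls with saturation 0 as the "grayscale" effect are assumptions:
CIImage *inputImage = [CIImage imageWithCGImage:sourceImage.CGImage];
CGRect effectRect = CGRectMake(50, 50, 100, 100); // region to desaturate
// 1. Crop a copy of the image to the target rect.
CIImage *cropped = [inputImage imageByCroppingToRect:effectRect];
// 2. Desaturate only the cropped copy.
CIFilter *mono = [CIFilter filterWithName:@"CIColorControls"];
[mono setValue:cropped forKey:kCIInputImageKey];
[mono setValue:@0.0 forKey:kCIInputSaturationKey];
// 3. Composite the gray patch back over the untouched original.
CIFilter *comp = [CIFilter filterWithName:@"CISourceOverCompositing"];
[comp setValue:[mono outputImage] forKey:kCIInputImageKey];
[comp setValue:inputImage forKey:kCIInputBackgroundImageKey];
// Render once at the end; creating the CIContext is expensive, so reuse it.
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef outCG = [ciContext createCGImage:[comp outputImage] fromRect:[inputImage extent]];
UIImage *outImage = [UIImage imageWithCGImage:outCG];
CGImageRelease(outCG);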
typedef enum {
ALPHA = 0,
BLUE = 1,
GREEN = 2,
RED = 3
} PIXELS;
- (UIImage *)convertToGrayscale:(UIImage *) originalImage inRect: (CGRect) rect{
CGSize size = [originalImage size];
int width = size.width;
int height = size.height;
// the pixels will be painted to this array
uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));
// clear the pixels so any transparency is preserved
memset(pixels, 0, width * height * sizeof(uint32_t));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// create a context with RGBA pixels
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
// paint the bitmap to our context which will fill in the pixels array
CGContextDrawImage(context, CGRectMake(0, 0, width, height), [originalImage CGImage]);
for(int y = 0; y < height; y++) {
for(int x = 0; x < width; x++) {
uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];
if(x > rect.origin.x && y > rect.origin.y && x < rect.origin.x + rect.size.width && y < rect.origin.y + rect.size.height) {
// convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
uint32_t gray = 0.3 * rgbaPixel[RED] + 0.59 * rgbaPixel[GREEN] + 0.11 * rgbaPixel[BLUE];
// set the pixels to gray in your rect
rgbaPixel[RED] = gray;
rgbaPixel[GREEN] = gray;
rgbaPixel[BLUE] = gray;
}
}
}
// create a new CGImageRef from our context with the modified pixels
CGImageRef image = CGBitmapContextCreateImage(context);
// we're done with the context, color space, and pixels
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
free(pixels);
// make a new UIImage to return
UIImage *resultUIImage = [UIImage imageWithCGImage:image];
// we're done with image now too
CGImageRelease(image);
return resultUIImage;
}
You can test it in a UIImageView:
imageview.image = [self convertToGrayscale:imageview.image inRect:CGRectMake(50, 50, 100, 100)];
