Find average color of an area inside UIImageView [duplicate]

I am writing this method to calculate the average R,G,B values of an image. The following method takes a UIImage as an input and returns an array containing the R,G,B values of the input image. I have one question though: How/Where do I properly release the CGImageRef?
-(NSArray *)getAverageRGBValuesFromImage:(UIImage *)image
{
    CGImageRef rawImageRef = [image CGImage];

    // This function returns the raw pixel values
    const UInt8 *rawPixelData = CFDataGetBytePtr(CGDataProviderCopyData(CGImageGetDataProvider(rawImageRef)));

    NSUInteger imageHeight = CGImageGetHeight(rawImageRef);
    NSUInteger imageWidth = CGImageGetWidth(rawImageRef);

    // Here I sum the R,G,B values and get the average over the whole image
    int i = 0;
    unsigned int red = 0;
    unsigned int green = 0;
    unsigned int blue = 0;

    for (int column = 0; column < imageWidth; column++)
    {
        int r_temp = 0;
        int g_temp = 0;
        int b_temp = 0;
        for (int row = 0; row < imageHeight; row++) {
            i = (row * imageWidth + column) * 4;
            r_temp += (unsigned int)rawPixelData[i];
            g_temp += (unsigned int)rawPixelData[i + 1];
            b_temp += (unsigned int)rawPixelData[i + 2];
        }
        red += r_temp;
        green += g_temp;
        blue += b_temp;
    }

    NSNumber *averageRed = [NSNumber numberWithFloat:(1.0 * red) / (imageHeight * imageWidth)];
    NSNumber *averageGreen = [NSNumber numberWithFloat:(1.0 * green) / (imageHeight * imageWidth)];
    NSNumber *averageBlue = [NSNumber numberWithFloat:(1.0 * blue) / (imageHeight * imageWidth)];

    // Then I store the result in an array
    NSArray *result = [NSArray arrayWithObjects:averageRed, averageGreen, averageBlue, nil];

    return result;
}
I tried two things:
Option 1:
I leave it as it is, but then after a few cycles (5+) the program crashes and I get the "low memory warning error"
Option 2:
I add one line
CGImageRelease(rawImageRef)
before the method returns. Now it crashes after the second cycle with an EXC_BAD_ACCESS error on the UIImage that I pass to the method. When I try to Analyze (instead of Run) in Xcode, I get the following warning at this line:
"Incorrect decrement of the reference count of an object that is not owned at this point by the caller"
Where and how should I release the CGImageRef?
Thanks!

Your memory issue results from the copied data, as others have stated. But here's another idea: Use Core Graphics's optimized pixel interpolation to calculate the average.
1. Create a 1x1 bitmap context.
2. Set the interpolation quality to medium (see later).
3. Draw your image scaled down to exactly this one pixel.
4. Read the RGB value from the context's buffer.
5. (Release the context, of course.)
This might result in better performance because Core Graphics is highly optimized and might even use the GPU for the downscaling.
Testing showed that medium quality seems to interpolate pixels by taking the average of color values. That's what we want here.
Worth a try, at least.
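A minimal sketch of those steps might look like this (untested; the method name is illustrative, and it assumes an opaque source image drawn into an RGBA 8-bit context):

- (UIColor *)averageColorOfImage:(UIImage *)image
{
    // 1x1 bitmap context: drawing the whole image into a single pixel
    // makes Core Graphics interpolate (i.e. average) the pixel colors.
    uint8_t rgba[4] = {0, 0, 0, 0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgba, 1, 1, 8, 4, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextSetInterpolationQuality(context, kCGInterpolationMedium);
    CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), image.CGImage);
    CGContextRelease(context);
    return [UIColor colorWithRed:rgba[0] / 255.0f
                           green:rgba[1] / 255.0f
                            blue:rgba[2] / 255.0f
                           alpha:1];
}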
Edit: OK, this idea seemed too interesting not to try. So here's an example project showing the difference. Below measurements were taken with the contained 512x512 test image, but you can change the image if you want.
It takes about 12.2 ms to calculate the average by iterating over all pixels in the image data. The draw-to-one-pixel approach takes 3 ms, so it's roughly 4 times faster. It seems to produce the same results when using kCGInterpolationQualityMedium.
I assume that the huge performance gain is a result of Quartz noticing that it does not have to decompress the JPEG fully but can use the lower-frequency parts of the DCT only. That's an interesting optimization strategy when compositing JPEG-compressed pixels with a scale below 0.5. But I'm only guessing here.
Interestingly, when using your method, 70% of the time is spent in CGDataProviderCopyData and only 30% in the pixel data traversal. This hints at a lot of time spent in JPEG decompression.
Note: Here's a late follow up on the example image above.

You don't own the CGImageRef rawImageRef because you obtain it using [image CGImage]. So you don't need to release it.
However, you own rawPixelData because you obtained it using CGDataProviderCopyData and must release it.
CGDataProviderCopyData
Return Value:
A new data object containing a copy of the provider’s data. You are responsible for releasing this object.

I believe your issue is in this statement:
const UInt8 *rawPixelData = CFDataGetBytePtr(CGDataProviderCopyData(CGImageGetDataProvider(rawImageRef)));
You should be releasing the return value of CGDataProviderCopyData.

Your mergedColor method works great on an image loaded from a file, but not for an image captured by the camera, because CGBitmapContextGetData() on a context created from a captured sample buffer doesn't return its bitmap. I changed your code to the following. It works on any image, and it is as fast as your code.
- (UIColor *)mergedColor
{
    CGImageRef rawImageRef = [self CGImage];

    // scale image to an one pixel image
    uint8_t bitmapData[4];
    int bitmapByteCount;
    int bitmapBytesPerRow;
    int width = 1;
    int height = 1;

    bitmapBytesPerRow = (width * 4);
    bitmapByteCount = (bitmapBytesPerRow * height);
    memset(bitmapData, 0, bitmapByteCount);

    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(bitmapData, width, height, 8, bitmapBytesPerRow,
                                                 colorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorspace);

    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextSetInterpolationQuality(context, kCGInterpolationMedium);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), rawImageRef);
    CGContextRelease(context);

    // Little-endian byte order with alpha first means the memory layout
    // is BGRA, so blue is byte 0 and red is byte 2.
    return [UIColor colorWithRed:bitmapData[2] / 255.0f
                           green:bitmapData[1] / 255.0f
                            blue:bitmapData[0] / 255.0f
                           alpha:1];
}
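Since the method calls [self CGImage], it presumably lives in a UIImage category; usage would then be a one-liner (the image name here is hypothetical):

UIColor *average = [[UIImage imageNamed:@"photo.jpg"] mergedColor];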

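Concretely, keep a reference to the CFData returned by CGDataProviderCopyData so it can be released once you are done reading the pixel data: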
CFDataRef abgrData = CGDataProviderCopyData(CGImageGetDataProvider(rawImageRef));
const UInt8 *rawPixelData = CFDataGetBytePtr(abgrData);
...
CFRelease(abgrData);

Related

Xcode Analyzer issue on memory leak and incorrect decrement with ARC

I am using ARC in my project, but when I ran the Analyzer I still got warnings about a potential memory leak and an incorrect decrement of a reference count.
Here is my code:
#import "UIImage+ImageSize.h"

@implementation UIImage (ImageSize)

- (CGRect)cropRectForImage:(UIImage *)image {
    CGImageRef cgImage = image.CGImage;
    CGContextRef context = [self createARGBBitmapContextFromImage:cgImage];
    if (context == NULL) return CGRectZero;

    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    CGRect rect = CGRectMake(0, 0, width, height);
    CGContextDrawImage(context, rect, cgImage);

    unsigned char *data = CGBitmapContextGetData(context);
    CGContextRelease(context);

    // Filter through data and look for non-transparent pixels.
    int lowX = (int)width;
    int lowY = (int)height;
    int highX = 0;
    int highY = 0;
    if (data != NULL) {
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int pixelIndex = (int)(width * y + x) * 4 /* 4 for A, R, G, B */;
                if (data[pixelIndex] != 0) { // Alpha value is not zero; pixel is not transparent.
                    if (x < lowX) lowX = x;
                    if (x > highX) highX = x;
                    if (y < lowY) lowY = y;
                    if (y > highY) highY = y;
                }
            }
        }
        free(data);
    } else {
        return CGRectZero;
    }

    return CGRectMake(lowX, lowY, highX - lowX, highY - lowY);
}

- (CGContextRef)createARGBBitmapContextFromImage:(CGImageRef)inImage {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    // Get image width, height. We'll use the entire image.
    size_t width = CGImageGetWidth(inImage);
    size_t height = CGImageGetHeight(inImage);

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (int)(width * 4);
    bitmapByteCount = (int)(bitmapBytesPerRow * height);

    // Use the generic RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) return NULL;

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    // Create the bitmap context. We want pre-multiplied ARGB, 8-bits
    // per component. Regardless of what the source image format is
    // (CMYK, Grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    width,
                                    height,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL) free(bitmapData);

    // Make sure to release the colorspace before returning.
    CGColorSpaceRelease(colorSpace);

    return context;
}

@end
How can I make it correct?
Please help me understand what this issue means and why it is happening, because I used to think ARC handles all memory cleanup by itself. There are other SO questions about almost the same error, but not about CGContextRef, so I had to ask a new question.
ARC only handles object pointer types and block pointer types. It does not handle Core Foundation-style reference types (e.g. CGContextRef).
Now to the analyzer issue. The analyzer (similar to ARC) pays attention to naming conventions to determine how a method or function is assumed to behave. For the implementation of the method or function, it then checks if it actually does behave in accordance to its naming convention. At call sites, it assumes it does and then checks that the surrounding code operates in accordance with that assumption.
Now, you may be aware that Core Foundation has the "Create Rule", where functions whose names contain "Create" or "Copy" generally return a +1 reference, while other functions generally return a +0 reference (the "Get Rule"). Maybe that's why you gave the method which returns a CGContext a name containing "create". Unfortunately, the Core Foundation rules don't apply to Objective-C methods.
The Cocoa naming conventions are that methods whose names begin with "alloc", "new", "copy", or "mutableCopy" return a +1 reference. (If you weren't using ARC, the retain method would also return a +1 reference.) Other methods return a +0 reference.
By the Cocoa naming conventions, your -createARGBBitmapContextFromImage: method is assumed to return a +0 reference. But, the actual implementation returns a +1 reference. That's one of the issues the analyzer reported. Then, at the call site, the calling code is assumed to receive a +0 reference. Therefore, it's not entitled to release that reference using CGContextRelease(). That's the other issue the analyzer is reporting.
You can fix this by renaming -createARGBBitmapContextFromImage: to -newARGBBitmapContextFromImage:. Then, by the Cocoa conventions, it would be expected to return a +1 reference and both the implementation and the call site would conform to this expectation.
Alternatively, you can have -createARGBBitmapContextFromImage: do return (CGContextRef)CFAutorelease(context); instead of just return context;. Then change the caller to not attempt to release the context. In this case, the method's name indicates it returns a +0 reference and, again, the implementation and call site both conform to that.
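A minimal sketch of the CFAutorelease variant (CFAutorelease requires iOS 7+; the body is condensed from the method above):

- (CGContextRef)createARGBBitmapContextFromImage:(CGImageRef)inImage {
    size_t width = CGImageGetWidth(inImage);
    size_t height = CGImageGetHeight(inImage);
    void *bitmapData = malloc(width * 4 * height);
    if (bitmapData == NULL) return NULL;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(bitmapData, width, height, 8,
                                                 width * 4, colorSpace,
                                                 kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        free(bitmapData);
        return NULL;
    }
    // Hand the +1 reference to the autorelease pool; the caller now
    // receives a +0 reference and must not call CGContextRelease().
    return (CGContextRef)CFAutorelease(context);
}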

High dynamic range imaging using openCV on iOS produces garbled output

I'm trying to use openCV 3 on iOS to produce an HDR image from multiple exposures that will eventually be output as an EXR file. I noticed I was getting garbled output when I tried to create an HDR image. Thinking it was a mistake in trying to create a camera response, I started from scratch and adapted the HDR imaging tutorial material from the openCV site to iOS, but it produces similar results. The following C++ code returns a garbled image:
cv::Mat mergeToHDR (vector<Mat>& images, vector<float>& times)
{
    imgs = images;

    Mat response;
    //Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
    //calibrate->process(images, response, times);
    Ptr<CalibrateRobertson> calibrate = createCalibrateRobertson();
    calibrate->process(images, response, times);

    // create HDR
    Mat hdr;
    Ptr<MergeDebevec> merge_debevec = createMergeDebevec();
    merge_debevec->process(images, hdr, times, response);

    // create LDR
    Mat ldr;
    Ptr<TonemapDurand> tonemap = createTonemapDurand(2.2f);
    tonemap->process(hdr, ldr);

    // create fusion
    Mat fusion;
    Ptr<MergeMertens> merge_mertens = createMergeMertens();
    merge_mertens->process(images, fusion);

    /*
     Uncomment what kind of tonemapped image or hdr to return.
     Returning one of the images in the array produces ungarbled output,
     so we know the problem is unlikely with the openCV to UIImage conversion.
    */

    // give back one of the images from the image array
    //return images[0];

    // give back one of the hdr images
    return fusion * 255;
    //return ldr * 255;
    //return hdr;
}
This is what the image looks like:
Bad image output
I've analysed the image, tried various colour space conversions, but the data appears to be junk.
The openCV framework is the latest compiled 3.0.0 version from the openCV.org website. The RC and alpha produce the same results, and the current version won't build (for iOS or OSX). I was thinking my next steps would be to try and get the framework to compile from scratch, or to get the example working under another platform to see if the issue is platform specific or with the openCV HDR functions themselves. But before I do that I thought I would throw the issue up on stack overflow to see if anyone had come across the same issue or if I am missing something blindingly obvious.
I have uploaded the example xcode project to here:
https://github.com/artandmath/openCVHDRSwiftExample
Getting openCV to work with Swift was done with the help of user foundry on GitHub.
Thanks foundry for pointing me in the right direction. The UIImage+OpenCV class extension is expecting 8-bits per colour channel, however the HDR functions are spitting out 32-bits per channel (which is actually what I want). Converting the image matrix back to 8-bits per channel for display purposes before converting it to a UIImage fixes the issue.
Here is the resulting image:
The expected result!
Here is the fixed function:
cv::Mat mergeToHDR (vector<Mat>& images, vector<float>& times)
{
    imgs = images;

    Mat response;
    //Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
    //calibrate->process(images, response, times);
    Ptr<CalibrateRobertson> calibrate = createCalibrateRobertson();
    calibrate->process(images, response, times);

    // create HDR
    Mat hdr;
    Ptr<MergeDebevec> merge_debevec = createMergeDebevec();
    merge_debevec->process(images, hdr, times, response);

    // create LDR
    Mat ldr;
    Ptr<TonemapDurand> tonemap = createTonemapDurand(2.2f);
    tonemap->process(hdr, ldr);

    // create fusion
    Mat fusion;
    Ptr<MergeMertens> merge_mertens = createMergeMertens();
    merge_mertens->process(images, fusion);

    /*
     Uncomment what kind of tonemapped image or hdr to return.
     Convert back to 8-bits per channel because that is what
     the UIImage+OpenCV class extension is expecting.
    */

    // tone mapped
    /*
    Mat ldr8bit;
    ldr = ldr * 255;
    ldr.convertTo(ldr8bit, CV_8U);
    return ldr8bit;
    */

    // fusion
    Mat fusion8bit;
    fusion = fusion * 255;
    fusion.convertTo(fusion8bit, CV_8U);
    return fusion8bit;

    // hdr
    /*
    Mat hdr8bit;
    hdr = hdr * 255;
    hdr.convertTo(hdr8bit, CV_8U);
    return hdr8bit;
    */
}
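As a side note, convertTo can fold the scale factor into the conversion in a single call, e.g. fusion.convertTo(fusion8bit, CV_8U, 255.0), since its optional third parameter is a multiplicative scale applied during the conversion.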
Alternatively here is a fix for the initWithCVMat method in the OpenCV+UIImage class extension based on one of the iOS tutorials in the iOS section on opencv.org:
http://docs.opencv.org/2.4/doc/tutorials/ios/image_manipulation/image_manipulation.html#opencviosimagemanipulation
When creating a new CGImageRef with floating point data, it needs to be explicitly told that it contains floating point data, and the byte order of the image data from openCV needs to be reversed. Now iOS/Quartz has the float data! It's a bit of a hacky fix, because the method still only deals with 8 or 32 bits per channel, with or without alpha, and doesn't take into account every kind of image that could be passed from Mat to UIImage.
- (id)initWithCVMat:(const cv::Mat&)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;
    size_t elemSize = cvMat.elemSize();
    size_t elemSize1 = cvMat.elemSize1();
    size_t channelCount = elemSize / elemSize1;
    size_t bitsPerChannel = 8 * elemSize1;
    size_t bitsPerPixel = bitsPerChannel * channelCount;

    if (channelCount == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    // Tell CGImageRef different bitmap info if handed 32-bit float data
    uint32_t bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    if (bitsPerChannel == 32) {
        bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapFloatComponents | kCGBitmapByteOrder32Little;
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                 // width
                                        cvMat.rows,                 // height
                                        bitsPerChannel,             // bits per component
                                        bitsPerPixel,               // bits per pixel
                                        cvMat.step[0],              // bytes per row
                                        colorSpace,                 // colorspace
                                        bitmapInfo,                 // bitmap info
                                        provider,                   // CGDataProviderRef
                                        NULL,                       // decode
                                        false,                      // should interpolate
                                        kCGRenderingIntentDefault); // intent

    // Getting UIImage from CGImage
    self = [self initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return self;
}

Release CGImageRef to avoid memory leak

I want to call the image method below multiple times per second, but I have a memory leak.
I tried calling CFRelease(rawImageRef); but it returns the following error:
-[Not A Type retain]: message sent to deallocated instance 0x14dd3770
Update with code:
- (CGColorRef)averageColorRect:(CGRect)rect {
    CGImageRef rawImageRef = CGImageCreateWithImageInRect(_imageRaster, rect);

    // This function returns the raw pixel values
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(rawImageRef));
    const UInt8 *rawPixelData = CFDataGetBytePtr(data);

    NSUInteger imageHeight = CGImageGetHeight(rawImageRef);
    NSUInteger imageWidth = CGImageGetWidth(rawImageRef);
    NSUInteger bytesPerRow = CGImageGetBytesPerRow(rawImageRef);
    NSUInteger stride = CGImageGetBitsPerPixel(rawImageRef) / 8;

    // Here I sum the R,G,B values and get the average over the whole image
    unsigned int red = 0;
    unsigned int green = 0;
    unsigned int blue = 0;

    for (int row = 0; row < imageHeight; row++) {
        const UInt8 *rowPtr = rawPixelData + bytesPerRow * row;
        for (int column = 0; column < imageWidth; column++) {
            red += rowPtr[0];
            green += rowPtr[1];
            blue += rowPtr[2];
            rowPtr += stride;
        }
    }

    CFRelease(data);

    CGFloat f = 1.0f / (255.0f * imageWidth * imageHeight);
    return [UIColor colorWithRed:f * red green:f * green blue:f * blue alpha:1].CGColor;
}
You never release rawImageRef.
I'd call CGImageRelease(rawImageRef); right after releasing data.
Solved!
I changed _imageRaster to _imageRaster.CGImage and released the image afterwards!
Thanks!

CGImageRef rawImageRef = CGImageCreateWithImageInRect(_imageRaster.CGImage, rect);
// ......
CGImageRelease(rawImageRef);
Yes, the static analyzer is correct, that you need CGImageRelease(rawImageRef) within this method.
Your error, though, is reporting the over-release of some object. On the basis of the code snippet provided, I don't think the rawImageRef is the object in question.
Now, I don't know what you're doing with the returned CGColorRef object, but let's say you had something like the following:
CGColorRef colorRef = [self averageColorRect:CGRectMake(0, 0, 20, 20)];
// do something with `colorRef`
// now, all done, clean up
CGColorRelease(colorRef); // error; you don't want this line
If you have zombies turned on, that produces the error you describe:
-[Not A Type release]: message sent to deallocated instance 0x8de4e90
This is because the colorRef object that you're returning is linked to an autorelease UIColor object, and therefore, when the pool is drained, the CGColorRef object will be released unless you did an explicit CGColorRetain. In this case, this error will go away if you remove the unnecessary CGColorRelease.
I'm not suggesting that this is precisely what you've done, but it's an illustration of the sort of thing that can generate the error you report. Perhaps you can share your code that uses the resulting CGColorRef and we can see if there's anything there that would manifest the error you shared with us. It's not clear why the introduction of the CGImageRelease(rawImageRef) would cause this error to appear whereas it's not produced in the absence of that CGImageRelease. But the CGImageRelease(rawImageRef) is not the root of the problem; the issue undoubtedly rests elsewhere.
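If the caller genuinely needs the CGColorRef to outlive the current autorelease pool, the explicit-retain pattern mentioned above would look like this (a sketch):

// Take explicit ownership so the color survives the draining of the
// autorelease pool that owns the underlying UIColor.
CGColorRef colorRef = CGColorRetain([self averageColorRect:CGRectMake(0, 0, 20, 20)]);
// ... use colorRef for as long as needed ...
CGColorRelease(colorRef); // balanced against the explicit retain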

RGBA from a specific pixel in CCSprite (takes much time)

I have tried McDevon's approach (How to get RGBA color of a specific pixel in CCSprite), but it takes too much time to process, and so my app lacks smooth movement.
My app has some pieces that are moved around the screen by user touch. For every move, I want to check whether the pixel is of a certain color.
When I tried McDevon's approach, the app started to skip some of the sprite's movements, almost rendering only its final position.
Here's McDevon's:
-(BOOL)checkPixel: (CCSprite*)background : (CGFloat)x :(CGFloat)y {
    BOOL result = FALSE;
    CGPoint location;
    location = ccp(x * CC_CONTENT_SCALE_FACTOR(), y * CC_CONTENT_SCALE_FACTOR());

    UInt8 data[4];
    CCRenderTexture* renderTexture = [[CCRenderTexture alloc]
        initWithWidth:background.boundingBox.size.width * CC_CONTENT_SCALE_FACTOR()
               height:background.boundingBox.size.height * CC_CONTENT_SCALE_FACTOR()
          pixelFormat:kCCTexture2DPixelFormat_RGBA8888];

    [renderTexture begin];
    [background draw];
    glReadPixels((GLint)location.x, (GLint)location.y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, data);
    [renderTexture end];
    [renderTexture release];

    NSLog(@"R: %d, G: %d, B: %d, A: %d", data[0], data[1], data[2], data[3]);

    if ((data[0] == 0) && (data[1] == 0) && (data[2] == 0)) {
        result = TRUE;
    }
    return result;
}
Here's a piece of my code:
futurePos = ccpAdd(sprite.position, translation);

// Check area pixels
if ([self checkPixel:background :futurePos.x :futurePos.y]) {
    sprite.position = futurePos;
}
Any ideas to make it faster / smoother?
Thanks!
Best solution:
I extracted the raw image data into an 8-bit array. But instead of using 24-bit (RGB) images, I am now using monochromatic bitmap images.
Super fast. I just have to convert the image layers to a single BMP file.
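A hedged sketch of that idea (names are illustrative; a single gray channel stands in for the monochromatic bitmap): render the mask once into a plain byte buffer, then make each per-move check a constant-time array lookup instead of a render-and-glReadPixels round trip.

// Build once, e.g. when the level loads. Assumes a CGImage-backed mask.
static UInt8 *maskBytes = NULL;
static size_t maskWidth = 0, maskHeight = 0;

void buildMask(CGImageRef maskImage) {
    maskWidth = CGImageGetWidth(maskImage);
    maskHeight = CGImageGetHeight(maskImage);
    maskBytes = malloc(maskWidth * maskHeight);
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    // 8-bit grayscale context: one byte per pixel, no alpha
    CGContextRef ctx = CGBitmapContextCreate(maskBytes, maskWidth, maskHeight,
                                             8, maskWidth, gray, kCGImageAlphaNone);
    CGColorSpaceRelease(gray);
    CGContextDrawImage(ctx, CGRectMake(0, 0, maskWidth, maskHeight), maskImage);
    CGContextRelease(ctx);
}

// Called on every touch move: a simple O(1) lookup.
BOOL checkMaskPixel(int x, int y) {
    if (x < 0 || y < 0 || x >= (int)maskWidth || y >= (int)maskHeight) return NO;
    return maskBytes[(size_t)y * maskWidth + x] == 0; // black pixel, like checkPixel above
}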

iOS Tesseract OCR: why recognition is so poor. Engine principle

I have a question about the Tesseract OCR principle. As far as I understand, after shape detection, symbols (their forms) are scaled (resized) to some specific font size.
Such font size is based on trained data. Basically, the trained set defines symbols (their geometry, shape), and maybe their representation.
I am using the Tesseract 3.01 (latest) version on the iOS platform.
I checked the Tesseract FAQ and looked at the forum, but I do not understand why I get low recognition quality for some images.
It is said that the font should be bigger than 12pt and the image should have more than 300 DPI. I did all the necessary preprocessing, such as blurring (if needed) and contrast enhancement.
I even used the other engine in Tesseract OCR, called CUBE.
But for some images, in spite of the fact that they are big (MIN(width, height) > 1000; I rescale them for Tesseract), I get bad recognition results:
http://goo.gl/l9uJMe
However, on another set of images, the results are better:
http://goo.gl/cwA9DC
Those images are smaller; I do not resize them (just convert them to grayscale mode).
If what I wrote about the engine is correct: suppose the trained set is based on a font with size 14pt. Symbols from the pictures are resized to that specific size, and I do not see any reason why they would not be recognised in that case.
I also tried custom dictionaries to penalise non-dictionary words; this did not give much benefit to recognition:
tesseract = new tesseract::TessBaseAPI();

GenericVector<STRING> variables_name(1), variables_value(1);
variables_name.push_back("user_words_suffix");
variables_value.push_back("user-words");

int retVal = tesseract->Init([self.tesseractDataPath cStringUsingEncoding:NSUTF8StringEncoding],
                             NULL, tesseract::OEM_TESSERACT_ONLY,
                             NULL, 0, &variables_name, &variables_value, false);
ok |= retVal == 0;
ok |= tesseract->SetVariable("language_model_penalty_non_dict_word", "0.2");
ok |= tesseract->SetVariable("language_model_penalty_non_freq_dict_word", "0.2");

if (!ok)
{
    NSLog(@"Error initializing tesseract!");
}
So my question is: should I train Tesseract on another font?
And, honestly speaking, why should I have to train it? With the default trained data I get good recognition on text from the Internet or from a PC (Mac) screenshot.
I also checked the original Tesseract English trained data; it has 38 tiff files that belong to the following font families:
1) arial
2) verdana
3) trebuc
4) times
5) georgia
6) cour
It seems that the font in my images does not belong to this set.
In your case the size of the image is not the problem. As I can see from your attached images (and I'm surprised that nobody mentioned it before), the problem is that the text in the images from which you get bad results is not placed on straight lines.
One of the things that Tesseract does at an early stage of the OCR process is to detect the image layout and extract whole lines of text.
This image is the best example to illustrate this part of the process:
As you can see, the engine expects the text to be perpendicular to the edge of the image.
If you are done with all the necessary image processing, then try this; it may be helpful for you:
CGSize size = [image size];
int width = size.width;
int height = size.height;

uint32_t *_pixels = (uint32_t *)malloc(width * height * sizeof(uint32_t));
if (!_pixels) {
    return; // Invalid image
}

// Clear the pixels so any transparency is preserved
memset(_pixels, 0, width * height * sizeof(uint32_t));

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

// Create a context with RGBA _pixels
CGContextRef context = CGBitmapContextCreate(_pixels, width, height, 8, width * sizeof(uint32_t),
                                             colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

// Paint the bitmap to our context which will fill in the _pixels array
CGContextDrawImage(context, CGRectMake(0, 0, width, height), [image CGImage]);

// We're done with the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);

_tesseract->SetImage((const unsigned char *)_pixels, width, height, sizeof(uint32_t), width * sizeof(uint32_t));
_tesseract->SetVariable("tessedit_char_whitelist", ".#0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz/-!");
_tesseract->SetVariable("tessedit_consistent_reps", "0");

char *utf8Text = _tesseract->GetUTF8Text();
NSString *str = nil;
if (utf8Text) {
    str = [NSString stringWithUTF8String:utf8Text];
}

// Cleanup added here (not in the original snippet): the caller owns both
// the text buffer returned by GetUTF8Text and the pixel buffer.
delete[] utf8Text;
free(_pixels);
