I'm trying to prepare an image for OCR using GPUImage. The code works fine until I crop the image; after cropping I get a bad result...
Crop area:
https://www.dropbox.com/s/e3mlp25sl6m55yk/IMG_0709.PNG
Bad Result=(
https://www.dropbox.com/s/wtxw7li6paltx21/IMG_0710.PNG
+ (UIImage *)doBinarize:(UIImage *)sourceImage
{
    // First off, try to grayscale the image using an iOS Core Image routine
    UIImage *grayScaledImg = [self grayImage:sourceImage];

    GPUImagePicture *imageSource = [[GPUImagePicture alloc] initWithImage:grayScaledImg];
    GPUImageAdaptiveThresholdFilter *stillImageFilter = [[GPUImageAdaptiveThresholdFilter alloc] init];
    stillImageFilter.blurRadiusInPixels = 8.0;
    [stillImageFilter prepareForImageCapture];

    [imageSource addTarget:stillImageFilter];
    [imageSource processImage];

    UIImage *retImage = [stillImageFilter imageFromCurrentlyProcessedOutput];
    [imageSource removeAllTargets];
    return retImage;
}

+ (UIImage *)grayImage:(UIImage *)inputImage
{
    // Create a graphics context.
    UIGraphicsBeginImageContextWithOptions(inputImage.size, NO, 1.0);
    CGRect imageRect = CGRectMake(0, 0, inputImage.size.width, inputImage.size.height);

    // Draw the image with the luminosity blend mode.
    // On top of a white background, this will give a black and white image.
    [inputImage drawInRect:imageRect blendMode:kCGBlendModeLuminosity alpha:1.0];

    // Get the resulting image.
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
UPDATE:
In the meantime, when you crop your images, do so to the nearest multiple of 8 pixels in width and you should see the correct result.
Thank you @Brad Larson! I resized the image width to the nearest multiple of 8 and got what I wanted:
- (UIImage *)imageWithMultiple8ImageWidth:(UIImage *)image
{
    float fixSize = next8(image.size.width);
    CGSize newSize = CGSizeMake(fixSize, image.size.height);

    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return newImage;
}

float next8(float n) {
    int bits = (int)n & 7; // distance past the previous multiple of 8
    if (bits == 0)
        return (float)n;
    return (float)n + (8 - bits);
}
Before I even get to the core issue here, I should point out that the GPUImageAdaptiveThresholdFilter already does a conversion to grayscale as a first step, so your -grayImage: code in the above is unnecessary and will only slow things down. You can remove all that code and just pass your input image directly to the adaptive threshold filter.
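For example, a minimal sketch of what doBinarize: could reduce to (this is not code from the answer, just an illustration using GPUImage's one-shot imageByFilteringImage: convenience method):
+ (UIImage *)doBinarize:(UIImage *)sourceImage
{
    // Sketch: adaptive thresholding with no separate grayscale pass;
    // imageByFilteringImage: handles the GPUImagePicture setup internally.
    GPUImageAdaptiveThresholdFilter *thresholdFilter = [[GPUImageAdaptiveThresholdFilter alloc] init];
    thresholdFilter.blurRadiusInPixels = 8.0;
    return [thresholdFilter imageByFilteringImage:sourceImage];
}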
What I believe is the problem here is a recent set of changes to the way that GPUImagePicture pulls in image data. It appears that images which aren't a multiple of 8 pixels wide end up looking like the above when imported. Some fixes have been proposed for this, but if the latest code from the repository (not CocoaPods, which is often out of date relative to the GitHub repository) still does this, some more work may need to be done.
In the meantime, when you crop your images, do so to the nearest multiple of 8 pixels in width and you should see the correct result.
Related
Here is the code I'm using to scale images on iOS (e.g. scale a 500x500 image down to 100x100 and then store the scaled copy):
+ (UIImage *)image:(UIImage *)originalImage scaledToSize:(CGSize)desiredSize {
    UIGraphicsBeginImageContextWithOptions(desiredSize, YES, [UIScreen mainScreen].scale);
    [originalImage drawInRect:CGRectMake(0, 0, desiredSize.width, desiredSize.height)];
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return finalImage;
}
Now I need to implement the same functionality in my macOS app. How can I do that? I saw a similar question, but I still can't understand the logic of doing this on macOS.
After some searching I found a question like mine, but in Swift, so I translated it into Objective-C:
+ (NSImage *)image:(NSImage *)originalImage scaledToSize:(NSSize)desiredSize {
    NSImage *newImage = [[NSImage alloc] initWithSize:desiredSize];
    [newImage lockFocus];
    [originalImage drawInRect:NSMakeRect(0, 0, desiredSize.width, desiredSize.height)
                     fromRect:NSMakeRect(0, 0, originalImage.size.width, originalImage.size.height)
                    operation:NSCompositingOperationSourceOver
                     fraction:1.0];
    [newImage unlockFocus];
    newImage.size = desiredSize;
    return [[NSImage alloc] initWithData:[newImage TIFFRepresentation]];
}
But there's still an issue: if desiredSize = NSMakeSize(50, 50), it returns an image that is 50 by 50 pixels. I guess it has something to do with the screen scale.
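One way around the scale-factor issue (just a sketch, assuming the goal is an output bitmap with exactly the requested pixel dimensions) is to render into an NSBitmapImageRep whose pixel size you set explicitly, so the screen's backing scale factor never enters the picture:
+ (NSImage *)image:(NSImage *)originalImage scaledToPixelSize:(NSSize)desiredSize {
    // Bitmap rep with explicit pixel dimensions (RGBA, 8 bits per sample).
    NSBitmapImageRep *rep =
        [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                                pixelsWide:(NSInteger)desiredSize.width
                                                pixelsHigh:(NSInteger)desiredSize.height
                                             bitsPerSample:8
                                           samplesPerPixel:4
                                                  hasAlpha:YES
                                                  isPlanar:NO
                                            colorSpaceName:NSCalibratedRGBColorSpace
                                               bytesPerRow:0
                                              bitsPerPixel:0];
    rep.size = desiredSize; // 1 point == 1 pixel for this rep

    // Draw the original image into the bitmap rep.
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
    [originalImage drawInRect:NSMakeRect(0, 0, desiredSize.width, desiredSize.height)
                     fromRect:NSZeroRect
                    operation:NSCompositingOperationCopy
                     fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];

    NSImage *scaledImage = [[NSImage alloc] initWithSize:desiredSize];
    [scaledImage addRepresentation:rep];
    return scaledImage;
}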
Here is the Swift code that I translated:
Example 1
Example 2
I am trying to crop a UIImage to a face that has been detected using the built-in CoreImage face detection functionality. I seem to be able to detect the face properly, but when I attempt to crop my UIImage to the bounds of the face, it is nowhere near correct. My face detection code looks like this:
- (NSArray *)facesForImage:(UIImage *)image {
    CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    NSDictionary *opts = @{CIDetectorAccuracy : CIDetectorAccuracyHigh};
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:context options:opts];
    NSArray *features = [detector featuresInImage:ciImage];
    return features;
}
...and the code to crop the image looks like this:
- (UIImage *)imageCroppedToFaceAtIndex:(NSInteger)index forImage:(UIImage *)image {
    NSArray *faces = [self facesForImage:image];
    if ((index < 0) || (index >= faces.count)) {
        DDLogError(@"Invalid face index provided");
        return nil;
    }
    CIFaceFeature *face = [faces objectAtIndex:index];
    CGRect faceBounds = face.bounds;
    CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, faceBounds);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef); // avoid leaking the cropped CGImage
    return croppedImage;
}
I have an image with only one face in it that I'm using for testing, and the detector appears to find it with no problem. But the crop is way off. Any idea what could be the problem with this code?
For anyone else having a similar issue -- transforming CGImage coordinates to UIImage coordinates -- I found this great article explaining how to use CGAffineTransform to accomplish exactly what I was looking for.
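The gist of that approach is a simple flip on the y-axis. A sketch (using the image and face variable names from the code above, and assuming an unrotated image whose point and pixel sizes match):
// Core Image uses a bottom-left origin; the crop rect here needs a top-left origin, so flip on y.
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform, 0, -image.size.height);
CGRect faceBoundsInUIKit = CGRectApplyAffineTransform(face.bounds, transform);

CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, faceBoundsInUIKit);
UIImage *croppedFace = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);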
The code to convert the face geometry from Core Image to UIImage coordinates is fussy. I haven't messed with it in quite a while, but I remember it giving me fits, especially when dealing with images that are rotated.
I suggest looking at the demo app "SquareCam", which you can find with a search of the Xcode docs. It draws red squares around faces, which is a good start.
Note that the rectangle you get from Core Image is always a square, and sometimes crops a little too closely. You may have to make your cropping rectangles taller and wider.
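If the square rect crops too tightly, one simple option (a sketch; the 30% padding is arbitrary, and faceBounds/image are the names from the question's code) is to expand the rect with a negative inset and clamp it to the image bounds:
// Pad the detected face rect by 30% on each side, then clamp it to the image.
CGRect paddedBounds = CGRectInset(faceBounds, -faceBounds.size.width * 0.3, -faceBounds.size.height * 0.3);
CGRect imageBounds = CGRectMake(0, 0, CGImageGetWidth(image.CGImage), CGImageGetHeight(image.CGImage));
paddedBounds = CGRectIntersection(paddedBounds, imageBounds);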
This class does the trick! A quite flexible and handy override of UIImage. https://github.com/kylestew/KSMagicalCrop
Try this code; it worked for me.
CIImage *image = [CIImage imageWithCGImage:facePicture.image.CGImage];

// Container for the face attributes
UIView *faceContainer = [[UIView alloc] initWithFrame:facePicture.frame];

// Flip faceContainer on the y-axis to match the coordinate system used by Core Image
[faceContainer setTransform:CGAffineTransformMakeScale(1, -1)];

// Create a face detector - since speed is not an issue, we'll use a high-accuracy detector
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];

// Create an array containing all the detected faces from the detector
NSArray *features = [detector featuresInImage:image];

for (CIFaceFeature *faceFeature in features)
{
    // Get the width of the face
    CGFloat faceWidth = faceFeature.bounds.size.width;

    // Create a UIView using the bounds of the face
    UIView *faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];

    if (faceFeature.hasLeftEyePosition)
    {
        // Create a UIView with a size based on the width of the face
        leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.leftEyePosition.x - faceWidth * 0.15, faceFeature.leftEyePosition.y - faceWidth * 0.15, faceWidth * 0.3, faceWidth * 0.3)];
    }

    if (faceFeature.hasRightEyePosition)
    {
        // Create a UIView with a size based on the width of the face
        RightEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.rightEyePosition.x - faceWidth * 0.15, faceFeature.rightEyePosition.y - faceWidth * 0.15, faceWidth * 0.3, faceWidth * 0.3)];
    }

    if (faceFeature.hasMouthPosition)
    {
        // Create a UIView with a size based on the width of the face
        mouth = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x - faceWidth * 0.2, faceFeature.mouthPosition.y - faceWidth * 0.2, faceWidth * 0.4, faceWidth * 0.4)];
    }

    [view addSubview:faceContainer];

    // Convert the face rect from Core Image (bottom-left origin) to UIKit (top-left origin) coordinates
    CGFloat y = view.frame.size.height - (faceView.frame.origin.y + faceView.frame.size.height);
    CGRect rect = CGRectMake(faceView.frame.origin.x, y, faceView.frame.size.width, faceView.frame.size.height);

    CGImageRef imageRef = CGImageCreateWithImageInRect([<Original Image> CGImage], rect);
    croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);

    // ---- cropped image ---- //
    UIImageView *img = [[UIImageView alloc] initWithFrame:CGRectMake(faceView.frame.origin.x, y, faceView.frame.size.width, faceView.frame.size.height)];
    img.image = croppedImage;
}
Thanks to everyone. I have a solution for cutting an image into irregular shapes in Java, but now I want to achieve the same thing in iOS.
Here is my requirement:
I select a particular part of an image by touching and drawing on it with my finger. Once the drawing is complete, I want to cut out that part of the image. I am able to draw using touches, but how can I cut out that particular part?
I know how to cut an image into a rectangle or a circle, but not into an arbitrary shape.
If anyone knows how, please help me.
Draw a closed CGPath and turn it into an Image Mask. If I had more experience I'd give more details, but I've only done the graphs in a weather app. A guide should help.
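A minimal sketch of that idea (assuming `path` is the closed CGPath built from the user's touches, expressed in the image's coordinate space):
- (UIImage *)imageByClippingImage:(UIImage *)image toPath:(CGPathRef)path
{
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Everything drawn after the clip is restricted to the path's interior;
    // pixels outside the path stay transparent.
    CGContextAddPath(context, path);
    CGContextClip(context);

    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];

    UIImage *clippedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return clippedImage;
}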
Below is code to crop an image to a selected rectangular area, but you can customize the selected area with a CGPath and then crop the image to it.
- (IBAction)cropImage:(id)sender {
    // Create a rectangle that represents the crop area
    // spanned by the two selected corner points
    float xCo, yCo;
    float width = bottomCornerPoint.x - topCornerPoint.x;
    float height = bottomCornerPoint.y - topCornerPoint.y;
    if (width < 0)
        width = -width;
    if (height < 0)
        height = -height;

    if (topCornerPoint.x < bottomCornerPoint.x) {
        xCo = topCornerPoint.x;
    } else {
        xCo = bottomCornerPoint.x;
    }
    if (topCornerPoint.y < bottomCornerPoint.y) {
        yCo = topCornerPoint.y;
    } else {
        yCo = bottomCornerPoint.y;
    }
    CGRect rect = CGRectMake(xCo, yCo, width, height);

    // Create a bitmap image from the original image data,
    // using the rectangle to specify the desired crop area
    UIImage *image = [UIImage imageNamed:@"abc.png"];
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);
    UIImage *img = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);

    // Create and show the new image from the bitmap data
    imageView = [[UIImageView alloc] initWithImage:img];
    [imageView setFrame:CGRectMake(110, 600, width, height)];
    imageView.image = img;
    [[self view] addSubview:imageView];
    [imageView release];
}
// Composite the drawn shape and the background image; kCGBlendModeSourceIn
// keeps only the background pixels that fall inside the drawn shape.
CGSize newSize = CGSizeMake(backGroundImageView.frame.size.width, backGroundImageView.frame.size.height);
UIGraphicsBeginImageContext(newSize);
[<Drawing Imageview> drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
[backGroundImageView.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeSourceIn alpha:1.0];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
How can I convert an RGB image to a 1-channel (black/white) image using iOS 5?
The input image is usually a photo of a book page.
The goal is to reduce the size of a photocopy by converting it to a 1-channel image.
If I understand your question, you want to apply a black and white thresholding to the image based on a pixel's luminance. For a fast way of doing this, you could use my open source GPUImage project (supporting back to iOS 4.x) and a couple of the image processing operations it provides.
In particular, the GPUImageLuminanceThresholdFilter and GPUImageAdaptiveThresholdFilter might be what you're looking for here. The former turns a pixel to black or white based on a luminance threshold you set (the default is 50%). The latter takes the local average luminance into account when applying this threshold, which can produce better results for text on pages of a book.
Usage of these filters on a UIImage is fairly simple:
UIImage *inputImage = [UIImage imageNamed:@"book.jpg"];
GPUImageLuminanceThresholdFilter *thresholdFilter = [[GPUImageLuminanceThresholdFilter alloc] init];
UIImage *quickFilteredImage = [thresholdFilter imageByFilteringImage:inputImage];
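The adaptive variant is used the same way; a sketch (the blur radius value here is only an illustrative starting point to tune for your page photos):
GPUImageAdaptiveThresholdFilter *adaptiveThresholdFilter = [[GPUImageAdaptiveThresholdFilter alloc] init];
adaptiveThresholdFilter.blurRadiusInPixels = 4.0; // size of the local-average neighborhood
UIImage *adaptiveFilteredImage = [adaptiveThresholdFilter imageByFilteringImage:inputImage];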
These can be applied to a live camera feed and photos taken by the camera, as well.
You can use Core Image to process your image to black & white.
Use CIEdgeWork; this will convert your image to black and white.
For more information on Core Image programming, visit:
https://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/CoreImaging/ci_tasks/ci_tasks.html#//apple_ref/doc/uid/TP30001185-CH3-TPXREF101
The code you are looking for is probably this:
CIContext *context = [CIContext contextWithOptions:nil];                    // 1
CIImage *image = [CIImage imageWithContentsOfURL:myURL];                    // 2
CIFilter *filter = [CIFilter filterWithName:@"CIEdgeWork"];                 // 3
[filter setValue:image forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:3.0f] forKey:kCIInputRadiusKey]; // CIEdgeWork takes a radius, not an intensity
CIImage *result = [filter valueForKey:kCIOutputImageKey];                   // 4
CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];
Here is some sample code that may be helpful:
@implementation UIImage (GrayImage)

- (UIImage *)grayImage
{
    int width = self.size.width;
    int height = self.size.height;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(nil, width, height, 8, 0, colorSpace, kCGImageAlphaNone);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        return nil;
    }

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), self.CGImage);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *grayImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(context);

    return grayImage;
}

@end
I just wrote it as a category on UIImage. It doesn't handle PNG images with transparent pixels, though; those come out black.
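One possible tweak for that case (just a sketch, assuming a white background is acceptable): fill the gray context with white before drawing, so transparent pixels end up white rather than black:
// Inside -grayImage, before the CGContextDrawImage call:
CGContextSetGrayFillColor(context, 1.0, 1.0);                 // opaque white
CGContextFillRect(context, CGRectMake(0, 0, width, height));  // paint the background
CGContextDrawImage(context, CGRectMake(0, 0, width, height), self.CGImage);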