I want to create an OCR application that lets the user choose the specific area on which to apply the processing.
As of now, I am able to capture the entire image using AVFoundation. My current goal is to draw an overlay of some dimensions and capture only the image inside it, so that rather than the entire frame, only the region under the overlay is captured and used.
+ (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
    // rect is in the pixel coordinates of the underlying CGImage
    CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return croppedImage;
}
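To feed only the overlay region to OCR, the overlay's frame (in screen points) has to be converted into the captured image's pixel coordinates before calling the method above. A minimal sketch, assuming the captured image exactly fills the preview view (same aspect ratio); previewView, overlayView, and the ImageUtils class hosting the method are hypothetical names:
// Hypothetical views: previewView shows the camera feed, overlayView marks the crop area
CGFloat scaleX = (capturedImage.size.width  * capturedImage.scale) / previewView.bounds.size.width;
CGFloat scaleY = (capturedImage.size.height * capturedImage.scale) / previewView.bounds.size.height;
CGRect overlayInPreview = [overlayView.superview convertRect:overlayView.frame toView:previewView];
CGRect cropRect = CGRectMake(overlayInPreview.origin.x * scaleX,
                             overlayInPreview.origin.y * scaleY,
                             overlayInPreview.size.width  * scaleX,
                             overlayInPreview.size.height * scaleY);
UIImage *ocrRegion = [ImageUtils imageByCropping:capturedImage toRect:cropRect];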
I have an image with multiple icons, and I have the position and the size of the icon that I want to show.
The question is: how can I show just part of an image in a UIImageView, so that only the icon I want is visible?
Is it possible to show the icon correctly at 1x, 2x, and 3x, even if the image gets a bit pixelated?
You can crop a part of the image and create a new UIImage from it with CGImageCreateWithImageInRect:
CGRect cropRect = CGRectMake(x,y,width,height); //Calculate the rect you'd like to show
CGImageRef imageRef = CGImageCreateWithImageInRect(originalImage.CGImage, cropRect);
UIImage* outImage = [UIImage imageWithCGImage:imageRef scale:originalImage.scale orientation:originalImage.imageOrientation];
CGImageRelease(imageRef);
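One caveat: CGImageCreateWithImageInRect works in the pixel coordinates of the backing CGImage, while icon positions are usually given in points. For @2x/@3x assets the rect therefore needs to be multiplied by the image scale first; a sketch:
CGFloat s = originalImage.scale; // 2.0 for @2x, 3.0 for @3x assets
CGRect pixelRect = CGRectMake(cropRect.origin.x * s, cropRect.origin.y * s,
                              cropRect.size.width * s, cropRect.size.height * s);
CGImageRef iconRef = CGImageCreateWithImageInRect(originalImage.CGImage, pixelRect);
// Passing the scale back in keeps the icon's point size correct at 1x/2x/3x
UIImage *icon = [UIImage imageWithCGImage:iconRef scale:s orientation:originalImage.imageOrientation];
CGImageRelease(iconRef);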
I'm making an application that needs to crop one image out of another.
I want to crop the source image (the green rectangle) down to the destination image (the white rectangle). I can get the sizes of the source and destination images and the x and y offsets. How can I get that cropped image and save it to the photo library?
You can see the attached image here: [image attachment not included]
How can I crop to that image? If you can, please give me some example source code.
Thanks so much.
Use this method, passing the image and rect as parameters. You can specify the x and y offsets in cropRect.
-(UIImage *)cropImage:(UIImage *)image rect:(CGRect)cropRect
{
    // cropRect is in the pixel coordinates of the underlying CGImage
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);
    UIImage *croppedImg = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return croppedImg;
}
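For example, to crop with the x/y offsets and save the result to the photo library (offsetX, offsetY, destWidth, destHeight, and sourceImage are placeholders for your own values):
CGRect cropRect = CGRectMake(offsetX, offsetY, destWidth, destHeight); // the white-rectangle frame
UIImage *result = [self cropImage:sourceImage rect:cropRect];
UIImageWriteToSavedPhotosAlbum(result, nil, nil, nil); // saves to the photo library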
Check the code below:
-(UIImage *)imageWithImageSimple:(UIImage *)image scaledToSize:(CGSize)newSize
{
    // Draw the image into a context of the target size, then read the result back
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
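Note this second method scales rather than crops; if the cropped region then needs to match a destination size, the two helpers chain naturally (a sketch, with sourceImage, cropRect, and destSize as placeholders):
UIImage *cropped = [self cropImage:sourceImage rect:cropRect];
UIImage *final = [self imageWithImageSimple:cropped scaledToSize:destSize];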
Check the links below for reference:
http://code4app.net/ios/Image-crop-demo/501e1f3f6803faea5d000000
http://code4app.net/ios/Photo-Cropper-View-Controller/4f95519c06f6e7d870000000
http://code4app.net/ios/Image-Cropper/4f8cc87f06f6e7d86c000000
http://code4app.net/ios/Simple-Image-Editor-View/4ff2af4c6803fa381b000000
Get the sample code from there; then you can customise it for your own needs.
Xcode 5, iOS 7
I am loading an image into a UIImage, and then copying it into another UIImage with a mirror transform, i.e.:
self.imageB.image=[UIImage imageWithCGImage:[self.imageA.image CGImage] scale:1.0 orientation:UIImageOrientationUpMirrored];
Next, I'm trying to combine the two images into one when saving (originalImage refers to the loaded image prior to copying/transforming into imageA and imageB):
newSize=CGSizeMake(originalImage.size.width*2,originalImage.size.height);
UIGraphicsBeginImageContext(newSize);
UIImage *leftImage=self.imageA.image;
UIImage *rightImage=self.imageB.image;
[self.imageA.image drawInRect:CGRectMake(0,0,newSize.width/2,newSize.height)];
[self.imageB.image drawInRect:CGRectMake(newSize.width/2,0,newSize.width/2,newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil);
This works and gives me a single saved, mirrored image.
However, when I try to do this using only a sub-region of imageA and imageB, the portion from imageB loses its mirrored transformation. I end up with both sides of the final image having the same orientation! (Note: visRatio is the fraction of the image I want to keep.)
newSize=CGSizeMake(originalImage.size.width*2,originalImage.size.height);
UIGraphicsBeginImageContext(newSize);
UIImage *leftImage=self.imageA.image;
UIImage *rightImage=self.imageB.image;
// Left half: clipped region of imageA
CGRect clippedRectA = CGRectMake(0,0,originalImage.size.width*visRatio,originalImage.size.height);
CGImageRef imageRefA = CGImageCreateWithImageInRect([self.imageA.image CGImage], clippedRectA);
UIImage *leftImageA = [UIImage imageWithCGImage:imageRefA];
CGImageRelease(imageRefA); // avoid leaking the CGImage
[leftImageA drawInRect:CGRectMake(0,0,newSize.width/2,newSize.height)];
// Right half: clipped region of imageB
CGRect clippedRectB = CGRectMake(originalImage.size.width-(originalImage.size.width*visRatio),0,originalImage.size.width*visRatio,originalImage.size.height);
CGImageRef imageRefB = CGImageCreateWithImageInRect([self.imageB.image CGImage], clippedRectB);
UIImage *rightImageB = [UIImage imageWithCGImage:imageRefB];
CGImageRelease(imageRefB); // avoid leaking the CGImage
[rightImageB drawInRect:CGRectMake(newSize.width/2,0,newSize.width/2,newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil);
It's as though CGImageCreateWithImageInRect copies from the original data rather than the transformed data, presumably because the mirrored orientation is only metadata on the UIImage and the underlying CGImage pixels are untouched.
How can I accomplish this without losing the mirrored transformation of imageB?
I figured it out: this will use the original, full-resolution data opened into originalImage and save a clipped, transformed version (I left the mirroring out of this post for simplicity):
//capture the transformed version (drawing bakes the orientation into the pixels)
CGSize tmpSize=CGSizeMake(originalImage.size.width, originalImage.size.height);
UIGraphicsBeginImageContext(tmpSize);
[self.imageA.image drawInRect:CGRectMake(0,0, tmpSize.width, tmpSize.height)];
UIImage *tmpImageA=UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
visRatio=0.5;
//clip it
clippedRectA=CGRectMake(0,0,roundf(originalImage.size.width*visRatio),originalImage.size.height);
imageRefA=CGImageCreateWithImageInRect([tmpImageA CGImage],clippedRectA);
leftImageA=[UIImage imageWithCGImage:imageRefA];
CGImageRelease(imageRefA); // avoid leaking the CGImage
savedImage=leftImageA;
UIImageWriteToSavedPhotosAlbum(savedImage, nil, nil, nil);
I believe the key is to draw the image into a context first, then crop/save from that context's output instead of from the original image (imageA).
If you find a more efficient way, please post it!
Otherwise, I hope this helps someone else....
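A possibly more efficient variant (an untested sketch along the same lines): size the context to the clipped rect and draw the full image at a negative offset, so the orientation is baked in by the draw itself and the separate CGImage crop step disappears:
CGRect clipRect = CGRectMake(0, 0, roundf(originalImage.size.width*visRatio), originalImage.size.height);
UIGraphicsBeginImageContext(clipRect.size);
// drawInRect: honours the UIImage orientation, so the mirror survives;
// the negative offset shifts the wanted region into the context
[self.imageA.image drawInRect:CGRectMake(-clipRect.origin.x, -clipRect.origin.y, originalImage.size.width, originalImage.size.height)];
UIImage *clippedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(clippedImage, nil, nil, nil);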
I want to crop a UIImage (not a UIImageView) before doing pixel operations on it. Is there a reliable way to do this in the iOS frameworks?
Here are the related Android methods I am using:
http://developer.android.com/reference/android/graphics/Bitmap.html#createBitmap(android.graphics.Bitmap, int, int, int, int)
http://developer.android.com/reference/android/graphics/Bitmap.html#createBitmap(int, int, android.graphics.Bitmap.Config)
-(UIImage*)scaleToSize:(CGSize)size image:(UIImage*)image
{
UIGraphicsBeginImageContext(size);
// Draw the scaled image in the current context
[image drawInRect:CGRectMake(0, 0, size.width, size.height)];
// Create a new image from current context
UIImage* scaledImage = UIGraphicsGetImageFromCurrentImageContext();
// Pop the current context from the stack
UIGraphicsEndImageContext();
// Return our new scaled image
return scaledImage;
}
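For the cropping half of the question, the closest equivalent to Android's createBitmap(source, x, y, width, height) is the CGImageCreateWithImageInRect pattern; a sketch (x, y, width, and height are the same arguments you would pass on Android):
CGRect cropRect = CGRectMake(x, y, width, height);
CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef scale:image.scale orientation:image.imageOrientation];
CGImageRelease(croppedRef);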
I would advise you to look at:
The UIKit/Core Graphics image-context APIs (UIGraphicsBeginImageContext and friends) to create an image context, draw in it, and save the result as a UIImage in RAM or to disk.
The Core Image Apple framework if you want a powerful and fully customizable image manipulation solution.
A third-party lib to crop and resize images: CocoaPods is a great way to quickly integrate that kind of library. Here is a list of some interesting image manipulation pods.
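As an illustration of the Core Image route, a minimal sketch (sourceImage and cropRect are placeholders; note that Core Image uses a bottom-left origin, so the rect may need flipping):
CIImage *ciInput = [CIImage imageWithCGImage:sourceImage.CGImage];
CIImage *ciCropped = [ciInput imageByCroppingToRect:cropRect];
CIContext *context = [CIContext contextWithOptions:nil];
// Render the cropped CIImage back into a CGImage, then wrap it in a UIImage
CGImageRef cgCropped = [context createCGImage:ciCropped fromRect:ciCropped.extent];
UIImage *cropped = [UIImage imageWithCGImage:cgCropped];
CGImageRelease(cgCropped);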
This worked for me:
-(UIImage *)cropCanvas:(UIImage *)input x1:(int)x1 y1:(int)y1 x2:(int)x2 y2:(int)y2{
    // x1/y1 and x2/y2 are the margins to trim from the left/top and right/bottom
    CGRect rect = CGRectMake(x1, y1, input.size.width-x2-x1, input.size.height-y2-y1);
    CGImageRef imageref = CGImageCreateWithImageInRect([input CGImage], rect);
    UIImage *img = [UIImage imageWithCGImage:imageref];
    CGImageRelease(imageref); // release to avoid leaking the CGImage
    return img;
}
I have a UIImageView whose content mode is set to aspect fit:
[imageView setContentMode:UIViewContentModeScaleAspectFit];
I need to crop a sub-image from its image. This is the code that does the cropping:
CGImageRef imageRef = CGImageCreateWithImageInRect([imageView.image CGImage], customRect);
UIImage *cropped = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
where customRect is the rectangle from which I need to crop the image.
This is how I calculate it:
CGRect customRect = CGRectMake((cropView.frame.origin.x/xFactor),
(cropView.frame.origin.y/yFactor),
(cropView.frame.size.width/xFactor),
(cropView.frame.size.height/yFactor));
The problem comes in the cropping: CGImageCreateWithImageInRect crops the given area according to the actual image size, which in some cases is larger than the image view's size. I tried other approaches, such as UIGraphicsGetImageFromCurrentImageContext, but those degrade the image quality.
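For reference, the scale factors for an aspect-fit image view can be derived with AVMakeRectWithAspectRatioInsideRect from AVFoundation; a sketch, assuming cropView's frame is expressed in imageView's coordinate space:
#import <AVFoundation/AVFoundation.h>
// The rect the image actually occupies inside the aspect-fit image view
CGRect fitted = AVMakeRectWithAspectRatioInsideRect(imageView.image.size, imageView.bounds);
CGFloat xFactor = fitted.size.width  / imageView.image.size.width;  // displayed points per image point
CGFloat yFactor = fitted.size.height / imageView.image.size.height;
// Subtract the letterbox offset before dividing by the factors
CGRect customRect = CGRectMake((cropView.frame.origin.x - fitted.origin.x) / xFactor,
                               (cropView.frame.origin.y - fitted.origin.y) / yFactor,
                               cropView.frame.size.width  / xFactor,
                               cropView.frame.size.height / yFactor);
// customRect is now in image point coordinates; multiply by image.scale for CGImage pixels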