Crop and recreate UIImage - iOS

I want to crop a UIImage (not a UIImageView) before doing pixel operations on it. Is there a reliable way to do this with the iOS frameworks?
Here are the related methods I am using on Android:
http://developer.android.com/reference/android/graphics/Bitmap.html#createBitmap(android.graphics.Bitmap, int, int, int, int)
http://developer.android.com/reference/android/graphics/Bitmap.html#createBitmap(int, int, android.graphics.Bitmap.Config)

-(UIImage*)scaleToSize:(CGSize)size image:(UIImage*)image
{
    UIGraphicsBeginImageContext(size);
    // Draw the scaled image in the current context
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    // Create a new image from the current context
    UIImage* scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    // Pop the current context from the stack
    UIGraphicsEndImageContext();
    // Return our new scaled image
    return scaledImage;
}

I would advise you to look at:
The Core Animation Apple framework to create an image context, draw into it, and save the result as a UIImage in RAM or to disk.
The Core Image Apple framework if you want a powerful and fully customizable image manipulation solution.
A third-party library to crop and resize images: CocoaPods is a great way to quickly integrate that kind of library. Here is a list of some interesting image manipulation pods.
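For instance, a minimal sketch of the Core Image route (an illustration only, not code from this answer; the crop rect is assumed to be in the image's pixel space, and Core Image uses a bottom-left origin):
#import <CoreImage/CoreImage.h>

- (UIImage *)cropWithCoreImage:(UIImage *)input toRect:(CGRect)cropRect
{
    CIImage *ciImage = [CIImage imageWithCGImage:input.CGImage];
    // Core Image rects have a bottom-left origin, unlike UIKit.
    CIImage *croppedCI = [ciImage imageByCroppingToRect:cropRect];
    CIContext *ciContext = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [ciContext createCGImage:croppedCI fromRect:croppedCI.extent];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}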

This worked for me:
- (UIImage *)cropCanvas:(UIImage *)input x1:(int)x1 y1:(int)y1 x2:(int)x2 y2:(int)y2 {
    // x1/y1 and x2/y2 are insets from the left/top and right/bottom edges.
    CGRect rect = CGRectMake(x1, y1, input.size.width - x2 - x1, input.size.height - y2 - y1);
    CGImageRef imageref = CGImageCreateWithImageInRect([input CGImage], rect);
    UIImage *img = [UIImage imageWithCGImage:imageref];
    CGImageRelease(imageref);
    return img;
}
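Hypothetical usage (photo is a placeholder image; the insets trim 10 from each edge). Keep in mind that CGImageCreateWithImageInRect works in the pixel coordinates of the underlying CGImage, so for @2x images the values should be multiplied by the image's scale:
// Crop 10 in from each edge of `photo` (hypothetical example).
UIImage *cropped = [self cropCanvas:photo x1:10 y1:10 x2:10 y2:10];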

Related

How to make CGImageCreateWithImageInRect reference transformed image?

Xcode 5, iOS 7
I am loading an image into a UIImage, and then copying it into another UIImage with a mirrored transform, i.e.:
self.imageB.image=[UIImage imageWithCGImage:[self.imageA.image CGImage] scale:1.0 orientation:UIImageOrientationUpMirrored];
Next, I'm trying to combine the two images into one when saving (originalImage refers to the loaded image prior to copying/transforming into imageA and imageB):
newSize=CGSizeMake(originalImage.size.width*2,originalImage.size.height);
UIGraphicsBeginImageContext(newSize);
UIImage *leftImage=self.imageA.image;
UIImage *rightImage=self.imageB.image;
[self.imageA.image drawInRect:CGRectMake(0,0,newSize.width/2,newSize.height)];
[self.imageB.image drawInRect:CGRectMake(newSize.width/2,0,newSize.width/2,newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil);
This works and gives me a single saved mirrored image.
However, I'm trying to do this using only a sub-region of imageA and imageB,
and the result is that the portion from imageB loses its mirrored transformation.
I end up with both sides of the final image having the same orientation! (Note: visRatio is the percentage of the image I want to keep.)
newSize=CGSizeMake(originalImage.size.width*2,originalImage.size.height);
UIGraphicsBeginImageContext(newSize);
UIImage *leftImage=self.imageA.image;
UIImage *rightImage=self.imageB.image;
CGRect clippedRectA = CGRectMake(0,0,originalImage.size.width*visRatio,originalImage.size.height);
CGImageRef imageRefA = CGImageCreateWithImageInRect([self.imageA.image CGImage], clippedRectA);
UIImage *leftImageA = [UIImage imageWithCGImage:imageRefA];
[leftImageA drawInRect:CGRectMake(0,0,newSize.width/2,newSize.height)];
CGRect clippedRectB = CGRectMake(originalImage.size.width-(originalImage.size.width*visRatio),0,originalImage.size.width*visRatio,originalImage.size.height);
CGImageRef imageRefB = CGImageCreateWithImageInRect([self.imageB.image CGImage], clippedRectB);
UIImage *rightImageB = [UIImage imageWithCGImage:imageRefB];
[rightImageB drawInRect:CGRectMake(newSize.width/2,0,newSize.width/2,newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil);
It's as though "CGImageCreateWithImageInRect" copies the image from the original, not the transformed, data.
How can I accomplish this without losing the mirrored transformations of imageB ?
I figured it out - this will use the original, full-resolution data opened into originalImage and save a clipped, transformed version (I left the mirroring out of this post for simplicity):
//capture the transformed version by drawing imageA into a fresh context
CGSize tmpSize = CGSizeMake(originalImage.size.width, originalImage.size.height);
UIGraphicsBeginImageContext(tmpSize);
[self.imageA.image drawInRect:CGRectMake(0, 0, tmpSize.width, tmpSize.height)];
UIImage *tmpImageA = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
visRatio = 0.5;
//clip it - tmpImageA already has the transform baked in, so the crop keeps it
clippedRectA = CGRectMake(0, 0, roundf(originalImage.size.width * visRatio), originalImage.size.height);
imageRefA = CGImageCreateWithImageInRect([tmpImageA CGImage], clippedRectA);
leftImageA = [UIImage imageWithCGImage:imageRefA];
CGImageRelease(imageRefA);
savedImage = leftImageA;
UIImageWriteToSavedPhotosAlbum(savedImage, nil, nil, nil);
I believe the key is to draw the image into the context, then use the context for cropping/saving, instead of the main image (imageA).
If you find a more efficient way, please post it!
Otherwise, I hope this helps someone else....
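For anyone looking for a shorter route, here is an untested sketch of one: crop the raw CGImage first, then re-wrap the cropped CGImage with the mirrored orientation, since drawInRect: honors a UIImage's imageOrientation. The crop origin below assumes the mirrored right-hand region corresponds to the left edge of the original:
// Sketch: crop the unmirrored original, then reattach the mirrored orientation
// so drawInRect: performs the flip while drawing.
CGRect clipRect = CGRectMake(0, 0,
                             originalImage.size.width * visRatio,
                             originalImage.size.height);
CGImageRef croppedRef = CGImageCreateWithImageInRect([originalImage CGImage], clipRect);
UIImage *mirroredCrop = [UIImage imageWithCGImage:croppedRef
                                            scale:originalImage.scale
                                      orientation:UIImageOrientationUpMirrored];
CGImageRelease(croppedRef);
[mirroredCrop drawInRect:CGRectMake(newSize.width / 2, 0, newSize.width / 2, newSize.height)];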

Combine two images

I would like to take an image and duplicate it, then scale the duplicate up to 105% and overlay it on the original image.
What is the correct way to do this on iOS?
This is your basic code for drawing the image and then saving it as an image again:
- (UIImage *)renderImage:(UIImage *)image atSize:(CGSize)size
{
    UIGraphicsBeginImageContext(size);
    [image drawInRect:CGRectMake(0.0, 0.0, size.width, size.height)];
    // draw anything else into the context
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Where it says "draw anything else into the context" you can draw the image at a reduced size by setting the appropriate rect to draw in. Then, call the renderImage method with whatever size you want the full image to be. You can use CGContextSetAlpha to set the transparency.
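As a concrete illustration, here is a sketch of the 105% overlay asked about above (the 0.5 alpha and the centering of the enlarged copy are assumptions about the desired effect):
- (UIImage *)overlayEnlargedCopyOfImage:(UIImage *)image
{
    CGSize size = image.size;
    UIGraphicsBeginImageContextWithOptions(size, NO, image.scale);

    // Draw the original image first.
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];

    // Draw a copy scaled to 105%, centered over the original, semi-transparent.
    CGFloat scale = 1.05;
    CGRect enlargedRect = CGRectMake(-size.width * (scale - 1) / 2,
                                     -size.height * (scale - 1) / 2,
                                     size.width * scale,
                                     size.height * scale);
    [image drawInRect:enlargedRect blendMode:kCGBlendModeNormal alpha:0.5];

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}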

Is it possible to use the built-in iOS crop tool in my app?

In the photo albums app there's a built-in edit -> cropping tool. Is it possible to use that tool in an app instead of writing it on my own? Is it part of the framework?
No, there is no built-in crop tool. However, it would not be that hard to write such a tool.
You'd need to create a control that lets the user drag around an image in a scroll view, and collect the coordinates.
Then you'd create a graphics context and use the UIImage method drawInRect: to draw the image into a rect that's larger than the graphics context. The result would be to draw a cropped portion of the image into the context. Then you'd extract an image from the graphics context and discard the graphics context.
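A minimal sketch of that drawing step (cropRect is an assumed parameter, in the image's point coordinates):
- (UIImage *)croppedImage:(UIImage *)image toRect:(CGRect)cropRect
{
    UIGraphicsBeginImageContextWithOptions(cropRect.size, NO, image.scale);

    // Draw the full image offset so that cropRect lands at the context's origin;
    // everything outside the context is discarded.
    [image drawInRect:CGRectMake(-cropRect.origin.x,
                                 -cropRect.origin.y,
                                 image.size.width,
                                 image.size.height)];

    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return cropped;
}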
No, that is not part of the SDK, but you can easily crop and resize images in iOS.
- (UIImage *)resizeImage:(UIImage *)image width:(float)w height:(float)h {
    UIImage *croppedImage = image;
    CGSize size = CGSizeMake(w, h);
    UIGraphicsBeginImageContext(size);
    CGRect rect = CGRectMake(0.0f, 0.0f, size.width, size.height);
    [image drawInRect:rect];
    croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return croppedImage;
}
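Note that the method above scales the whole image into the new size rather than cutting a region out of it. For an actual crop, something along these lines works (a sketch; the rect is taken in the pixel coordinates of the underlying CGImage):
- (UIImage *)cropImage:(UIImage *)image toRect:(CGRect)rect
{
    // CGImageCreateWithImageInRect works on the raw CGImage, in pixels.
    CGImageRef croppedRef = CGImageCreateWithImageInRect([image CGImage], rect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                           scale:image.scale
                                     orientation:image.imageOrientation];
    CGImageRelease(croppedRef);
    return cropped;
}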
I created a crop tool that might fit your need. It's not based on a scroll view, but rather letting the user choose a frame around their image.
https://github.com/nicholjs/BFCropInterface

How to get a CGImageRef from Context-Drawn Images?

OK, using Core Graphics I'm building up an image which will later be used in a CGContextClipToMask operation. It looks something like the following:
UIImage *eyes = [UIImage imageNamed:@"eyes"];
UIImage *mouth = [UIImage imageNamed:@"mouth"];
CGRect bounds = CGRectMake(0, 0, 150, 150);
UIGraphicsBeginImageContext(bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(context, 0, 0, 0, 1);
CGContextFillRect(context, bounds);
[eyes drawInRect:bounds blendMode:kCGBlendModeMultiply alpha:1];
[mouth drawInRect:bounds blendMode:kCGBlendModeMultiply alpha:1];
// how can I now get a CGImageRef here to use in a masking operation?
UIGraphicsEndImageContext();
Now, as you can see by the comment, I'm wondering how I'm actually going to USE the image I've built up. The reason why I'm using core graphics here and not just building up a UIImage is that the transparency I'm creating is very important. If I just grab a UIImage from the context, when it's used as a mask, it will just apply to everything... Further to the point, will I have any problems using a partially-transparent mask using this method?
CGImageRef result = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext()); // call this before UIGraphicsEndImageContext()
You can call the UIGraphicsGetImageFromCurrentImageContext function, which will return a UIImage object. You can hold onto and use the UIImage, or ask it for its CGImage.
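Putting those two answers together, a rough sketch (targetContext and bounds are assumed to exist in the surrounding code; also note that CGContextClipToMask expects either an image mask or a DeviceGray image without alpha, so the RGB image drawn above may need a grayscale conversion before it behaves as a mask):
// Capture the CGImage while the bitmap context is still current
// (i.e., where the "how can I..." comment sits, before UIGraphicsEndImageContext()).
CGImageRef maskRef = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
UIGraphicsEndImageContext();

// Later, when drawing the content that should be clipped:
CGContextSaveGState(targetContext);
CGContextClipToMask(targetContext, bounds, maskRef);
// ... draw whatever should show through the mask ...
CGContextRestoreGState(targetContext);
CGImageRelease(maskRef);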

MonoTouch convert color UIImage with alpha to grayscale and blur?

I am trying to find a recipe for producing a blurred grayscale UIImage from a color PNG with alpha. There are recipes out there in ObjC, but MonoTouch does not bind the CGRect functions, so I am not sure how to do this. Any ideas?
Here is one ObjC example of grayscale:
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    // Create bitmap context with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [image CGImage]);
    // Create bitmap image info from pixel data in current context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    // Create a new UIImage object
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];
    // Release colorspace, context and bitmap information
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);
    // Return the new grayscale image
    return newImage;
}
"MonoTouch does not bind the CGRect functions so not sure how to do this."
When using MonoTouch, CGRect is mapped to RectangleF. A lot of extension methods exist that should map to every function provided by CGRect. You should not have any problem porting Objective-C code that uses them.
If something is missing, please file a bug report at http://bugzilla.xamarin.com and we'll fix it asap (and provide a workaround when possible).
"There are recipes out there in ObjC but MonoTouch"
If you have links then please edit your question and add them. That will make it easier to help you :)
UPDATE
Here's a line-by-line C# translation of your example. It seems to work for me (and it's much easier on my eyes than Objective-C ;-)
UIImage ConvertToGrayScale (UIImage image)
{
    RectangleF imageRect = new RectangleF (PointF.Empty, image.Size);
    using (var colorSpace = CGColorSpace.CreateDeviceGray ())
    using (var context = new CGBitmapContext (IntPtr.Zero, (int) imageRect.Width, (int) imageRect.Height, 8, 0, colorSpace, CGImageAlphaInfo.None)) {
        context.DrawImage (imageRect, image.CGImage);
        using (var imageRef = context.ToImage ())
            return new UIImage (imageRef);
    }
}
I wrote a native port of the blur and tint UIImage categories from WWDC for MonoTouch.
https://github.com/lipka/MonoTouch.UIImageEffects
Sample code for tint and blur:
UIColor tintColor = UIColor.FromWhiteAlpha (0.11f, 0.73f);
UIImage yourImage;
yourImage.ApplyBlur (20f /*blurRadius*/, tintColor, 1.8f /*deltaSaturationFactor*/, null);
