How to save a UIImage with a mask - iOS

I have a UIImageView that can be moved/scaled (self.imageForEditing). On top of this image view I have an overlay with a hole cut out, which is static and can't be moved. I need to save just the part of the underlying image that is visible through the hole at the time a button is pressed. My current attempt:
- (IBAction)saveImage
{
    UIImage *image = self.imageForEditing.image;

    // Build a mask from the overlay image.
    CGImageRef originalMask = [UIImage imageNamed:@"picOverlay"].CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(originalMask),
                                        CGImageGetHeight(originalMask),
                                        CGImageGetBitsPerComponent(originalMask),
                                        CGImageGetBitsPerPixel(originalMask),
                                        CGImageGetBytesPerRow(originalMask),
                                        CGImageGetDataProvider(originalMask), NULL, YES);

    // Apply the mask to the underlying image.
    CGImageRef maskedImageRef = CGImageCreateWithMask(image.CGImage, mask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedImageRef
                                               scale:image.scale
                                         orientation:image.imageOrientation];
    CGImageRelease(mask);
    CGImageRelease(maskedImageRef);

    UIImageView *test = [[UIImageView alloc] initWithImage:maskedImage];
    [self.view addSubview:test];
}
As a test, I'm just trying to add the newly created image to the top left of the screen. In theory it should be a small round image (the part that was visible through the overlay), but I just get the whole image again. What am I doing wrong? And how can I account for the fact that self.imageForEditing can be moved around?

CGImageCreateWithMask returns an image the same size as the original.
That is why (I assume) you get what looks like the original image back: the mask is applied, but the transparent area still occupies the full original bounds.
You can apply the mask and then remove the invisible border. Use the advice from this question: iOS: How to trim an image to the useful parts (remove transparent border)
Find the bounds of the non-transparent part of the image and redraw it into a new image.
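For reference, here is a minimal sketch of that trim step (my own, not from the linked answer): it renders the masked image into an alpha-only bitmap, scans for the bounding box of non-transparent pixels, and crops to it. The helper name trimmedImage: is hypothetical.
- (UIImage *)trimmedImage:(UIImage *)image
{
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Draw into an alpha-only bitmap so there is one byte to scan per pixel.
    uint8_t *alpha = calloc(width * height, 1);
    CGContextRef ctx = CGBitmapContextCreate(alpha, width, height, 8, width, NULL, kCGImageAlphaOnly);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(ctx);

    // Find the bounding box of all pixels with non-zero alpha.
    size_t minX = width, minY = height, maxX = 0, maxY = 0;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            if (alpha[y * width + x] > 0) {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }
    free(alpha);

    if (maxX < minX || maxY < minY) return image; // fully transparent image

    CGRect box = CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
    CGImageRef cropped = CGImageCreateWithImageInRect(cgImage, box);
    UIImage *result = [UIImage imageWithCGImage:cropped scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(cropped);
    return result;
}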

Related

iOS: How to get a piece of a stretched image?

The generic problem I'm facing is this:
I have a stretchable 50x50 PNG. I'm stretching it to 300x100. I want to get three UIImages of size 100x100 cut from the stretched image, A, B & C in the picture below:
I'm trying to do it like this:
// stretchedImage is the 50x50 UIImage, abcImageView is the 300x100 UIImageView
UIImage *stretchedImage = [abcImageView.image stretchableImageWithLeftCapWidth:25 topCapHeight:25];
CGImageRef image = CGImageCreateWithImageInRect(stretchedImage.CGImage, bButton.frame);
UIImage *result = [UIImage imageWithCGImage:image];
[bButton setBackgroundImage:result forState:UIControlStateSelected];
CGImageRelease(image);
I'm trying to crop the middle 100x100 ("B") using CGImageCreateWithImageInRect, but this is not right, since stretchedImage is 50x50, not 300x100. How do I get the 300x100 image to crop from? If the original image were 300x100 there would be no problem, but then I would lose the advantage of a stretchable image.
I guess to generalize the problem even more, the question would be as simple as: if you scale or stretch an image to a bigger image view, how do you get the scaled/stretched image?
Background on the specific task I'd like to apply the solution to (in case you can come up with an alternative solution):
I'm trying to implement a UI similar to the one you see during a call in the native iPhone call application: a plate containing buttons for mute, speaker, hold, etc. Some of them are toggle-type buttons with a different background color for the selected state.
I have two graphics for the whole plate, for non-selected and selected states. I'm stretching both images to the desired size. For the buttons in selected state I want to get a piece of the stretched selected graphic.
You should be able to do this by rendering abcImageView to a UIImage:
// Render the stretched image view into an image context at screen scale.
UIGraphicsBeginImageContextWithOptions(abcImageView.bounds.size, NO, 0.f);
[abcImageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Then, you can crop the image like this (given cropRect):
CGImageRef cgImage = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:cgImage];
// Do something with the image.
CGImageRelease(cgImage);
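One caveat, as an addition of my own: the context above was created with a scale of 0.f, so on a Retina screen image.scale will be 2.0, and CGImageCreateWithImageInRect works in pixels. If cropRect is in points, convert it first:
// cropRect is in points; CGImageCreateWithImageInRect expects pixels.
CGRect pixelRect = CGRectMake(cropRect.origin.x * image.scale,
                              cropRect.origin.y * image.scale,
                              cropRect.size.width * image.scale,
                              cropRect.size.height * image.scale);
CGImageRef cgImage = CGImageCreateWithImageInRect(image.CGImage, pixelRect);
UIImage *croppedImage = [UIImage imageWithCGImage:cgImage scale:image.scale orientation:UIImageOrientationUp];
CGImageRelease(cgImage);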

How to retrieve a single image from a png file in iPhone

I can access an image by using [UIImage imageNamed:@"nameOfPng.png"]; if there is a single image in the .png file. But if a .png file contains multiple images, how do we access the sub-images from the single UIImage?
This is the single .png file. If I wanted to get the red button image, or any other button image, how can I do this?
What you are looking to do is create a clipped UIImage from a section of your current image. I usually do it this way, changing the clip rect each time, where srcImage is your original image:
//Set the clip rectangle
CGRect clipRect = CGRectMake(0, 0, 100, 100);
//Get sub image
CGImageRef drawImage = CGImageCreateWithImageInRect(srcImage.CGImage, clipRect);
UIImage *newImage = [UIImage imageWithCGImage:drawImage];
CGImageRelease(drawImage);
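To slice several buttons out of one sheet, the same call can be wrapped in a loop. A minimal sketch, assuming the sub-images sit on a regular grid (the grid layout and the helper name subImagesFromSheet:cellSize: are my assumptions, not from the original answer):
- (NSArray *)subImagesFromSheet:(UIImage *)sheet cellSize:(CGSize)cellSize
{
    // cellSize is in pixels, matching the CGImage's pixel dimensions.
    NSMutableArray *images = [NSMutableArray array];
    size_t columns = (size_t)(CGImageGetWidth(sheet.CGImage) / cellSize.width);
    size_t rows = (size_t)(CGImageGetHeight(sheet.CGImage) / cellSize.height);

    for (size_t row = 0; row < rows; row++) {
        for (size_t col = 0; col < columns; col++) {
            CGRect cell = CGRectMake(col * cellSize.width, row * cellSize.height,
                                     cellSize.width, cellSize.height);
            CGImageRef cgImage = CGImageCreateWithImageInRect(sheet.CGImage, cell);
            [images addObject:[UIImage imageWithCGImage:cgImage]];
            CGImageRelease(cgImage);
        }
    }
    return images;
}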

Creating a UIImage from two smaller UIImages

I have two instances of UIImage. How can I create a third UIImage that's just the two original images stitched together? I'd like the first image on top and the second image on the bottom such that the top image's bottom edge is flush with the bottom image's top edge.
Something like this should work (I haven't tested it, though):
- (UIImage *)imageWithTopImage:(UIImage *)topImage bottomImage:(UIImage *)bottomImage
{
    // The canvas is as wide as the top image and as tall as both images stacked.
    UIGraphicsBeginImageContext(CGSizeMake(topImage.size.width, topImage.size.height + bottomImage.size.height));
    [topImage drawInRect:CGRectMake(0, 0, topImage.size.width, topImage.size.height)];
    // The bottom image starts at the top image's height, so the edges are flush.
    [bottomImage drawInRect:CGRectMake(0, topImage.size.height, bottomImage.size.width, bottomImage.size.height)];
    UIImage *combinedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return combinedImage;
}
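One refinement worth considering (my suggestion, not part of the original answer): UIGraphicsBeginImageContext renders at a scale of 1.0, so the combined image will look soft on Retina screens. The WithOptions variant preserves the screen scale:
// Passing 0.f for scale means "use the main screen's scale", keeping Retina assets sharp.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(topImage.size.width, topImage.size.height + bottomImage.size.height), NO, 0.f);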

Crop UIImage according to Image Resolution

I have one UIImageView. Its content mode is set to AspectFit.
[imageView setContentMode:UIViewContentModeScaleAspectFit];
I need to crop a subImage from this image. This is the code which crops the image:
CGImageRef imageRef = CGImageCreateWithImageInRect([imageView.image CGImage], customRect);
UIImage *cropped = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
where customRect is the rectangle from which I need to crop the image.
This is how I calculate it:
CGRect customRect = CGRectMake((cropView.frame.origin.x / xFactor),
                               (cropView.frame.origin.y / yFactor),
                               (cropView.frame.size.width / xFactor),
                               (cropView.frame.size.height / yFactor));
The problem comes in cropping: CGImageCreateWithImageInRect crops the given area according to the actual image size, which in some cases is larger than the image view's size. I tried other approaches, such as UIGraphicsGetImageFromCurrentImageContext, but those degrade the image quality.
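For what it's worth, here is a sketch (my own, not from the post) of how the xFactor/yFactor scaling and the aspect-fit letterboxing offsets can be derived, mapping a rect in the image view's coordinates to the image's pixel space. The helper name imageRectForViewRect:inImageView: is hypothetical:
- (CGRect)imageRectForViewRect:(CGRect)viewRect inImageView:(UIImageView *)imageView
{
    UIImage *image = imageView.image;
    CGSize viewSize = imageView.bounds.size;
    CGSize imageSize = image.size;

    // Aspect-fit uses the smaller ratio, so the whole image fits in the view.
    CGFloat scale = MIN(viewSize.width / imageSize.width,
                        viewSize.height / imageSize.height);

    // The fitted image is centered; these are the letterbox margins.
    CGFloat offsetX = (viewSize.width - imageSize.width * scale) / 2.0;
    CGFloat offsetY = (viewSize.height - imageSize.height * scale) / 2.0;

    // Map from view points to image points, then from points to pixels.
    CGRect imageRect = CGRectMake((viewRect.origin.x - offsetX) / scale,
                                  (viewRect.origin.y - offsetY) / scale,
                                  viewRect.size.width / scale,
                                  viewRect.size.height / scale);
    CGFloat pixelScale = image.scale;
    return CGRectMake(imageRect.origin.x * pixelScale,
                      imageRect.origin.y * pixelScale,
                      imageRect.size.width * pixelScale,
                      imageRect.size.height * pixelScale);
}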

Crop UIImage from a transformed UIImageView

I am letting the user capture an image from the camera or picking one from the library.
This image I display in an UIImageView.
The user can now scale and position the image within a bounding box, exactly like you would do using the UIImagePickerController when allowsEditing is set to YES.
When the user is satisfied with the result and taps Done I would like to produce a cropped UIImage.
The problem arises when using CGImageCreateWithImageInRect, as this does not take the scaling into account. The transform is applied to the imageView from a gesture recognizer like this:
CGAffineTransform transform = CGAffineTransformScale(self.imageView.transform, newScale, newScale);
[self.imageView setTransform:transform];
I assume what is happening is: the UIImageView is scaled and moved, it then applies UIViewContentModeScaleAspectFit to the UIImage it holds, and when I ask it to crop the image, it does exactly that, with no regard to the scaling or positioning. The reason I think this is that if I don't scale or move the image but just tap Done straight away, the cropping works.
I crop the image like this:
- (UIImage *)cropImage:(UIImage *)img toRect:(CGRect)rect {
    // Convert the rect from points to pixels on Retina screens.
    CGFloat scale = [[UIScreen mainScreen] scale];
    if (scale > 1.0) {
        rect = CGRectMake(rect.origin.x * scale, rect.origin.y * scale,
                          rect.size.width * scale, rect.size.height * scale);
    }
    CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef
                                          scale:self.imageView.image.scale
                                    orientation:self.imageView.image.imageOrientation];
    // UIImage *result = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return result;
}
Passing in a cropRect from a view that is a subview of my main view (the square overlay box, like in UIImagePickerController). The main UIView has a UIImageView that gets scaled and a UIView that displays the crop rectangle.
How can I get "what you see is what you get" cropping, and which factors must I take into account? Or maybe you have suggestions on whether I should implement the hierarchy or scaling differently.
Try a simple trick. Apple has samples on its site that show how to zoom into a photo in code. Once you're done zooming, use a graphics context: take the frame size of the bounding view and render the image with that. E.g. a UIView contains a scroll view which holds the zoomed image; the scroll view zooms, and so does your image. Now take the frame size of your bounding UIView, create an image context out of it, and save that as a new image. Tell me if that makes sense.
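A minimal sketch of that render-the-bounding-view trick (boundingView is an assumed name for the view that frames the crop area and contains the scroll view):
// Render everything currently visible inside boundingView into a new image.
// Because renderInContext: captures the layer tree as composited, the scroll
// view's zoom and pan are baked in: "what you see is what you get".
UIGraphicsBeginImageContextWithOptions(boundingView.bounds.size, NO, 0.f);
[boundingView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();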
Cheers :)
