How to remove the transparent area of a UIImageView after masking? - ios

In one of my iOS applications, I am trying to cut a portion of an image using CGImageMask. I have succeeded in masking the image with the following code:
- (UIImage *)maskImage:(UIImage *)referenceImage withMask:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    // Build a Quartz image mask from the mask image's grayscale data.
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([referenceImage CGImage], mask);
    CGImageRelease(mask); // CGImageMaskCreate returns a +1 reference
    UIImage *result = [UIImage imageWithCGImage:masked];
    CGImageRelease(masked); // CGImageCreateWithMask returns a +1 reference
    return result;
}
So, my image will be:
myImageView.image = [self maskImage:[UIImage imageNamed:@"image.png"]
                           withMask:[UIImage imageNamed:@"mask.png"]];
Problem:
The output image is the same size as the reference image ('image.png'), with a transparent area around the masked region. I want to remove that transparent area and crop the result image. How can I achieve this? There are several masks, and the mask frames are not the same for all of them. I am attaching a reference image of the problem overview here. Thanks in advance.

Look up auto-cropping a UIImage. This should crop out anything transparent.
How do I autocrop a UIImage?
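For reference, here is a minimal sketch of that approach (the helper name and the alpha-only bitmap pass are my assumptions, not code from the linked answer): render the image's alpha channel into a bitmap, find the bounding box of the non-transparent pixels, and crop with CGImageCreateWithImageInRect.

- (UIImage *)autoCroppedImage:(UIImage *)image {
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    // Render just the alpha channel, one byte per pixel.
    uint8_t *alpha = calloc(width * height, sizeof(uint8_t));
    CGContextRef ctx = CGBitmapContextCreate(alpha, width, height, 8, width,
                                             NULL, (CGBitmapInfo)kCGImageAlphaOnly);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(ctx);
    // Bounding box of all pixels with non-zero alpha.
    size_t minX = width, minY = height, maxX = 0, maxY = 0;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            if (alpha[y * width + x] > 0) {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }
    free(alpha);
    if (maxX < minX || maxY < minY) return image; // fully transparent image
    CGRect cropRect = CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
    CGImageRef cropped = CGImageCreateWithImageInRect(cgImage, cropRect);
    UIImage *result = [UIImage imageWithCGImage:cropped
                                          scale:image.scale
                                    orientation:image.imageOrientation];
    CGImageRelease(cropped);
    return result;
}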

Related

Create a rounded inflated square [duplicate]

This question already has answers here: Draw iOS 7-style squircle programmatically (8 answers)
How do I create a rounded, inflated-square UIImageView? I need to create contact icons like those in the Viber application.
Sample image
https://graphicdesign.stackexchange.com/questions/35579/create-a-rounded-inflated-square-in-illustrator-photoshop
I asked about a rounded INFLATED square, and I think the questions are different.
The easy way to do it is to use a mask and apply it to your image.
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
    CGImageRelease(mask);
    UIImage *result = [UIImage imageWithCGImage:masked];
    CGImageRelease(masked);
    return result;
}
See this tutorial: http://iosdevelopertips.com/cocoa/how-to-mask-an-image.html#comment-47347
And the stackoverflow topic about your issue:
How to Mask an UIImageView
Possible duplicate here: Draw iOS 7-style squircle programmatically
Anyway, what you want is a squircle/superellipse. Read up on UIBezierPath and scale the curves up for your usage; a sketch follows.
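If you want the true superellipse rather than a pre-drawn mask image, here is a minimal sketch (the helper name, the point count, and the exponent of 5 are my assumptions): sample the superellipse |x/a|^n + |y/b|^n = 1 into a UIBezierPath and use it as a CAShapeLayer mask.

- (UIBezierPath *)squirclePathInRect:(CGRect)rect exponent:(CGFloat)n {
    CGFloat a = CGRectGetWidth(rect) / 2.0;
    CGFloat b = CGRectGetHeight(rect) / 2.0;
    CGPoint c = CGPointMake(CGRectGetMidX(rect), CGRectGetMidY(rect));
    UIBezierPath *path = [UIBezierPath bezierPath];
    const NSInteger steps = 256;
    for (NSInteger i = 0; i <= steps; i++) {
        CGFloat t = 2.0 * M_PI * i / steps;
        // Parametric superellipse: x = a * sign(cos t) * |cos t|^(2/n), same for y.
        CGFloat x = a * copysign(pow(fabs(cos(t)), 2.0 / n), cos(t));
        CGFloat y = b * copysign(pow(fabs(sin(t)), 2.0 / n), sin(t));
        CGPoint p = CGPointMake(c.x + x, c.y + y);
        if (i == 0) { [path moveToPoint:p]; } else { [path addLineToPoint:p]; }
    }
    [path closePath];
    return path;
}

Usage: set the path on a CAShapeLayer and assign it as the image view's layer mask:

CAShapeLayer *shape = [CAShapeLayer layer];
shape.path = [self squirclePathInRect:imageView.bounds exponent:5.0].CGPath;
imageView.layer.mask = shape;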

CGImageMaskCreate specify color for mask image

How can I specify a mask color for extracting part of an image?
I have found this code:
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage
{
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
    CGImageRelease(mask);
    UIImage *result = [UIImage imageWithCGImage:masked];
    CGImageRelease(masked);
    return result;
}
As the Apple doc says:
image: The image to apply the mask parameter to. This image must not be an image mask and may not have an image mask or masking color associated with it.
mask: A mask. If the mask is an image, it must be in the DeviceGray color space, must not have an alpha component, and may not itself be masked by an image mask or a masking color. If the mask is not the same size as the image specified by the image parameter, then Quartz scales the mask to fit the image.
I have a blue source image:
and I have an image with specified colors:
So based on these colors I want to extract parts of the source image, and it would also be great if I could detect the center coordinates of the extracted parts in the source image. Right now the problem is that with DeviceGray masks I need one source image and five mask images; instead, I want to specify the color I want to extract and use only two images - the source and the "mask" image.
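One possibility (my suggestion, not an accepted answer) is CGImageCreateWithMaskingColors, which takes a color range directly instead of a grayscale mask, so you only need the source image plus the color values. Note that it makes the given range transparent, i.e. it removes that color rather than keeping it, and it returns NULL if the source image already has an alpha channel.

- (UIImage *)imageByMaskingColorInImage:(UIImage *)image
                                 minRGB:(const CGFloat *)minRGB
                                 maxRGB:(const CGFloat *)maxRGB {
    // {min, max} per channel, in the source's component scale (0-255 for 8-bit RGB).
    const CGFloat components[6] = { minRGB[0], maxRGB[0],
                                    minRGB[1], maxRGB[1],
                                    minRGB[2], maxRGB[2] };
    CGImageRef masked = CGImageCreateWithMaskingColors(image.CGImage, components);
    if (!masked) return nil; // fails if the source image has an alpha channel
    UIImage *result = [UIImage imageWithCGImage:masked];
    CGImageRelease(masked);
    return result;
}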

Why is my image mask in iOS so low-resolution (blocky)?

I am trying to apply a layer mask to a UIImage using CGImageCreateWithMask but the mask over the image is coming out very low resolution and appears blocky.
This is the code I'm trying (link to github) but I don't know what's wrong.
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, true);
    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
    CGImageRelease(mask);
    UIImage *result = [UIImage imageWithCGImage:masked];
    CGImageRelease(masked);
    return result;
}
Here's what it looks like in the end.
My mask is a transparent PNG with an alpha channel (the circle in the screen-shot).
I don't know what I'm doing wrong.
Thanks for any/all help.
I'm guessing your mask is getting resized to fit the passed image, hence the jagged edges.
From the docs for CGImageCreateWithMask
If the mask is not the same size as the image specified by the image parameter, then Quartz scales the mask to fit the image
If that is not the case, check the scale properties of the image and the mask; see the sketch below.
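For instance, a minimal sketch of preserving the scale (assuming the blockiness comes from dropping the Retina scale): imageWithCGImage: defaults to scale 1.0, so a @2x image comes back at half resolution unless you pass the scale through explicitly.

CGImageRef masked = CGImageCreateWithMask(image.CGImage, mask);
// imageWithCGImage: assumes scale 1.0; keep the source's scale and
// orientation so a Retina image is not rendered at half resolution.
UIImage *result = [UIImage imageWithCGImage:masked
                                      scale:image.scale
                                orientation:image.imageOrientation];
CGImageRelease(masked);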
I was never able to figure this out.
It was probably something wrong with how the iOS runtime was interpreting the alpha PNG I was using. After saving the mask as a JPG, everything started working without any code changes.
I find this very strange, since the exact same mask files worked on both Android and Windows Phone.
I assumed that my PNG files should work (as documented), but there must have been an encoding setting that was incompatible with Core Graphics.
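That behavior is consistent with the DeviceGray/no-alpha requirement quoted in the previous question. A defensive fix, sketched below under that assumption (this is not the poster's code), is to re-render any mask into a DeviceGray, alpha-free bitmap before calling CGImageMaskCreate, so the source file's encoding no longer matters:

- (CGImageRef)newGrayscaleMaskFromImage:(UIImage *)maskImage {
    CGImageRef src = maskImage.CGImage;
    size_t width = CGImageGetWidth(src);
    size_t height = CGImageGetHeight(src);
    // DeviceGray, 8 bits per pixel, no alpha: the format CGImageMaskCreate expects.
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width,
                                             gray, (CGBitmapInfo)kCGImageAlphaNone);
    CGColorSpaceRelease(gray);
    // White background: for image masks, white hides and black shows, so
    // transparent pixels in the source PNG become "hidden".
    CGContextSetGrayFillColor(ctx, 1.0, 1.0);
    CGContextFillRect(ctx, CGRectMake(0, 0, width, height));
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), src);
    CGImageRef grayMask = CGBitmapContextCreateImage(ctx); // caller must CGImageRelease
    CGContextRelease(ctx);
    return grayMask;
}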

Crop Image using mask

My requirement is to crop the image using the mask image.
I am able to crop the image, but not in the exact ratio I expected. I googled around and tried to implement it, but unfortunately didn't get the expected result. This is what I am getting after cropping the image.
Following is the code I'm using.
- (UIImage *)maskImage1:(UIImage *)image withMask:(UIImage *)mask
{
    CGImageRef imageReference = image.CGImage;
    CGImageRef maskReference = mask.CGImage;
    CGImageRef imageMask = CGImageMaskCreate(CGImageGetWidth(maskReference),
                                             CGImageGetHeight(maskReference),
                                             CGImageGetBitsPerComponent(maskReference),
                                             CGImageGetBitsPerPixel(maskReference),
                                             CGImageGetBytesPerRow(maskReference),
                                             CGImageGetDataProvider(maskReference),
                                             NULL, // Decode is NULL
                                             YES   // Should interpolate
                                             );
    CGImageRef maskedReference = CGImageCreateWithMask(imageReference, imageMask);
    CGImageRelease(imageMask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedReference];
    CGImageRelease(maskedReference);
    return maskedImage;
}
Thanks..!!
An alternative
You can also achieve the same effect with CALayer masking, which is, in my opinion, clearer. Note that a layer mask belongs to a view's layer, not to a UIImage itself, so this variant works on the UIImageView:
- (void)applyMask:(UIImage *)mask toImageView:(UIImageView *)imageView
{
    CALayer *maskLayer = [CALayer layer];
    maskLayer.frame = imageView.bounds;
    maskLayer.contents = (__bridge id)mask.CGImage;
    imageView.layer.mask = maskLayer;
}
Probably a solution
Your mask layer probably has the wrong contents scale:
maskLayer.contentsScale = [UIScreen mainScreen].scale;
You can also force the size of your mask layer to match the image view before applying it:
maskLayer.frame = imageView.bounds;
Maybe you have already solved this, but as I had the same problem and solved it, I will explain the solution: the mask is applied at the real size of the photo, which in my case was 3264x2448, and that is obviously not the iPhone screen size, so when the mask was applied to the image it came out very small. I solved it by creating a layer in Photoshop with this 3264x2448 size and scaling the mask up so it matched exactly what was shown on the iPhone screen.
Another problem I had is that the image had a different orientation because I took the picture with the camera, so I had to rotate the mask to the same orientation as the picture on this Photoshop layer. Changing the orientation swaps the sides: what was the height is now the width, and vice versa, so when I calculated the scale I had to pay attention to which side should be multiplied to get the correct scale. (A sketch of doing the resize in code rather than in Photoshop follows.)
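For reference, here is a minimal sketch (my assumption, not the poster's code) that does the same resize in code: redraw the mask at the photo's pixel size before building the CGImage mask, so Quartz is not left stretching a screen-sized mask over a 3264x2448 photo.

- (UIImage *)mask:(UIImage *)mask resizedToPixelSizeOfImage:(UIImage *)image {
    // Use the backing CGImage dimensions, i.e. real pixels, not points.
    CGSize pixelSize = CGSizeMake(CGImageGetWidth(image.CGImage),
                                  CGImageGetHeight(image.CGImage));
    // Scale 1.0 so the context is sized in pixels; opaque NO keeps any alpha.
    UIGraphicsBeginImageContextWithOptions(pixelSize, NO, 1.0);
    [mask drawInRect:CGRectMake(0, 0, pixelSize.width, pixelSize.height)];
    UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resized;
}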

iOS load dynamic layer mask (i.e. layer mask is provided outside the code)

Situation: I want to apply interesting photo frames to images, and the photo frame is implemented as a layer mask. Is it possible to dynamically build the layer mask by loading a photo frame template from outside the Objective-C code, so that I can change the frame layer without ever touching the code?
The end result will be something like this: http://a3.mzstatic.com/us/r1000/106/Purple/9e/b9/9b/mzl.rdrrpcgr.320x480-75.jpg, except the photo edge/frame is dynamically loaded from outside the app, rather than built into the app.
Ideally, I would like to easily create a photo frame in Photoshop as a PNG file where black pixels allow full transparency, and then load this photo frame in the iOS app, so that the mask layer lets the layer underneath show through fully wherever the mask layer is black...
+ (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
    CGImageRelease(mask);
    UIImage *result = [UIImage imageWithCGImage:masked];
    CGImageRelease(masked);
    return result;
}
Then you could use this method with a UIImage you load from a URL.
The URL could serve a different mask image, or could take a parameter for which image mask to load. Does this answer your question? A usage sketch follows.
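For example, a minimal usage sketch (the URL, the MaskHelper class name, and the imageView property are hypothetical): fetch the mask PNG at runtime and feed it to the maskImage:withMask: class method above.

// Hypothetical URL; MaskHelper stands in for whatever class declares
// +maskImage:withMask:. Error handling kept minimal.
NSURL *maskURL = [NSURL URLWithString:@"https://example.com/frames/frame1.png"];
[[[NSURLSession sharedSession] dataTaskWithURL:maskURL
                             completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
    if (!data) { return; }
    UIImage *frameMask = [UIImage imageWithData:data];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = [MaskHelper maskImage:self.imageView.image
                                            withMask:frameMask];
    });
}] resume];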
