Flipped image for Layer - iOS

So I am trying to create a flipped image for a CAShapeLayer, and I have the following code:
shiplayer.contents = #imageLiteral(resourceName: "spaceship").withHorizontallyFlippedOrientation().cgImage
view.layer.addSublayer(shiplayer)
But the image rendered by the shiplayer is still the original, unflipped image.
I tested it on a UIImageView, and the image is flipped properly.
What can I do to flip the image for a CALayer?
Thanks

The CGImage is just the underlying bitmap; it knows nothing of the UIImage's properties, including its orientation. So draw the image, flipped, into a new image graphics context and use the resulting image as the basis for a new CGImage.
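For example, a minimal sketch of that approach (UIGraphicsImageRenderer needs iOS 10+, the same floor as withHorizontallyFlippedOrientation(); shiplayer and the "spaceship" resource come from the question):

let original = #imageLiteral(resourceName: "spaceship").withHorizontallyFlippedOrientation()

// Redraw into a new bitmap so the flipped orientation is baked into the pixels;
// UIImage.draw(in:) honors the orientation, unlike the raw cgImage.
let renderer = UIGraphicsImageRenderer(size: original.size)
let flipped = renderer.image { _ in
    original.draw(in: CGRect(origin: .zero, size: original.size))
}

// flipped.cgImage now contains the flipped pixels.
shiplayer.contents = flipped.cgImage
view.layer.addSublayer(shiplayer)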

Related

Cropped image doesn't display correctly inside ImageView

I have a rather interesting issue where a cropped image does not display correctly inside a UIImageView.
In my app, users are able to draw custom shapes and then crop the image.
For drawing shapes I used this GitHub library: ZImageCropper.
Here's how I crop the image:
UIGraphicsBeginImageContextWithOptions(pickedImage.bounds.size, false, 1)
pickedImage.layer.render(in: UIGraphicsGetCurrentContext()!)
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
After that, the image gets placed inside a UIImageView, and here's what I got:
But when I place an image from the asset catalog into the same UIImageView, here's the result:
Both of these images have the same size; I resized the cropped image manually while trying to fix this bug, but the result remained the same. The UIImageView's content mode is Aspect Fit.
What am I doing wrong?

iOS: drawing lines on an imageView and combining the lines and the image into a new image; the size of the image changed

I have an imageView, and say its size is the screen size. It displays an image which has a larger size, and the imageView's content mode is set to scaleAspectFill. Then I draw some lines on the imageView using UIBezierPath.
Later I would like to generate a new image that includes the lines I drew, using drawViewHierarchyInRect. The problem is that the new image's size is the imageView's size, since drawViewHierarchyInRect works like taking a snapshot. How can I combine the original image with the lines I drew while keeping the original image's size?
You want to use the method UIGraphicsBeginImageContextWithOptions to create an off-screen context of the desired size. (In your case, the size of the image.)
Then draw your image into the context, draw the lines on top, and extract your composite image from the context. Finally, dispose of the context.
There is plenty of sample code online showing how to use UIGraphicsBeginImageContextWithOptions. It's quite easy.
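For instance, a minimal sketch (assuming image is the original UIImage and path is the UIBezierPath, already converted from view coordinates into the image's coordinate space):

// Off-screen context at the full image size; scale 1 keeps the original pixel dimensions.
UIGraphicsBeginImageContextWithOptions(image.size, false, 1)

// Draw the original image, then stroke the lines on top of it.
image.draw(in: CGRect(origin: .zero, size: image.size))
UIColor.red.setStroke()
path.lineWidth = 4
path.stroke()

// Extract the composite and dispose of the context.
let composite = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()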

Creating alpha for UIImage

I'm trying to create a feature in my app that allows the user to extract a specified area of an existing image and save it as a PNG with alpha enabled.
I've put a UIView on top of a UIImageView: the imageView displays the image, while you draw your mask on the transparent view. For drawing, I'm using UIBezierPath. The user is able to draw around the object, and the inside temporarily fills in with black.
The user picks the image from the photo roll, and it's presented in the underlying UIImageView as shown in the left image; when the user has drawn a shape (which automatically closes) on the overlying UIView, it looks like the right image:
This works as expected, but when the user then taps "Crop", the magic should start. So far, I've only been able to create a "mask" and save it as an image to the roll, as displayed here (never mind the aspect ratios, I'll fix that later):
This is just a normal image, created from the path/shape, with colors (black on white, not black on transparent).
What I need is some way to use this "image" as the alpha channel for the original image.
I know that these are two completely separate things, and that an alpha channel isn't the same as an image, but I have the shape, so I would like to know if there's any possible way to "crop" or "alpha out" with my data. What I want to end up with is a PNG of this cat's face, with the surroundings 100% transparent (or not even there), so that I can change the background, like this:
It's important to note that I'm not talking about showing a UIImage in a UIImageView with an applied mask; I'm talking about creating a new image, based on an existing image, combined with another image that I want to somehow convert into the first image's alpha channel, thus saving one image like the above, with transparency.
I'm not too familiar with handling image data, so I was wondering if anyone knows how to create a new image based on two images, one acting as the alpha channel for the other, when neither image necessarily has an alpha channel to begin with.
The method below will take your original image and the mask image (the black shape) and return a UIImage that includes only the content of the image covered by the mask:
+ (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)mask
{
    CGImageRef imageReference = image.CGImage;
    CGImageRef maskReference = mask.CGImage;

    // Build a Core Graphics image mask from the mask image's bitmap data.
    CGImageRef imageMask = CGImageMaskCreate(CGImageGetWidth(maskReference),
                                             CGImageGetHeight(maskReference),
                                             CGImageGetBitsPerComponent(maskReference),
                                             CGImageGetBitsPerPixel(maskReference),
                                             CGImageGetBytesPerRow(maskReference),
                                             CGImageGetDataProvider(maskReference),
                                             NULL, // decode array
                                             YES); // should interpolate

    // Apply the mask: black areas of the mask keep the image,
    // white areas become transparent.
    CGImageRef maskedReference = CGImageCreateWithMask(imageReference, imageMask);
    CGImageRelease(imageMask);

    UIImage *maskedImage = [UIImage imageWithCGImage:maskedReference];
    CGImageRelease(maskedReference);
    return maskedImage;
}
The areas outside the mask will be transparent. You can then combine the resulting UIImage with a background color.
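For example, a sketch of flattening the masked result onto a solid background (in Swift, assuming maskedImage is the UIImage returned by the method above):

let renderer = UIGraphicsImageRenderer(size: maskedImage.size)
let flattened = renderer.image { context in
    // Fill the background first; the transparent areas of the
    // masked image will show this color through.
    UIColor.white.setFill()
    context.fill(CGRect(origin: .zero, size: maskedImage.size))
    maskedImage.draw(at: .zero)
}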

iOS Drawing image with objects (arrows) and saving to disk

I am creating an iOS drawing application that allows the user to place arrows on the page and position/stretch/rotate them. When the user saves the image, I must modify the background image with the PNG arrows at the exact coordinates they were dragged to.
The three draggable parts will be UIImages with cap insets to keep the arrow looking normal when stretched. The canvas UIImageView will have the same aspect ratio as the actual image, ensuring no black space is visible.
My question is this: after a draggable part has been stretched, how do I save the result of the UIImageView (the smaller draggable part) to disk/memory for later modification of the background image (the background UIImageView canvas)? It is important that the image I use to modify the canvas looks exactly the same as the UIImageView draggable part that represented it; that means it must retain exactly how the image is displayed, including the cap insets.
I am ignoring rotation of the draggable parts for now, because I suspect that would be the easy part.
Try using this code to get a layer screenshot from a desired view:
- (UIImage *)imageFromLayer:(CALayer *)layer
{
    // Render the layer into an off-screen context the size of its frame.
    UIGraphicsBeginImageContext([layer frame].size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
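To then persist the snapshot, one option is writing it out as a PNG (a sketch in Swift; snapshot stands for the UIImage returned above, the file name is illustrative, and pngData() needs iOS 11+, with UIImagePNGRepresentation available earlier):

// Write the snapshot to the app's Documents directory as a PNG.
if let data = snapshot.pngData() {
    let url = FileManager.default
        .urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("arrow-part.png")
    try? data.write(to: url)
}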

Core Graphics: Mask Image, add Overlay and Underlay

How can I mask an image and add an overlay and an underlay to the masked image with Core Graphics? (For instance, a document icon consists of a mask PNG, a page curl PNG, and a base with shadow.) Can someone give me a best practice?
Draw the underlay with CGContextDrawImage().
Push a graphics context state with CGContextSaveGState().
Load the mask and add it to the context using CGContextClipToMask().
Draw your content.
Pop the graphics context with CGContextRestoreGState().
Draw your overlay image with CGContextDrawImage().
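A sketch of those steps in Swift, assuming underlay, mask, content, and overlay are hypothetical CGImages of the same size:

import CoreGraphics

func composeIcon(underlay: CGImage, mask: CGImage,
                 content: CGImage, overlay: CGImage) -> CGImage? {
    let rect = CGRect(x: 0, y: 0, width: underlay.width, height: underlay.height)
    guard let ctx = CGContext(data: nil,
                              width: underlay.width,
                              height: underlay.height,
                              bitsPerComponent: 8,
                              bytesPerRow: 0,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }

    // 1. Draw the underlay (base with shadow).
    ctx.draw(underlay, in: rect)

    // 2. Save the state, then clip subsequent drawing to the mask.
    ctx.saveGState()
    ctx.clip(to: rect, mask: mask)

    // 3. Draw the masked content.
    ctx.draw(content, in: rect)

    // 4. Restore the state to remove the clip.
    ctx.restoreGState()

    // 5. Draw the overlay (page curl) on top.
    ctx.draw(overlay, in: rect)

    return ctx.makeImage()
}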
