I used this GitHub Repository to make an image cropper that could crop images into irregular shapes. As of right now, I have implemented this repository and am able to crop UIImages in irregular shapes as depicted in the link.
In its current state, the cropper uses a UIBezierPath and a CAShapeLayer to take a normal (rectangular) UIImage and cut out the parts not included within the shape. Because I am trying to store this UIImage in a database, I need it to be stored as just a UIImage, without also having to store the CAShapeLayer and UIBezierPath that crop it.
Is there a way to make this cropped section into a UIImage that is an irregular shape? If not, is there an alternative way of cropping a photo using user-drawn paths, as shown in the link above, that will allow it to be stored as an irregularly shaped UIImage?
Thanks!
Please check this library completely. It defines a method where you can get the image from the graphics context.
func cropImage() {
    // Render whatever the (masked) image view currently displays into a new image context.
    UIGraphicsBeginImageContextWithOptions(tempImageView.bounds.size, false, 1)
    tempImageView.layer.render(in: UIGraphicsGetCurrentContext()!)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    self.croppedImage = newImage!
}
This is the method where you get the UIImage back, and you can then use it as per your requirement.
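Putting the pieces together, here is a minimal sketch of how the whole flow can look. The path parameter and the tempImageView / croppedImage names are taken from the snippet above or assumed for illustration; this is not the library's exact API.

// Sketch: mask the image view with the user-drawn path, then bake the
// mask into a plain UIImage (transparent outside the path).
func croppedImage(with path: UIBezierPath) -> UIImage? {
    let shapeLayer = CAShapeLayer()
    shapeLayer.path = path.cgPath
    tempImageView.layer.mask = shapeLayer

    UIGraphicsBeginImageContextWithOptions(tempImageView.bounds.size, false, 1)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    tempImageView.layer.render(in: context)
    return UIGraphicsGetImageFromCurrentImageContext()
}

Because the area outside the path is rendered as transparent pixels, encode the result with pngData() rather than JPEG before writing it to your database so the transparency survives.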
I'm trying to save the camera image from an ARFrame to a file. The image is given as a CVPixelBuffer. I have the following code, which produces an image with the wrong aspect ratio. I tried different approaches, including a CIFilter to scale down the image, but still cannot get the correct picture saved.
The camera resolution is 1920w x 1440h. I want to create a 1x-scale image from the provided pixel buffer, but I am getting images sized 4320 × 5763 or 4320 × 3786 instead.
How do I save capturedImage (CVPixelBuffer) from ARFrame to file?
func prepareImage(arFrame: ARFrame) {
    let orientation = UIInterfaceOrientation.portrait
    let viewportSize = CGSize(width: 428, height: 869)
    let transform = arFrame.displayTransform(for: orientation, viewportSize: viewportSize).inverted()
    let ciImage = CIImage(cvPixelBuffer: arFrame.capturedImage).transformed(by: transform)
    let uiImage = UIImage(ciImage: ciImage)
}
There are a couple of issues I encountered when trying to pull a screenshot from an ARFrame:
The CVPixelBuffer underlying the ARFrame is rotated to the left, which means you will need to rotate the image.
The frame itself is much wider than what you see in the camera view. This is because AR needs a wider image so that its scene can see beyond what you see on screen.
E.g., this is an image taken from a raw ARFrame:
The following may help you handle those two steps easily:
https://github.com/Rightpoint/ARKit-CoreML/blob/master/Library/UIImage%2BUtilities.swift
The problem with these is that the code uses UIGraphicsBeginImageContextWithOptions, which is a heavy utility.
But if performance is not your main concern (say, for an AR experience), these might suffice.
If performance is an issue, you can work out the conversion mathematically, and simply cut the needed region out of the original picture and rotate that.
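As a starting point, here is a minimal sketch of the cheap path (not the linked utilities): render the pixel buffer once and fix the rotation by tagging the UIImage's orientation. Cropping to the visible viewport is the extra step described above and is left to the caller.

import ARKit
import UIKit
import CoreImage

// Sketch: convert the captured pixel buffer to a UIImage and correct the
// rotation via image orientation instead of redrawing the pixels.
func image(from frame: ARFrame) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: frame.capturedImage)
    let context = CIContext()
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    // The sensor image is landscape; tagging it .right displays it correctly in portrait.
    return UIImage(cgImage: cgImage, scale: 1, orientation: .right)
}

Note that the result still covers the full sensor frame, which is wider than what appears on screen.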
Hope this will assist you!
I have a rather interesting issue where a cropped image is not displaying correctly inside a UIImageView.
In my app, users are able to draw custom shapes and then crop the image.
For drawing shapes I used this GitHub library: ZImageCropper
Here's how I crop the image:
UIGraphicsBeginImageContextWithOptions(pickedImage.bounds.size, false, 1)
pickedImage.layer.render(in: UIGraphicsGetCurrentContext()!)
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
After that, the image gets placed inside a UIImageView, and here's what I get:
But when I place an image from the assets catalog into the same UIImageView, here's the result:
Both of these images have the same size. I resized the cropped image manually when trying to fix this bug, but the result stayed the same. The UIImageView content mode is Aspect Fit.
What am I doing wrong?
All the code I can find revolves around loading images directly into visual controls.
However, I have my own cache system (I'm converting a project from another language), so I want to do the following as efficiently as possible:
Load JPG/PNG images, probably into a bitmap / CGImage. (This can be either from the file system or from images downloaded online.)
Possibly save the image back as a compressed/resized PNG/JPG file
Supply an image reference for a visual control
I am new to Swift and the iOS platform, but as far as I can tell, CGImage is as close as it gets? However, there does not appear to be a way to load an image from the file system when using CGImage... But I have found people discussing ways to do this for, e.g., UIImage, so I am now doubting my initial impression that CGImage was the best match for my needs.
It is easy to get confused between UIImage, CGImage and CIImage. The difference is following:
UIImage: A UIImage object is a high-level way to display image data. You can create images from files, from Quartz image objects, or from raw image data you receive. UIImage objects are immutable, so you must specify an image's properties at initialization time. This also means that these image objects are safe to use from any thread.
Typically, you can take an NSData object containing a PNG or JPEG representation of an image and convert it to a UIImage.
CGImage: A CGImage can only represent bitmaps. Operations in Core Graphics, such as blend modes and masking, require CGImageRefs. If you need to access and change the actual bitmap data, you can use CGImage. It can also be converted to an NSBitmapImageRep.
CIImage: A CIImage is an immutable object that represents an image, but it is not an image itself; it only has the image data associated with it, together with all the information necessary to produce an image.
You typically use CIImage objects in conjunction with other Core Image classes such as CIFilter, CIContext, CIColor, and CIVector. You can create CIImage objects with data supplied from a variety of sources, such as Quartz 2D images, Core Video image buffers, etc.
It is required in order to use the various GPU-optimized Core Image filters. CIImages can also be converted to NSBitmapImageReps, and rendering can be based on the CPU or the GPU.
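For example, the typical CIImage pipeline (wrap an image, run a filter, render back out through a CIContext) looks roughly like the sketch below; the filter choice is just an illustration.

import UIKit
import CoreImage

// Sketch of a typical Core Image pipeline: UIImage -> CIImage -> CIFilter -> CIContext -> UIImage.
func applySepia(to image: UIImage, intensity: Double = 0.8) -> UIImage? {
    guard let ciImage = CIImage(image: image),
          let filter = CIFilter(name: "CISepiaTone") else { return nil }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(intensity, forKey: kCIInputIntensityKey)
    guard let output = filter.outputImage else { return nil }
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}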
In conclusion, UIImage is what you are looking for. Reasons are:
You can get an image from device storage and assign it to a UIImage
You can get an image from a URL and assign it to a UIImage
You can write a UIImage in your desired format back to device storage
You can resize the image assigned to a UIImage
Once you have assigned an image to a UIImage, you can use that instance in controls directly, e.g. setting the background of a button, or setting it as the image for a UIImageView
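To illustrate those points, here is a minimal Swift sketch; the URLs, target size, and function name are placeholders rather than anything from a specific library.

import UIKit

// Sketch: load an image from disk (or downloaded data), resize it, and
// write it back out in a compressed format.
func loadResizeAndSave(from sourceURL: URL, to destinationURL: URL, targetSize: CGSize) throws {
    let data = try Data(contentsOf: sourceURL)
    guard let image = UIImage(data: data) else { return }

    // Resize by redrawing into a renderer of the target size.
    let resized = UIGraphicsImageRenderer(size: targetSize).image { _ in
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }

    // Write back out as compressed JPEG (use pngData() to keep transparency).
    if let jpeg = resized.jpegData(compressionQuality: 0.8) {
        try jpeg.write(to: destinationURL)
    }
}

Any UIImage produced this way can be assigned straight to a control, e.g. imageView.image = image.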
All of these are basic operations that have already been answered many times on Stack Overflow, so I won't go into further detail here.
Credit for summarizing differences: Randall Leung
You can easily load your image into a UIImage object...
NSData *data = [NSData dataWith...];
UIImage *image = [UIImage imageWithData:data];
If you then want to show it in a view, you can use a UIImageView:
UIImageView *imageView = [[UIImageView alloc] init]; // or whatever
...
imageView.image = image;
See more in the UIImage documentation.
Per the documentation at: https://developer.apple.com/documentation/uikit/uiimage
let uiImage = uiImageView.image
let cgImage = uiImage?.cgImage // image is optional, so use optional chaining
I'm trying to create a feature in my app that allows the user to extract a specified area of an existing image, and save it as a png with alpha enabled.
I've put a UIView on top of a UIImageView: the image view displays the image, while you draw your mask on the transparent view. For drawing, I'm using UIBezierPath. The user is able to draw around the object, and the inside will temporarily fill in with black.
The user picks the image from the photo roll, and it's presented in the underlying UIImageView as shown in the left image; when the user has drawn a shape (which closes automatically) on the overlying UIView, it looks like the right image:
This works as expected, but when the user then clicks "Crop", the magic should start. So far, I've only been able to create a "mask" and save it as an image to the roll, as displayed here (never mind the aspect ratios, I'll fix that later):
This is just a normal image, created from the path/shape, with colors (black on white, not black on transparent).
What I need is some way to use this "image" as the alpha channel for the original image.
I know that these are two completely separate things, and that an alpha channel isn't the same as an image, but I have the shape, so I would like to know if there's any possible way to "crop" or "alpha out" using my data. What I want to end up with is a PNG of this cat's face, with the surroundings 100% transparent (or not even there), so that I can change the background, like this:
It's important to note that I'm not talking about showing a UIImage in a UIImageView with a mask applied; I'm talking about creating a new image, based on an existing image, combined with another image that I want to somehow convert into the first image's alpha channel, thus saving one image like the one above, with transparency.
I'm not too familiar with handling image data, so I was wondering if anyone knows how to create a new image based on two images, one acting as the alpha for the other, when neither of the images necessarily has an alpha channel to begin with.
The method below will take your original image and the mask image (the black shape) and return a UIImage that includes only the content of the image covered by the mask:
+ (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)mask
{
    CGImageRef imageReference = image.CGImage;
    CGImageRef maskReference = mask.CGImage;

    // Build a Core Graphics image mask from the mask image's bitmap data.
    CGImageRef imageMask = CGImageMaskCreate(CGImageGetWidth(maskReference),
                                             CGImageGetHeight(maskReference),
                                             CGImageGetBitsPerComponent(maskReference),
                                             CGImageGetBitsPerPixel(maskReference),
                                             CGImageGetBytesPerRow(maskReference),
                                             CGImageGetDataProvider(maskReference),
                                             NULL, // Decode is null
                                             YES   // Should interpolate
                                             );

    // Apply the mask: black areas of the mask keep the image, white areas become transparent.
    CGImageRef maskedReference = CGImageCreateWithMask(imageReference, imageMask);
    CGImageRelease(imageMask);

    UIImage *maskedImage = [UIImage imageWithCGImage:maskedReference];
    CGImageRelease(maskedReference);
    return maskedImage;
}
The areas outside the mask will be transparent. You can then combine the resulting UIImage with a background color.
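If you are working in Swift, a roughly equivalent sketch looks like this (the helper name is hypothetical; the same expectation applies that the mask is a flattened grayscale image, black shape on white):

import UIKit

// Swift sketch of the same idea: build a CGImage mask from the shape image
// and apply it to the original, producing a UIImage with transparency.
func maskImage(_ image: UIImage, with mask: UIImage) -> UIImage? {
    guard let imageRef = image.cgImage,
          let maskRef = mask.cgImage,
          let dataProvider = maskRef.dataProvider,
          let imageMask = CGImage(maskWidth: maskRef.width,
                                  height: maskRef.height,
                                  bitsPerComponent: maskRef.bitsPerComponent,
                                  bitsPerPixel: maskRef.bitsPerPixel,
                                  bytesPerRow: maskRef.bytesPerRow,
                                  provider: dataProvider,
                                  decode: nil,
                                  shouldInterpolate: true),
          let masked = imageRef.masking(imageMask) else { return nil }
    return UIImage(cgImage: masked)
}

Remember to export the result with pngData() if you need the transparent areas preserved on disk.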
I am creating an iOS drawing application that allows the user to place arrows on the page and position/stretch/rotate them. When the user saves the image, I must modify the background image with the PNG arrows at the exact coordinates they were dragged to.
The three drag-able parts will be UIImages with cap insets to keep the arrow looking normal when stretched. The canvas UIImageView will have the same aspect ratio as the actual image, ensuring no black space is visible.
My question is this: after a drag-able part has been stretched, how do I save the result of that UIImageView (the smaller drag-able part) to disk/memory for later modification of the background image (the background UIImageView canvas)? It is important that the image I use to modify the canvas looks exactly the same as the UIImageView drag-able part that represented it. That means it must retain exactly how the image is displayed, including the cap insets.
I am ignoring rotation of the drag-able parts for now because I suspect that would be the easy part.
Try using this code to get a layer screenshot from a desired view:
- (UIImage *)imageFromLayer:(CALayer *)layer {
    UIGraphicsBeginImageContext([layer frame].size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
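A modern Swift equivalent, sketched here with UIGraphicsImageRenderer so the screen scale is handled for you (this is not from the answer above, just an alternative):

import UIKit

// Sketch: render any CALayer (including a stretched, cap-inset image view's
// layer) into a UIImage exactly as it is currently displayed.
func image(from layer: CALayer) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: layer.bounds)
    return renderer.image { context in
        layer.render(in: context.cgContext)
    }
}

Calling this on the drag-able part's layer captures the stretched appearance, cap insets included.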