Objective: Get a bigger edited image from imagePickerController, cropped at the correct position.
Question: Could anyone please help me get the image and tell me what is wrong with my code? From my investigation, CGImageCreateWithImageInRect combined with UIImagePickerControllerCropRect looks suspicious, but I'm not sure.
Why: info[UIImagePickerControllerEditedImage] is too small, so I want to crop info[UIImagePickerControllerOriginalImage] myself using the crop rect.
Issue: croppedImage in the code below is cropped at the wrong position.
Other info: the original image comes back rotated by 90 degrees, so I use an imageRotatedByDegrees function I found on the web, and that part works well.
class PhotoViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    func imagePickerController(picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [NSObject : AnyObject]) {
        let originalImage = info[UIImagePickerControllerOriginalImage] as? UIImage
        let rotatedOriginalImage = originalImage!.imageRotatedByDegrees(90, flip: false)
        // Crop the rotated original using the crop rect reported by the picker
        let cropRect = (info[UIImagePickerControllerCropRect] as! NSValue).CGRectValue()
        let croppedImage = UIImage(CGImage: CGImageCreateWithImageInRect(rotatedOriginalImage.CGImage, cropRect)!)
    }
}
// For rotating UIImagePickerControllerOriginalImage
// source: http://blog.ruigomes.me/how-to-rotate-an-uiimage-using-swift/
extension UIImage {
    public func imageRotatedByDegrees(degrees: CGFloat, flip: Bool) -> UIImage {
        let radiansToDegrees: (CGFloat) -> CGFloat = {
            return $0 * (180.0 / CGFloat(M_PI))
        }
        let degreesToRadians: (CGFloat) -> CGFloat = {
            return $0 / 180.0 * CGFloat(M_PI)
        }
        // Calculate the size of the rotated view's containing box for our drawing space
        let rotatedViewBox = UIView(frame: CGRect(origin: CGPointZero, size: size))
        let t = CGAffineTransformMakeRotation(degreesToRadians(degrees))
        rotatedViewBox.transform = t
        let rotatedSize = rotatedViewBox.frame.size
        // Create the bitmap context
        UIGraphicsBeginImageContext(rotatedSize)
        let bitmap = UIGraphicsGetCurrentContext()
        // Move the origin to the middle of the image so we rotate and scale around the center
        CGContextTranslateCTM(bitmap, rotatedSize.width / 2.0, rotatedSize.height / 2.0)
        // Rotate the image context
        CGContextRotateCTM(bitmap, degreesToRadians(degrees))
        // Now, draw the rotated/scaled image into the context
        var yFlip: CGFloat
        if flip {
            yFlip = CGFloat(-1.0)
        } else {
            yFlip = CGFloat(1.0)
        }
        CGContextScaleCTM(bitmap, yFlip, -1.0)
        CGContextDrawImage(bitmap, CGRectMake(-size.width / 2, -size.height / 2, size.width, size.height), CGImage)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage
    }
}
Found a solution. It looks like the problem was a bug in the image orientation function; see:
iOS UIImagePickerController result image orientation after upload
I should read the code more to understand what the root cause is.
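Not necessarily the exact fix from that answer, but a minimal sketch (modern Swift syntax) of the usual workaround: redraw the picked image once so its orientation is baked into the pixel data before doing any CGImage-level cropping. The method name normalizedOrientation is my own.
extension UIImage {
    // Redraws the image so imageOrientation becomes .up; draw(in:) honors the
    // original orientation, so the pixel data ends up physically rotated.
    func normalizedOrientation() -> UIImage {
        guard imageOrientation != .up else { return self }
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        defer { UIGraphicsEndImageContext() }
        draw(in: CGRect(origin: .zero, size: size))
        return UIGraphicsGetImageFromCurrentImageContext() ?? self
    }
}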
Related
I want to display a crop view on the UIImagePickerController's overlay view, and then crop the image with respect to the overlay's rect. How can I calculate the cropped image given the rect of the overlayView?
You can do it with the chunk of code below.
let imageRef:CGImage = uncroppedImage.cgImage!.cropping(to: bounds)!
let croppedImage:UIImage = UIImage(cgImage: imageRef)
Here you can pass whatever rect you need in place of bounds.
You can also do it as Apple suggests.
Please see the official documentation:
func cropImage(_ inputImage: UIImage, toRect cropRect: CGRect, viewWidth: CGFloat, viewHeight: CGFloat) -> UIImage? {
    let imageViewScale = max(inputImage.size.width / viewWidth,
                             inputImage.size.height / viewHeight)
    // Scale cropRect to handle images larger than shown-on-screen size
    let cropZone = CGRect(x: cropRect.origin.x * imageViewScale,
                          y: cropRect.origin.y * imageViewScale,
                          width: cropRect.size.width * imageViewScale,
                          height: cropRect.size.height * imageViewScale)
    // Perform cropping in Core Graphics
    guard let cutImageRef: CGImage = inputImage.cgImage?.cropping(to: cropZone) else {
        return nil
    }
    // Return image to UIImage
    let croppedImage: UIImage = UIImage(cgImage: cutImageRef)
    return croppedImage
}
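A hypothetical call site, assuming the image fills an image view named imageView and selectionRect is your overlay's frame converted into the image view's coordinate space (pickedImage, imageView and selectionRect are placeholders, not names from the question):
let cropped = cropImage(pickedImage,
                        toRect: selectionRect,
                        viewWidth: imageView.bounds.width,
                        viewHeight: imageView.bounds.height)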
private func cropImage(image: UIImage, cropRect: CGRect) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(cropRect.size, false, 0)
    let context = UIGraphicsGetCurrentContext()
    // Flip into Core Graphics coordinates and shift so cropRect's origin lands at (0, 0);
    // anything drawn outside the context is discarded, which performs the crop.
    context?.translateBy(x: -cropRect.origin.x, y: image.size.height - cropRect.origin.y)
    context?.scaleBy(x: 1.0, y: -1.0)
    context?.draw(image.cgImage!, in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height), byTiling: false)
    let croppedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return croppedImage
}
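For example, a hypothetical call that crops the top-left quarter of a picked image (pickedImage is a placeholder name):
let quarterRect = CGRect(x: 0, y: 0,
                         width: pickedImage.size.width / 2,
                         height: pickedImage.size.height / 2)
let topLeftQuarter = cropImage(image: pickedImage, cropRect: quarterRect)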
Goal: Crop a UIImage (that starts with a scale property of 2.0)
I perform the following code:
let croppedCGImage = originalUIImage.cgImage!.cropping(to: cropRect)
let croppedUIImage = UIImage(cgImage: croppedCGImage!)
This code works; however, the result, croppedUIImage, has an incorrect scale property of 1.0.
I've tried specifying the scale when creating the final image:
let croppedUIImage = UIImage(cgImage: croppedCGImage!, scale: 2.0, orientation: .up)
This yields the correct scale, but it incorrectly halves the image's dimensions.
What should I do here?
(Note: the scale property on the UIImage is important because I later save the image with UIImagePNGRepresentation(_ image: UIImage), which is affected by the scale property.)
Edit:
I got the following to work. Unfortunately it's just substantially slower than the CGImage cropping function.
extension UIImage {
    func cropping(to rect: CGRect) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(rect.size, false, self.scale)
        self.draw(in: CGRect(x: -rect.origin.x, y: -rect.origin.y, width: self.size.width, height: self.size.height))
        let croppedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return croppedImage
    }
}
Try this:
extension UIImage {
    func imageByCropToRect(rect: CGRect, scale: Bool) -> UIImage {
        var rect = rect
        var scaleFactor: CGFloat = 1.0
        if scale {
            // Convert the rect from points to pixels, because CGImage cropping works in pixels
            scaleFactor = self.scale
            rect.origin.x *= scaleFactor
            rect.origin.y *= scaleFactor
            rect.size.width *= scaleFactor
            rect.size.height *= scaleFactor
        }
        var image: UIImage? = nil
        if rect.size.width > 0 && rect.size.height > 0 {
            let imageRef = self.cgImage!.cropping(to: rect)
            // Reapply the scale so the resulting UIImage reports the same point size and scale
            image = UIImage(cgImage: imageRef!, scale: scaleFactor, orientation: self.imageOrientation)
        }
        return image!
    }
}
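A hypothetical call for the question's case (originalUIImage at scale 2.0, cropRect in points): with scale: true the rect is converted to pixels and the result keeps scale 2.0, so UIImagePNGRepresentation sees the expected dimensions.
let croppedUIImage = originalUIImage.imageByCropToRect(rect: cropRect, scale: true)
let pngData = UIImagePNGRepresentation(croppedUIImage)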
Use this extension:
extension UIImage {
    func cropping(to quality: CGInterpolationQuality, rect: CGRect) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(rect.size, false, self.scale)
        let context = UIGraphicsGetCurrentContext()! as CGContext
        context.interpolationQuality = quality
        let drawRect: CGRect = CGRect(x: -rect.origin.x, y: -rect.origin.y, width: self.size.width, height: self.size.height)
        context.clip(to: CGRect(x: 0, y: 0, width: rect.size.width, height: rect.size.height))
        self.draw(in: drawRect)
        let croppedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return croppedImage
    }
}
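For example (placeholder names), cropping in points with high interpolation quality; the context is created with the image's own scale, so the scale is preserved:
let croppedUIImage = originalUIImage.cropping(to: .high, rect: cropRect)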
I'm using the ImageHelper pod for iOS and tvOS; it's working perfectly and might also fit your needs.
It brings a lot of UIImage extensions, such as:
Crop and Resize
// Crops an image to a new rect
func crop(bounds: CGRect) -> UIImage?
// Crops an image to a centered square
func cropToSquare() -> UIImage?
// Resizes an image
func resize(size:CGSize, contentMode: UIImageContentMode = .ScaleToFill) -> UIImage?
Screen Density
// To create an image that is Retina aware, use the screen scale as a multiplier for your size. You should also use this technique for padding or borders.
let width = 140 * UIScreen.mainScreen().scale
let height = 140 * UIScreen.mainScreen().scale
let image = UIImage(named: "myImage")?.resize(CGSize(width: width, height: height))
Also stuff like Image Effects:
// Applies a light blur effect to the image
func applyLightEffect() -> UIImage?
// Applies an extra light blur effect to the image
func applyExtraLightEffect() -> UIImage?
// Applies a dark blur effect to the image
func applyDarkEffect() -> UIImage?
// Applies a color tint to an image
func applyTintEffect(tintColor: UIColor) -> UIImage?
// Applies a blur to an image based on the specified radius, tint color, saturation and mask image
func applyBlur(blurRadius:CGFloat, tintColor:UIColor?, saturationDeltaFactor:CGFloat, maskImage:UIImage? = nil) -> UIImage?
- (UIImage *)getNeedImageFrom:(UIImage *)image cropRect:(CGRect)rect
{
    CGImageRef subImage = CGImageCreateWithImageInRect(image.CGImage, rect);
    UIImage *croppedImage = [UIImage imageWithCGImage:subImage];
    CGImageRelease(subImage);
    return croppedImage;
}
Calling it:
UIImage *imageSample=image;
CGRect rectMake1=CGRectMake(0, 0,imageSample.size.width*1/4, imageSample.size.height);
UIImage *img1=[[JRGlobal sharedInstance] getNeedImageFrom:imageSample cropRect:rectMake1];
I have a problem with Swift. I have an app that must rotate an image by 90° when a button is pressed. To do this I use this code:
extension UIImage {
    public func imageRotatedByDegrees(degrees: CGFloat, flip: Bool) -> UIImage {
        let radiansToDegrees: (CGFloat) -> CGFloat = {
            return $0 * (180.0 / CGFloat(M_PI))
        }
        let degreesToRadians: (CGFloat) -> CGFloat = {
            return $0 / 180.0 * CGFloat(M_PI)
        }
        // Calculate the size of the rotated view's containing box for our drawing space
        let rotatedViewBox = UIView(frame: CGRect(origin: CGPointZero, size: size))
        let t = CGAffineTransformMakeRotation(degreesToRadians(degrees))
        rotatedViewBox.transform = t
        let rotatedSize = rotatedViewBox.frame.size
        // Create the bitmap context
        UIGraphicsBeginImageContext(rotatedSize)
        let bitmap = UIGraphicsGetCurrentContext()
        // Move the origin to the middle of the image so we rotate and scale around the center
        CGContextTranslateCTM(bitmap, 0.5 * rotatedSize.width, 0.5 * rotatedSize.height)
        // Rotate the image context
        CGContextRotateCTM(bitmap, degreesToRadians(degrees))
        // Now, draw the rotated/scaled image into the context
        var yFlip: CGFloat
        if flip {
            yFlip = CGFloat(-1.0)
        } else {
            yFlip = CGFloat(1.0)
        }
        CGContextScaleCTM(bitmap, yFlip, -1.0)
        CGContextDrawImage(bitmap, CGRectMake(-size.width / 2, -size.height / 2, size.width, size.height), CGImage)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage
    }
}
With landscape pictures I don't have any problem, but when I try to rotate a portrait image, on the first press of the button the image is scaled so that its width matches the width of the UIImageView, losing its proportions, and it is not rotated. Only on the second press is it rotated, still with the wrong proportions. Can anyone help me?
Screenshots (not reproduced here): the starting situation, after the first press, and after the second press.
How could I transform an image so it looks like the one in the picture?
I just want to understand how that transformation works.
For rotating I'm using an extension of UIImage, but that's all I have for now; I'm new to Swift.
public func imageRotatedByDegrees(degrees: CGFloat, flip: Bool) -> UIImage {
    let radiansToDegrees: (CGFloat) -> CGFloat = {
        return $0 * (180.0 / CGFloat(M_PI))
    }
    let degreesToRadians: (CGFloat) -> CGFloat = {
        return $0 / 180.0 * CGFloat(M_PI)
    }
    // Calculate the size of the rotated view's containing box for our drawing space
    let rotatedViewBox = UIView(frame: CGRect(origin: CGPointZero, size: size))
    let t = CGAffineTransformMakeRotation(degreesToRadians(degrees))
    rotatedViewBox.transform = t
    let rotatedSize = rotatedViewBox.frame.size
    // Create the bitmap context
    UIGraphicsBeginImageContext(rotatedSize)
    let bitmap = UIGraphicsGetCurrentContext()
    // Move the origin to the middle of the image so we rotate and scale around the center
    CGContextTranslateCTM(bitmap, rotatedSize.width / 2.0, rotatedSize.height / 2.0)
    // Rotate the image context
    CGContextRotateCTM(bitmap, degreesToRadians(degrees))
    // Now, draw the rotated/scaled image into the context
    var yFlip: CGFloat
    if flip {
        yFlip = CGFloat(-1.0)
    } else {
        yFlip = CGFloat(1.0)
    }
    CGContextScaleCTM(bitmap, yFlip, -1.0)
    CGContextDrawImage(bitmap, CGRectMake(-size.width / 2, -size.height / 2, size.width, size.height), CGImage)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
You would need to use Core Animation with CATransform3D, and tweak the transform to give an impression of depth. What you do is apply a small negative value (-1/200 is a good starting point) to the m34 component of the transformation matrix. You could either do that on the layer of each of your views, or use a CATransformLayer on a superlayer and then add the other images as sublayers of the transform layer.
This is tricky, fussy stuff. I haven't messed with it in a while, so I don't remember the fine details anymore, but that should be enough to get you started. I suggest searching for "iOS perspective", "m34" and "CATransformLayer" to find out more.
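A minimal sketch of the m34 idea (imageView is a placeholder for whichever view holds the image):
// Add perspective by setting m34, then rotate around the y axis so the tilt becomes visible.
var transform = CATransform3DIdentity
transform.m34 = -1.0 / 200.0                                   // small negative value = depth
transform = CATransform3DRotate(transform, .pi / 6, 0, 1, 0)   // tilt about 30° around y
imageView.layer.transform = transform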
The image becomes blurry once roundImage is applied:
Making a UIImage to a circle form
extension UIImage {
    func roundImage() -> UIImage {
        let newImage = self.copy() as! UIImage
        let cornerRadius = self.size.height / 2
        UIGraphicsBeginImageContextWithOptions(self.size, false, 1.0)
        let bounds = CGRect(origin: CGPointZero, size: self.size)
        UIBezierPath(roundedRect: bounds, cornerRadius: cornerRadius).addClip()
        newImage.drawInRect(bounds)
        let finalImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return finalImage
    }
}
Why are you using a Bezier path? Just set the cornerRadius on the UIImageView.
If your image is larger than the image view, resize it to the image view's size first and then set the cornerRadius on that UIImageView.
It will work. It works for me.
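A minimal sketch of that suggestion (imageView is a placeholder):
imageView.contentMode = .scaleAspectFill
imageView.layer.cornerRadius = imageView.bounds.height / 2   // a circle for a square view
imageView.layer.masksToBounds = true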
Replace the following line
UIGraphicsBeginImageContextWithOptions(self.size, false, 1.0)
with
UIGraphicsBeginImageContextWithOptions(self.size, false, 0.0)
Passing 0.0 as the scale makes the context use the device's screen scale instead of 1.0, which is what removes the blurriness.
Try this one:
let image = UIImageView(frame: CGRectMake(0, 0, 100, 100))
I recommend AlamofireImage (https://github.com/Alamofire/AlamofireImage).
It makes it very easy to produce a rounded or circular image without losing quality,
just like this:
let image = UIImage(named: "unicorn")!
let radius: CGFloat = 20.0
let roundedImage = image.af_imageWithRoundedCornerRadius(radius)
let circularImage = image.af_imageRoundedIntoCircle()
Voila!
Your issue is that you are using scale 1, which is the lowest "quality".
Setting the scale to 0 will use the device's screen scale, so the image keeps its full resolution.
A side note: Functions inside a class that return a new instance of that class can be implemented as class functions. This makes it very clear what the function does. It does not manipulate the existing image. It returns a new one.
Since you were talking about circles, I also corrected your code so it will now make a circle of any image and crop it. You might want to center this.
extension UIImage {
    class func roundImage(image: UIImage) -> UIImage? {
        // copy
        guard let newImage = image.copy() as? UIImage else {
            return nil
        }
        // start context (scale 0.0 = device screen scale, so the result stays sharp)
        UIGraphicsBeginImageContextWithOptions(newImage.size, false, 0.0)
        // bounds
        let cornerRadius = newImage.size.height / 2
        let minDim = min(newImage.size.height, newImage.size.width)
        let bounds = CGRect(origin: CGPointZero, size: CGSize(width: minDim, height: minDim))
        UIBezierPath(roundedRect: bounds, cornerRadius: cornerRadius).addClip()
        // new image
        newImage.drawInRect(bounds)
        let finalImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        // crop
        let maybeCrop = UIImage.crop(finalImage, cropRect: bounds)
        return maybeCrop
    }

    class func crop(image: UIImage, cropRect: CGRect) -> UIImage? {
        // CGImageCreateWithImageInRect works in pixels, so convert the rect from points
        let pixelRect = CGRect(x: cropRect.origin.x * image.scale,
                               y: cropRect.origin.y * image.scale,
                               width: cropRect.size.width * image.scale,
                               height: cropRect.size.height * image.scale)
        guard let imgRef = CGImageCreateWithImageInRect(image.CGImage, pixelRect) else {
            return nil
        }
        return UIImage(CGImage: imgRef, scale: image.scale, orientation: image.imageOrientation)
    }
}
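A hypothetical call site for the corrected extension (Swift 2-era syntax to match the code above; the asset name and imageView are placeholders):
let avatar = UIImage(named: "avatar")!
if let rounded = UIImage.roundImage(avatar) {
    imageView.image = rounded
}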