How to crop an image with a given angle with swift - ios

Does anyone know how to crop an image with a given angle with swift?
I put the demo image below.
I googled for a while and found that almost all the solutions were about images with no rotation or a 90-degree rotation.
I want to rotate the image and then crop it, just like the Photos app on iPhone does.
Thanks for any hint!

One option is to use CGContext and CGAffineTransform to rotate the image by your angle.
Make two rects, one for the rotated image and one for the crop, and use cropping(to rect: CGRect) -> CGImage? on the image.
Finally, depending on your logic, produce one image or two; that is entirely up to your approach.
Here is a good reference for you:
https://www.raywenderlich.com/2305-core-image-tutorial-getting-started
Hope it helps.
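To make that concrete, here is a minimal sketch of the rotate-then-crop idea (the function name is mine; the angle is assumed to be in radians and cropRect to be in the rotated image's point coordinates):

import UIKit

func rotateAndCrop(_ image: UIImage, by angle: CGFloat, to cropRect: CGRect) -> UIImage? {
    // Bounding box that fully contains the rotated image.
    let rotatedBounds = CGRect(origin: .zero, size: image.size)
        .applying(CGAffineTransform(rotationAngle: angle))
    let renderer = UIGraphicsImageRenderer(size: rotatedBounds.size)
    let rotated = renderer.image { ctx in
        let cg = ctx.cgContext
        // Rotate around the center of the new canvas, then draw the image centered.
        cg.translateBy(x: rotatedBounds.width / 2, y: rotatedBounds.height / 2)
        cg.rotate(by: angle)
        image.draw(in: CGRect(x: -image.size.width / 2,
                              y: -image.size.height / 2,
                              width: image.size.width,
                              height: image.size.height))
    }
    // cropping(to:) works in pixels, so convert the point rect for @2x/@3x bitmaps.
    let s = rotated.scale
    let pixelRect = CGRect(x: cropRect.origin.x * s, y: cropRect.origin.y * s,
                           width: cropRect.width * s, height: cropRect.height * s)
    guard let cgImage = rotated.cgImage?.cropping(to: pixelRect) else { return nil }
    return UIImage(cgImage: cgImage, scale: s, orientation: .up)
}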

Design the storyboard and create outlets and properties in the ViewController class.
let picker = UIImagePickerController()
var circlePath = UIBezierPath()
@IBOutlet weak var crop: CropView!
@IBOutlet weak var imageView: UIImageView!
@IBOutlet weak var scroll: UIScrollView!
The crop property is a custom UIView subclass. Override the hit-testing method below in it so touches pass through the crop view:
override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
    return false
}
Create extensions for UIImage and UIImageView as needed. For zooming the image, implement the viewForZooming(in:) delegate method: adopt UIScrollViewDelegate in the class and return imageView.
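A minimal sketch of that delegate method (assuming the view controller class is named ViewController and that scroll.delegate = self is set, e.g. in viewDidLoad):

extension ViewController: UIScrollViewDelegate {
    // The scroll view zooms whatever view is returned here.
    func viewForZooming(in scrollView: UIScrollView) -> UIView? {
        return imageView
    }
}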
Pick image from gallery —
Create an IBAction to pick an image from the album, setting the picker's source type to the photo library with the code below:
picker.sourceType = .photoLibrary
present(picker, animated: true, completion: nil)
and adopt UIImagePickerControllerDelegate in the class (the picker's delegate property also requires UINavigationControllerDelegate). In viewDidLoad:
picker.delegate = self
Use didFinishPickingMediaWithInfo, a UIImagePickerControllerDelegate method, to set the image on the view after picking an image from the album:
let chosenImage = info[UIImagePickerControllerOriginalImage] as! UIImage
imageView.image = chosenImage.resizeImage()
dismiss(animated:true, completion: nil)
To dismiss the photo album when the user cancels, use the imagePickerControllerDidCancel delegate method.
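A minimal sketch of that method:

func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
    // Dismiss the picker without using any chosen media.
    dismiss(animated: true, completion: nil)
}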
Shoot picture from camera —
Create an IBAction to shoot an image with the camera. First, check whether the camera source type is available on the device. If it is, set the source type to camera and the camera capture mode to photo, then present the picker; otherwise, handle the failure.
if UIImagePickerController.isSourceTypeAvailable(.camera) {
    picker.sourceType = UIImagePickerControllerSourceType.camera
    picker.cameraCaptureMode = UIImagePickerControllerCameraCaptureMode.photo
    picker.modalPresentationStyle = .custom
    present(picker, animated: true, completion: nil)
} else {
    // Action performed if there is no camera in the device.
}
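For that else branch, one possible fallback (an assumption, not from the original) is to tell the user there is no camera:

let alert = UIAlertController(title: "No Camera",
                              message: "This device has no camera available.",
                              preferredStyle: .alert)
alert.addAction(UIAlertAction(title: "OK", style: .default))
present(alert, animated: true, completion: nil)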
Cropping —
Add a sublayer over the picked image; this layer provides the area that frames the crop.
let path = UIBezierPath(roundedRect: CGRect(x: 0, y: 0, width: self.view.bounds.size.width, height: self.view.bounds.size.height), cornerRadius: 0)
Assign the path to the circlePath property, of type UIBezierPath. Using a Bezier path, you can change the crop frame into different shapes.
circlePath = UIBezierPath(roundedRect: CGRect(x: (self.view.frame.size.width / 2) - (size / 2), y: (self.view.frame.size.height / 2) - (size / 2), width: size, height: size), cornerRadius: 0)
path.append(circlePath)
Create a CAShapeLayer that draws the combined path in its coordinate space:
let fillLayer = CAShapeLayer()
fillLayer.path = path.cgPath
Finally, add the layer to view,
view.layer.addSublayer(fillLayer)
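Assembled, the overlay might look like the sketch below; size (the crop square's side length) is assumed to be defined elsewhere, and the even-odd fill rule and dimming color are assumptions added to get the see-through crop window the text describes:

let path = UIBezierPath(roundedRect: view.bounds, cornerRadius: 0)
circlePath = UIBezierPath(roundedRect: CGRect(x: (view.frame.size.width / 2) - (size / 2),
                                              y: (view.frame.size.height / 2) - (size / 2),
                                              width: size, height: size),
                          cornerRadius: 0)
path.append(circlePath)
path.usesEvenOddFillRule = true

let fillLayer = CAShapeLayer()
fillLayer.path = path.cgPath
fillLayer.fillRule = .evenOdd // punches a hole where the crop frame sits
fillLayer.fillColor = UIColor.black.withAlphaComponent(0.5).cgColor
view.layer.addSublayer(fillLayer)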
Add crop area:
Create the crop area. For that we need a factor: divide the width of the imageView's image by the view frame's width,
let factor = imageView.image!.size.width/view.frame.width
and, for zooming, a scale that inverts the scroll view's zoomScale:
let scale = 1/scroll.zoomScale
Then set the crop area frame (x, y, width, height); imageFrame below is assumed to be the image's frame within the image view, from the extension mentioned earlier.
let x = (scroll.contentOffset.x + circlePath.bounds.origin.x - imageFrame.origin.x) * scale * factor
let y = (scroll.contentOffset.y + circlePath.bounds.origin.y - imageFrame.origin.y) * scale * factor
let width = circlePath.bounds.width * scale * factor
let height = circlePath.bounds.height * scale * factor
Finally, create an IBAction to crop the image, assembling the crop area from the values above:
let cropArea = CGRect(x: x, y: y, width: width, height: height)
let croppedCGImage = imageView.image?.cgImage?.cropping(to: cropArea)
let croppedImage = UIImage(cgImage: croppedCGImage!)

Related

ios swift create rounded profile image like twitter, instagram,

I would like to create a very simple image editor, the same as Twitter's (for the profile image).
I know how to pinch or move an image.
But I don't know how to create the "circle layer" and keep just that part of the image, like this:
Make sure to import QuartzCore.
func maskRoundedImage(image: UIImage, radius: CGFloat) -> UIImage {
    let imgView: UIImageView = UIImageView(image: image)
    let layer = imgView.layer
    layer.masksToBounds = true
    layer.cornerRadius = radius
    UIGraphicsBeginImageContext(imgView.bounds.size)
    layer.render(in: UIGraphicsGetCurrentContext()!)
    let roundedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return roundedImage!
}
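Usage might look like this (the image name is hypothetical):

let avatar = maskRoundedImage(image: UIImage(named: "profile")!, radius: 50)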
https://github.com/andreaantonioni/AAPhotoCircleCrop
Each view has an underlying layer onto which you apply a corner radius. You must then set clipsToBounds on the view (or masksToBounds on its layer) so the rounding clips the view's content. The corner radius must be half the width of the view in order to get a circle; otherwise the corners of the view will merely be rounded.
For example:
let square = UIView()
square.center = view.center
square.bounds.size = CGSize(width: 100, height: 100)
square.backgroundColor = .red
view.addSubview(square)
square.layer.cornerRadius = 50
square.clipsToBounds = true
The above-mentioned ways are good, but you cannot achieve the exact output shown in the image with them; e.g., you cannot get the alpha effect.
I suggest a simple way to do it:
1) Add a new image to the project: a circle cut out of a transparent background with some opacity.
2) Add a new imageView above the main view, as below.
let image = UIImageView(frame: view.bounds)
image.image = UIImage.init(named: "imageName")
view.addSubview(image)
Then the output should be as per your requirement.

Swift iOS - How to extract a separate view or image from within its own UIImageView's bounds? [duplicate]

I'm trying to crop a sub-image of an image view using an overlay UIView that can be positioned anywhere in the UIImageView. I'm borrowing a solution from a similar post on how to solve this when the UIImageView's content mode is Aspect Fit. The proposed solution is:
func computeCropRect(for sourceFrame: CGRect) -> CGRect {
    let widthScale = bounds.size.width / image!.size.width
    let heightScale = bounds.size.height / image!.size.height
    var x: CGFloat = 0
    var y: CGFloat = 0
    var width: CGFloat = 0
    var height: CGFloat = 0
    var offSet: CGFloat = 0
    if widthScale < heightScale {
        offSet = (bounds.size.height - (image!.size.height * widthScale)) / 2
        x = sourceFrame.origin.x / widthScale
        y = (sourceFrame.origin.y - offSet) / widthScale
        width = sourceFrame.size.width / widthScale
        height = sourceFrame.size.height / widthScale
    } else {
        offSet = (bounds.size.width - (image!.size.width * heightScale)) / 2
        x = (sourceFrame.origin.x - offSet) / heightScale
        y = sourceFrame.origin.y / heightScale
        width = sourceFrame.size.width / heightScale
        height = sourceFrame.size.height / heightScale
    }
    return CGRect(x: x, y: y, width: width, height: height)
}
The problem is that with this solution, when the image view is Aspect Fill, the cropped segment does not line up exactly with where the overlay UIView was positioned. I'm not quite sure how to adapt this code to accommodate Aspect Fill, or how to reposition my overlay UIView so that it lines up 1:1 with the segment I'm trying to crop.
UPDATE: Solved using Matt's answer below.
class ViewController: UIViewController {

    @IBOutlet weak var catImageView: UIImageView!
    private var cropView: CropView!

    override func viewDidLoad() {
        super.viewDidLoad()
        cropView = CropView(frame: CGRect(x: 0, y: 0, width: 45, height: 45))
        catImageView.image = UIImage(named: "cat")
        catImageView.clipsToBounds = true
        catImageView.layer.borderColor = UIColor.purple.cgColor
        catImageView.layer.borderWidth = 2.0
        catImageView.backgroundColor = UIColor.yellow
        catImageView.addSubview(cropView)
        let imageSize = catImageView.image!.size
        let imageViewSize = catImageView.bounds.size
        var scale: CGFloat = imageViewSize.width / imageSize.width
        if imageSize.height * scale < imageViewSize.height {
            scale = imageViewSize.height / imageSize.height
        }
        let croppedImageSize = CGSize(width: imageViewSize.width / scale, height: imageViewSize.height / scale)
        let croppedImrect =
            CGRect(origin: CGPoint(x: (imageSize.width - croppedImageSize.width) / 2.0,
                                   y: (imageSize.height - croppedImageSize.height) / 2.0),
                   size: croppedImageSize)
        let renderer = UIGraphicsImageRenderer(size: croppedImageSize)
        let _ = renderer.image { _ in
            catImageView.image!.draw(at: CGPoint(x: -croppedImrect.origin.x, y: -croppedImrect.origin.y))
        }
    }

    @IBAction func performCrop(_ sender: Any) {
        let cropFrame = catImageView.computeCropRect(for: cropView.frame)
        if let imageRef = catImageView.image?.cgImage?.cropping(to: cropFrame) {
            catImageView.image = UIImage(cgImage: imageRef)
        }
    }

    @IBAction func resetCrop(_ sender: Any) {
        catImageView.image = UIImage(named: "cat")
    }
}
The Final Result
Let's divide the problem into two parts:
Given the size of a UIImageView and the size of its UIImage, if the UIImageView's content mode is Aspect Fill, what is the part of the UIImage that fits into the UIImageView? We need, in effect, to crop the original image to match what the UIImageView is actually displaying.
Given an arbitrary rect within the UIImageView, what part of the cropped image (derived in part 1) does it correspond to?
The first part is the interesting part, so let's try it. (The second part will then turn out to be trivial.)
Here's the original image I'll use:
https://static1.squarespace.com/static/54e8ba93e4b07c3f655b452e/t/56c2a04520c64707756f4267/1455596221531/
That image is 1000x611. Here's what it looks like scaled down (but keep in mind that I'm going to be using the original image throughout):
My image view, however, will be 139x182, and is set to Aspect Fill. When it displays the image, it looks like this:
The problem we want to solve is: what part of the original image is being displayed in my image view, if my image view is set to Aspect Fill?
Here we go. Assume that iv is the image view:
let imsize = iv.image!.size
let ivsize = iv.bounds.size
var scale : CGFloat = ivsize.width / imsize.width
if imsize.height * scale < ivsize.height {
    scale = ivsize.height / imsize.height
}
let croppedImsize = CGSize(width:ivsize.width/scale, height:ivsize.height/scale)
let croppedImrect =
    CGRect(origin: CGPoint(x: (imsize.width-croppedImsize.width)/2.0,
                           y: (imsize.height-croppedImsize.height)/2.0),
           size: croppedImsize)
So now we have solved the problem: croppedImrect is the region of the original image that is showing in the image view. Let's proceed to use our knowledge, by actually cropping the image to a new image matching what is shown in the image view:
let r = UIGraphicsImageRenderer(size:croppedImsize)
let croppedIm = r.image { _ in
    iv.image!.draw(at: CGPoint(x:-croppedImrect.origin.x, y:-croppedImrect.origin.y))
}
The result is this image (ignore the gray border):
But lo and behold, that is the correct answer! I have extracted from the original image exactly the region portrayed in the interior of the image view.
So now you have all the information you need. croppedIm is the UIImage actually displayed in the clipped area of the image view. scale is the scale between the image view and that image. Therefore, you can easily solve the problem you originally proposed! Given any rectangle imposed upon the image view, in the image view's bounds coordinates, you simply apply the scale (i.e. divide all four of its attributes by scale) — and now you have the same rectangle as a portion of croppedIm.
(Observe that we didn't really need to crop the original image to get croppedIm; it was sufficient, in reality, to know how to perform that crop. The important information is the scale along with the origin of croppedImrect; given that information, you can take the rectangle imposed upon the image view, scale it, and offset it to get the desired rectangle of the original image.)
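In code, that scaling step might look like this sketch, using the names above:

// Any rect imposed on the image view, in its bounds coordinates...
let rectInView = CGRect(x: 20, y: 30, width: 40, height: 40)
// ...maps to the corresponding rect of croppedIm by dividing through by scale.
let rectInCroppedIm = CGRect(x: rectInView.origin.x / scale,
                             y: rectInView.origin.y / scale,
                             width: rectInView.width / scale,
                             height: rectInView.height / scale)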
EDIT I added a little screencast just to show that my approach works as a proof of concept:
EDIT Also created a downloadable example project here:
https://github.com/mattneub/Programming-iOS-Book-Examples/blob/39cc800d18aa484d17c26ffcbab8bbe51c614573/bk2ch02p058cropImageView/Cropper/ViewController.swift
But note that I can't guarantee that URL will last forever, so please read the discussion above to understand the approach used.
Matt answered the question perfectly. I was creating a full-screen camera and had a need to make the final output match the full-screen preview. Offering here a compact extension of Matt's overall answer in Swift 5 for easy use by others. Recommend reading Matt's answer as it explains things very well.
extension UIImage {
    func cropToRect(rect: CGRect) -> UIImage? {
        var scale = rect.width / self.size.width
        scale = self.size.height * scale < rect.height ? rect.height / self.size.height : scale
        let croppedImsize = CGSize(width: rect.width / scale, height: rect.height / scale)
        let croppedImrect = CGRect(origin: CGPoint(x: (self.size.width - croppedImsize.width) / 2.0,
                                                   y: (self.size.height - croppedImsize.height) / 2.0),
                                   size: croppedImsize)
        UIGraphicsBeginImageContextWithOptions(croppedImsize, true, 0)
        self.draw(at: CGPoint(x: -croppedImrect.origin.x, y: -croppedImrect.origin.y))
        let croppedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return croppedImage
    }
}
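Usage for the full-screen case might be (a sketch; capturedImage is a hypothetical UIImage from the camera):

let screenSizedImage = capturedImage.cropToRect(rect: UIScreen.main.bounds)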

only detect in a section of camera preview layer, iOS, Swift

I am trying to get a detection zone in a live preview on my camera preview layer.
Is this possible? Say there is a live feed and you have face detection on: as you look around, it only puts a box around a face inside a certain area, for example a rectangle in the centre of the screen, and all other faces in the preview that are outside of the rectangle don't get detected.
I'm using Vision, iOS, Swift.
I figured this out by adding a guard before adding the CALayer.
Before viewDidLoad:
@IBOutlet weak var scanAreaImage: UIImageView!
var regionOfInterest: CGRect!
In viewDidLoad (scanAreaImage is an image view I placed via the storyboard; its frame represents the only area I wanted detection in):
let someRect: CGRect = scanAreaImage.frame
regionOfInterest = someRect
Then, in the Vision text detection section:
func highlightLetters(box: VNRectangleObservation) {
    let xCord = box.topLeft.x * (cameraPreviewlayer?.frame.size.width)!
    let yCord = (1 - box.topLeft.y) * (cameraPreviewlayer?.frame.size.height)!
    let width = (box.topRight.x - box.bottomLeft.x) * (cameraPreviewlayer?.frame.size.width)!
    let height = (box.topLeft.y - box.bottomLeft.y) * (cameraPreviewlayer?.frame.size.height)!

    // This is the section I added for the region-of-interest detection zone.
    let wordRect = CGRect(x: xCord, y: yCord, width: width, height: height)
    // Only draw a box if the origin of the word box is within regionOfInterest
    // (the CGRect created earlier).
    guard regionOfInterest.contains(wordRect.origin) else { return }

    let outline = CALayer()
    outline.frame = wordRect
    outline.borderWidth = 1.0
    if textColour == 1 {
        outline.borderColor = UIColor.blue.cgColor
    } else {
        outline.borderColor = UIColor.clear.cgColor
    }
    cameraPreviewlayer?.addSublayer(outline)
}
This will only show outlines of the things inside the rectangle you created in the storyboard (mine being scanAreaImage).
I hope this helps someone.
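As an aside (not part of the original answer), Vision's VNImageBasedRequest also has a regionOfInterest property that restricts detection itself rather than just the drawing. It takes a normalized rect (0-1) with a lower-left origin:

let request = VNDetectTextRectanglesRequest { request, error in
    // Observations arrive only for the region of interest.
}
// The centre half of the image, in normalized lower-left coordinates.
request.regionOfInterest = CGRect(x: 0.25, y: 0.25, width: 0.5, height: 0.5)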

How to set half-round UIImage in Swift like this screenshot

https://www.dropbox.com/s/wlizis5zybsvnfz/File%202017-04-04%2C%201%2052%2024%20PM.jpeg?dl=0
Hello all Swifters,
Could anyone tell me how to set up this kind of UI? Is there a half-rounded image that they have set?
Or are there two images: one with the mountains in the background, and another, half-rounded on a white background, placed on top?
Please advise
Draw an ellipse shape using UIBezier path.
Draw a rectangle path exactly similar to imageView which holds your image.
Transform the ellipse path with CGAffineTransform so that it will be in the center of the rect path.
Translate the rect path with CGAffineTransform by half the image height to create an intersection between the ellipse and the rect.
Mask the image using CAShapeLayer.
Additional: As Rob Mayoff stated in comments you'll probably need to calculate the mask size in viewDidLayoutSubviews. Don't forget to play with it, test different cases (different screen sizes, orientations) and adjust the implementation based on your needs.
Try the following code:
import UIKit
class ViewController: UIViewController {

    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        guard let image = imageView.image else {
            return
        }
        let size = image.size
        imageView.clipsToBounds = true
        imageView.image = image
        let curveRadius = size.width * 0.010 // Adjust the curve of the image view here
        let invertedRadius = 1.0 / curveRadius
        let rect = CGRect(x: 0,
                          y: -40,
                          width: imageView.bounds.width + size.width * 2 * invertedRadius,
                          height: imageView.bounds.height)
        let ellipsePath = UIBezierPath(ovalIn: rect)
        let transform = CGAffineTransform(translationX: -size.width * invertedRadius, y: 0)
        ellipsePath.apply(transform)
        let rectanglePath = UIBezierPath(rect: imageView.bounds)
        rectanglePath.apply(CGAffineTransform(translationX: 0, y: -size.height * 0.5))
        ellipsePath.append(rectanglePath)
        let maskShapeLayer = CAShapeLayer()
        maskShapeLayer.frame = imageView.bounds
        maskShapeLayer.path = ellipsePath.cgPath
        imageView.layer.mask = maskShapeLayer
    }
}
Result:
You can find an answer here:
https://stackoverflow.com/a/34983655/5479510
But generally, I wouldn't recommend using a white image overlay, as it may appear distorted or pixelated on different devices. Using a masking UIView works just as well.
Why not just draw a rounded transparent image, add a UIImageView with that UIImage at the top of the view at the required height, and place the other views below it? I think this is the easiest way. I would have left this as a comment, but I can't.

Crop Image from Camera in Swift without move to another ViewController

I have an image overlay inside CameraViewController:
I want to get the image from inside this red square.
I don't want to move to another view controller to set up a CropViewController; the crop should be done inside this controller.
The code below almost works. The problem is that the image generated by the camera is 1080x1920 while self.cropView.bounds is (0, 0, 185, 120), which of course does not represent the scale used when taking the image:
extension UIImage {
    func crop(rect: CGRect) -> UIImage {
        var rect = rect
        rect.origin.x *= self.scale
        rect.origin.y *= self.scale
        rect.size.width *= self.scale
        rect.size.height *= self.scale
        let imageRef = self.cgImage!.cropping(to: rect)
        let image = UIImage(cgImage: imageRef!, scale: self.scale, orientation: self.imageOrientation)
        return image
    }
}
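One way to bridge that scale gap (a sketch, not from the original post; it assumes the preview is an AVCaptureVideoPreviewLayer named previewLayer sharing cropView's coordinate space, and it ignores image orientation) is to let the preview layer convert the rect:

import AVFoundation
import UIKit

func cropRectInImage(for cropView: UIView,
                     previewLayer: AVCaptureVideoPreviewLayer,
                     image: UIImage) -> CGRect {
    // Maps a rect in the layer's coordinates to a normalized (0-1) rect in the
    // capture output's space, taking the layer's videoGravity into account.
    let normalized = previewLayer.metadataOutputRectConverted(fromLayerRect: cropView.frame)
    return CGRect(x: normalized.origin.x * image.size.width,
                  y: normalized.origin.y * image.size.height,
                  width: normalized.width * image.size.width,
                  height: normalized.height * image.size.height)
}

The result is in the image's point coordinates, so it can be passed straight to the crop(rect:) extension above.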
You can always crop any image visually into a quadrilateral (a four sided shape - it doesn't have to be a rectangle) using a Core Image filter called CIPerspectiveCorrection.
Let's say you have an imageView frame that is 414 wide by 716 high, with an image that is 1600 wide by 900 high. (You are using a content mode of .aspectFit, right?) Let's say you want to crop to a four sided shape whose corners - in (X,Y) coordinates in the imageView - are (50,50), (75,75), (100,300), and (25,200). Note that I'm listing the points in top left (TL), top right (TR), bottom right (BR), bottom left (BL) order. Also note that this is not a straightforward rectangle.
What you need to do is this:
Convert the UIImage to a CIImage where the "extent" is the UIImage size,
convert those UIImageView coordinates to CIImage coordinates,
pass them and the CIImage into the CIPerspectiveCorrection filter for cropping, and
render the CIImage output into a UIImageView.
The below code is a little rough around the edges, but hopefully you get the concept:
class ViewController: UIViewController {

    let uiTL = CGPoint(x: 50, y: 50)
    let uiTR = CGPoint(x: 75, y: 75)
    let uiBL = CGPoint(x: 100, y: 300)
    let uiBR = CGPoint(x: 25, y: 200)

    var ciImage: CIImage!
    var ctx: CIContext!

    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        ctx = CIContext(options: nil)
        ciImage = CIImage(image: imageView.image!)
    }

    override func viewWillLayoutSubviews() {
        let ciTL = createVector(createScaledPoint(uiTL))
        let ciTR = createVector(createScaledPoint(uiTR))
        let ciBR = createVector(createScaledPoint(uiBR))
        let ciBL = createVector(createScaledPoint(uiBL))
        imageView.image = doPerspectiveCorrection(CIImage(image: imageView.image!)!,
                                                  context: ctx,
                                                  topLeft: ciTL,
                                                  topRight: ciTR,
                                                  bottomRight: ciBR,
                                                  bottomLeft: ciBL)
    }

    func doPerspectiveCorrection(
        _ image: CIImage,
        context: CIContext,
        topLeft: AnyObject,
        topRight: AnyObject,
        bottomRight: AnyObject,
        bottomLeft: AnyObject)
        -> UIImage {
        let filter = CIFilter(name: "CIPerspectiveCorrection")
        filter?.setValue(topLeft, forKey: "inputTopLeft")
        filter?.setValue(topRight, forKey: "inputTopRight")
        filter?.setValue(bottomRight, forKey: "inputBottomRight")
        filter?.setValue(bottomLeft, forKey: "inputBottomLeft")
        filter!.setValue(image, forKey: kCIInputImageKey)
        let cgImage = context.createCGImage((filter?.outputImage)!, from: (filter?.outputImage!.extent)!)
        return UIImage(cgImage: cgImage!)
    }

    func createScaledPoint(_ pt: CGPoint) -> CGPoint {
        let x = (pt.x / imageView.frame.width) * ciImage.extent.width
        let y = (pt.y / imageView.frame.height) * ciImage.extent.height
        return CGPoint(x: x, y: y)
    }

    func createVector(_ point: CGPoint) -> CIVector {
        return CIVector(x: point.x, y: ciImage.extent.height - point.y)
    }

    func createPoint(_ vector: CGPoint) -> CGPoint {
        return CGPoint(x: vector.x, y: ciImage.extent.height - vector.y)
    }
}
EDIT: I'm putting this here to explain things. The two of us swapped projects, and there was an issue with the questioner's code where a nil return was happening. First, here's the corrected code, which should be in the cropImage() function:
let ciTL = createVector(createScaledPoint(topLeft, overlay: cameraView, image: image), image: image)
let ciTR = createVector(createScaledPoint(topRight, overlay: cameraView, image: image), image: image)
let ciBR = createVector(createScaledPoint(bottomRight, overlay: cameraView, image: image), image: image)
let ciBL = createVector(createScaledPoint(bottomLeft, overlay: cameraView, image: image), image: image)
The issue was with the last two lines, which were transposed: bottomLeft was passed where it should have been bottomRight, and vice versa. (An easy mistake to make; I've done it too!)
Some explanation to help those who use CIPerspectiveCorrection (and other filters that use CIVectors).
A CIVector can have anywhere from (I think) 2 to, well, an almost infinite number of components. It depends on the filter. In this case there are two components (X, Y). Simple enough, but the twist is that the 4 CIVectors describe 4 points inside the CIImage extent, where the origin is the bottom left, not the top left.
Note I did not say a 4 sided shape. You can actually have a "figure 8" like shape where the "bottom right" point is left of the "bottom left" point! This would result in a shape where two sides cross each other.
All that matters is that all 4 points lie within the CIImage extent. If they don't, the filter will return nil for its output image.
One last note for those who haven't worked with CIImage filters before: the filters will not execute until you ask for the outputImage. You can instantiate one, fill in the parameters, chain them, whatever. You can even make a typo in the filter name (or any of its keys). Until your code asks for filter.outputImage, nothing happens.
