iOS: Cropping image in wrong position and size [duplicate] - ios

This question already has answers here:
How to crop a UIImageView to a new UIImage in 'aspect fill' mode?
(2 answers)
Closed 2 years ago.
I'm trying to allow the user to mark an area with their finger, and then the image will be cropped to that particular area. The problem is that I've got the cropping size, position, and scaling all wrong.
I'm definitely missing something, but I'm not sure what I'm doing wrong.
Here is the original image along with the crop rectangle that the user can mark with their finger:
This is the broken outcome:
Here is my custom UIImageView where I intercept the touch events. This is just for the user to draw the rectangle...
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first?.preciseLocation(in: self) {
        self.newTouch = touch
    }
}

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let currentPoint = touch.preciseLocation(in: self)
        reDrawSelectionArea(fromPoint: newTouch, toPoint: currentPoint)
    }
}

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    self.newBoxSelected?(box.frame)
    box.frame = CGRect.zero // reset overlay for next tap
}

func reDrawSelectionArea(fromPoint: CGPoint, toPoint: CGPoint) {
    // Calculate rect from the original point and last known point
    let rect = CGRect(x: min(fromPoint.x, toPoint.x),
                      y: min(fromPoint.y, toPoint.y),
                      width: abs(fromPoint.x - toPoint.x),
                      height: abs(fromPoint.y - toPoint.y))
    box.frame = rect
}
This is the actual cropping logic. What am I doing wrong here?
func cropToBounds(image: UIImage, newFrame: CGRect) -> UIImage {
    let xScaleFactor = (imageView.image?.size.width)! / self.imageView.bounds.size.width
    let yScaleFactor = (imageView.image?.size.height)! / self.imageView.bounds.size.height
    let contextImage: UIImage = UIImage(cgImage: image.cgImage!)
    print("NewFrame is: \(newFrame)")

    let xPos = newFrame.minX * xScaleFactor
    let yPos = newFrame.minY * yScaleFactor
    let width = newFrame.size.width * xScaleFactor
    let height = newFrame.size.height * xScaleFactor

    print("xScaleFactor: \(xScaleFactor)")
    print("yScaleFactor: \(yScaleFactor)")
    print("xPos: \(xPos)")
    print("yPos: \(yPos)")
    print("width: \(width)")
    print("height: \(height)")

    let rect: CGRect = CGRect(x: xPos, y: yPos, width: width, height: height)

    // Create bitmap image from context using the rect
    let imageRef: CGImage = contextImage.cgImage!.cropping(to: rect)!

    // Create a new image based on the imageRef and rotate back to the original orientation
    let image: UIImage = UIImage(cgImage: imageRef, scale: image.scale, orientation: image.imageOrientation)
    return image
}

You can import AVFoundation and use func AVMakeRect(aspectRatio: CGSize, insideRect boundingRect: CGRect) -> CGRect to get the actual rectangle of the aspect-fit image:
// import this in your class
import AVFoundation
then:
guard let img = imageView.image else {
    fatalError("imageView has no image!")
}

// Original size which you want to preserve the aspect ratio of
let aspect: CGSize = img.size

// Rect to fit that size within
let rect: CGRect = CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height)

// resulting size
let resultSize: CGSize = AVMakeRect(aspectRatio: aspect, insideRect: rect).size

// get y-position (1/2 of (imageView height - resulting height))
let resultOrigin: CGPoint = CGPoint(x: 0.0, y: (imageView.bounds.size.height - resultSize.height) / 2.0)

// this is the actual rect for the aspect-fit image
let actualImageRect: CGRect = CGRect(origin: resultOrigin, size: resultSize)
print(actualImageRect)

// you now have the actual rectangle for the image
// on which you can base your scale calculations
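Once actualImageRect is known, converting the user's selection from imageView coordinates into image coordinates is a single offset-and-scale step. A minimal sketch of that conversion (pure geometry; the function name is illustrative, and it assumes the image is aspect-fitted):

```swift
import Foundation

/// Map a selection rect (in imageView coordinates) into image coordinates,
/// given the rect the aspect-fit image actually occupies inside the imageView.
func imageCropRect(selection: CGRect, actualImageRect: CGRect, imageSize: CGSize) -> CGRect {
    // One uniform scale factor: aspect fit scales both axes equally.
    let scale = imageSize.width / actualImageRect.width
    return CGRect(x: (selection.minX - actualImageRect.minX) * scale,
                  y: (selection.minY - actualImageRect.minY) * scale,
                  width: selection.width * scale,
                  height: selection.height * scale)
}
```

Note that the original cropToBounds used separate x and y scale factors and no letterbox offset; with aspect fit the scale is uniform, and the letterbox offset must be subtracted before scaling.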

Related

cropping image by using a custom view

I'm trying to crop an image using a custom view. I have two view controllers: ViewController and ResultViewController. ViewController has a button and an imageView, and the other one has an imageView. When I crop on the ViewController, the result falls outside the transparent custom view on the ResultViewController. I don't know where I'm going wrong or what I'm missing. I have changed the content mode of the image view to aspect fill, aspect fit, scale to fit, and center, but the results aren't what I want. Any help would be really appreciated.
I read the official documentation: here
import UIKit

class ViewController: UIViewController {

    let resultVC = UIStoryboard(name: "Main", bundle: nil).instantiateViewController(withIdentifier: "resultVC") as! ResultViewController

    // this is the transparent custom view used as a mask
    var rectangleView: UIView! = UIView(frame: CGRect(x: 100, y: 200, width: 100, height: 100))

    @IBOutlet weak var imageView: UIImageView!

    @IBAction func okayButton(_ sender: Any) {
        resultVC.resultImage = cropImage(imageView.image!, toRect: rectangleView.frame, viewWidth: self.view.frame.size.width, viewHeight: self.view.frame.size.height)
        print(rectangleView.frame)
        present(resultVC, animated: true)
    }

    func cropImage(_ inputImage: UIImage, toRect cropRect: CGRect, viewWidth: CGFloat, viewHeight: CGFloat) -> UIImage? {
        let imageViewScale = max(inputImage.size.width / viewWidth,
                                 inputImage.size.height / viewHeight)

        // Scale cropRect to handle images larger than shown-on-screen size
        let cropZone = CGRect(x: cropRect.origin.x * imageViewScale,
                              y: cropRect.origin.y * imageViewScale,
                              width: cropRect.size.width * imageViewScale,
                              height: cropRect.size.height * imageViewScale)

        // Perform cropping in Core Graphics
        guard let cutImageRef: CGImage = inputImage.cgImage?.cropping(to: cropZone) else {
            return nil
        }

        // Return image as UIImage
        let croppedImage: UIImage = UIImage(cgImage: cutImageRef)
        return croppedImage
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        let imageFile = UIImage(named: "c.png")
        imageView.image = imageFile
        view.addSubview(rectangleView)
        rectangleView.backgroundColor = UIColor(white: 1, alpha: 0.5)
    }
}

// ResultViewController
import UIKit

class ResultViewController: UIViewController {

    var resultImage: UIImage!

    @IBOutlet weak var resultImageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        resultImageView.image = resultImage
    }
}
Before cropping: [image 1]
After cropping: [image 2]
There is an important point here: you have to convert all the frames' rects into the same coordinate system before cropping.
For simplicity, I'll choose the image as the base coordinate system, with the origin at the TOP-LEFT point of the image. Please note that the image I mention here is not the same as the imageView used to display it.
For the crop function, I'll use the one from my library:
public func crop(image: UIImage, toRect rect: CGRect) -> UIImage {
    let orientation = image.imageOrientation
    let scale = image.scale
    var targetRect = CGRect()

    switch orientation {
    case .down:
        targetRect.origin.x = (image.size.width - rect.maxX) * scale
        targetRect.origin.y = (image.size.height - rect.maxY) * scale
        targetRect.size.width = rect.width * scale
        targetRect.size.height = rect.height * scale
    case .right:
        targetRect.origin.x = rect.minY * scale
        targetRect.origin.y = (image.size.width - rect.maxX) * scale
        targetRect.size.width = rect.height * scale
        targetRect.size.height = rect.width * scale
    case .left:
        targetRect.origin.x = (image.size.height - rect.maxY) * scale
        targetRect.origin.y = rect.minX * scale
        targetRect.size.width = rect.height * scale
        targetRect.size.height = rect.width * scale
    default:
        targetRect = CGRect(x: rect.origin.x * scale,
                            y: rect.origin.y * scale,
                            width: rect.width * scale,
                            height: rect.height * scale)
    }

    if let croppedCGImage = image.cgImage?.cropping(to: targetRect) {
        return UIImage(cgImage: croppedCGImage, scale: scale, orientation: orientation)
    }
    return image
}
To use the function above correctly with your example, you have to change your implementation to this. Please read the comments to get the idea.
@IBAction func okayButton(_ sender: Any) {
    // Get the rectangleView's frame in the imageView's coordinate system
    let rectangleViewRectInImageView = rectangleView.convert(rectangleView.bounds, to: imageView)

    let imageWHRatio = imageView.image!.size.width / imageView.image!.size.height
    let imageViewWHRatio = imageView.bounds.width / imageView.bounds.height

    // Calculate the frame of the displayed image content inside the imageView
    var imageRectInImageView: CGRect = .zero
    if imageWHRatio > imageViewWHRatio {
        // White space at the top & bottom
        imageRectInImageView = CGRect(
            x: 0,
            y: (imageView.bounds.height - imageView.bounds.width / imageWHRatio) / 2,
            width: imageView.bounds.width,
            height: imageView.bounds.width / imageWHRatio
        )
    } else {
        // White space at the left & right
        imageRectInImageView = CGRect(
            x: (imageView.bounds.width - imageView.bounds.height * imageWHRatio) / 2,
            y: 0,
            width: imageView.bounds.height * imageWHRatio,
            height: imageView.bounds.height
        )
    }

    // How much the image is scaled to fit the imageView's bounds
    let imageScale = imageView.image!.size.width / imageRectInImageView.width

    // The frame of the rectangleView in the displayed image content's coordinate system
    let toRect = CGRect(
        x: (rectangleViewRectInImageView.minX - imageRectInImageView.minX) * imageScale,
        y: (rectangleViewRectInImageView.minY - imageRectInImageView.minY) * imageScale,
        width: rectangleViewRectInImageView.width * imageScale,
        height: rectangleViewRectInImageView.height * imageScale)

    // Now you have the frame of the rectangleView in the proper coordinate system,
    // so you can use my method above safely
    resultVC.resultImage = crop(image: imageView.image!, toRect: toRect)
    print(rectangleView.frame)
    present(resultVC, animated: true)
}
Please make sure the imageView's contentMode is set to scaleAspectFit: the white-space calculation above assumes the image is aspect-fitted (letterboxed) inside the imageView.
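For reference, the manual if/else computing imageRectInImageView above is the standard aspect-fit rect calculation. Factored into a pure helper (same math, illustrative name; it mirrors what AVFoundation's AVMakeRect(aspectRatio:insideRect:) returns), it is easy to sanity-check:

```swift
import Foundation

/// Rect an image of `imageSize` occupies inside `bounds` under aspect-fit
/// (letterboxed and centered on the shorter axis).
func aspectFitRect(imageSize: CGSize, in bounds: CGRect) -> CGRect {
    let imageRatio = imageSize.width / imageSize.height
    let boundsRatio = bounds.width / bounds.height
    if imageRatio > boundsRatio {
        // White space at the top & bottom
        let height = bounds.width / imageRatio
        return CGRect(x: bounds.minX, y: bounds.minY + (bounds.height - height) / 2,
                      width: bounds.width, height: height)
    } else {
        // White space at the left & right
        let width = bounds.height * imageRatio
        return CGRect(x: bounds.minX + (bounds.width - width) / 2, y: bounds.minY,
                      width: width, height: bounds.height)
    }
}
```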

UISlider: changed height, but maximum value flashes

I customised a UISlider and everything works well except when I drag the slider to 100%. Then the rounded caps are replaced with a square.
Here is how I customize the Slider:
@IBInspectable var trackHeight: CGFloat = 14

override func trackRect(forBounds bounds: CGRect) -> CGRect {
    return CGRect(origin: bounds.origin, size: CGSize(width: bounds.width, height: trackHeight))
}
98% image:
100% image:
You can remove the track background by setting minimumTrackTintColor and maximumTrackTintColor to UIColor.clear and drawing the track background yourself.
For example:
class CustomSlider: UISlider {
    override func trackRect(forBounds bounds: CGRect) -> CGRect {
        return bounds
    }

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else {
            return
        }
        context.setFillColor(UIColor.green.cgColor)
        let path = UIBezierPath(roundedRect: rect, cornerRadius: rect.size.height / 2).cgPath
        context.addPath(path)
        context.fillPath()
    }
}
Here's a quick solution.
You have changed the frame of the track: its height is greater than the default, which is why the square corners become visible.
You can shift the thumb a little bit at 100%:
override func thumbRect(forBounds bounds: CGRect, trackRect rect: CGRect, value: Float) -> CGRect {
    var rect = super.thumbRect(forBounds: bounds, trackRect: rect, value: value)
    if value > 0.99 {
        rect = CGRect(x: rect.origin.x + 2, y: rect.origin.y, width: rect.size.width, height: rect.size.height)
    }
    return rect
}
Or you can make the thumb bigger to cover the corners; it's up to you.
Original answer here: Thumb image does not move to the edge of UISlider
Updated to Swift 4.2:
override func thumbRect(forBounds bounds: CGRect, trackRect rect: CGRect, value: Float) -> CGRect {
    let unadjustedThumbRect = super.thumbRect(forBounds: bounds, trackRect: rect, value: value)
    let thumbOffsetToApplyOnEachSide: CGFloat = unadjustedThumbRect.size.width / 2.0
    let minOffsetToAdd = -thumbOffsetToApplyOnEachSide
    let maxOffsetToAdd = thumbOffsetToApplyOnEachSide
    let offsetForValue = minOffsetToAdd + (maxOffsetToAdd - minOffsetToAdd) * CGFloat(value / (self.maximumValue - self.minimumValue))
    var origin = unadjustedThumbRect.origin
    origin.x += offsetForValue
    return CGRect(origin: origin, size: unadjustedThumbRect.size)
}
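The offset math above can be isolated into a pure helper, which makes the endpoint behaviour easy to verify: at the minimum value the thumb shifts left by half its width, at the maximum it shifts right by half. A sketch (illustrative name; unlike the snippet above it uses value - minimum, so it also behaves for sliders whose minimum isn't 0):

```swift
import Foundation

/// Horizontal offset for the thumb so it reaches the true ends of the track.
func thumbEdgeOffset(value: Float, minimum: Float, maximum: Float, thumbWidth: CGFloat) -> CGFloat {
    let half = thumbWidth / 2.0
    let fraction = CGFloat((value - minimum) / (maximum - minimum))
    // -half at the minimum, +half at the maximum, linear in between
    return -half + thumbWidth * fraction
}
```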

Swift: Draw on image without changing its size (Aspect fill)

I have a simple view controller where the entire screen is an imageView. The imageView has its content mode set to UIViewContentMode.scaleAspectFill. I can draw on the picture, but as soon as I do, the image shrinks horizontally. This is ruining the quality of the image, and I'm not sure exactly why it's happening.
I looked at this question, but I didn't understand the only answer. I think the issue is with the way I am drawing, and that the rect is shrinking the image.
class AnnotateViewController: UIViewController {

    @IBOutlet weak var imageView: UIImageView!

    var lastPoint = CGPoint.zero
    var fromPoint = CGPoint()
    var toPoint = CGPoint.zero
    var brushWidth: CGFloat = 5.0
    var opacity: CGFloat = 1.0
    var isDrawing = false // referenced below; not shown in the original snippet
    let screenSize = UIScreen.main.bounds.size // referenced below; not shown in the original snippet

    @IBAction func startDrawing(_ sender: Any) {
        self.isDrawing = !self.isDrawing
        if isDrawing {
            self.imageView.isUserInteractionEnabled = true
            showColorButtons()
        } else {
            self.imageView.isUserInteractionEnabled = false
            hideColorButtons()
        }
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        if let touch = touches.first {
            lastPoint = touch.preciseLocation(in: self.imageView)
        }
    }

    func drawLineFrom(fromPoint: CGPoint, toPoint: CGPoint) {
        UIGraphicsBeginImageContextWithOptions(self.imageView.bounds.size, false, 0.0)
        imageView.image?.draw(in: CGRect(x: 0, y: 0, width: screenSize.width, height: screenSize.height))
        let context = UIGraphicsGetCurrentContext()
        context?.move(to: fromPoint)
        context?.addLine(to: toPoint)
        context?.setLineCap(CGLineCap.round)
        context?.setLineWidth(brushWidth)
        context?.setStrokeColor(red: 255, green: 0, blue: 0, alpha: 1.0)
        context?.setBlendMode(CGBlendMode.normal)
        context?.strokePath()
        imageView.image = UIGraphicsGetImageFromCurrentImageContext()
        imageView.alpha = opacity
        UIGraphicsEndImageContext()
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        if let touch = touches.first {
            let currentPoint = touch.preciseLocation(in: self.imageView)
            drawLineFrom(fromPoint: lastPoint, toPoint: currentPoint)
            lastPoint = currentPoint
        }
    }
}
UPDATE - SOLUTION
I finally got back to working on this problem. Looking at this question, I was able to use the accepted answer to solve my problem. Below is my updated code:
func drawLineFrom(fromPoint: CGPoint, toPoint: CGPoint) {
    UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, true, 0.0)

    let aspect = imageView.image!.size.width / imageView.image!.size.height
    let rect: CGRect
    if imageView.frame.width / aspect > imageView.frame.height {
        let height = imageView.frame.width / aspect
        rect = CGRect(x: 0, y: (imageView.frame.height - height) / 2,
                      width: imageView.frame.width, height: height)
    } else {
        let width = imageView.frame.height * aspect
        rect = CGRect(x: (imageView.frame.width - width) / 2, y: 0,
                      width: width, height: imageView.frame.height)
    }
    imageView.image?.draw(in: rect)

    let context = UIGraphicsGetCurrentContext()
    context?.move(to: fromPoint)
    context?.addLine(to: toPoint)
    context?.setLineCap(CGLineCap.round)
    context?.setLineWidth(brushWidth)
    context?.setStrokeColor(red: self.red, green: self.green, blue: self.blue, alpha: 1.0)
    context?.setBlendMode(CGBlendMode.normal)
    context?.strokePath()

    imageView.image = UIGraphicsGetImageFromCurrentImageContext()
    annotatedImage = imageView.image
    imageView.alpha = opacity
    UIGraphicsEndImageContext()
}
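The rect computation inside the updated drawLineFrom is the aspect-fill counterpart of the aspect-fit math seen earlier. Factored out as a pure helper (a sketch with an illustrative name; the branch picks whichever scale makes the image cover the bounds):

```swift
import Foundation

/// Rect an image of `imageSize` occupies under aspect-fill inside `bounds`:
/// the image covers the bounds and overflows, centered, on one axis.
func aspectFillRect(imageSize: CGSize, in bounds: CGRect) -> CGRect {
    let aspect = imageSize.width / imageSize.height
    if bounds.width / aspect > bounds.height {
        // Full width; the image overflows vertically.
        let height = bounds.width / aspect
        return CGRect(x: 0, y: (bounds.height - height) / 2,
                      width: bounds.width, height: height)
    } else {
        // Full height; the image overflows horizontally.
        let width = bounds.height * aspect
        return CGRect(x: (bounds.width - width) / 2, y: 0,
                      width: width, height: bounds.height)
    }
}
```

Drawing the existing image into this rect, instead of into the view's own bounds, is what stops the image from being squeezed to the view's aspect ratio on every stroke.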
This line:
imageView.image?.draw(in: CGRect(x: 0, y: 0, width: screenSize.width, height: screenSize.height))
draws the image at the aspect ratio of the screen, but your image is probably not at the same aspect ratio (the ratio of width to height).
I think this might be useful: UIImage Aspect Fit when using drawInRect?

How to Crop UIImage considering screen scale

I'm trying to crop a UIImage using a crop box UIView that a user can drag around anywhere in the image view to crop. The logic I'm using to compute the crop rect is as follows:
extension UIImageView {
    public func computeCropRect(for sourceFrame: CGRect) -> CGRect {
        let widthScale = bounds.size.width / image!.size.width
        let heightScale = bounds.size.height / image!.size.height
        var x: CGFloat = 0
        var y: CGFloat = 0
        var width: CGFloat = 0
        var height: CGFloat = 0
        var offSet: CGFloat = 0

        if widthScale < heightScale {
            offSet = (bounds.size.height - (image!.size.height * widthScale)) / 2
            x = sourceFrame.origin.x / widthScale
            y = (sourceFrame.origin.y - offSet) / widthScale
            width = sourceFrame.size.width / widthScale
            height = sourceFrame.size.height / widthScale
        } else {
            offSet = (bounds.size.width - (image!.size.width * heightScale)) / 2
            x = (sourceFrame.origin.x - offSet) / heightScale
            y = sourceFrame.origin.y / heightScale
            width = sourceFrame.size.width / heightScale
            height = sourceFrame.size.height / heightScale
        }

        return CGRect(x: x, y: y, width: width, height: height)
    }
}
The crop box frame looks like this and is positionable anywhere in the frame of the image view by dragging it:
This crop code works just fine until I combine this with another feature I'm trying to support which is the ability to let the user draw using their finger inside of the UIImageView. The code for that looks like this:
override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touchPoint = touches.first {
        let currentPoint = touchPoint.location(in: self)
        UIGraphicsBeginImageContextWithOptions(frame.size, false, UIScreen.main.scale)
        if let context = UIGraphicsGetCurrentContext() {
            image?.draw(in: imageEffectsService.computeAspectFitFrameFor(containerSize: frame.size, imageSize: image!.size), blendMode: .normal, alpha: CGFloat(imageOpacity))
            drawLineAt(startPoint: lastTouchPoint, endPoint: currentPoint, currentContext: context, strokeColor: drawColor)
            UIGraphicsEndImageContext()
        }
    }
}

private func drawLineAt(startPoint: CGPoint, endPoint: CGPoint, currentContext: CGContext, strokeColor: UIColor) {
    currentContext.beginPath()
    currentContext.setLineCap(CGLineCap.round)
    currentContext.setLineWidth(brushSize)
    currentContext.setStrokeColor(strokeColor.cgColor)
    currentContext.move(to: startPoint)
    currentContext.addLine(to: endPoint)
    currentContext.strokePath()
    image = UIGraphicsGetImageFromCurrentImageContext()
}
The crop method loses accuracy once I apply a drawing particularly because of this line:
UIGraphicsBeginImageContextWithOptions(frame.size, false, UIScreen.main.scale)
If instead I use:
UIGraphicsBeginImageContext(frame.size)
My crop code will be accurate but the drawing fidelity will look grainy and low quality because I am not accounting for retina screen devices. My question is how would I modify my crop function to account for the UIScreen.main.scale?
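One thing worth checking (a suggestion, not a confirmed fix): CGImage.cropping(to:) works in pixel coordinates, while a crop rect computed from view geometry is in points. When the context is created with UIScreen.main.scale, the resulting image is scale-factor times larger in pixels, so the rect needs converting before cropping. A sketch of the conversion (illustrative name):

```swift
import Foundation

/// Convert a crop rect from point coordinates to pixel coordinates
/// (CGImage.cropping(to:) expects pixels; UIKit geometry is in points).
func pixelRect(fromPointRect rect: CGRect, imageScale: CGFloat) -> CGRect {
    return CGRect(x: rect.origin.x * imageScale,
                  y: rect.origin.y * imageScale,
                  width: rect.width * imageScale,
                  height: rect.height * imageScale)
}
```

On a 2x device, pixelRect(fromPointRect: cropRect, imageScale: image.scale) doubles every component; passing the result to cgImage?.cropping(to:) keeps the crop accurate regardless of screen scale.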

Drawn Image Does Not Appear on Camera Overlay Swift

I'm trying to create a drawing app that sits on top of a UIImagePickerController after the picture is taken. The touch delegate functions are called correctly but do not leave any traces on the view as they are supposed to. What I have so far:
func createImageOptionsView() { // creates the view above the picker controller overlay view
    let imgOptsView = UIView()
    let scale = CGAffineTransform(scaleX: 4.0 / 3.0, y: 4.0 / 3.0)
    imgOptsView.tag = 1
    imgOptsView.frame = view.frame

    let imageView = UIImageView()
    imageView.frame = imgOptsView.frame
    imageView.translatesAutoresizingMaskIntoConstraints = true
    imageView.transform = scale // to account for the removal of the black bar in the camera
    imageView.tag = 2
    imageView.contentMode = .scaleAspectFit
    imageView.image = currentImage

    let useButton = UIButton()
    useButton.setImage(#imageLiteral(resourceName: "check circle"), for: .normal)
    useButton.translatesAutoresizingMaskIntoConstraints = true
    useButton.frame = CGRect(x: view.frame.width / 2, y: view.frame.height / 2, width: 100, height: 100)

    let cancelButton = UIButton()

    colorSlider.previewEnabled = true
    colorSlider.translatesAutoresizingMaskIntoConstraints = true

    imgOptsView.addSubview(imageView)
    imgOptsView.addSubview(useButton)
    imgOptsView.addSubview(colorSlider)
    imgOptsView.isUserInteractionEnabled = true
    useButton.addTarget(self, action: #selector(ViewController.usePicture(sender:)), for: .touchUpInside)
    picker.cameraOverlayView!.addSubview(imgOptsView)
}

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    mouseSwiped = false // to check if the touch moved or was simply dotted
    let touch: UITouch? = touches.first
    lastPoint = touch?.location(in: view) // where to start the image context from
}

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    mouseSwiped = true
    let touch: UITouch? = touches.first
    let currentPoint: CGPoint? = touch?.location(in: view)

    UIGraphicsBeginImageContext(view.frame.size) // begin drawing
    let context = UIGraphicsGetCurrentContext()
    tempDrawImage.image?.draw(in: CGRect(x: 0, y: 0, width: view.frame.size.width, height: view.frame.size.height))
    context?.move(to: CGPoint(x: lastPoint.x, y: lastPoint.y))
    context?.addLine(to: CGPoint(x: (currentPoint?.x)!, y: (currentPoint?.y)!))
    context?.setLineCap(.round)
    context?.setLineWidth(1.0)
    context?.setStrokeColor(c)
    context?.setBlendMode(.normal)
    context?.strokePath() // move from lastPoint to currentPoint with these options
    tempDrawImage.image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    lastPoint = currentPoint
}

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    if !mouseSwiped {
        UIGraphicsBeginImageContext(view.frame.size)
        let context = UIGraphicsGetCurrentContext()
        tempDrawImage.image?.draw(in: CGRect(x: 0, y: 0, width: view.frame.size.width, height: view.frame.size.height))
        context?.setLineCap(.round)
        context?.setLineWidth(1.0)
        context?.setStrokeColor(c)
        context?.move(to: CGPoint(x: lastPoint.x, y: lastPoint.y))
        context?.addLine(to: CGPoint(x: lastPoint.x, y: lastPoint.y))
        context?.strokePath()
        context?.flush()
        tempDrawImage.image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }

    UIGraphicsBeginImageContext(mainImage.frame.size)
    mainImage.image?.draw(in: CGRect(x: 0, y: 0, width: view.frame.size.width, height: view.frame.size.height), blendMode: .normal, alpha: 1.0)
    tempDrawImage.setNeedsDisplay()
    mainImage.image = UIGraphicsGetImageFromCurrentImageContext() // paste tempImage onto the main image, then start over
    tempDrawImage.image = nil
    UIGraphicsEndImageContext()
}

func changedColor(_ slider: ColorSlider) {
    c = slider.color.cgColor
}
This is just a guess, but worth a shot. I had a similar problem, but in a different context: I was working with AVPlayer when it happened.
In my case, it was because the zPosition wasn't set correctly, so the overlays I was trying to draw appeared behind the video. Assuming that's the case here, the fix would be to set zPosition = -1, which moves the layer further back, like so:
AVPlayerView.layer?.zPosition = -1
This moved the base layer behind the overlay layer (negative numbers move further away from the user in 3D space.)
I would see if the view controlled by your UIImagePickerController allows you to set its zPosition, similar to what AVPlayer allows. If I recall correctly, any view's layer should let you set its zPosition. Alternatively, you could try moving your overlay layer to a positive value like zPosition = 1. (I'm not certain, but I would imagine the default position is 0.)
