Cropping the same UIImage with the same CGRect gives different results - ios

I have the following flow in the app:
The user takes (or chooses) an image (hereinafter originalImage).
The originalImage is sent to an external API, which returns an array of coordinates of dots that I need to add to originalImage.
Since the dots are always located in one area (the face), I want to crop the originalImage close to the face borders and display only the cropped result to the user.
After the crop result is displayed, I add the dots to it one by one.
Here is the code that does the job (except sending the image; let's say that has already happened):
class ScanResultViewController: UIViewController {
    @IBOutlet weak var scanPreviewImageView: UIImageView!

    var originalImage = ORIGINAL_IMAGE // meaning we already have it; var because dots are drawn onto it later
    let scanDots: [[String: CGFloat]] = [["x": 123, "y": 123], ["x": 234, "y": 234]] // 68 coordinates in total
    var cropRect: CGRect!
    override func viewDidLoad() {
        super.viewDidLoad()
        self.setScanImage()
    }

    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)
        self.animateScan(0)
    }
    func setScanImage() {
        self.cropRect = self.getCropRect(self.scanDots, sourceImage: self.originalImage)
        let croppedImage = self.originalImage.imageAtRect(self.cropRect)
        self.scanPreviewImageView.image = croppedImage
        self.scanPreviewImageView.contentMode = .ScaleAspectFill
    }

    func animateScan(index: Int) {
        let i = index
        self.originalImage = self.addOnePointToImage(self.originalImage, pointImage: GREEN_DOT!, point: self.scanDots[i])
        let croppedImage = self.originalImage.imageAtRect(self.cropRect)
        self.scanPreviewImageView.image = croppedImage
        self.scanPreviewImageView.contentMode = .ScaleAspectFill
        if i < self.scanDots.count - 1 {
            let delay = dispatch_time(DISPATCH_TIME_NOW, Int64(0.1 * Double(NSEC_PER_SEC)))
            dispatch_after(delay, dispatch_get_main_queue()) {
                self.animateScan(i + 1)
            }
        }
    }
    func addOnePointToImage(sourceImage: UIImage, pointImage: UIImage, point: Dictionary<String, CGFloat>) -> UIImage {
        let rect = CGRect(x: 0, y: 0, width: sourceImage.size.width, height: sourceImage.size.height)
        UIGraphicsBeginImageContextWithOptions(sourceImage.size, true, 0)
        let context = UIGraphicsGetCurrentContext()
        CGContextSetFillColorWithColor(context, UIColor.whiteColor().CGColor)
        CGContextFillRect(context, rect)
        sourceImage.drawInRect(rect, blendMode: .Normal, alpha: 1)
        let pointWidth = sourceImage.size.width / 66.7
        pointImage.drawInRect(CGRectMake(point["x"]! - pointWidth / 2, point["y"]! - pointWidth / 2, pointWidth, pointWidth), blendMode: .Normal, alpha: 1)
        let result = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return result
    }
    func getCropRect(points: Array<Dictionary<String, CGFloat>>, sourceImage: UIImage) -> CGRect {
        var topLeft = CGPoint(x: points[0]["x"]!, y: points[0]["y"]!)
        var topRight = CGPoint(x: points[0]["x"]!, y: points[0]["y"]!)
        var bottomLeft = CGPoint(x: points[0]["x"]!, y: points[0]["y"]!)
        var bottomRight = CGPoint(x: points[0]["x"]!, y: points[0]["y"]!)
        for p in points {
            if p["x"]! < topLeft.x { topLeft.x = p["x"]! }
            if p["y"]! < topLeft.y { topLeft.y = p["y"]! }
            if p["x"]! > topRight.x { topRight.x = p["x"]! }
            if p["y"]! < topRight.y { topRight.y = p["y"]! }
            if p["x"]! < bottomLeft.x { bottomLeft.x = p["x"]! }
            if p["y"]! > bottomLeft.y { bottomLeft.y = p["y"]! }
            if p["x"]! > bottomRight.x { bottomRight.x = p["x"]! }
            if p["y"]! > bottomRight.y { bottomRight.y = p["y"]! }
        }
        let rect = CGRect(x: topLeft.x, y: topLeft.y, width: topRight.x - topLeft.x, height: bottomLeft.y - topLeft.y)
        return rect
    }
}
extension UIImage {
    public func imageAtRect(rect: CGRect) -> UIImage {
        let imageRef: CGImageRef = CGImageCreateWithImageInRect(self.CGImage, rect)!
        let subImage = UIImage(CGImage: imageRef)
        return subImage
    }
}
The problem is that in setScanImage the desired area is accurately cropped and displayed, but when the animateScan method is called, a different area of the same image is cropped (and displayed), even though cropRect is the same and the size of originalImage is exactly the same.
Any ideas, guys?
By the way if I display originalImage without cropping it everything works smoothly.

So finally, after approximately 10 hours of net time (and a lot of help from the Stack Overflow community :-), I managed to fix the problem.
In the function addOnePointToImage, in this line:
UIGraphicsBeginImageContextWithOptions(sourceImage.size, true, 0)
change the last argument (which stands for scale) from 0 to 1:
UIGraphicsBeginImageContextWithOptions(sourceImage.size, true, 1)
That totally resolves the issue.
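Why this works (my explanation, not part of the original answer): with scale 0, UIGraphicsBeginImageContextWithOptions adopts the device's screen scale, so on a Retina device the redrawn originalImage comes back with scale 2 or 3, while the first crop ran against a scale-1 image. CGImageCreateWithImageInRect always operates on the pixel grid of the backing CGImage, so a point-based cropRect selects a different area once the scale changes. An alternative fix is to convert the rect to pixels before cropping; the helper below is a sketch (pixelCropRect is my name, not an existing API):

```swift
import Foundation // CGRect and CGFloat come with Foundation / CoreGraphics

/// Converts a crop rect expressed in points into the pixel coordinates of the
/// backing CGImage. CGImage-based cropping always works in pixels, so on a
/// 2x or 3x image a point-based rect selects the wrong region unless it is
/// scaled first.
func pixelCropRect(for pointRect: CGRect, imageScale: CGFloat) -> CGRect {
    return CGRect(x: pointRect.origin.x * imageScale,
                  y: pointRect.origin.y * imageScale,
                  width: pointRect.size.width * imageScale,
                  height: pointRect.size.height * imageScale)
}
```

In imageAtRect you would then crop with pixelCropRect(for: rect, imageScale: self.scale) and pass self.scale back into the resulting UIImage's initializer, so both code paths agree regardless of the context's scale.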

Related

Cropping AVCapturePhoto to overlay rectangle displayed on screen

I am trying to take a picture of a thin piece of metal, cropped to the outline displayed on the screen. I have seen almost every other post on here, but nothing has got it for me yet. This image will then be used for analysis by a library. I can get some cropping to happen, but never to the rectangle displayed. I have tried rotating the image before cropping, and calculating the rect based on the rectangle on screen.
Here is my capture code. PreviewView is the container, videoLayer is for the AVCapture video.
// Photo capture delegate
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    guard let imgData = photo.fileDataRepresentation(), let uiImg = UIImage(data: imgData), let cgImg = uiImg.cgImage else {
        return
    }
    print("Original image size: ", uiImg.size, "\nCGHeight: ", cgImg.height, " width: ", cgImg.width)
    print("Orientation: ", uiImg.imageOrientation.rawValue)
    guard let img = cropImage(image: uiImg) else {
        return
    }
    showImage(image: img)
}

func cropImage(image: UIImage) -> UIImage? {
    print("Image size before crop: ", image.size)
    // Get the croppedRect from the function below
    let croppedRect = calculateRect(image: image)
    guard let imgRet = image.cgImage?.cropping(to: croppedRect) else {
        return nil
    }
    return UIImage(cgImage: imgRet)
}

func calculateRect(image: UIImage) -> CGRect {
    let originalSize: CGSize
    let visibleLayerFrame = self.rectangleView.bounds
    // Calculate the rect from the rectangleView to translate to the image
    let metaRect = self.videoLayer.metadataOutputRectConverted(fromLayerRect: visibleLayerFrame)
    print("MetaRect: ", metaRect)
    // Check orientation
    if image.imageOrientation == UIImage.Orientation.left || image.imageOrientation == UIImage.Orientation.right {
        originalSize = CGSize(width: image.size.height, height: image.size.width)
    } else {
        originalSize = image.size
    }
    let cropRect = CGRect(x: metaRect.origin.x * originalSize.width,
                          y: metaRect.origin.y * originalSize.height,
                          width: metaRect.size.width * originalSize.width,
                          height: metaRect.size.height * originalSize.height).integral
    print("Calculated Rect: ", cropRect)
    return cropRect
}

func showImage(image: UIImage) {
    if takenImage != nil {
        takenImage = nil
    }
    takenImage = UIImageView(image: image)
    takenImage.frame = CGRect(x: 10, y: 50, width: 400, height: 1080)
    takenImage.contentMode = .scaleAspectFit
    print("Cropped Image Size: ", image.size)
    self.previewView.addSubview(takenImage)
}
And here is along the lines of what I keep getting.
What am I screwing up?
I managed to solve the issue for my use case.
private func cropToPreviewLayer(from originalImage: UIImage, toSizeOf rect: CGRect) -> UIImage? {
    guard let cgImage = originalImage.cgImage else { return nil }
    // This previewLayer is the AVCaptureVideoPreviewLayer on which resizeAspectFill and portrait videoOrientation have been set.
    let outputRect = previewLayer.metadataOutputRectConverted(fromLayerRect: rect)
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let cropRect = CGRect(x: outputRect.origin.x * width,
                          y: outputRect.origin.y * height,
                          width: outputRect.size.width * width,
                          height: outputRect.size.height * height)
    if let croppedCGImage = cgImage.cropping(to: cropRect) {
        return UIImage(cgImage: croppedCGImage, scale: 1.0, orientation: originalImage.imageOrientation)
    }
    return nil
}
Usage of this code in my case:
let rect = CGRect(x: 25, y: 150, width: 325, height: 230)
let croppedImage = self.cropToPreviewLayer(from: image, toSizeOf: rect)
self.imageView.image = croppedImage
The world of UIKit has the TOP LEFT corner as 0,0.
The 0,0 point in the AVFoundation world is the BOTTOM LEFT corner.
So you have to translate by rotating 90 degrees.
That's why your image is bonkers.
Also remember that because of the origin translation the following rules apply:
X is actually up and down
Y is actually left and right
width and height are swapped
Also be aware that the UIImageView content mode setting WILL impact how your image scales. You might want to use .scaleAspectFill and NOT AspectFit if you really want to see how your image looks in the UIView.
I used this code snippet to see what was behind the curtain:
// figure out how to cut/crop this
let realImageRect = AVMakeRect(aspectRatio: image.size, insideRect: (self.cameraPreview?.frame)!)
NSLog("real image rectangle = \(realImageRect.debugDescription)")
The 'cameraPreview' reference above is the control you're using for your AV Capture Session.
Good luck!
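To make the coordinate translation described above concrete, the mapping used in calculateRect can be isolated as pure rect math (the helper name and signature are mine, a sketch rather than the asker's code):

```swift
import Foundation

/// Maps a normalized metadata-output rect (origin and size in 0...1, as
/// returned by metadataOutputRectConverted) onto the pixel grid of the
/// captured CGImage. The math is just "normalized * pixel size"; images
/// captured with a .left or .right orientation are stored rotated, so their
/// width and height must be swapped first.
func cropRectInPixels(metadataRect: CGRect,
                      imagePixelSize: CGSize,
                      isRotated90: Bool) -> CGRect {
    // For .left/.right images, the "width" UIKit reports is the pixel height.
    let size = isRotated90
        ? CGSize(width: imagePixelSize.height, height: imagePixelSize.width)
        : imagePixelSize
    return CGRect(x: metadataRect.origin.x * size.width,
                  y: metadataRect.origin.y * size.height,
                  width: metadataRect.size.width * size.width,
                  height: metadataRect.size.height * size.height).integral
}
```

With this split out, you can unit-test the crop geometry without touching AVFoundation, which makes the "cropped to the wrong rectangle" class of bug much easier to pin down.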

Swift 5: Better way/approach to add image border on photo editing app?

In case the title doesn't make sense: I'm trying to make a photo editing app where users can add a border to their photo. For now, I'm testing a white border.
Here is a gif sample of the app (see how slow the slider is; it's meant to be smooth like any other slider):
Gif sample
My approach was to render the white background at the image's size, and then render the image n% smaller to shrink it, hence the border.
But I have come to a problem: when testing on my device (iPhone 7 Plus), the slider is laggy and slow, as if the function takes a long time to compute.
Here is the code for the function, which blends the background with the foreground; the background is a plain white colour.
blendImages is a function located in my adjustmentEngine class.
func blendImages(backgroundImg: UIImage, foregroundImg: UIImage) -> Data? {
    // size variables
    let contentSizeH = foregroundImg.size.height
    let contentSizeW = foregroundImg.size.width
    // the magic: how the image will scale in the view
    let topImageH = foregroundImg.size.height - (foregroundImg.size.height * imgSizeMultiplier)
    let topImageW = foregroundImg.size.width - (foregroundImg.size.width * imgSizeMultiplier)
    let bottomImage = backgroundImg
    let topImage = foregroundImg
    let imgView = UIImageView(frame: CGRect(x: 0, y: 0, width: contentSizeW, height: contentSizeH))
    let imgView2 = UIImageView(frame: CGRect(x: 0, y: 0, width: topImageW, height: topImageH))
    // - Set content mode to what you desire
    imgView.contentMode = .scaleAspectFill
    imgView2.contentMode = .scaleAspectFit
    // - Set images
    imgView.image = bottomImage
    imgView2.image = topImage
    imgView2.center = imgView.center
    // - Create UIView
    let contentView = UIView(frame: CGRect(x: 0, y: 0, width: contentSizeW, height: contentSizeH))
    contentView.addSubview(imgView)
    contentView.addSubview(imgView2)
    // - Set size
    let size = CGSize(width: contentSizeW, height: contentSizeH)
    UIGraphicsBeginImageContextWithOptions(size, true, 0)
    contentView.drawHierarchy(in: contentView.bounds, afterScreenUpdates: true)
    guard let i = UIGraphicsGetImageFromCurrentImageContext(),
          let data = i.jpegData(compressionQuality: 1.0)
    else { return nil }
    UIGraphicsEndImageContext()
    return data
}
Below is the code I call to render it into the UIImageView:
guard let image = image else { return }
let borderColor = UIColor.white.image()
self.adjustmentEngine.borderColor = borderColor
self.adjustmentEngine.image = image
guard let combinedImageData: Data = self.adjustmentEngine.blendImages(backgroundImg: borderColor, foregroundImg: image) else {return}
let combinedImage = UIImage(data: combinedImageData)
self.imageView.image = combinedImage
This function will get the image and blend it with a new background colour for the border.
And finally, below are the codes for the slider's didChange function.
@IBAction func sliderDidChange(_ sender: UISlider) {
    print(sender.value)
    let borderColor = adjustmentEngine.borderColor
    let image = adjustmentEngine.image
    adjustmentEngine.imgSizeMultiplier = CGFloat(sender.value)
    guard let combinedImageData: Data = self.adjustmentEngine.blendImages(backgroundImg: borderColor, foregroundImg: image) else { return }
    let combinedImage = UIImage(data: combinedImageData)
    self.imageView.image = combinedImage
}
So the question is: is there a better or more optimised way to do this? Or a better approach?
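One possible direction (my sketch, not an answer from the thread): building a view hierarchy, snapshotting it with drawHierarchy, and round-tripping through JPEG Data on every slider tick is expensive. Drawing both images straight into a UIGraphicsImageRenderer context (iOS 10+) and returning a UIImage avoids all three costs:

```swift
import UIKit

/// Draws the foreground image, inset by `multiplier`, centered over a
/// background image. No intermediate views, no drawHierarchy, and no JPEG
/// round-trip, so it is cheap enough to call from a slider callback.
/// `multiplier` follows the question's imgSizeMultiplier semantics:
/// 0 = no border, larger values = thicker border.
func blendImagesDirectly(background: UIImage, foreground: UIImage, multiplier: CGFloat) -> UIImage {
    let size = foreground.size
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        // Fill the full canvas with the background (border) image.
        background.draw(in: CGRect(origin: .zero, size: size))
        // Shrink the foreground and center it, leaving the border visible.
        let w = size.width * (1 - multiplier)
        let h = size.height * (1 - multiplier)
        let rect = CGRect(x: (size.width - w) / 2, y: (size.height - h) / 2, width: w, height: h)
        foreground.draw(in: rect)
    }
}
```

In sliderDidChange you would then assign the returned UIImage directly to imageView.image, skipping the Data round-trip entirely; rendering at the view's size while scrubbing and at full resolution only on touch-up would help further.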

Memory leak when resizing UIImage

I've read through multiple threads concerning the topic but my problem still persists.
When I'm resizing an Image with following code:
extension UIImage {
    func thumbnailWithMaxSize(image: UIImage, maxSize: CGFloat) -> UIImage {
        let width = image.size.width
        let height = image.size.height
        var sizeX: CGFloat = 0
        var sizeY: CGFloat = 0
        if width > height {
            sizeX = maxSize
            sizeY = maxSize * height / width
        } else {
            sizeY = maxSize
            sizeX = maxSize * width / height
        }
        UIGraphicsBeginImageContext(CGSize(width: sizeX, height: sizeY))
        let rect = CGRect(x: 0.0, y: 0.0, width: sizeX, height: sizeY)
        UIGraphicsBeginImageContext(rect.size)
        draw(in: rect)
        let thumbnail = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return thumbnail
    }
}

override func viewDidLoad() {
    super.viewDidLoad()
    let lionImage = UIImage(named: "lion.jpg")!
    var thumb = UIImage()
    autoreleasepool {
        thumb = lionImage.thumbnailWithMaxSize(image: lionImage, maxSize: 2000)
    }
    myImageView.image = thumb
}
...the memory is not released. So when I navigate through multiple ViewControllers (e.g. with a PageViewController) I end up getting memory warnings and the app eventually crashes.
I also tried to load the image via UIImage(contentsOfFile: path) without success.
Any suggestions?
I noticed your code begins two contexts but only ends one.
Here's my extension, which is basically the same as yours. Since I'm not having memory issues, that mismatch looks like the culprit.
extension UIImage {
    public func resizeToRect(_ size: CGSize) -> UIImage {
        UIGraphicsBeginImageContext(size)
        self.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return resizedImage!
    }
}
The problem is this:
UIGraphicsGetImageFromCurrentImageContext()
returns an autoreleased UIImage. The autorelease pool holds on to this image until your code returns control to the run loop, which you do not do for a long time. To solve this problem, make thumb optional and set it to nil after using it.
var thumb: UIImage? = nil
var myImage: UIImage? = nil
autoreleasepool {
    thumb = lionImage.thumbnailWithMaxSize(image: lionImage, maxSize: 2000)
    // Re-encode the thumbnail so the autoreleased original can be freed with the pool.
    myImage = UIImage(data: UIImagePNGRepresentation(thumb!)!)
    thumb = nil
}
myImageView.image = myImage
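As a side note (my addition, not from the thread): on iOS 10+ UIGraphicsImageRenderer pairs context creation and teardown for you, so the unbalanced begin/end bug above cannot happen. The aspect-fit size math can also be pulled into a pure helper, which keeps the drawing code trivial:

```swift
import Foundation

/// Computes the size of a thumbnail that fits `size` inside a square of
/// `maxSize` points while preserving aspect ratio (the same math as
/// thumbnailWithMaxSize above, without any drawing).
func thumbnailSize(for size: CGSize, maxSize: CGFloat) -> CGSize {
    if size.width > size.height {
        return CGSize(width: maxSize, height: maxSize * size.height / size.width)
    } else {
        return CGSize(width: maxSize * size.width / size.height, height: maxSize)
    }
}
```

The drawing side then becomes a single UIGraphicsImageRenderer(size:) call whose image { } closure draws the source into a rect of that size, with no context to forget to end.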

After cropping images in Swift I'm getting results tilted with 90 degrees - why?

I'm using a nice GitHub plugin for Swift, https://github.com/budidino/ShittyImageCrop, which is responsible for cropping the image.
I need aspect ratio 4:3, so I call this controller like this:
let shittyVC = ShittyImageCropVC(frame: (self.navigationController?.view.frame)!, image: image!, aspectWidth: 3, aspectHeight: 4)
self.navigationController?.present(shittyVC, animated: true, completion: nil)
Now, when I provide a horizontal image (wider than tall) - the cropped result is fine - I see a photo with a 4:3 aspect ratio as output.
But when I provide a vertical image and try to crop it - I'm seeing tilted output. So for example, when a normal photo is like this:
vertical - the tilted one looks like this:
(sorry for the low res here). Why does it get shifted to one side?
I suspect the problem might be somewhere in the logic of the crop-button:
func tappedCrop() {
    print("tapped crop")
    var imgX: CGFloat = 0
    if scrollView.contentOffset.x > 0 {
        imgX = scrollView.contentOffset.x / scrollView.zoomScale
    }
    let gapToTheHole = view.frame.height / 2 - holeRect.height / 2
    var imgY: CGFloat = 0
    if scrollView.contentOffset.y + gapToTheHole > 0 {
        imgY = (scrollView.contentOffset.y + gapToTheHole) / scrollView.zoomScale
    }
    let imgW = holeRect.width / scrollView.zoomScale
    let imgH = holeRect.height / scrollView.zoomScale
    print("IMG x: \(imgX) y: \(imgY) w: \(imgW) h: \(imgH)")
    let cropRect = CGRect(x: imgX, y: imgY, width: imgW, height: imgH)
    let imageRef = img.cgImage!.cropping(to: cropRect)
    let croppedImage = UIImage(cgImage: imageRef!)
    let path: String = NSTemporaryDirectory() + "tempFile.jpeg"
    if let data = UIImageJPEGRepresentation(croppedImage, 0.95) { // compression quality
        try? data.write(to: URL(fileURLWithPath: path), options: [.atomic])
    }
    self.dismiss(animated: true, completion: nil)
}
ShittyImageCrop saves cropped images directly to your album, and I couldn't replicate your issue using vertical images.
I see you used UIImageJPEGRepresentation instead of UIImageWriteToSavedPhotosAlbum from ShittyImageCrop, and it seems other people also have problems with image rotation after using UIImageJPEGRepresentation.
Look up "iOS UIImagePickerController result image orientation after upload" and "iOS JPEG images rotated 90 degrees".
EDIT
try implementing fixOrientation() from https://stackoverflow.com/a/27775741/611879
add fixOrientation():
func fixOrientation(img: UIImage) -> UIImage {
    if img.imageOrientation == .up {
        return img
    }
    UIGraphicsBeginImageContextWithOptions(img.size, false, img.scale)
    let rect = CGRect(x: 0, y: 0, width: img.size.width, height: img.size.height)
    img.draw(in: rect)
    let normalizedImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return normalizedImage
}
and then do it before using UIImageJPEGRepresentation:
if let data = UIImageJPEGRepresentation(fixOrientation(img: croppedImage), 0.95) {
    try? data.write(to: URL(fileURLWithPath: path), options: [.atomic])
}
EDIT 2
please edit the init method of ShittyImageCrop by replacing img = image with:
if image.imageOrientation != .up {
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    var rect = CGRect.zero
    rect.size = image.size
    image.draw(in: rect)
    img = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
} else {
    img = image
}

View with Custom design

I need to design a single view like the attached design in Swift, and it has to be visible in all my collection view cells. How can I achieve this? Does anyone have an idea?
I have tried it in a test project. This is my way:
Open Photoshop or a similar tool and make a picture with a translucent background.
Use the PS tools to draw a figure the way you want it.
Save it as a PNG. Open Xcode. Put a UIImageView into your UIViewController. Put the PNG into your Assets folder and set this image as the image for the UIImageView. Set the constraints.
Put the following code into the UIViewController.swift file:
import UIKit
class ViewController: UIViewController {
    @IBOutlet weak var testImageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let tap = UITapGestureRecognizer(target: self, action: #selector(doubleTapped))
        tap.numberOfTapsRequired = 1
        view.addGestureRecognizer(tap)
    }

    func doubleTapped() {
        let image = UIImage(named: "test")
        testImageView.image = image?.maskWithColor(color: UIColor.blue)
    }
}
extension UIImage {
func maskWithColor(color: UIColor) -> UIImage? {
let maskImage = self.cgImage
let width = self.size.width
let height = self.size.height
let bounds = CGRect(origin: CGPoint(x: 0,y :0), size: CGSize(width: width, height: height))
let colorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
let bitmapContext = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue) //needs rawValue of bitmapInfo
bitmapContext!.clip(to: bounds, mask: maskImage!)
bitmapContext!.setFillColor(color.cgColor)
bitmapContext!.fill(bounds)
//is it nil?
if let cImage = bitmapContext!.makeImage() {
let coloredImage = UIImage(cgImage: cImage)
return coloredImage
} else {
return nil
}
}
}
Start the app and tap the display. The color changes from black to blue once tapped. Now you should have all the tools to do whatever you want. The code is in Swift 3.
You can put this UIImageView into your UICollectionViewCell and set the UIColor with the function provided.
And here is a function to set a random UIColor.
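(The snippet itself didn't make it into the original post; below is a minimal sketch of such a helper. The name random() is mine, and the arc4random-based math is chosen to stay Swift 3 compatible, matching the rest of the answer.)

```swift
import UIKit

extension UIColor {
    /// Returns an opaque random color; pair it with maskWithColor(color:) above.
    static func random() -> UIColor {
        // arc4random_uniform(256) yields 0...255; divide to get a 0...1 component.
        let r = CGFloat(arc4random_uniform(256)) / 255.0
        let g = CGFloat(arc4random_uniform(256)) / 255.0
        let b = CGFloat(arc4random_uniform(256)) / 255.0
        return UIColor(red: r, green: g, blue: b, alpha: 1.0)
    }
}
```

Usage would be testImageView.image = image?.maskWithColor(color: .random()) in the tap handler.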