UIScrollView crop to take device scale and zoom scale into account? - ios

I am trying to create a simple crop feature that takes device screen density and zoom scale into account.
I basically modeled it after the code in this tutorial:
https://www.youtube.com/watch?v=hz9pMw4Y2Lk
func cropImage(sender: AnyObject!) { // triggered by a button
    let myScale = UIScreen.mainScreen().scale
    let height = self.scrollView.bounds.height
    let width = self.scrollView.bounds.width
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), true, myScale)
    // Shift the context so the currently visible portion of the scroll view is rendered
    let offset = scrollView.contentOffset
    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), -offset.x, -offset.y)
    scrollView.layer.renderInContext(UIGraphicsGetCurrentContext())
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    // I would like to check here whether the target image is > 300x300 px
    // (note: image.size is in points; multiply by image.scale for pixels)
    if image.size.width > 300 && image.size.height > 300 {
        println("image correct")
        println(image.size)
    } else {
        println("nope")
        println(image.size)
    }
}
So far I always end up with an image that is bounds.width x bounds.height, which means that on a 320pt-wide device with an 8pt leading/trailing gap, the user might never be able to create a "correct" image.
I understand why it happens, but I do not understand where I should be multiplying by the device scale factor and/or the zoom factor of the UIScrollView.
For example, with a camera picture imported at a scroll-view zoom scale of 0.0, I want to keep it at roughly 8 MP.
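One way to see where both factors enter (a minimal sketch in modern Swift, not from the original question, assuming the scroll view zooms an imageView sized to its image, with orientation .up): divide the visible rect by the zoom scale to get back to image-view points, then multiply by the image's own pixels-per-point and crop the underlying CGImage. This crops at the photo's full resolution instead of rendering at screen size.
func cropVisibleRegion() -> UIImage? {
    guard let image = imageView.image, let cgImage = image.cgImage else { return nil }
    let zoom = scrollView.zoomScale
    // Visible rect, mapped from zoomed content coordinates back to
    // the image view's own (unzoomed) coordinate space
    let visible = CGRect(x: scrollView.contentOffset.x / zoom,
                         y: scrollView.contentOffset.y / zoom,
                         width: scrollView.bounds.width / zoom,
                         height: scrollView.bounds.height / zoom)
    // Image-view points -> image pixels (image.size is in points)
    let pixelsPerPoint = (image.size.width * image.scale) / imageView.bounds.width
    let cropRect = CGRect(x: visible.origin.x * pixelsPerPoint,
                          y: visible.origin.y * pixelsPerPoint,
                          width: visible.width * pixelsPerPoint,
                          height: visible.height * pixelsPerPoint)
    guard let cropped = cgImage.cropping(to: cropRect) else { return nil }
    return UIImage(cgImage: cropped, scale: image.scale, orientation: image.imageOrientation)
}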

Related

Removing statusbar from screenshot on iOS

I'm trying to remove the top part of an image by cropping, but the result is unexpected.
The code used:
extension UIImage {
    class func removeStatusbarFromScreenshot(_ screenshot: UIImage) -> UIImage {
        let statusBarHeight: CGFloat = 44.0
        let newHeight = screenshot.size.height - statusBarHeight
        let newSize = CGSize(width: screenshot.size.width, height: newHeight)
        let newOrigin = CGPoint(x: 0, y: statusBarHeight)
        let imageRef: CGImage = screenshot.cgImage!.cropping(to: CGRect(origin: newOrigin, size: newSize))!
        let cropped: UIImage = UIImage(cgImage: imageRef)
        return cropped
    }
}
My logic is that I need to make the image 44px shorter and move the origin y down by 44px, but it ends up only creating a much smaller image of the top-left corner.
The only way I can get it to work as expected is by multiplying the width by 2 and the height by 2.5 in newSize, but that also doubles the size of the image produced.
That doesn't make much sense anyway. Can someone help me make it work without magic values?
There are two main problems with what you're doing:
A UIImage has a scale (usually tied to resolution of your device's screen), but a CGImage does not.
Different devices have different "status bar" heights. In general, what you want to cut off from the top is not the status bar but the safe area. The top of the safe area is where your content starts.
Because of this:
You are wrong to talk about 44 px. There are no pixels here. Pixels are physical atomic illuminations on your screen. In code, there are points. Points are independent of the scale (and the scale is the multiplier between points and pixels).
You are wrong to talk about the number 44 itself as if it were hard-coded. You should get the top of the safe area instead.
By crossing into the CGImage world without taking scale into account, you lose the scale information, because CGImage knows nothing of scale.
By crossing back into the UIImage world without taking scale into account, you end up with a UIImage with a resolution of 1, which may not be the resolution of the original UIImage.
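(For the record, a minimal sketch, not part of this answer's recommendation, of what a scale-correct CGImage crop would look like: convert points to pixels on the way in, and restore the scale on the way out. It assumes a screenshot UIImage and access to view.safeAreaInsets.)
let topInset = view.safeAreaInsets.top      // points, not pixels
let scale = screenshot.scale                // the points-to-pixels multiplier
let cropRect = CGRect(x: 0,
                      y: topInset * scale,  // convert points to pixels
                      width: screenshot.size.width * scale,
                      height: (screenshot.size.height - topInset) * scale)
if let cg = screenshot.cgImage?.cropping(to: cropRect) {
    let cropped = UIImage(cgImage: cg, scale: scale,
                          orientation: screenshot.imageOrientation)
}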
The simplest solution is not to do any of what you are doing. First, get the height of the safe area; call it h. Then just draw the snapshot image into a graphics image context that is the same scale as your image (which, if you play your cards right, it will be automatically), but is h points shorter than the height of your image — and draw it with its y origin at -h, thus cutting off the safe area. Extract the resulting image and you're all set.
Example! This code comes from a view controller. First, I'll take a screenshot of my own device's current screen (this view controller's view) as my app runs:
let renderer = UIGraphicsImageRenderer(size: view.bounds.size)
let screenshot = renderer.image { context in
    view.layer.render(in: context.cgContext)
}
Now, I'll cut the safe area off the top of that screenshot:
let h = view.safeAreaInsets.top
let size = screenshot.size
let r = UIGraphicsImageRenderer(
    size: .init(width: size.width, height: size.height - h)
)
let result = r.image { _ in
    screenshot.draw(at: .init(x: 0, y: -h))
}
Experimentation will confirm that this works perfectly on every device, regardless of whether it has a bezel and regardless of its screen resolution: the top of the resulting image, result, is the top of your actual content.

How to convert VNRectangleObservation item to UIImage in SwiftUI

I was able to identify squares in an image using VNDetectRectanglesRequest. Now I want to store those rectangles as separate images (UIImage or CGImage). Below is what I tried.
let rectanglesDetection = VNDetectRectanglesRequest { request, error in
    rectangles = request.results as! [VNRectangleObservation]
    rectangles.sort { $0.boundingBox.origin.y > $1.boundingBox.origin.y }
    for rectangle in rectangles {
        let rect = rectangle.boundingBox
        let imageRef = cgImage.cropping(to: rect)
        let image = UIImage(cgImage: imageRef!, scale: image!.scale, orientation: image!.imageOrientation)
        checkBoxImages.append(image)
    }
}
Can anybody point out what's wrong, or what the best approach would be?
Update 1
At this stage, I'm testing with an image that I added to the assets.
With this image I get 7 rectangle observations: one for each cell and one for the table margin.
My task is to identify the text inside each rectangle, and my approach is to send a VNRecognizeTextRequest for each rectangle that has been identified. My real scenario is a little more complicated than this, but I want to at least achieve this before going forward.
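(For reference, a minimal sketch of that per-rectangle text-recognition step, assuming each detected rectangle has already been cropped into a UIImage; this is an illustration, not the asker's code.)
import Vision
import UIKit

// Run VNRecognizeTextRequest on one cropped cell image and hand back
// the top candidate string for each piece of recognized text
func recognizeText(in cellImage: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = cellImage.cgImage else { completion([]); return }
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        completion(observations.compactMap { $0.topCandidates(1).first?.string })
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}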
Update 2
for rectangle in rectangles {
    let trueX = rectangle.boundingBox.minX * image!.size.width
    let trueY = rectangle.boundingBox.minY * image!.size.height
    let width = rectangle.boundingBox.width * image!.size.width
    let height = rectangle.boundingBox.height * image!.size.height
    print("x = ", trueX, " y = ", trueY, " width = ", width, " height = ", height)
    let cropZone = CGRect(x: trueX, y: trueY, width: width, height: height)
    guard let cutImageRef: CGImage = image?.cgImage?.cropping(to: cropZone) else {
        return
    }
    let croppedImage: UIImage = UIImage(cgImage: cutImageRef)
    croppedImages.append(croppedImage)
}
My image width and height are:
width = 406.0 height = 368.0
I've included my debug interface so you can get a proper understanding.
As @Lasse mentioned, this is my actual issue with screenshots.
This is just a guess since you didn't state what the actual problem is, but probably you're getting a zero-sized image for each VNRectangleObservation.
The reason is: Vision uses a normalized coordinate space from 0.0 to 1.0 with lower left origin.
So in order to get the correct rectangle of your original image, you need to convert the rect from normalized space to image space. Luckily there is VNImageRectForNormalizedRect(_:_:_:) to do just that.
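(A minimal sketch of that conversion, reusing the cgImage and checkBoxImages from the question; note the Y-flip, since Vision's origin is bottom-left while CGImage's is top-left.)
let width = cgImage.width
let height = cgImage.height
for observation in rectangles {
    // Normalized space (0...1, bottom-left origin) -> pixel space
    var rect = VNImageRectForNormalizedRect(observation.boundingBox, width, height)
    // Flip Y for CGImage's top-left origin
    rect.origin.y = CGFloat(height) - rect.origin.y - rect.height
    if let cropped = cgImage.cropping(to: rect) {
        checkBoxImages.append(UIImage(cgImage: cropped))
    }
}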

Crop UIImage to square portion

I have a UIScrollView which contains a UIImage. On top of that is a box; the user can move the image around so that the portion inside the box is what gets cropped.
This screenshot explains it better:
So they can scroll the image around until the portion they want is inside that box.
I then want to be able to crop the scrollView/UIImage to exactly that size and store the cropped image.
It shouldn't be very hard, but I've spent ages trying screenshots, UIGraphicsContext, etc. and can't seem to get anything to work.
Thanks for the help.
I finally figured out how to get it to work. Here is the code:
func croppedImage() -> UIImage {
    let cropSize = CGSize(width: 280, height: 280)
    let scale = (imageView.image?.size.height)! / imageView.frame.height
    let cropSizeScaled = CGSize(width: cropSize.width * scale, height: cropSize.height * scale)
    if #available(iOS 10.0, *) {
        let r = UIGraphicsImageRenderer(size: cropSizeScaled)
        let x = -scrollView.contentOffset.x * scale
        let y = -scrollView.contentOffset.y * scale
        return r.image { _ in
            imageView.image!.draw(at: CGPoint(x: x, y: y))
        }
    } else {
        return UIImage()
    }
}
So it first calculates the scale factor between the actual image and the imageView.
Then it creates a CGSize for the crop box shown in the photo. However, the width and height must be multiplied by that scale factor (e.g. 280 * 6.5).
You must check that the phone is running iOS 10.0 for UIGraphicsImageRenderer; if not, it won't work.
Initialise the renderer with the scaled crop-box size.
The image must then be offset; this is calculated by taking the scrollView's content offset, negating it, and multiplying by the scale factor.
Then return the image drawn at that point! (A pre-iOS 10 fallback is sketched below.)
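(A minimal sketch of that fallback, not from the original answer: the else branch could use the older context API with the same cropSizeScaled, x and y values computed above; passing 0 as the scale means "use the device's screen scale".)
UIGraphicsBeginImageContextWithOptions(cropSizeScaled, false, 0)
imageView.image!.draw(at: CGPoint(x: x, y: y))
let cropped = UIGraphicsGetImageFromCurrentImageContext() ?? UIImage()
UIGraphicsEndImageContext()
return cropped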

Swift: UIGraphicsBeginImageContextWithOptions scale factor set to 0 but not applied

I used to resize an image with the following code, and it used to work just fine with regard to the scale factor. Now with Swift 3 I can't figure out why the scale factor is not taken into account. The image is resized, but the scale factor is not applied. Do you know why?
let layer = self.imageview.layer
UIGraphicsBeginImageContextWithOptions(layer.bounds.size, true, 0)
layer.render(in: UIGraphicsGetCurrentContext()!)
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
print("SCALED IMAGE SIZE IS \(scaledImage!.size)")
print(scaledImage!.scale)
For example, if I take a screenshot on an iPhone 5, the image size will be 320*568. I used to get 640*1136 with the exact same code. What can cause the scale factor not to be applied?
When I print the scale of the image, it prints 1, 2 or 3 depending on the device resolution, but it is not applied to the image taken from the context.
scaledImage!.size does not return the image size in pixels.
CGImageGetWidth and CGImageGetHeight (the cgImage.width and cgImage.height properties in Swift 3) return the size in pixels.
That is image.size * image.scale.
If you want to test it, first import CoreGraphics:
let imageSize = scaledImage!.size                    // (320, 568) in points
let imageWidthInPixel = scaledImage!.cgImage!.width  // 640
let imageHeightInPixel = scaledImage!.cgImage!.height // 1136

Getting size of an image in a UIImageView

I am having trouble getting the size of an image after it has been assigned to a UIImageView programmatically.
The code below runs in a loop, and a function (getNoteImage) is used to download an image from a URL (photoUrl) and assign it to the created UIImageView. I need the height of the image so that I can calculate the spacing for the following images and labels.
var myImage: UIImageView
myImage = UIImageView(frame: CGRectMake(0, 0, UIScreen.mainScreen().bounds.width, UIScreen.mainScreen().bounds.height))
myImage.center = CGPointMake(UIScreen.mainScreen().bounds.size.width/2, imageY)
myImage.getNoteImage(photoUrl)
self.detailsView.addSubview(myImage)
myImage.contentMode = UIViewContentMode.ScaleAspectFit
imageHeight = myImage.bounds.size.height
I have tried using
imageHeight = myImage.bounds.size.height
which I read as a solution on another post, but this just returns the screen size for the device (667 on the iPhone 6 simulator).
Can anyone guide me on how I can get the correct image size ?
Thanks
As your imageView takes up the whole screen and the image is sized using aspect-fit, to get the height at which the image is displayed you will need to get the original image size using myImage.image!.size and then scale it based on myImage.bounds.size, something like:
let imageViewHeight = myImage.bounds.height
let imageViewWidth = myImage.bounds.width
let imageSize = myImage.image!.size
let scaledImageHeight = min(imageSize.height * (imageViewWidth / imageSize.width), imageViewHeight)
That should give the actual height of the image; note that image.size gives the "logical dimensions of the image", i.e. its natural size, not the size it is drawn at.
Per the official UIImage documentation, you can read the size directly from the image:
if let image = myImage.image {
    let size = image.size
    print("image size is \(size)")
} else {
    print("There is no image here..")
}
I assume your code loads the image synchronously, as I understand from your question; if not, I recommend using AlamofireImage.
With AlamofireImage you can do:
self.myImage.af_setImageWithURL(
    NSURL(string: photoUrl)!,
    placeholderImage: nil,
    filter: nil,
    imageTransition: .CrossDissolve(0.5),
    completion: { response in
        print(response.result.value) // UIImage
        if let image = response.result.value {
            let size = image.size
            print("image size is \(size)")
        }
        print(response.result.error) // NSError
    }
)
Change your code like this and try again: don't force-set the frame of the UIImageView before giving it an image; then "imageHeight = myImage.bounds.size.height" will get the real size of the image.
var myImage: UIImageView
//remove your initial frame parameters
myImage = UIImageView()
myImage.center = CGPointMake(UIScreen.mainScreen().bounds.size.width/2, imageY)
Try using myImage.image!.size.height, and also make sure the image has actually been downloaded from the URL: you should read it in a completion block and only check once the image has downloaded.
See the documentation for more details.
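(A minimal sketch of that completion-block approach in modern Swift using URLSession, not from the original answer; photoUrl is the URL string from the question.)
// Download the image, then read its size only once it has arrived
func loadImage(from urlString: String, into imageView: UIImageView,
               completion: @escaping (CGSize?) -> Void) {
    guard let url = URL(string: urlString) else { completion(nil); return }
    URLSession.shared.dataTask(with: url) { data, _, _ in
        DispatchQueue.main.async {
            guard let data = data, let image = UIImage(data: data) else {
                completion(nil)
                return
            }
            imageView.image = image
            completion(image.size) // points; multiply by image.scale for pixels
        }
    }.resume()
}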
