I have a UIScrollView which contains a UIImageView. On top of that is a fixed box; the user can move the image around underneath it so that the portion inside the box is what gets cropped.
This screenshot explains it better:
So they can scroll the image around until the portion they want is inside that box.
I then want to be able to crop the scrollView/UIImage to exactly that size and store the cropped image.
It shouldn't be very hard, but I've spent ages trying screenshots, UIGraphicsContext, etc. and can't seem to get anything to work.
Thanks for the help.
I finally figured out how to get it to work. Here is the code:
func croppedImage() -> UIImage {
    // The crop box shown over the scroll view is 280 x 280 points.
    let cropSize = CGSize(width: 280, height: 280)

    // Ratio between the full-size image and the on-screen image view.
    let scale = (imageView.image?.size.height)! / imageView.frame.height
    let cropSizeScaled = CGSize(width: cropSize.width * scale, height: cropSize.height * scale)

    if #available(iOS 10.0, *) {
        let r = UIGraphicsImageRenderer(size: cropSizeScaled)
        // Offset the image so the portion under the crop box lands at the origin.
        let x = -scrollView.contentOffset.x * scale
        let y = -scrollView.contentOffset.y * scale
        return r.image { _ in
            imageView.image!.draw(at: CGPoint(x: x, y: y))
        }
    } else {
        return UIImage()
    }
}
So it first calculates the scale factor between the actual image and the imageView.
Then it creates a CGSize for the crop box shown in the photo. However, the width and height must be multiplied by the scale factor (e.g. 280 * 6.5).
You must check that the device is running iOS 10.0 or later, because UIGraphicsImageRenderer isn't available before that - if not, it won't work.
Initialise the renderer with the scaled crop box size.
The image must then be offset, and this is calculated by taking the scrollView's content offset, negating it, and multiplying by the scale factor.
Then return the image drawn at that point!
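If the crop box is itself a view, the hard-coded 280 x 280 can be derived from it instead. A minimal sketch, assuming the overlay box from the screenshot is a view called cropBoxView (a name that isn't in the original code):

// Hypothetical: cropBoxView stands in for the 280 x 280 overlay box.
let cropSize = cropBoxView.bounds.size
let scale = imageView.image!.size.height / imageView.frame.height
let cropSizeScaled = CGSize(width: cropSize.width * scale,
                            height: cropSize.height * scale)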
I'm trying to remove the top part of an image by cropping, but the result is unexpected.
The code used:
extension UIImage {
    class func removeStatusbarFromScreenshot(_ screenshot: UIImage) -> UIImage {
        let statusBarHeight: CGFloat = 44.0
        let newHeight = screenshot.size.height - statusBarHeight
        let newSize = CGSize(width: screenshot.size.width, height: newHeight)
        let newOrigin = CGPoint(x: 0, y: statusBarHeight)
        let imageRef: CGImage = screenshot.cgImage!.cropping(to: CGRect(origin: newOrigin, size: newSize))!
        let cropped: UIImage = UIImage(cgImage: imageRef)
        return cropped
    }
}
My logic is that I need to make the image 44 px shorter and move the origin's y down by 44 px, but it ends up creating a much smaller image of just the top-left corner.
The only way I can get it to work as expected is by multiplying the width by 2 and the height by 2.5 in newSize, but that also doubles the size of the image produced.
That doesn't make much sense anyway. Can someone help me make it work without magic values?
There are two main problems with what you're doing:
A UIImage has a scale (usually tied to resolution of your device's screen), but a CGImage does not.
Different devices have different "status bar" heights. In general, what you want to cut off from the top is not the status bar but the safe area. The top of the safe area is where your content starts.
Because of this:
You are wrong to talk about 44 px. There are no pixels here. Pixels are physical atomic illuminations on your screen. In code, there are points. Points are independent of the scale (and the scale is the multiplier between points and pixels).
You are wrong to talk about the number 44 itself as if it were hard-coded. You should get the top of the safe area instead.
By crossing into the CGImage world without taking scale into account, you lose the scale information, because CGImage knows nothing of scale.
By crossing back into the UIImage world without taking scale into account, you end up with a UIImage with a resolution of 1, which may not be the resolution of the original UIImage.
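As a hedged aside (not the approach recommended below): if you do round-trip through CGImage, you can at least hand the scale and orientation back explicitly when re-wrapping:

// Re-wrap the cropped CGImage while preserving the original's
// scale and orientation instead of defaulting to scale 1.
let cropped = UIImage(cgImage: imageRef,
                      scale: screenshot.scale,
                      orientation: screenshot.imageOrientation)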
The simplest solution is not to do any of what you are doing. First, get the height of the safe area; call it h. Then just draw the snapshot image into a graphics image context that is the same scale as your image (which, if you play your cards right, it will be automatically), but is h points shorter than the height of your image — and draw it with its y origin at -h, thus cutting off the safe area. Extract the resulting image and you're all set.
Example! This code comes from a view controller. First, I'll take a screenshot of my own device's current screen (this view controller's view) as my app runs:
let renderer = UIGraphicsImageRenderer(size: view.bounds.size)
let screenshot = renderer.image { context in
    view.layer.render(in: context.cgContext)
}
Now, I'll cut the safe area off the top of that screenshot:
let h = view.safeAreaInsets.top
let size = screenshot.size
let r = UIGraphicsImageRenderer(
    size: .init(width: size.width, height: size.height - h)
)
let result = r.image { _ in
    screenshot.draw(at: .init(x: 0, y: -h))
}
Experimentation will confirm that this works perfectly on every device, regardless of whether it has a bezel and regardless of its screen resolution: the top of the resulting image, result, is the top of your actual content.
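If you need this in more than one place, the two steps collapse naturally into a helper. A minimal sketch, assuming you pass the safe-area height in yourself (cuttingTop is a name I made up):

extension UIImage {
    // Returns a copy with `topInset` points cut off the top,
    // preserving this image's scale.
    func cuttingTop(_ topInset: CGFloat) -> UIImage {
        let format = UIGraphicsImageRendererFormat.default()
        format.scale = scale // keep the original resolution
        let renderer = UIGraphicsImageRenderer(
            size: CGSize(width: size.width, height: size.height - topInset),
            format: format)
        return renderer.image { _ in
            draw(at: CGPoint(x: 0, y: -topInset))
        }
    }
}

Called as screenshot.cuttingTop(view.safeAreaInsets.top), this reproduces the result above.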
I was able to identify squares in an image using VNDetectRectanglesRequest. Now I want to store those rectangles as separate images (UIImage or CGImage). Below is what I tried.
let rectanglesDetection = VNDetectRectanglesRequest { request, error in
    rectangles = request.results as! [VNRectangleObservation]
    rectangles.sort { $0.boundingBox.origin.y > $1.boundingBox.origin.y }
    for rectangle in rectangles {
        let rect = rectangle.boundingBox
        let imageRef = cgImage.cropping(to: rect)
        let checkBoxImage = UIImage(cgImage: imageRef!, scale: image!.scale, orientation: image!.imageOrientation)
        checkBoxImages.append(checkBoxImage)
    }
}
Can anybody point out what's wrong or what should be the best approach?
Update 1
At this stage, I'm testing with an image that I added to the assets.
With this image I get 7 rectangle observations: one for each cell and one for the table margin.
My task is to identify the text inside each rectangle, and my approach is to send a VNRecognizeTextRequest for each rectangle that has been identified. My real scenario is a little more complicated than this, but I want to at least achieve this before going forward.
Update 2
for rectangle in rectangles {
    let trueX = rectangle.boundingBox.minX * image!.size.width
    let trueY = rectangle.boundingBox.minY * image!.size.height
    let width = rectangle.boundingBox.width * image!.size.width
    let height = rectangle.boundingBox.height * image!.size.height
    print("x = ", trueX, " y = ", trueY, " width = ", width, " height = ", height)
    let cropZone = CGRect(x: trueX, y: trueY, width: width, height: height)
    guard let cutImageRef: CGImage = image?.cgImage?.cropping(to: cropZone)
    else {
        return
    }
    let croppedImage: UIImage = UIImage(cgImage: cutImageRef)
    croppedImages.append(croppedImage)
}
My image's width and height are:
width = 406.0, height = 368.0
I've included my debug interface so you can get a proper understanding.
As @Lasse mentioned, this is my actual issue with the screenshots.
This is just a guess, since you didn't state what the actual problem is, but you're probably getting a zero-sized image for each VNRectangleObservation.
The reason: Vision uses a normalized coordinate space from 0.0 to 1.0 with a lower-left origin.
So in order to get the correct rectangle of your original image, you need to convert the rect from normalized space to image space. Luckily there is VNImageRectForNormalizedRect(_:_:_:) to do just that.
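A minimal sketch of the conversion, reusing the names from Update 2; it assumes the UIImage's point size times its scale matches the pixel size of its backing CGImage. Note that CGImage cropping uses a top-left origin, so the y coordinate also has to be flipped:

let widthPx = Int(image!.size.width * image!.scale)
let heightPx = Int(image!.size.height * image!.scale)
for rectangle in rectangles {
    // Normalized space (0...1, lower-left origin) -> pixel space.
    var cropZone = VNImageRectForNormalizedRect(rectangle.boundingBox, widthPx, heightPx)
    // Flip y for CGImage's top-left origin.
    cropZone.origin.y = CGFloat(heightPx) - cropZone.origin.y - cropZone.height
    if let cutImageRef = image?.cgImage?.cropping(to: cropZone) {
        croppedImages.append(UIImage(cgImage: cutImageRef))
    }
}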
I'm working on an iOS app which should enable users to create Instagram story photos and export them to Instagram. Basically an app like Unfold, Stellar, Chroma Stories... I've prepared a UI where the user can select from prepared templates and add their own photos with filters, labels etc.
My question is: what is the best way to export the created UIView to a bigger image?
I mean, how do I get the best quality and sharp pixels for labels etc.?
The template view with its subviews (added photos, labels...) takes up roughly half of the device's screen, but I need a bigger size for the exported image.
Currently I use:
func makeImageFromView() -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    let size = CGSize(width: 1080 / format.scale, height: 1920 / format.scale)
    let renderer = UIGraphicsImageRenderer(size: size, format: format)
    let image = renderer.image { (ctx) in
        templateView.drawHierarchy(in: CGRect(origin: .zero, size: size), afterScreenUpdates: true)
    }
    return image
}
The resulting image has a size of 1080 x 1920, but the labels aren't sharp.
Do I need to somehow scale the photo and font sizes before rendering the image?
Thanks!
So actually yes, before capturing the image I need to scale the whole view and its subviews. Here are my findings (maybe obvious things, but it took me a while to realize them; I'll be glad for any improvements).
Rendering an image of the same size
When you want to capture a UIView as an image, you can simply use this function. The resulting image will have the same size as the view (scaled 2x / 3x depending on the actual device):
func makeImageFrom(_ desiredView: MyView) -> UIImage {
    let size = CGSize(width: desiredView.bounds.width, height: desiredView.bounds.height)
    let renderer = UIGraphicsImageRenderer(size: size)
    let image = renderer.image { (ctx) in
        desiredView.drawHierarchy(in: CGRect(origin: .zero, size: size), afterScreenUpdates: true)
    }
    return image
}
Rendering an image of a different size
But what do you do when you want a specific size for your exported image?
In my use case I wanted to render an image at the final size (1080 x 1920), but the view I wanted to capture had a smaller size (in my case 275 x 487). If you render it as-is, there is a loss in quality.
If you want to avoid that and preserve sharp labels and other subviews, you need to scale the view, ideally to the desired size. In my case, from 275 x 487 to 1080 x 1920.
func makeImageFrom(_ desiredView: MyView) -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    // We need to divide the desired size by the renderer scale,
    // otherwise the output comes out @2x / @3x larger.
    let size = CGSize(width: 1080 / format.scale, height: 1920 / format.scale)
    let renderer = UIGraphicsImageRenderer(size: size, format: format)
    let image = renderer.image { (ctx) in
        // remake constraints or change the size of desiredView to 1080 x 1920
        // handle its subviews (update font sizes etc.)
        // ...
        desiredView.drawHierarchy(in: CGRect(origin: .zero, size: size), afterScreenUpdates: true)
        // undo the size changes
        // ...
    }
    return image
}
My approach
But because I didn't want to mess with the size of the view displayed to the user, I took a different approach and used a second view which isn't shown to the user. That means that just before capturing the image, I prepare a "duplicated" view with the same content but a bigger size. I don't add it to the view controller's view hierarchy, so it's not visible.
Important note!
You really need to take care of the subviews. That means you have to increase the font sizes, update the positions of moved subviews (for example their center), etc.!
Here are just a few lines to illustrate that:
// 1. Create bigger view
let hdView = MyView()
hdView.frame = CGRect(x: 0, y: 0, width: 1080, height: 1920)
// 2. Load content according to the original view (desiredView)
// set text, images...
// 3. Scale subviews
// Find out what scale we need
let scaleMultiplier: CGFloat = 1080 / desiredView.bounds.width // 1080 / 275 = 3.927 ...
// Scale everything, for examples label's font size
[label1, label2].forEach { $0.font = UIFont.systemFont(ofSize: $0.font.pointSize * scaleMultiplier, weight: .bold) }
// or subview's center
subview.center = subview.center.applying(.init(scaleX: scaleMultiplier, y: scaleMultiplier))
// 4. Render image from hdView
let hdImage = makeImageFrom(hdView)
Difference in quality from real usage – zoomed to the label:
I'm trying to position a button at a fixed position inside a UIImageView (aspect fit) which itself is inside a UIScrollView. This worked perfectly on my first try, when the UIScrollView and the UIImageView containers both covered the whole screen: the button was pinned to a certain location of the image and stayed in position during zooming. You can see the result in the image below.
As you can see, there are white borders above and below the image (related to aspect fit), therefore I had to do some calculation to get the margin from the top in order to calculate the "real" y position of the red square. My code looks like this:
let originalImageSize = CGSize(width: image.size.width, height: image.size.height)
let aspectRatio = image.size.width / image.size.height
let requiredHeight = self.view.bounds.size.width / aspectRatio
let screenHeight = self.view.bounds.height
let marginTop = (screenHeight - requiredHeight) / 2
let renderedImageSize = CGSize(width: self.view.bounds.width, height: requiredHeight)

let x: Double = 0
let y: Double = 0
let button = UIButton()
button.frame = CGRect(x: Double(renderedImageSize.width / originalImageSize.width) * x,
                      y: Double(renderedImageSize.height / originalImageSize.height) * y + Double(marginTop),
                      width: 10, height: 10)
button.backgroundColor = UIColor.red
imageView.addSubview(button)
As you see, I calculated marginTop to get the real y position. The square is perfectly located at x: 0 and y: 0 (relative to the image). So far so good; this example worked perfectly.
Now I created a new view which contains a navigation bar and a tab bar. The scrollView sits in between and no longer covers the whole screen, only the area between the navigation bar and the tab bar. The imageView has the same size as my scrollView, pretty much the same as in the example above. Now I tried to set my button to a specific location again, but this time there is an offset on the y axis of exactly 6 pixels, and I have no idea what I'm doing wrong. To make it even worse, when testing on other devices the offset on the y axis is even bigger than 6 pixels, while the first example works perfectly across all devices I tested. You can see the result with the "wrong" y-axis value here.
I changed my code to the following, based on the fact that sizes should now be calculated from the "new" scrollView size.
let originalImageSize = CGSize(width: image.size.width, height: image.size.height)
let aspectRatio = image.size.width / image.size.height
let requiredHeight = scrollView.bounds.size.width / aspectRatio
let screenHeight = scrollView.bounds.height
let marginTop = (screenHeight - requiredHeight) / 2
let renderedImageSize = CGSize(width: scrollView.bounds.width, height: requiredHeight)

let x: Double = 0
let y: Double = 0
let button = UIButton()
button.frame = CGRect(x: Double(renderedImageSize.width / originalImageSize.width) * x,
                      y: Double(renderedImageSize.height / originalImageSize.height) * y + Double(marginTop),
                      width: 10, height: 10)
button.backgroundColor = UIColor.red
imageView.addSubview(button)
A quick workaround would be something like this, but it's hacky as hell and doesn't work for other device sizes, and I obviously want to learn how to do it the right way:
[...] y: Double(renderedImageSize.height/originalImageSize.height) * y + Double(marginTop) - 6, [...]
I've been sitting on this for hours now and still don't have any idea why the y axis is off, even though the calculation of the top margin should be right, and why the offset is even bigger on other devices. I'm thankful for any advice, as I'm pretty new to iOS development and I guess I'm misunderstanding something related to calculating correct sizes.
Turns out the solution to my problem is rather simple. I first started wondering when I saw that scrollView.bounds.size.height returned a bigger value than the actual screen height on devices smaller than the iPhone X, which seemed pretty strange. Then I tried to figure out how to find the "real" size of my scrollView on different devices, because according to the simulator it is obviously not bigger than the whole screen.
So instead of doing the above calculation inside viewDidLoad(), I now do all my calculations in viewDidLayoutSubviews(), and this solved the whole problem.
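A minimal sketch of that change, assuming the button-placement code above lives in a helper called placeButton() (a name I made up):

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // Bounds are final only after Auto Layout has run, so the marginTop
    // math from above is reliable here, unlike in viewDidLoad().
    placeButton()
}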
The only thing I'm still wondering about is why there was an offset even on the iPhone X, as that was my default template in Xcode.
I am trying to create a simple crop feature that takes into account the device's screen density and the scroll view's zoom scale.
I basically modeled it after the code in this tutorial:
https://www.youtube.com/watch?v=hz9pMw4Y2Lk
func cropImage(sender: AnyObject!) { // triggered by a button
    let myScale = UIScreen.main.scale
    let height = self.scrollView.bounds.height
    let width = self.scrollView.bounds.width

    UIGraphicsBeginImageContextWithOptions(CGSize(width: width, height: height), true, myScale)
    let offset = scrollView.contentOffset
    UIGraphicsGetCurrentContext()?.translateBy(x: -offset.x, y: -offset.y)
    scrollView.layer.render(in: UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()

    // I would like to check here if the target image is > 300 x 300 px
    if image.size.width > 300 && image.size.height > 300 {
        print("image correct")
        print(image.size)
    } else {
        print("nope")
        print(image.size)
    }
}
So far I always end up with an image whose size is bounds.height/width, which means that on a 320 pt wide device, including an 8 px leading/trailing gap, the user might never be able to create a "correct" image.
I understand why it happens, but I do not understand where I should multiply by the device scale factor and/or the zoom factor of the UIScrollView.
For example, with a camera picture imported at scroll-view zoom scale 0.0, I want to keep it at roughly 8 MP.
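For reference, a hedged sketch (my own assumption, not from the thread) of one way to keep the full resolution: crop the underlying image itself instead of re-rendering the layer, so both zoomScale and the image's own scale enter the math. It assumes imageView is the scroll view's zooming content view and displays image edge-to-edge:

func croppedImage(from image: UIImage) -> UIImage? {
    let zoom = scrollView.zoomScale
    // Image pixels per point of scroll-view content.
    let pixelsPerPoint = (image.size.width * image.scale) / (imageView.bounds.width * zoom)
    // Map the visible rect from content coordinates into image pixels.
    let visibleRect = CGRect(
        x: scrollView.contentOffset.x * pixelsPerPoint,
        y: scrollView.contentOffset.y * pixelsPerPoint,
        width: scrollView.bounds.width * pixelsPerPoint,
        height: scrollView.bounds.height * pixelsPerPoint)
    guard let cut = image.cgImage?.cropping(to: visibleRect) else { return nil }
    return UIImage(cgImage: cut, scale: image.scale, orientation: image.imageOrientation)
}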