Removing the status bar from a screenshot on iOS

I'm trying to remove the top part of an image by cropping, but the result is unexpected.
The code used:
extension UIImage {
    class func removeStatusbarFromScreenshot(_ screenshot: UIImage) -> UIImage {
        let statusBarHeight = 44.0
        let newHeight = screenshot.size.height - statusBarHeight
        let newSize = CGSize(width: screenshot.size.width, height: newHeight)
        let newOrigin = CGPoint(x: 0, y: statusBarHeight)
        let imageRef: CGImage = screenshot.cgImage!.cropping(to: CGRect(origin: newOrigin, size: newSize))!
        let cropped: UIImage = UIImage(cgImage: imageRef)
        return cropped
    }
}
My logic is that I need to make the image smaller in height by 44px and move the origin y down by 44px, but it ends up creating a much smaller image of just the top-left corner.
The only way I can get it to work as expected is by multiplying the width by 2 and the height by 2.5 in newSize, but that also doubles the size of the produced image, which doesn't make much sense anyway. Can someone help me make it work without magic values?

There are two main problems with what you're doing:
A UIImage has a scale (usually tied to the resolution of your device's screen), but a CGImage does not.
Different devices have different "status bar" heights. In general, what you want to cut off from the top is not the status bar but the safe area. The top of the safe area is where your content starts.
Because of this:
You are wrong to talk about 44 px. There are no pixels here. Pixels are physical atomic illuminations on your screen. In code, there are points. Points are independent of the scale (and the scale is the multiplier between points and pixels).
You are wrong to talk about the number 44 itself as if it were hard-coded. You should get the top of the safe area instead.
By crossing into the CGImage world without taking scale into account, you lose the scale information, because CGImage knows nothing of scale.
By crossing back into the UIImage world without taking scale into account, you end up with a UIImage with a scale of 1, which may not be the scale of the original UIImage.
The simplest solution is not to do any of what you are doing. First, get the height of the safe area; call it h. Then just draw the snapshot image into a graphics image context that is the same scale as your image (which, if you play your cards right, it will be automatically), but is h points shorter than the height of your image — and draw it with its y origin at -h, thus cutting off the safe area. Extract the resulting image and you're all set.
Example! This code comes from a view controller. First, I'll take a screenshot of my own device's current screen (this view controller's view) as my app runs:
let renderer = UIGraphicsImageRenderer(size: view.bounds.size)
let screenshot = renderer.image { context in
    view.layer.render(in: context.cgContext)
}
Now, I'll cut the safe area off the top of that screenshot:
let h = view.safeAreaInsets.top
let size = screenshot.size
let r = UIGraphicsImageRenderer(
    size: .init(width: size.width, height: size.height - h)
)
let result = r.image { _ in
    screenshot.draw(at: .init(x: 0, y: -h))
}
Experimentation will confirm that this works perfectly on every device, regardless of whether it has a bezel and regardless of its screen resolution: the top of the resulting image, result, is the top of your actual content.
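For completeness, here is the same technique folded back into an extension shaped like the one in the question. This is a minimal sketch, not part of the answer above: croppingTop(by:) is a hypothetical name, and the caller is expected to pass in the top safe-area inset (for example view.safeAreaInsets.top from a view controller):
import UIKit

extension UIImage {
    // Returns a copy of this image with `topInset` points cut off the top.
    // `topInset` should come from the view's safeAreaInsets.top, not a hard-coded 44.
    func croppingTop(by topInset: CGFloat) -> UIImage {
        let newSize = CGSize(width: size.width, height: size.height - topInset)
        // Match the source image's scale so the output keeps its resolution.
        let format = UIGraphicsImageRendererFormat()
        format.scale = scale
        let renderer = UIGraphicsImageRenderer(size: newSize, format: format)
        return renderer.image { _ in
            // Drawing at a negative y pushes the unwanted strip above the
            // context, so it simply isn't rendered.
            draw(at: CGPoint(x: 0, y: -topInset))
        }
    }
}

// Usage, from a view controller:
// let cropped = screenshot.croppingTop(by: view.safeAreaInsets.top)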

Related

How to create an image of specific size from UIView

I'm working on an iOS app which should let users create Instagram story photos and export them to Instagram, basically an app like Unfold, Stellar, Chroma Stories... I've prepared a UI where the user can select from prepared templates and add their own photos with filters, labels, etc.
My question is: what is the best way to export the created UIView to a bigger image?
I mean, how do I get the best quality, with sharp pixels for labels, etc.?
The template view with its subviews (added photos, labels...) takes up roughly half of the device's screen, but I need a bigger size for the exported image.
Currently I use:
func makeImageFromView() -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    let size = CGSize(width: 1080 / format.scale, height: 1920 / format.scale)
    let renderer = UIGraphicsImageRenderer(size: size, format: format)
    let image = renderer.image { (ctx) in
        templateView.drawHierarchy(in: CGRect(origin: .zero, size: size), afterScreenUpdates: true)
    }
    return image
}
The resulting image has a size of 1080 x 1920, but the labels aren't sharp.
Do I need to somehow scale the photo and font sizes before rendering to an image?
Thanks!
So actually yes, before capturing the image I need to scale the whole view and its subviews. Here are my findings (maybe obvious things, but it took me a while to realize them; I'll be glad for any improvements).
Rendering image of the same size
When you want to capture a UIView as an image, you can simply use this function. The resulting image will have the same size as the view (scaled 2x / 3x depending on the actual device).
func makeImageFrom(_ desiredView: MyView) -> UIImage {
    let size = CGSize(width: desiredView.bounds.width, height: desiredView.bounds.height)
    let renderer = UIGraphicsImageRenderer(size: size)
    let image = renderer.image { (ctx) in
        desiredView.drawHierarchy(in: CGRect(origin: .zero, size: size), afterScreenUpdates: true)
    }
    return image
}
Rendering an image of a different size
But what to do, when you want a specific size for your exported image?
For my use case, I wanted to render an image at the final size (1080 x 1920), but the view I wanted to capture was smaller (in my case 275 x 487). If you do such a rendering without any adjustments, there has to be a loss in quality.
If you want to avoid that and preserve sharp labels and other subviews, you need to scale the view itself up to the desired size first. In my case, from 275 x 487 to 1080 x 1920.
func makeImageFrom(_ desiredView: MyView) -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    // We need to divide the desired size by the renderer scale, otherwise the output ends up @2x or @3x larger
    let size = CGSize(width: 1080 / format.scale, height: 1920 / format.scale)
    let renderer = UIGraphicsImageRenderer(size: size, format: format)
    let image = renderer.image { (ctx) in
        // remake constraints or change the size of desiredView to 1080 x 1920
        // handle its subviews (update font sizes etc.)
        // ...
        desiredView.drawHierarchy(in: CGRect(origin: .zero, size: size), afterScreenUpdates: true)
        // undo the size changes
        // ...
    }
    return image
}
My approach
But because I didn't want to mess with the size of the view displayed to the user, I took a different route and used a second view that isn't shown to the user. That means that just before capturing the image, I prepare a "duplicated" view with the same content but a bigger size. I don't add it to the view controller's view hierarchy, so it's never visible.
Important note!
You really need to take care of the subviews. That means you have to increase font sizes, update the positions of moved subviews (for example their centers), etc.
Here are just a few lines to illustrate that:
// 1. Create the bigger view
let hdView = MyView()
hdView.frame = CGRect(x: 0, y: 0, width: 1080, height: 1920)
// 2. Load content according to the original view (desiredView)
// set text, images...
// 3. Scale the subviews
// Find out what scale we need
let scaleMultiplier: CGFloat = 1080 / desiredView.bounds.width // 1080 / 275 = 3.927...
// Scale everything, for example a label's font size
[label1, label2].forEach { $0.font = UIFont.systemFont(ofSize: $0.font.pointSize * scaleMultiplier, weight: .bold) }
// or a subview's center
subview.center = subview.center.applying(.init(scaleX: scaleMultiplier, y: scaleMultiplier))
// 4. Render the image from hdView
let hdImage = makeImageFrom(hdView)
The difference in quality from real usage, zoomed in on a label:

Cropping UIImage to custom path and keeping correct resolution?

I have a view (blue background...) which I'll call "main" here. On main I added a UIImageView that I then rotate, pan and scale. On main I also have another subview that shows the cropping area; anything outside of it, under the darker area, needs to be cropped.
I am trying to figure out how to properly create a cropped image from this state. I want the resulting image to look like this:
I want to make sure to keep the resolution of the image.
Any idea?
I have tried to figure out how to use the layer.mask property of the UIImageView. After some feedback, I think I could have another view (B) on the blue view and add the image view to B; then I would make sure that B's frame matches the rect of the cropping-mask overlay. I think that could work? The only thing is I want to make sure I don't lose resolution.
So, earlier I tried this:
maskShape.frame = imageView.bounds
maskShape.path = UIBezierPath(rect: CGRect(x: 20, y: 20, width: 200, height: 200)).cgPath
imageView.layer.mask = maskShape
The rect was just a test rect, and the image would be cropped to that path, but I wasn't sure how to get a UIImage out of all this that keeps the full resolution of the original image.
So, I have implemented the method suggested by marco. It all works, with the exception of keeping the resolution.
I use this call to take a screenshot of the view that contains the image, and I have it clip to bounds:
public func renderToImage(afterScreenUpdates: Bool = false) -> UIImage {
    let rendererFormat = UIGraphicsImageRendererFormat.default()
    rendererFormat.opaque = isOpaque
    let renderer = UIGraphicsImageRenderer(size: bounds.size, format: rendererFormat)
    let snapshotImage = renderer.image { _ in
        drawHierarchy(in: bounds, afterScreenUpdates: afterScreenUpdates)
    }
    return snapshotImage
}
The image I get is correct, but it is not as sharp as the cropped view on screen.
How can I keep the resolution high?
In the view which holds the image you must set clipsToBounds to true. Not sure if I understood correctly, but I suppose that's your "cropping area".
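If a drawHierarchy screenshot can't be sharp enough (it rasterizes at screen resolution), one alternative, sketched here rather than taken from the answer above, is to apply the mask path to the original UIImage directly, scaling the path from view coordinates into image coordinates. cropImage(_:toViewPath:viewSize:) is a hypothetical helper, and it assumes the image fills the image view exactly (no aspect-fit letterboxing):
import UIKit

// A sketch: crop `image` to `path` (expressed in the image view's coordinate
// space) at the image's native resolution.
func cropImage(_ image: UIImage, toViewPath path: UIBezierPath, viewSize: CGSize) -> UIImage {
    // Points in the image per point in the view.
    let sx = image.size.width / viewSize.width
    let sy = image.size.height / viewSize.height
    // Move the path from view coordinates into image coordinates.
    let scaledPath = path.copy() as! UIBezierPath
    scaledPath.apply(CGAffineTransform(scaleX: sx, y: sy))
    // Render at the image's own scale so no pixels are lost.
    let format = UIGraphicsImageRendererFormat()
    format.scale = image.scale
    let renderer = UIGraphicsImageRenderer(size: image.size, format: format)
    return renderer.image { _ in
        scaledPath.addClip() // everything outside the path stays transparent
        image.draw(at: .zero)
    }
}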

Cropping UI images

I have a SceneKit view that fills my screen. My goal is to let the user take snapshots of that scene; the snapshots are not the whole screen but an inset portion shown in a UIImageView which is slightly smaller than the screen. Ideally, the user should not notice: the image on top should be identical to the scene behind it.
I have coded this up using snapshot() and cropping(to:), but as you can see in the image, the scale ends up way off (see the width of the yellow line and the size of the windows?). It's also not positioned correctly; it's somewhat down and to the left of where it should be. The upper left should be below the line of windows, but you can see it is at the roofline above them. I can't see the original snapshot because the debugger QuickLook refuses to show it.
There's not much code to it; does anyone see the problem?
let background = sceneView.snapshot().cgImage!
let cropped = background.cropping(to: overlayView.frame)
UIGraphicsBeginImageContextWithOptions(overlayView.frame.size, false, 1.0)
let context = UIGraphicsGetCurrentContext()
context!.setAlpha(0.50)
context!.draw(cropped!, in: overlayView.bounds)
let transparent = context!.makeImage()
UIGraphicsEndImageContext()
overlayView.image = UIImage(cgImage: transparent!, scale: 1.0, orientation: .downMirrored)
I have tried various scales and rects to no avail. I assume this is something very easy.
UPDATE: after several tries I was able to get QuickLook to work. The snapshot is indeed the entire background, as I would expect. But it is much larger than I expected too: it's 640 x 998, while the cropped version is 228 x 304. That explains the "zooming". This leads me to believe that the frame size of the inset view does NOT map directly to the image size. Does that ring any bells? Is there some other rect I should be using rather than overlayView.frame?
So I assume the problem is that the frame coordinates are in one set of units and the image coordinates are in another. I was able to solve the problem this way:
let croprect = CGRect(x: overlayView.frame.origin.x * 2,
                      y: overlayView.frame.origin.y * 2 - 45,
                      width: overlayView.frame.width * 2,
                      height: overlayView.frame.height * 2)
let drawrect = CGRect(x: 0, y: 0, width: overlayView.frame.width * 2, height: overlayView.frame.height * 2)
let background = sceneView.snapshot()
let cropped = background.cgImage!.cropping(to: croprect)
UIGraphicsBeginImageContextWithOptions(drawrect.size, false, 0.0)
let context = UIGraphicsGetCurrentContext()
context!.setAlpha(0.50)
context!.draw(cropped!, in: drawrect)
let transparent = context!.makeImage()
UIGraphicsEndImageContext()
I'm extremely curious why I had to adjust the Y starting point to get them to line up. Anyone have an idea?
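A guess about those constants: the 2 is almost certainly the screen scale (the snapshot's CGImage is in pixels while the frame is in points), and the 45 looks like the vertical offset between the overlay's superview and the scene view (a status or navigation bar). A sketch that derives both instead of hard-coding them, assuming the snapshot covers sceneView exactly:
// Derive the point-to-pixel factor and the coordinate offset instead of
// hard-coding 2 and 45. Assumes the snapshot covers sceneView exactly.
let background = sceneView.snapshot()
let pixelScale = background.size.width * background.scale / sceneView.bounds.width
// convert(_:to:) absorbs any offset (status bar, navigation bar) between
// the overlay's superview and the scene view.
let rectInScene = overlayView.superview!.convert(overlayView.frame, to: sceneView)
let croprect = CGRect(x: rectInScene.minX * pixelScale,
                      y: rectInScene.minY * pixelScale,
                      width: rectInScene.width * pixelScale,
                      height: rectInScene.height * pixelScale)
// CGImage.cropping(to:) works in pixel coordinates, so this rect now matches.
let cropped = background.cgImage!.cropping(to: croprect)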

Crop UIImage to square portion

I have a UIScrollView which contains a UIImage. On top of that is a box; the user can move the image so that the portion under the box is what gets cropped.
This screenshot explains it better:
So they can scroll the image around until the portion they want is inside that box.
I then want to be able to crop the scrollView/UIImage to exactly that size and store the cropped image.
It shouldn't be very hard, but I've spent ages trying screenshots, UIGraphicsContext, etc., and can't seem to get anything to work.
Thanks for the help.
I finally figured out how to get it to work. Here is the code:
func croppedImage() -> UIImage {
    let cropSize = CGSize(width: 280, height: 280)
    let scale = (imageView.image?.size.height)! / imageView.frame.height
    let cropSizeScaled = CGSize(width: cropSize.width * scale, height: cropSize.height * scale)
    if #available(iOS 10.0, *) {
        let r = UIGraphicsImageRenderer(size: cropSizeScaled)
        let x = -scrollView.contentOffset.x * scale
        let y = -scrollView.contentOffset.y * scale
        return r.image { _ in
            imageView.image!.draw(at: CGPoint(x: x, y: y))
        }
    } else {
        return UIImage()
    }
}
So it first calculates the scale factor between the actual image and the imageView.
Then it creates a CGSize for the crop box shown in the photo; the width and height must be multiplied by the scale factor (e.g. 280 * 6.5).
You must check that the phone is running iOS 10.0 or later for UIGraphicsImageRenderer; if not, it won't work.
Initialise the renderer with the scaled crop-box size.
The image must then be offset, and this is calculated by taking the scrollView's content offset, negating it, and multiplying by the scale factor.
Then return the image drawn at that point!
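Since UIGraphicsImageRenderer itself requires iOS 10, the availability branch only matters for very old deployment targets. On a modern target the same logic trims down to the sketch below; the 280-point box and the imageView/scrollView properties are taken from the answer above:
func croppedImage() -> UIImage {
    let cropSize = CGSize(width: 280, height: 280)
    // Ratio of the full-resolution image to the on-screen image view.
    // imageView.frame grows as the scroll view zooms, so this ratio
    // already reflects the current zoomScale.
    let scale = imageView.image!.size.height / imageView.frame.height
    let cropSizeScaled = CGSize(width: cropSize.width * scale,
                                height: cropSize.height * scale)
    let renderer = UIGraphicsImageRenderer(size: cropSizeScaled)
    return renderer.image { _ in
        // Shift the drawing so the visible region lands at the origin.
        imageView.image!.draw(at: CGPoint(x: -scrollView.contentOffset.x * scale,
                                          y: -scrollView.contentOffset.y * scale))
    }
}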

Getting actual view size in Swift / iOS

Even after reading several posts on frame vs. view, I still cannot get this working. I open Xcode (7.3) and create a new game project for iOS. In the default scene file, right after addChild, I add the following code:
print(self.view!.bounds.width)
print(self.view!.bounds.height)
print(self.frame.size.width)
print(self.frame.size.height)
print(UIScreen.mainScreen().bounds.size.width)
print(UIScreen.mainScreen().bounds.size.height)
I get the following results when I run it on an iPhone 6s:
667.0
375.0
1024.0
768.0
667.0
375.0
I can guess that the first two numbers are the Retina pixel size at 2x. I am trying to understand why the frame size reports 1024x768.
Then I add the following code to resize a simple background image to fill the screen:
self.anchorPoint = CGPointMake(0.5,0.5)
let theTexture = SKTexture(imageNamed: "intro_screen_phone")
let theSizeFromBounds = CGSizeMake(self.view!.bounds.width, self.view!.bounds.height)
let theImage = SKSpriteNode(texture: theTexture, color: SKColor.clearColor(), size: theSizeFromBounds)
I get an image smaller than the screen size. The image is displayed even smaller if I choose landscape mode.
I tried multiplying the bounds width/height by two, hoping to get the actual screen size, but then the image gets too big. I also tried the frame size, which makes the image slightly bigger than the screen.
The main reason for my confusion, besides a lack of knowledge, is the fact that I've seen this exact example in a lesson working perfectly. Either I am missing something obvious, or?
The frame is reported as 1024x768 because that is the size of the default scene in the game template (set in GameScene.sks); it is measured in points and is independent of the screen.
If you want your scene to be the same size as your screen, then in your GameViewController, before the scene is presented, that is, before:
skView.presentScene(scene)
use this:
scene.size = self.view.frame.size
which will make the scene the exact size of the screen.
Then you could easily make an image fill the scene like so:
func addBackground() {
    let bgTexture = SKTexture(imageNamed: "NAME")
    let bgSprite = SKSpriteNode(texture: bgTexture, color: SKColor.clearColor(), size: scene.size)
    bgSprite.anchorPoint = CGPoint(x: 0, y: 0)
    bgSprite.position = self.frame.origin
    self.addChild(bgSprite)
}
Also, you may want to read up on the difference between a view's bounds and its frame.
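As a quick illustration of that difference (a toy snippet in modern Swift, not part of the original answer): frame is expressed in the superview's coordinate space, bounds in the view's own.
import UIKit

let v = UIView(frame: CGRect(x: 50, y: 100, width: 200, height: 100))
print(v.frame)   // (50.0, 100.0, 200.0, 100.0): the position within the superview
print(v.bounds)  // (0.0, 0.0, 200.0, 100.0): the view's own coordinate space

// After a transform, frame reports the bounding box of the rotated view,
// while bounds is untouched.
v.transform = CGAffineTransform(rotationAngle: .pi / 4)
print(v.frame.size)   // roughly (212.1, 212.1)
print(v.bounds.size)  // still (200.0, 100.0)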
