I want to "extract" a apart of a UIImage but Im having some troubles.
For example the original imagesize is 320x480 and I want to get render a new UIImage in a rect like CGRect(10, 10, 100, 100)
Ive not found a good solution but Ive found something that might be close to the right solution: drawInrect().
But when I use that everything else but within that rect gets black.
Please help me.
If what you are trying to achieve is a cropped version of the original image, then you should know that the canvas you are drawing into has a frame with a {0, 0} origin. So, for example, if your desired effect is to get an image of size 100x100 whose content starts at {10, 10} in the original:
let imageSize = image.size
// The context is the size of the crop; drawing the full image at a negative
// offset shifts the desired region into view. (Scale 0.0 means screen scale.)
UIGraphicsBeginImageContextWithOptions(CGSizeMake(100.0, 100.0), true, 0.0)
image.drawInRect(CGRectMake(-10, -10, imageSize.width, imageSize.height))
let cropped = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
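On modern iOS, a minimal sketch of the same idea with UIGraphicsImageRenderer (assuming iOS 10+ and the sizes from the question):

// Same technique: the canvas is the crop size, and drawing the full image at a
// negative offset places the desired 100x100 region onto the canvas.
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 100, height: 100))
let croppedModern = renderer.image { _ in
    image.draw(in: CGRect(x: -10, y: -10, width: image.size.width, height: image.size.height))
}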
Related
I'm trying to remove the top part of an image by cropping, but the result is unexpected.
The code used:
extension UIImage {
    class func removeStatusbarFromScreenshot(_ screenshot: UIImage) -> UIImage {
        let statusBarHeight = 44.0
        let newHeight = screenshot.size.height - statusBarHeight
        let newSize = CGSize(width: screenshot.size.width, height: newHeight)
        let newOrigin = CGPoint(x: 0, y: statusBarHeight)
        let imageRef: CGImage = screenshot.cgImage!.cropping(to: CGRect(origin: newOrigin, size: newSize))!
        let cropped: UIImage = UIImage(cgImage: imageRef)
        return cropped
    }
}
My logic is that I need to make the image smaller in height by 44px and move the origin y by 44px, but it ends up only creating a much smaller image of the top-left corner.
The only way I can get it to work as expected is by multiplying the width by 2 and the height by 2.5 in newSize, but that also doubles the size of the image produced.
That doesn't make much sense anyway. Can someone help me make it work without magic values?
There are two main problems with what you're doing:
A UIImage has a scale (usually tied to resolution of your device's screen), but a CGImage does not.
Different devices have different "status bar" heights. In general, what you want to cut off from the top is not the status bar but the safe area. The top of the safe area is where your content starts.
Because of this:
You are wrong to talk about 44 px. There are no pixels here. Pixels are physical atomic illuminations on your screen. In code, there are points. Points are independent of the scale (and the scale is the multiplier between points and pixels).
You are wrong to talk about the number 44 itself as if it were hard-coded. You should get the top of the safe area instead.
By crossing into the CGImage world without taking scale into account, you lose the scale information, because CGImage knows nothing of scale.
By crossing back into the UIImage world without taking scale into account, you end up with a UIImage with a resolution of 1, which may not be the resolution of the original UIImage.
The simplest solution is not to do any of what you are doing. First, get the height of the safe area; call it h. Then just draw the snapshot image into a graphics image context that is the same scale as your image (which, if you play your cards right, it will be automatically), but is h points shorter than the height of your image — and draw it with its y origin at -h, thus cutting off the safe area. Extract the resulting image and you're all set.
Example! This code comes from a view controller. First, I'll take a screenshot of my own device's current screen (this view controller's view) as my app runs:
let renderer = UIGraphicsImageRenderer(size: view.bounds.size)
let screenshot = renderer.image { context in
    view.layer.render(in: context.cgContext)
}
Now, I'll cut the safe area off the top of that screenshot:
let h = view.safeAreaInsets.top
let size = screenshot.size
let r = UIGraphicsImageRenderer(
    size: .init(width: size.width, height: size.height - h)
)
let result = r.image { _ in
    screenshot.draw(at: .init(x: 0, y: -h))
}
Experimentation will confirm that this works perfectly on every device, regardless of whether it has a bezel and regardless of its screen resolution: the top of the resulting image, result, is the top of your actual content.
I have a view (blue background...) which I'll call "main" here. On main I added a UIImageView that I then rotate, pan, and scale. On main I also have another subview that shows the cropping area; anything under the darker area outside it needs to be cropped.
I am trying to figure out how to properly create a cropped image from this state. I want the resulting image to look like this:
I want to make sure to keep the resolution of the image.
Any idea?
I have tried to figure out how to use the layer.mask property of the UIImageView. After some feedback, I think I could have another view (B) on the blue view; on B I would then add the image view, making sure that B's frame matches the rect of the cropping-mask overlay. I think that could work? The only thing is I want to make sure I don't lose resolution.
So, earlier I tried this:
maskShape.frame = imageView.bounds
maskShape.path = UIBezierPath(rect: CGRect(x: 20, y: 20, width: 200, height: 200)).cgPath
imageView.layer.mask = maskShape
The rect was just a test rect, and the image would be cropped to that path, but I wasn't sure how to get a UIImage out of all this that keeps the full resolution of the original image.
So, I have implemented the method suggested by marco, and it all works with the exception of keeping the resolution.
I use this call to take a screenshot of the view that contains the image, and I have it clip to bounds:
public func renderToImage(afterScreenUpdates: Bool = false) -> UIImage {
    let rendererFormat = UIGraphicsImageRendererFormat.default()
    rendererFormat.opaque = isOpaque
    let renderer = UIGraphicsImageRenderer(size: bounds.size, format: rendererFormat)
    let snapshotImage = renderer.image { _ in
        drawHierarchy(in: bounds, afterScreenUpdates: afterScreenUpdates)
    }
    return snapshotImage
}
The image I get is correct, but it is not as sharp as the original I'm cropping from.
How can I keep the resolution high?
In the view which holds the image you must set clipsToBounds to true. Not sure if I understood correctly, but I suppose that's your "cropping area".
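As for the sharpness: the renderToImage above snapshots at the default (screen) scale, so one possible fix, a sketch rather than anything from the original answer, is to render at a scale derived from the source image so the output keeps roughly its native pixel density. The helper and its parameter names here are illustrative:

// Hypothetical helper: render `containerView` at a scale that maps one point
// of the view to one pixel of `originalImage`.
func renderKeepingResolution(containerView: UIView, originalImage: UIImage) -> UIImage {
    let format = UIGraphicsImageRendererFormat.default()
    // Pixel width of the source divided by the on-screen point width of the container.
    format.scale = (originalImage.size.width * originalImage.scale) / containerView.bounds.width
    let renderer = UIGraphicsImageRenderer(bounds: containerView.bounds, format: format)
    return renderer.image { _ in
        containerView.drawHierarchy(in: containerView.bounds, afterScreenUpdates: true)
    }
}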
I have a SceneKit view that fills my screen. My goal is to let the user take snapshots of that scene, but the snapshots are not the whole screen, but an inset portion in a UIImageView which is slightly smaller than the screen. Ideally, the user should not notice, the image on top should be identical to the scene behind it.
I have coded this up using snapshot and cropping, but as you can see in the image, the scale ends up way off - see the width of the yellow line, and the size of the windows? It's also not positioned correctly: it's somewhat down and to the left of where it should be - the upper left should be below the line of windows, but you can see it is at the roofline above them. I can't see the original snapshot because the debugger QuickLook refuses to show it.
There's not much code to it; does anyone see the problem?
let background = sceneView.snapshot().cgImage!
let cropped = background.cropping(to: overlayView.frame)
UIGraphicsBeginImageContextWithOptions(overlayView.frame.size, false, 1.0)
let context = UIGraphicsGetCurrentContext()
context!.setAlpha(0.50)
context!.draw(cropped!, in: overlayView.bounds)
let transparent = context!.makeImage();
UIGraphicsEndImageContext()
overlayView.image = UIImage.init(cgImage: transparent!, scale: 1.0, orientation: .downMirrored)
I have tried various scales and rects to no avail. I assume this is something very easy.
UPDATE: after several tries I was able to get QuickLook to work. The snapshot is indeed the entire background, as I would expect. But it is much larger than I would expect too: it's 640x998, while the cropped version is 228x304. That explains the "zooming". This leads me to believe that the frame size of the inset view does NOT have a direct relationship to the image size. Does that ring any bells? Is there some other rect I should be using rather than overlayView.frame?
So I assume the problem is that the frame coordinates are in one set of units and the image coordinates are in another. I was able to solve the problem this way:
let croprect = CGRect(x: overlayView.frame.origin.x * 2, y: overlayView.frame.origin.y * 2 - 45, width: overlayView.frame.width * 2, height: overlayView.frame.height * 2)
let drawrect = CGRect(x: 0, y: 0, width: overlayView.frame.width * 2, height: overlayView.frame.height * 2)
let background = sceneView.snapshot()
let cropped = background.cgImage!.cropping(to: croprect)
UIGraphicsBeginImageContextWithOptions(drawrect.size, false, 0.0)
let context = UIGraphicsGetCurrentContext()
context!.setAlpha(0.50)
context!.draw(cropped!, in: drawrect)
let transparent = context!.makeImage();
UIGraphicsEndImageContext()
I'm extremely curious why I had to adjust the Y starting point to get them to line up, anyone have an idea?
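For what it's worth, a hedged guess at those two adjustments: the factor of 2 looks like the screen's point-to-pixel scale (the snapshot is in pixels, while view frames are in points), so it can be derived rather than hard-coded; and the extra 45 may be the vertical offset between overlayView's superview and the scene view's coordinate space, which converting the frame between views would account for. A sketch under those assumptions:

// Assumes the snapshot's pixel size differs from point coordinates by the screen scale.
let scale = UIScreen.main.scale
// Convert the overlay's bounds into the scene view's coordinate space first,
// so any status-bar/safe-area offset (the mysterious 45) is handled for us.
let frameInScene = overlayView.convert(overlayView.bounds, to: sceneView)
let croprect = CGRect(x: frameInScene.origin.x * scale,
                      y: frameInScene.origin.y * scale,
                      width: frameInScene.width * scale,
                      height: frameInScene.height * scale)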
I am wondering how I can always pick a square image from the camera library using UIImagePicker.
So I have set imagePicker.allowsEditing = true, and when the image I pick is large enough (larger than the square crop), the picked image is square. But when the image I pick is smaller, say 748 by 466, even though the square crop encloses the image including the top and bottom black parts, the picked image does not include the black parts, and so it returns a non-square image. How do I make it so that it always picks up the black top and bottom parts, so the image is always square?
Thanks a lot for the help!
Here is a method using Core Graphics to add the black area manually; add it in the UIImagePicker delegate method:
let squareSideLength = max(image.size.width, image.size.height)
UIGraphicsBeginImageContextWithOptions(CGSize(width: squareSideLength, height: squareSideLength), false, 1)
let context = UIGraphicsGetCurrentContext()!
// Fill the whole square with black, then draw the image centered on top.
context.setFillColor(UIColor.black.cgColor)
context.fill(CGRect(x: 0, y: 0, width: squareSideLength, height: squareSideLength))
image.draw(in: CGRect(x: (squareSideLength - image.size.width) / 2,
                      y: (squareSideLength - image.size.height) / 2,
                      width: image.size.width,
                      height: image.size.height))
let imageYouWant = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Then use imageYouWant.
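For context, a minimal sketch of where this would run, assuming a standard UIImagePickerControllerDelegate (the body here is illustrative, not from the answer):

// Hypothetical delegate method; pad the picked image to a square here.
func imagePickerController(_ picker: UIImagePickerController,
                           didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
    guard let image = info[.originalImage] as? UIImage else { return }
    // ... run the padding code above on `image`, then use `imageYouWant` ...
    picker.dismiss(animated: true)
}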
Let's say you have an original image that is
200 high, 100 wide
Let's say you want to draw only a square of it. Let's say, just the bottom square.
Let's say you want to draw it on to a new small image that is
20 high, 20 wide
Of course, you simply do this:
CGRect imageRect = CGRectMake( -10,0, 20,20);
.. begin graphics context ..
[originalImage drawInRect:imageRect];
With drawInRect:, you supply a rectangle with the same full shape (same proportions) as the original image, but expressed in the size of the new canvas. No problem.
BUT:
in the example, you are drawing THE WHOLE ORIGINAL IMAGE -- THE WHOLE 200 HEIGHT -- onto the new small square.
(Of course the "top half" misses the new canvas, and you only get the bottom half on the new canvas -- which is what you wanted.)
My impression is that iOS renders or calculates the "whole" original image, and only "puts" the bottom half (in the example) onto the new canvas.
This seems very wasteful.
IS THERE A FASTER WAY TO DO THIS?
It seems like there should be a command, something like this:
drawThisPartOfTheOriginalImage: (0,100 to 100,200)
ontoThisPartOfTheNewCanvas: (0,20 to 20,20)
What's the situation? Is there a more efficient command than drawInRect: when you are only drawing a small part of the original image? Cheers
CGContextClipToRect approach...(doesn't work!)
I experimented with CGContextClipToRect as Peter suggested below.
CGContextClipToRect indeed sets the area you will draw to on your "result" canvas. I simply set it to the size of that result canvas (it would be 20x20 in the example above). To repeat, the aim here is to have iOS save time by avoiding pointlessly drawing the, err, not-drawn part of the original.
This example is for an original image of 2000x2000 drawing onto a 500x500 (i.e., only drawing the top-left quarter of the original onto the result).
In fact, notice it is slightly slower when you include the CGContextClipToRect, again suggesting iOS "knows when to stop" anyway.
// no need to "overdraw"... quickener turned OFF
//CGContextRef c = UIGraphicsGetCurrentContext();
//CGContextClipToRect(c, CGRectMake(0, 0, resultSize.width,resultSize.height));
//Execution Time .................................. 0.443669
// no need to "overdraw"... quickener turned ON
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextClipToRect(c, CGRectMake(0, 0, resultSize.width,resultSize.height));
//Execution Time .................................. 0.461845
As you can see it's a hair slower, actually, adding the CGContextClipToRect trick.
For the record, here is the exact routine used to crop an image:
-(UIImage *)simplishTopCrop:(UIImage *)fromImage
{
    // check for zero fromImage.size.width etc etc
    CGSize resultSize = CGSizeMake(640,640);
    CGFloat scale = MAX(
        resultSize.width/fromImage.size.width,
        resultSize.height/fromImage.size.height);
    CGFloat width = fromImage.size.width * scale;
    CGFloat height = fromImage.size.height * scale;
    CGRect imageRect = CGRectMake(0,0, width,height);
    UIGraphicsBeginImageContextWithOptions(resultSize, NO, 0);
    // INSERT 'CGContextClipToRect' TRICK ABOVE, RIGHT HERE
    [fromImage drawInRect:imageRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
This is where clipping comes in. Clip to your dirty rect, then draw the whole image into your bounds. The clipping path will keep the rest of the image at least from appearing, and hopefully from being composited or sampled at all.
If your profiling in Instruments finds that that is not efficient enough, you might try cropping the image itself, using CGImageCreateWithImageInRect, and then drawing that image into your dirty rect. You may want to keep your cropped image around and only throw it away when the rect changes. One way or the other, cropping the image may be more efficient—but don't forget to profile both before and after to prove that.
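A short sketch of that crop-first idea in Swift, in case it helps: CGImage's cropping(to:) is the modern name for CGImageCreateWithImageInRect, the rect is in pixels, and the helper name here is just illustrative:

// Hypothetical helper: crop the backing CGImage once, cache the result, and
// draw the small cropped image instead of redrawing the whole source each time.
func croppedPart(of image: UIImage, pixelRect: CGRect) -> UIImage? {
    guard let cropped = image.cgImage?.cropping(to: pixelRect) else { return nil }
    // Reapply the original scale/orientation so point geometry stays consistent.
    return UIImage(cgImage: cropped, scale: image.scale, orientation: image.imageOrientation)
}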