UIImagePicker: How to always pick a square image from the camera library?

I am wondering how I can always pick a square image from the camera library using UIImagePicker.
I have set imagePicker.allowsEditing = true, and when the image I pick is large enough (larger than the square crop), the picked image is square. But when the image I pick is smaller, say 748 by 466, the square crop encloses the whole image including the black bars at the top and bottom, yet the picked image does not include those black bars, so a non-square image is returned. How do I make it always include the black top and bottom parts so the image is always square?
Thanks a lot for any help!

Here is a way to add the black bars manually with Core Graphics; put this in your UIImagePicker delegate method:
let squareSideLength = max(image.size.width, image.size.height)
UIGraphicsBeginImageContextWithOptions(CGSize(width: squareSideLength, height: squareSideLength), false, 1)
let context = UIGraphicsGetCurrentContext()!
// Fill the whole square with black, then draw the image centered on top.
context.setFillColor(UIColor.black.cgColor)
context.fill(CGRect(x: 0, y: 0, width: squareSideLength, height: squareSideLength))
image.draw(in: CGRect(x: (squareSideLength - image.size.width) / 2,
                      y: (squareSideLength - image.size.height) / 2,
                      width: image.size.width,
                      height: image.size.height))
let imageYouWant = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Then use imageYouWant.
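For context, a minimal sketch of the delegate method this code lives in (the signature is the real UIImagePickerControllerDelegate API; the padding step refers to the code above):
func imagePickerController(_ picker: UIImagePickerController,
                           didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
    // Prefer the edited (square-cropped) image when allowsEditing is on.
    guard let image = (info[.editedImage] ?? info[.originalImage]) as? UIImage else { return }
    // ... pad `image` to a square as shown above ...
    picker.dismiss(animated: true)
}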

Related

Removing statusbar from screenshot on iOS

I'm trying to remove the top part of an image by cropping, but the result is unexpected.
The code used:
extension UIImage {
    class func removeStatusbarFromScreenshot(_ screenshot: UIImage) -> UIImage {
        let statusBarHeight = 44.0
        let newHeight = screenshot.size.height - statusBarHeight
        let newSize = CGSize(width: screenshot.size.width, height: newHeight)
        let newOrigin = CGPoint(x: 0, y: statusBarHeight)
        let imageRef: CGImage = screenshot.cgImage!.cropping(to: CGRect(origin: newOrigin, size: newSize))!
        let cropped: UIImage = UIImage(cgImage: imageRef)
        return cropped
    }
}
My logic is that I need to make the image 44px shorter and move the origin y down by 44px, but it ends up only creating a much smaller image of the top-left corner.
The only way I can get it to work as expected is by multiplying the width by 2 and the height by 2.5 in newSize, but that also doubles the size of the image produced.
That doesn't make much sense anyway. Can someone help me make it work without using magic values?
There are two main problems with what you're doing:
A UIImage has a scale (usually tied to resolution of your device's screen), but a CGImage does not.
Different devices have different "status bar" heights. In general, what you want to cut off from the top is not the status bar but the safe area. The top of the safe area is where your content starts.
Because of this:
You are wrong to talk about 44 px. There are no pixels here. Pixels are physical atomic illuminations on your screen. In code, there are points. Points are independent of the scale (and the scale is the multiplier between points and pixels).
You are wrong to talk about the number 44 itself as if it were hard-coded. You should get the top of the safe area instead.
By crossing into the CGImage world without taking scale into account, you lose the scale information, because CGImage knows nothing of scale.
By crossing back into the UIImage world without taking scale into account, you end up with a UIImage with a resolution of 1, which may not be the resolution of the original UIImage. (A quick sketch of this scale loss follows this list.)
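Here is that sketch, assuming a 2x UIImage such as the screenshot captured below:
// The round trip through CGImage silently drops the scale information:
let cg = screenshot.cgImage!                 // raw pixels; no scale attached
let roundTripped = UIImage(cgImage: cg)      // scale defaults to 1.0
print(screenshot.scale, roundTripped.scale)  // e.g. 2.0 vs 1.0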
The simplest solution is not to do any of what you are doing. First, get the height of the safe area; call it h. Then just draw the snapshot image into a graphics image context that is the same scale as your image (which, if you play your cards right, it will be automatically), but is h points shorter than the height of your image — and draw it with its y origin at -h, thus cutting off the safe area. Extract the resulting image and you're all set.
Example! This code comes from a view controller. First, I'll take a screenshot of my own device's current screen (this view controller's view) as my app runs:
let renderer = UIGraphicsImageRenderer(size: view.bounds.size)
let screenshot = renderer.image { context in
    view.layer.render(in: context.cgContext)
}
Now, I'll cut the safe area off the top of that screenshot:
let h = view.safeAreaInsets.top
let size = screenshot.size
let r = UIGraphicsImageRenderer(
    size: .init(width: size.width, height: size.height - h)
)
let result = r.image { _ in
    screenshot.draw(at: .init(x: 0, y: -h))
}
Experimentation will confirm that this works perfectly on every device, regardless of whether it has a bezel and regardless of its screen resolution: the top of the resulting image, result, is the top of your actual content.

CIImage extent in pixels or points?

I'm working with a CIImage, and while I understand it's not a linear image, it does hold some data.
My question is whether or not a CIImage's extent property returns pixels or points? According to the documentation, which says very little, it's working space coordinates. Does this mean there's no way to get the pixels / points from a CIImage and I must convert to a UIImage to use the .size property to get the points?
I have a UIImage with a certain size, and when I create a CIImage using the UIImage, the extent is shown in points. But if I run a CIImage through a CIFilter that scales it, I sometimes get the extent returned in pixel values.
I'll answer the best I can.
If your source is a UIImage, its size will be the same as the extent. But please note, this isn't a UIImageView (whose size is in points). And we're just talking about the source image.
Running something through a CIFilter means you are manipulating things. If all you are doing is manipulating color, its size/extent shouldn't change (the same as creating your own CIColorKernel - it works pixel-by-pixel).
But, depending on the CIFilter, you may well be changing the size/extent. Certain filters create a mask, or tile. These may actually have an extent that is infinite! Others (blurs are a great example) sample surrounding pixels so their extent actually increases because they sample "pixels" beyond the source image's size. (Custom-wise these are a CIWarpKernel.)
So yes, quite a bit can change. Taking this to a bottom line:
What is the filter doing? Does it need to simply check a pixel's RGB and do something? Then the UIImage size should be the output CIImage extent.
Does the filter produce something that depends on the pixel's surrounding pixels? Then the output CIImage extent is slightly larger. How much may depend on the filter.
There are filters that produce something with no regard to an input. Most of these may have no true extent, as they can be infinite.
Points are what UIKit and CoreGraphics always work with. Pixels? At some point CoreImage does, but it's low-level enough that (unless you want to write your own kernel) you shouldn't care. Extents can usually - but keep in mind the above - be equated to a UIImage's size.
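To see this concretely, here is a small sketch (the asset name is hypothetical) that watches a blur enlarge the extent:
// A color filter leaves the extent alone; a blur grows it, because it
// samples beyond the source's edges.
let input = CIImage(image: UIImage(named: "sample")!)!
print(input.extent)                      // matches the source bitmap

let blur = CIFilter(name: "CIGaussianBlur")!
blur.setValue(input, forKey: kCIInputImageKey)
blur.setValue(10.0, forKey: kCIInputRadiusKey)
print(blur.outputImage!.extent)          // larger than the input's extent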
EDIT
Many images (particularly RAW ones) can have so large a size as to affect performance. I have an extension for UIImage that resizes an image to a specific rectangle to help maintain consistent CI performance.
extension UIImage {
    public func resizeToBoundingSquare(_ boundingSquareSideLength: CGFloat) -> UIImage {
        // Scale by whichever dimension is larger so the result fits the square.
        let imgScale = self.size.width > self.size.height
            ? boundingSquareSideLength / self.size.width
            : boundingSquareSideLength / self.size.height
        let newWidth = self.size.width * imgScale
        let newHeight = self.size.height * imgScale
        let newSize = CGSize(width: newWidth, height: newHeight)
        UIGraphicsBeginImageContext(newSize)
        self.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return resizedImage!
    }
}
Usage:
image = image.resizeToBoundingSquare(640)
In this example, an image sized 3200x2000 would be reduced to 640x400, and an image sized 320x200 would be enlarged to 640x400. I do this to an image before rendering it and before creating a CIImage to use in a CIFilter.
I suggest you think of them as points. There is no scale and no screen (a CIImage is not something that is drawn), so there are no pixels.
A UIImage backed by a CGImage is the basis for drawing, and in addition to the CGImage it has a scale; together with the screen resolution, that gives us our translation from points to pixels.
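A small sketch of that relationship (the asset name is hypothetical): a 2x image reports its size in points, while its backing CGImage reports raw pixels:
// Hypothetical 100x100-point asset shipped at 2x scale.
let ui = UIImage(named: "photo")!
print(ui.size)              // (100.0, 100.0) - points
print(ui.scale)             // 2.0 - the points-to-pixels multiplier
print(ui.cgImage!.width)    // 200 - pixels; the CGImage knows nothing of scale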

Cropping UI images

I have a SceneKit view that fills my screen. My goal is to let the user take snapshots of that scene, but the snapshots are not the whole screen, but an inset portion in a UIImageView which is slightly smaller than the screen. Ideally, the user should not notice, the image on top should be identical to the scene behind it.
I have coded this up using snapshot and cropping, but as you can see in the image, the scale ends up way off - see the width of the yellow line, and the size of the windows? It's also not positioned correctly; it's somewhat down and to the left of where it should be - the upper left should be below the line of windows, but you can see it is at the roofline above them. I can't see the original snapshot because the debugger QuickLook refuses to show it.
There's not much code to it; can anyone see the problem?
let background = sceneView.snapshot().cgImage!
let cropped = background.cropping(to: overlayView.frame)
UIGraphicsBeginImageContextWithOptions(overlayView.frame.size, false, 1.0)
let context = UIGraphicsGetCurrentContext()
context!.setAlpha(0.50)
context!.draw(cropped!, in: overlayView.bounds)
let transparent = context!.makeImage()
UIGraphicsEndImageContext()
overlayView.image = UIImage(cgImage: transparent!, scale: 1.0, orientation: .downMirrored)
I have tried various scales and rects to no avail. I assume this is something very easy.
UPDATE: after several tries I was able to get QuickLook to work. The snapshot is indeed the entire background, as I would expect. But it is much larger than I would expect too - it's 640 x 998, while the cropped version is 228 x 304. That explains the "zooming". This leads me to believe that the frame size of the inset view does NOT have a direct relationship to the image size. Does that ring any bells? Is there some other rect I should be using rather than overlayView.frame?
So I assume the problem is that the frame coordinates are in one set of units and the image coordinates are in another. I was able to solve the problem this way:
let croprect = CGRect(x: overlayView.frame.origin.x * 2,
                      y: overlayView.frame.origin.y * 2 - 45,
                      width: overlayView.frame.width * 2,
                      height: overlayView.frame.height * 2)
let drawrect = CGRect(x: 0, y: 0,
                      width: overlayView.frame.width * 2,
                      height: overlayView.frame.height * 2)
let background = sceneView.snapshot()
let cropped = background.cgImage!.cropping(to: croprect)
UIGraphicsBeginImageContextWithOptions(drawrect.size, false, 0.0)
let context = UIGraphicsGetCurrentContext()
context!.setAlpha(0.50)
context!.draw(cropped!, in: drawrect)
let transparent = context!.makeImage()
UIGraphicsEndImageContext()
I'm extremely curious why I had to adjust the Y starting point to get them to line up, anyone have an idea?
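For what it's worth, the doubling is almost certainly the screen scale: the snapshot's CGImage is measured in pixels while the frame is in points. A sketch that derives the factor instead of hard-coding 2 (the 45-point offset is left out, since its origin is unclear; it may relate to a navigation or status bar):
// Derive the points-to-pixels factor from the snapshot itself.
let background = sceneView.snapshot()
let scale = background.scale
let croprect = CGRect(x: overlayView.frame.origin.x * scale,
                      y: overlayView.frame.origin.y * scale,
                      width: overlayView.frame.width * scale,
                      height: overlayView.frame.height * scale)
let cropped = background.cgImage!.cropping(to: croprect)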

Create a circle or a disk with antialiasing for retina display

I am creating a circle using CGContextFillEllipseInRect from CoreGraphics.
I'm using this circle (which is actually a disk) to replace the thumbImage of a UISlider. Antialiasing is applied by default.
But the result on my iPhone 6 is clearly not good. I can clearly see the pixels - not as many as with antialiasing off, but way more than on a normal UISlider.
Maybe I'm doing something wrong. So my question is: is there a way to programmatically get the same nice disk as the one used by default for a UISlider?
EDIT:
Here is how I create the disk:
class func drawDisk(rgbValue: UInt32, rectForDisk: CGRect = CGRect(x: 0.0, y: 0.0, width: 1.0, height: 1.0)) -> UIImage {
    let color = uicolorFromHex(rgbValue)
    UIGraphicsBeginImageContext(rectForDisk.size)
    let context = UIGraphicsGetCurrentContext()!
    // Fill the disk, then stroke a 1-point circle just inside its edge.
    context.setFillColor(color.cgColor)
    context.fillEllipse(in: rectForDisk)
    let rectForCircle = CGRect(x: 0.5, y: 0.5, width: rectForDisk.size.width - 1, height: rectForDisk.size.height - 1)
    context.setLineWidth(1.0)
    context.setStrokeColor(UIColor.black.cgColor)
    context.addEllipse(in: rectForCircle)
    context.strokePath()
    let image = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return image
}
The problem is that you are creating a non-retina graphics context when using UIGraphicsBeginImageContext, as mentioned in the documentation:
This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to NO and a scale factor of 1.0.
Instead you should be using UIGraphicsBeginImageContextWithOptions to create your image context. You can keep passing false for the opaque parameter if you want an image that supports transparency (the same as what you are implicitly doing now).
In most cases you can pass 0.0 for the scale factor. This sets the scale factor to that of the device's main screen. Again, as mentioned in the documentation:
If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
So, in short, you should create your image context like this:
UIGraphicsBeginImageContextWithOptions(rectForDisk.size, false, 0.0)
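As a side note, on iOS 10 and later the same disk can be drawn with UIGraphicsImageRenderer, which uses the main screen's scale by default. A minimal sketch (not part of the original answer; the function and parameter names are illustrative):
// UIGraphicsImageRenderer is retina-aware out of the box.
func diskImage(color: UIColor, rect: CGRect) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: rect.size)
    return renderer.image { ctx in
        ctx.cgContext.setFillColor(color.cgColor)
        ctx.cgContext.fillEllipse(in: rect)
    }
}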

Filling UIBezierPath results in pixelated image from image context [duplicate]

This question already has an answer here:
How come my drawing code keeps resulting in fuzzy shapes?
I have implemented a very simple filled-in circle using UIBezierPath, which I then convert to a UIImage so that I can set it as a UITableViewCell's imageView. This works really well, but you can see the edges are pixelated on a Retina display. Why is that, and what can be done to ensure it looks fantastic?
let colorSize = cell.frame.size.height - 20
let rect = CGRect(x: 0, y: 0, width: colorSize, height: colorSize)
UIGraphicsBeginImageContext(rect.size)
let context = UIGraphicsGetCurrentContext()!
UIBezierPath(roundedRect: rect, cornerRadius: colorSize / 2).addClip()
context.setFillColor(UIColor.red.cgColor)
context.fill(rect)
let colorImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
cell.imageView?.image = colorImage
Step 1. Read the UIGraphicsBeginImageContext documentation:
This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to false and a scale factor of 1.0.
Step 2. Follow the link to the UIGraphicsBeginImageContextWithOptions documentation, which says:
scale The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
Step 3. Try using UIGraphicsBeginImageContextWithOptions with a scale of 0.0:
UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
Step 4. Profit…?
