Set dimensions for UIImagePickerController "move and scale" cropbox - ios

How does the "move and scale screen" determine dimensions for its cropbox?
Basically I would like to set a fixed width and height for the "CropRect" and let the user move and scale his image to fit in to that box as desired.
Does anyone know how to do this? (Or if it is even possible with the UIImagePickerController)
Thanks!

Not possible with UIImagePickerController unfortunately. The solution I recommend is to disable editing for the image picker and handle it yourself. For instance, I put the image in a scrollable, zoomable image view. On top of the image view is a fixed position "crop guide view" that draws the crop indicator the user sees. Assuming the guide view has properties for the visible rect (the part to keep) and edge widths (the part to discard) you can get the cropping rectangle like so. You can use the UIImage+Resize category to do the actual cropping.
CGRect cropGuide = self.cropGuideView.visibleRect;
UIEdgeInsets edges = self.cropGuideView.edgeWidths;
CGPoint cropGuideOffset = self.cropScrollView.contentOffset;
CGPoint origin = CGPointMake( cropGuideOffset.x + edges.left, cropGuideOffset.y + edges.top );
CGSize size = cropGuide.size;
CGRect crop = { origin, size };
crop.origin.x = crop.origin.x / self.cropScrollView.zoomScale;
crop.origin.y = crop.origin.y / self.cropScrollView.zoomScale;
crop.size.width = crop.size.width / self.cropScrollView.zoomScale;
crop.size.height = crop.size.height / self.cropScrollView.zoomScale;
photo = [photo croppedImage:crop];
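If you don't want to pull in the whole UIImage+Resize category, the cropping step itself can be sketched in Swift roughly like this. This is a minimal sketch: it assumes the crop rect is already in the image's point coordinates (as computed above) and it ignores the image-orientation handling that the category takes care of.
import UIKit

// Minimal Swift sketch of the cropping step (assumes `crop` is in the image's point coordinates).
func cropped(_ image: UIImage, to crop: CGRect) -> UIImage? {
    // CGImage works in pixels, so convert the point-based rect using the image's scale.
    let pixelRect = CGRect(x: crop.origin.x * image.scale,
                           y: crop.origin.y * image.scale,
                           width: crop.size.width * image.scale,
                           height: crop.size.height * image.scale)
    guard let cgImage = image.cgImage?.cropping(to: pixelRect) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}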

Kinda late to the game but I think this may be what you are looking for: https://github.com/gekitz/GKImagePicker

Here is a solution for manual cropping by Ming Yang.
https://github.com/myang-git/iOS-Image-Crop-View
It offers a rectangular frame, which the user can slide or drag to fit the required portion of the image in the rectangle. Please note that this solution does the reverse of the question asked - lets the rectangle size vary, but eventually brings the desired result.
It is coded in Objective-C. You may have to either code it in Swift or simply build a bridging header to connect the Objective-C code with Swift code.

It's now later than late but may be useful for someone. This is the library I've used for Swift (many thanks to Tim Oliver):
TOCropViewController
As described in the README in the GitHub link above, this library lets you get cropped images in a user-defined rectangle and also in a circular mode, e.g. for updating a profile image.
Below is sample code from GitHub:
func presentCropViewController() {
    let image: UIImage = ... //Load an image
    let cropViewController = CropViewController(image: image)
    cropViewController.delegate = self
    present(cropViewController, animated: true, completion: nil)
}

func cropViewController(_ cropViewController: CropViewController, didCropToImage image: UIImage, withRect cropRect: CGRect, angle: Int) {
    // 'image' is the newly cropped version of the original image
}
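The same README also documents a circular crop mode (e.g. for profile pictures) by passing a cropping style to the initializer. A minimal sketch, assuming the library's Swift interface as described in its documentation (verify the delegate method name against the version you use):
import CropViewController

func presentCircularCropViewController(with image: UIImage) {
    // .circular shows the round crop UI, e.g. for profile pictures
    let cropViewController = CropViewController(croppingStyle: .circular, image: image)
    cropViewController.delegate = self
    present(cropViewController, animated: true, completion: nil)
}

func cropViewController(_ cropViewController: CropViewController, didCropToCircularImage image: UIImage, withRect cropRect: CGRect, angle: Int) {
    // 'image' is the cropped version of the original, already masked to a circle
}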

Related

Achieving erase/restore drawing on UIImage in Swift

I'm trying to make a simple image eraser tool, where the user can erase and restore parts of an image by drawing over it, just like in this image:
After many attempts and testing, I have achieved sufficient "erase" functionality with the following code on the UI side:
// Drawing code - on user touch
// `currentPath` is a `UIBezierPath` property of the containing class.
guard let image = pickedImage else { return }
UIGraphicsBeginImageContextWithOptions(imageView.frame.size, false, 0)
if let context = UIGraphicsGetCurrentContext() {
    mainImageView.layer.render(in: context)
    context.addPath(currentPath.cgPath)
    context.setBlendMode(.clear)
    context.setLineWidth(translatedBrushWidth)
    context.setLineCap(.round)
    context.setLineJoin(.round)
    context.setStrokeColor(UIColor.clear.cgColor)
    context.strokePath()
    let capturedImage = UIGraphicsGetImageFromCurrentImageContext()
    imageView.image = capturedImage
}
UIGraphicsEndImageContext()
And upon user touch-up I am applying a scale transform to currentPath to render the image with the cutout part in full size to preserve UI performance.
What I'm trying to figure out now is how to approach the "restore" functionality. Essentially, the user should draw on the erased parts to reveal the original image.
I've tried looking at CGContextClipToMask but I'm not sure how to approach the implementation.
I've also looked at other approaches to achieving this "erase/restore" effect before rendering the actual images, such as masking a CAShapeLayer over the image but also in this approach restoring becomes a problem.
Any help will be greatly appreciated, as well as alternative approaches to erase and restore with a path on the UI-level and rendering level.
Thank you!
Yes, I would recommend adding a CALayer to your image's layer as a mask.
You can either make the mask layer a CAShapeLayer and draw geometric shapes into it, or use a simple CALayer as a mask, where the contents property of the mask layer is a CGImage. You'd then draw opaque pixels into the mask to reveal the image contents, or transparent pixels to "erase" the corresponding image pixels.
This approach is hardware accelerated and quite fast.
Handling undo/redo of eraser functions would require you to collect changes to your mask layer as well as the previous state of the mask.
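A minimal sketch of that idea, assuming an imageView being erased and a maskImage (a UIImage) that you redraw as the user paints; the helper names and the fixed line width are mine, not from the demo project:
import UIKit

// Install (or refresh) the CGImage-backed mask on the image view.
func applyMask(_ maskImage: UIImage, to imageView: UIImageView) {
    let maskLayer = CALayer()
    maskLayer.frame = imageView.bounds
    // Where the mask image is opaque the photo shows through; transparent pixels "erase" it.
    maskLayer.contents = maskImage.cgImage
    imageView.layer.mask = maskLayer
}

// Redraw the mask image along the user's path: stroke with .clear to erase,
// stroke with opaque black to restore. `path` is in the mask image's point coordinates.
func redraw(mask: UIImage, along path: UIBezierPath, erase: Bool) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: mask.size)
    return renderer.image { ctx in
        mask.draw(at: .zero)
        ctx.cgContext.addPath(path.cgPath)
        ctx.cgContext.setLineWidth(40)
        ctx.cgContext.setLineCap(.round)
        if erase {
            ctx.cgContext.setBlendMode(.clear)                  // punch transparent pixels
            ctx.cgContext.setStrokeColor(UIColor.clear.cgColor)
        } else {
            ctx.cgContext.setBlendMode(.normal)                 // paint opaque pixels back in
            ctx.cgContext.setStrokeColor(UIColor.black.cgColor)
        }
        ctx.cgContext.strokePath()
    }
}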
Edit:
I created a small demo app on GitHub that shows how to use a CGImage as a mask on an image view.
Here is the ReadMe file from that project:
MaskableImageView
This project demonstrates how to use a CALayer to mask a UIView.
It defines a custom subclass of UIImageView, MaskableView.
The MaskableView class has a property maskLayer that contains a CALayer.
MaskableView defines a didSet method on its bounds property so that when the view's bounds change, it resizes the mask layer to match the size of the image view.
The MaskableView has a method installSampleMask which builds an image the same size as the image view, mostly filled with opaque black, but with a small rectangle in the center filled with black at an alpha of 0.7. The translucent center rectangle causes the image view to become partly transparent and show the view underneath.
The demo app installs a couple of subviews into the MaskableView, a sample image of Scampers, one of my dogs, and a UILabel. It also installs an image of a checkerboard under the MaskableView so that you can see the translucent parts more easily.
The MaskableView has properties circleRadius, maskDrawingAlpha, and drawingAction that it uses to let the user erase/un-erase the image by tapping on the view to update the mask.
The MaskableView attaches a UIPanGestureRecognizer and a UITapGestureRecognizer to itself, with an action of gestureRecognizerUpdate. The gestureRecognizerUpdate method takes the tap/drag location from the gesture recognizer and uses it to draw a circle onto the image mask that either decreases the image mask's alpha (to partly erase pixels) or increase the image mask's alpha (to make those pixels more opaque.)
The MaskableView's mask drawing is crude, and only meant for demonstration purposes. It draws a series of discrete circles instead of rendering a path into the mask based on the user's drag gesture. A better solution would be to connect the points from the gesture recognizer and use them to render a smoothed curve into the mask.
The app's screen looks like this:
Edit #2:
If you want to export the resulting image to a file that preserves the transparency, you can convert the CGImage to a UIImage (Using the init(cgImage:) initializer) and then use the UIImage function
func pngData() -> Data?
to convert the image to PNG data. That function returns nil if it is unable to convert the image to PNG data.
If it succeeds, you can then save the data to a file with a .png extension.
I updated the sample project to include the ability to save the resulting image to disk.
First I added an image computed property to the MaskableView. That looks like this:
public var image: UIImage? {
    guard let renderer = renderer else { return nil }
    let result = renderer.image { context in
        return layer.render(in: context.cgContext)
    }
    return result
}
Then I added a save button to the view controller that fetches the image from the MaskableView and saves it to the app's Documents directory:
@IBAction func handleSaveButton(_ sender: UIButton) {
    print("In handleSaveButton")
    if let image = maskableView.image,
       let pngData = image.pngData() {
        print(image.description)
        let imageURL = getDocumentsDirectory().appendingPathComponent("image.png", isDirectory: false)
        do {
            try pngData.write(to: imageURL)
            print("Wrote png to \(imageURL.path)")
        }
        catch {
            print("Error writing file to \(imageURL.path)")
        }
    }
}
You could also save the image to the user's camera roll. It's been a while since I've done that so I'd have to dig up the steps for that.

iOS - Swift - Clickable Regions on a UIImage

I am working on a project where I have to plot points in specific regions on an image that represents the human body. In Interface Builder, I have set up a container UIView, which takes up most of the vertical center of the main view. In that container view, I placed a UIImageView and set the graphic in IB. The graphic is much larger than both the UIImageView and the container UIView, more specifically, it’s taller. The ContentMode of the UIImageView is set to AspectFit because I want the image to not show as bigger than the container.
The code creates several CGRect instances which are regions where user taps mean something. When the user taps on the container view, code is used to determine if the point is within one of the regions and if it is, a dot is drawn in the center of that region.
The problem is that when I run the app on certain simulators, the region rectangles are not in the right place on the image. For example, when I run the app on an iPhone X, the rectangle region that is in place for the head looks fine. When I run the app on an iPhone XR, the rectangle region is off to the left of the head.
I am using coordinates to define the region rectangles that are based on where, for example, the human head is in the image. I feel like this is not the right way to do this since AspectFit for the ContentMode of the image is most likely causing the image to be scaled to maintain aspect.
Bottom line is that I want a rectangle to be in the right place and size no matter how the image scales. No sure if how I am doing it makes sense, so hope that some suggestions come in that offer a better way to do this.
Update 1: The UIImageView is pinned to the surrounding UIView, so its width and height are as big as the container. Since the image is skinnier than the UIImageView, the image appears centered in it. In the attached images, the purple background is the UIImageView showing the topmost UIView's background color.
Update 2: I checked the scale for both width and height and found they are different. The width scale factor is 1.36565656565 and the height scale factor is 2.104. I tried the formulas given by Sweeper with both scale factors and had no luck.
You just need to do some maths.
On the original image, identify the region the user can tap. Note down its x, y, w, h, relative to the image.
Figure out how much the image shrank in the image view. Since you said the image is taller than the image view, the image underwent a scale factor of imageViewHeight / imageHeight. We'll now refer to this as scaleFactor.
The region's Y coordinate must also have been scaled by scaleFactor, so you multiply regionY by scaleFactor to get newY.
The region's width and height will do the same thing, so multiply them by scaleFactor and get newWidth and newHeight.
The X coordinate of the region, relative to the image view, is a bit tricky. You need to account for the amount of empty space that the image view has created by scaling down the image. This emptySpace is calculated by (imageViewWidth - newWidth) / 2. Then to calculate the new region's X coordinate relative to the image view, you do emptySpace + X * scaleFactor.
Now the rect (newX, newY, newWidth, newHeight) is the region relative to the image view that the user can tap!
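Putting those steps together, here is a minimal sketch in Swift; it is generalized with min(...) so it also covers the case where the letterboxing is vertical rather than horizontal (the function and parameter names are mine):
import UIKit

// Convert a tappable region from the original image's coordinates to the
// aspect-fit image view's coordinates.
func regionInImageView(_ regionInImage: CGRect, imageSize: CGSize, imageViewSize: CGSize) -> CGRect {
    let scaleFactor = min(imageViewSize.width / imageSize.width,
                          imageViewSize.height / imageSize.height)
    let newWidth = regionInImage.width * scaleFactor
    let newHeight = regionInImage.height * scaleFactor
    // Empty space that aspect-fit leaves on each side of the scaled image.
    let emptyX = (imageViewSize.width - imageSize.width * scaleFactor) / 2
    let emptyY = (imageViewSize.height - imageSize.height * scaleFactor) / 2
    let newX = emptyX + regionInImage.origin.x * scaleFactor
    let newY = emptyY + regionInImage.origin.y * scaleFactor
    return CGRect(x: newX, y: newY, width: newWidth, height: newHeight)
}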
I made this code so a rectangle ends up in exactly the same position on every device.
To check whether it's an X-style phone, I do it in viewDidLoad, set the constraints to what I want, and add the tab bar size.
@IBOutlet weak var tabMenu: NSLayoutConstraint!
@IBOutlet weak var topConstraint: NSLayoutConstraint!
@IBOutlet weak var bottomConstraint: NSLayoutConstraint!
@IBOutlet weak var backgroundImg: UIImageView!
@IBOutlet weak var rectangleImg: UIImageView!

override func viewDidLoad() {
    super.viewDidLoad()
    // Check if it's an iPhone X-style device
    if #available(iOS 11, *) {
        let safeArea = UIApplication.shared.delegate?.window??.safeAreaInsets
        // If it's an X-style phone, use the safe area instead of the superview
        guard let safe = safeArea else { return }
        if safe.bottom > 0 {
            topConstraint.constant = safe.top
            bottomConstraint.constant = safe.bottom + tabMenu.constant
        }
    }
}

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    // Here I set the rectangle to 36% of the width and height (change to what you want)
    rectangleImg.frame = CGRect(x: 0, y: 0,
                                width: backgroundImg.frame.width * 0.36,
                                height: backgroundImg.frame.height * 0.36)
    // Last, I put the rectangle in the center of the background image
    rectangleImg.center.x = backgroundImg.center.x
    rectangleImg.center.y = backgroundImg.center.y
}
Hope this code could help!
The root of my problem is that I had the UIImageView's four sides pinned to its container view. When the width of the device changed, it correctly scaled the image, but it caused the graphic to be "stretched". This caused the width of the head, for example, to increase. I have solved this for now by locking the width of the image so the coordinates I come up with from the original image remain intact no matter what the device.
For those reading this, it may not sound like much of a solution, but I have to go with this since I have a pretty aggressive deadline. I have tested it on multiple device simulators and it works. I may have to revisit this in the future, but for now, it is working.
I also used this question's answer to rework the code: create a clickable body diagram

Adding custom view to ARKit

I just started looking at the ARKitExample from Apple and I am still studying it. I need to build something like an interactive guide. For example, when we detect something (like a QR code), can I show a label in that area?
Is it possible to add a custom view (such as a UIView or UILabel) onto a detected surface?
Edit
I saw an example that adds a line. I still need to find out how to add an additional view or image.
let mat = SCNMatrix4FromMat4(currentFrame.camera.transform)
let dir = SCNVector3(-1 * mat.m31, -1 * mat.m32, -1 * mat.m33)
let currentPosition = pointOfView.position + (dir * 0.1)

if button!.isHighlighted {
    if let previousPoint = previousPoint {
        let line = lineFrom(vector: previousPoint, toVector: currentPosition)
        let lineNode = SCNNode(geometry: line)
        lineNode.geometry?.firstMaterial?.diffuse.contents = lineColor
        sceneView.scene.rootNode.addChildNode(lineNode)
    }
}
I think this code should be able to add a custom image, but I need to find the whole sample.
func updateRenderer(_ frame: ARFrame) {
    drawCameraImage(withPixelBuffer: frame.capturedImage)
    let viewMatrix = simd_inverse(frame.camera.transform)
    let projectionMatrix = frame.camera.projectionMatrix
    updateCamera(viewMatrix, projectionMatrix)
    updateLighting(frame.lightEstimate?.ambientIntensity)
    drawGeometry(forAnchors: frame.anchors)
}
ARKit isn't a rendering engine — it doesn't display any content for you. ARKit provides information about real-world spaces for use by rendering engines such as SceneKit, Unity, and any custom engine you build (with Metal, etc), so that they can display content that appears to inhabit real-world space. Thus, any "how do I show" question for ARKit is actually a question for whichever rendering engine you use with ARKit.
SceneKit is the easy out-of-the-box, no-additional-software-required way to display 3D content with ARKit, so I presume you're asking about that.
SceneKit can't render a UIView as part of a 3D scene. But it can render planes, cubes, or other shapes, and texture-map 2D content onto them. If you want to draw a text label on a plane detected by ARKit, that's the direction to investigate — follow the example's, um, example to create SCNPlane objects corresponding to detected ARPlaneAnchors, get yourself an image of some text, and set that image as the plane geometry's diffuse contents.
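A minimal sketch of that direction, assuming a class that acts as the ARSCNView's delegate with plane detection enabled; the text, font, and node placement are illustrative, not from Apple's sample:
import ARKit
import SceneKit

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

    // Draw some text into an image to use as the plane's texture.
    let text = NSAttributedString(string: "Detected surface",
                                  attributes: [.font: UIFont.boldSystemFont(ofSize: 48),
                                               .foregroundColor: UIColor.white,
                                               .backgroundColor: UIColor.black])
    let textImage = UIGraphicsImageRenderer(size: text.size()).image { _ in
        text.draw(at: .zero)
    }

    // Size an SCNPlane from the anchor's extent and texture it with the image.
    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                         height: CGFloat(planeAnchor.extent.z))
    plane.firstMaterial?.diffuse.contents = textImage

    let planeNode = SCNNode(geometry: plane)
    planeNode.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)
    planeNode.eulerAngles.x = -.pi / 2   // SCNPlane is vertical by default; lay it flat on the surface
    node.addChildNode(planeNode)
}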
Yes, you can add a custom view to an ARKit scene.
Just make an image of your view and add it wherever you want.
You can use the following code to get an image from a UIView:
func image(with view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
    defer { UIGraphicsEndImageContext() }
    if let context = UIGraphicsGetCurrentContext() {
        view.layer.render(in: context)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        return image
    }
    return nil
}
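For example, a hedged usage of that helper with SceneKit in an ARKit session (sceneView, the plane size, and the node position are assumptions):
let label = UILabel(frame: CGRect(x: 0, y: 0, width: 200, height: 50))
label.text = "Hello ARKit"
label.textAlignment = .center
label.backgroundColor = .white

if let labelImage = image(with: label) {
    let plane = SCNPlane(width: 0.2, height: 0.05)       // metres in world space
    plane.firstMaterial?.diffuse.contents = labelImage
    let labelNode = SCNNode(geometry: plane)
    labelNode.position = SCNVector3(0, 0, -0.5)          // half a metre in front of the session origin
    sceneView.scene.rootNode.addChildNode(labelNode)     // `sceneView` is your ARSCNView
}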

Cropping UIImage to custom path and keeping correct resolution?

I have a view (blue background...) which I'll call "main" here; on main I added a UIImageView that I then rotate, pan and scale. On main I also have another subview that shows the cropping area. Anything outside of it, under the darker area, needs to be cropped.
I am trying to figure out how to properly create a cropped image from this state. I want the resulting image to look like this:
I want to make sure to keep the resolution of the image.
Any idea?
I have tried to figure out how to use the layer.mask property of the UIImageView. After some feedback, I think I could add another view (B) on the blue view, add the image view to B, and make sure that B's frame matches the rect of the cropping mask overlay. I think that could work? The only thing is I want to make sure I don't lose resolution.
So, earlier I tried this:
maskShape.frame = imageView.bounds
maskShape.path = UIBezierPath(rect: CGRect(x: 20, y: 20, width: 200, height: 200)).cgPath
imageView.layer.mask = maskShape
The rect was just a test rect and the image would be cropped to that path, but I wasn't sure how to get a UIImage from all this that keeps the large resolution of the original image.
So, I have implemented the method suggested by marco; it all works with the exception of keeping the resolution.
I use this call to take a screenshot of the view that contains the image, and I have it clip to bounds:
public func renderToImage(afterScreenUpdates: Bool = false) -> UIImage {
    let rendererFormat = UIGraphicsImageRendererFormat.default()
    rendererFormat.opaque = isOpaque
    let renderer = UIGraphicsImageRenderer(size: bounds.size, format: rendererFormat)
    let snapshotImage = renderer.image { _ in
        drawHierarchy(in: bounds, afterScreenUpdates: afterScreenUpdates)
    }
    return snapshotImage
}
The image I get is correct, but it is not as sharp as the image being cropped.
How can I keep the resolution high?
In the view that holds the image you must set clipsToBounds to true. I'm not sure I understood fully, but I suppose that is your "cropping area".

Swift how to place pictures on top of pictures

I would like to make an app which enables you to take a photo and then choose from a set of pre-made "pictures" to apply on top of that photo.
For example, you take a photo of someone and then apply a mustache, a chicken, and fake lips to it.
An example of this is the Aokify app.
However, I've searched all corners of the internet and can't find an example that points me in the right direction.
Another, simpler implementation is to use a UIImageView as a parent view, then add a UIImageView as a subview for any image you wish to overlay on top of the original.
let mainImage = UIImage(named:"main-pic")
let overlayImage = UIImage(named:"overlay")
var mainImageView = UIImageView(image:mainImage)
var overlayImageView = UIImageView(image:overlayImage)
self.view.addSubview(mainImageView)
mainImageView.addSubview(overlayImageView)
Edit: Since this has become the accepted answer, I feel it is worth mentioning that there are also different options for positioning the overlayImageView: you can add the overlay to the same parent after the first view has been added, or you can add the overlay as a subview of the main imageView as the example demonstrates.
The difference is the frame of reference when setting the coordinates for your overlay frame: whether you want them to have the same coordinate space, or whether you want the overlay coordinates to be relative to the main image rather than the parent.
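For illustration only (the frame values are arbitrary, not from the answer), the two options might look like this:
// Option 1: overlay added as a subview of mainImageView (as above), so its frame
// is expressed in mainImageView's own coordinate space.
overlayImageView.frame = CGRect(x: 40, y: 60, width: 120, height: 80)

// Option 2: overlay added as a sibling in the same parent, so its frame is expressed
// in the parent's coordinate space instead.
// self.view.addSubview(overlayImageView)
// overlayImageView.frame = CGRect(x: mainImageView.frame.minX + 40,
//                                 y: mainImageView.frame.minY + 60,
//                                 width: 120, height: 80)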
To answer the question properly and fulfill the requirement, you will need to add an option for moving and placing the overlay image at the proper position relative to the original image, but the code for drawing one image over another is the following:
For Swift3
extension UIImage {
    func overlayed(with overlay: UIImage) -> UIImage? {
        defer {
            UIGraphicsEndImageContext()
        }
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        self.draw(in: CGRect(origin: CGPoint.zero, size: size))
        overlay.draw(in: CGRect(origin: CGPoint.zero, size: size))
        if let image = UIGraphicsGetImageFromCurrentImageContext() {
            return image
        }
        return nil
    }
}
Usage-
image.overlayed(with: overlayImage)
Also available here as a gist.
The code was originally written to answer this question.
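If you also need to place the overlay at a specific position, as mentioned above, a hedged variant of the same extension that draws the overlay into a caller-supplied rect (in the base image's coordinate space) could look like this:
extension UIImage {
    func overlayed(with overlay: UIImage, in rect: CGRect) -> UIImage? {
        defer {
            UIGraphicsEndImageContext()
        }
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        draw(in: CGRect(origin: .zero, size: size))
        overlay.draw(in: rect)
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}

// Usage (the rect values are illustrative):
// let result = photo.overlayed(with: mustacheImage, in: CGRect(x: 120, y: 300, width: 200, height: 80))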
Thanks to jesses.co.tt for providing the hint I needed.
The method is called UIGraphicsContext.
And the tutorial I finally found that did it: https://www.youtube.com/watch?v=m1QnT72I6f0 (it's by thenewboston).
