I'm learning how to use UIImagePickerController and got stuck with a problem using UIImagePickerController.sourceType = .camera.
What my app is supposed to do is:
to allow a user to take a photo using the system view controller mentioned above
then convert this image using UIImage.pngData()
save this data to an appropriate struct field (it doesn't matter which one, in this case)
use the data saved to the struct to create an image and set it as a UIButton foreground image
When I do so (following the example project from the App Development with Swift book), the image appears rotated 90 degrees, and I'd like to know why.
I've tried creating an additional UIImageView and setting its image property before converting the UIImage to pngData, and the image appeared normally (see the 2nd screenshot).
When choosing a photo from the photo library, the problem does not occur in either case (before or after converting).
So I suppose pngData is somehow losing the photo-orientation information? Or perhaps I've messed up somewhere else.
Here are screenshots from my app, so you can see how the original photo looks and how it looks in-app (above the labels - UIButton, below - a test UIImageView). Never mind the text :)
If you save the UIImage as a JPEG, the rotation flag will be set.
PNGs do not support a rotation flag, so if you save a UIImage as a PNG, the pixels will be stored incorrectly rotated with no flag to fix them. So if you want PNGs, you must rotate them yourself.
let jpgData = downloadedImage.jpegData(compressionQuality: 1)
To get a correctly rotated image you need to redraw it. You can use the following extension:
extension UIImage {
    func rotateImage() -> UIImage? {
        // Already upright: nothing to redraw.
        if self.imageOrientation == UIImage.Orientation.up {
            return self
        }
        // Drawing into a fresh context bakes the orientation into the pixels.
        // Passing the image's scale keeps retina images at full resolution.
        UIGraphicsBeginImageContextWithOptions(self.size, false, self.scale)
        self.draw(in: CGRect(origin: CGPoint.zero, size: self.size))
        let copy = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return copy
    }
}
How to use
let rotatedImage = downloadedImage.rotateImage()
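To then get an upright PNG for the original question, rotate first and encode afterwards; a minimal sketch with the same downloadedImage:
// PNG has no orientation flag, so normalize the pixels before encoding.
let pngData = downloadedImage.rotateImage()?.pngData()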
I'm trying to make a simple image eraser tool, where the user can erase and restore by drawing on an image, just like in this image:
After many attempts and testing, I have achieved sufficient "erase" functionality with the following code on the UI side:
// Drawing code - on user touch
// `currentPath` is a `UIBezierPath` property of the containing class.
guard pickedImage != nil else { return }
UIGraphicsBeginImageContextWithOptions(imageView.frame.size, false, 0)
if let context = UIGraphicsGetCurrentContext() {
    // Render the current image, then stroke the user's path with the
    // .clear blend mode, which punches transparent pixels along the path.
    mainImageView.layer.render(in: context)
    context.addPath(currentPath.cgPath)
    context.setBlendMode(.clear)
    context.setLineWidth(translatedBrushWidth)
    context.setLineCap(.round)
    context.setLineJoin(.round)
    context.setStrokeColor(UIColor.clear.cgColor)
    context.strokePath()
    let capturedImage = UIGraphicsGetImageFromCurrentImageContext()
    imageView.image = capturedImage
}
UIGraphicsEndImageContext()
Upon user touch-up, I apply a scale transform to currentPath so that the image with the cutout part is rendered at full size, preserving UI performance.
What I'm trying to figure out now is how to approach the "restore" functionality. Essentially, the user should draw on the erased parts to reveal the original image.
I've tried looking at CGContextClipToMask but I'm not sure how to approach the implementation.
I've also looked at other approaches to achieving this "erase/restore" effect before rendering the actual images, such as masking a CAShapeLayer over the image, but restoring becomes a problem in that approach as well.
Any help will be greatly appreciated, as well as alternative approaches to erase and restore with a path on the UI-level and rendering level.
Thank you!
Yes, I would recommend adding a CALayer to your image view's layer as a mask.
You can either make the mask layer a CAShapeLayer and draw geometric shapes into it, or use a simple CALayer as a mask, where the contents property of the mask layer is a CGImage. You'd then draw opaque pixels into the mask to reveal the image contents, or transparent pixels to "erase" the corresponding image pixels.
This approach is hardware accelerated and quite fast.
Handling undo/redo of eraser functions would require you to collect changes to your mask layer as well as the previous state of the mask.
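For the CGImage-backed variant, a minimal sketch of installing such a mask (the helper and its names are illustrative, not from the demo project):
import UIKit

// Installs a CGImage-backed mask on a view. Opaque mask pixels reveal
// the view's contents; transparent mask pixels "erase" them.
func installMask(_ maskImage: CGImage, on view: UIView) {
    let maskLayer = CALayer()
    maskLayer.frame = view.bounds
    maskLayer.contents = maskImage
    view.layer.mask = maskLayer
}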
Edit:
I created a small demo app on GitHub that shows how to use a CGImage as a mask on an image view.
Here is the ReadMe file from that project:
MaskableImageView
This project demonstrates how to use a CALayer to mask a UIView.
It defines a custom subclass of UIImageView, MaskableView.
The MaskableView class has a property maskLayer that contains a CALayer.
MaskableView defines a didSet observer on its bounds property so that when the view's bounds change, it resizes the mask layer to match the size of the image view.
The MaskableView has a method installSampleMask which builds an image the same size as the image view, mostly filled with opaque black, but with a small rectangle in the center filled with black at an alpha of 0.7. The translucent center rectangle causes the image view to become partly transparent and show the view underneath.
The demo app installs a couple of subviews into the MaskableView: a sample image of Scampers, one of my dogs, and a UILabel. It also installs an image of a checkerboard under the MaskableView so that you can see the translucent parts more easily.
The MaskableView has properties circleRadius, maskDrawingAlpha, and drawingAction that it uses to let the user erase/un-erase the image by tapping on the view to update the mask.
The MaskableView attaches a UIPanGestureRecognizer and a UITapGestureRecognizer to itself, with an action of gestureRecognizerUpdate. The gestureRecognizerUpdate method takes the tap/drag location from the gesture recognizer and uses it to draw a circle onto the image mask that either decreases the mask's alpha (to partly erase pixels) or increases it (to make those pixels more opaque).
The MaskableView's mask drawing is crude and only meant for demonstration purposes. It draws a series of discrete circles instead of rendering a path into the mask based on the user's drag gesture. A better solution would be to connect the points from the gesture recognizer and use them to render a smoothed curve into the mask.
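As a rough sketch of that per-circle drawing step (the function and parameter names here are illustrative, not the project's actual API):
import UIKit

// Returns a copy of `mask` with one circle drawn at `point`.
// Lower alpha partly erases; alpha 1.0 makes those pixels fully opaque again.
func drawingCircle(into mask: UIImage, at point: CGPoint,
                   radius: CGFloat, alpha: CGFloat) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: mask.size)
    return renderer.image { context in
        mask.draw(at: .zero)
        // .copy replaces destination pixels (including alpha) outright.
        context.cgContext.setBlendMode(.copy)
        context.cgContext.setFillColor(UIColor.black.withAlphaComponent(alpha).cgColor)
        context.cgContext.fillEllipse(in: CGRect(x: point.x - radius,
                                                 y: point.y - radius,
                                                 width: radius * 2,
                                                 height: radius * 2))
    }
}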
The app's screen looks like this:
Edit #2:
If you want to export the resulting image to a file that preserves the transparency, you can convert the CGImage to a UIImage (using the init(cgImage:) initializer) and then use the UIImage function
func pngData() -> Data?
to convert the image to PNG data. That function returns nil if it is unable to convert the image to PNG data.
If it succeeds, you can then save the data to a file with a .png extension.
I updated the sample project to include the ability to save the resulting image to disk.
First I added an image computed property to the MaskableView. That looks like this:
public var image: UIImage? {
    guard let renderer = renderer else { return nil }
    // Render the view's layer (image plus mask) into a new UIImage.
    let result = renderer.image { context in
        layer.render(in: context.cgContext)
    }
    return result
}
Then I added a save button to the view controller that fetches the image from the MaskableView and saves it to the app's Documents directory:
@IBAction func handleSaveButton(_ sender: UIButton) {
    print("In handleSaveButton")
    if let image = maskableView.image,
       let pngData = image.pngData() {
        print(image.description)
        // getDocumentsDirectory() is a small helper that returns the
        // app's Documents directory URL.
        let imageURL = getDocumentsDirectory().appendingPathComponent("image.png", isDirectory: false)
        do {
            try pngData.write(to: imageURL)
            print("Wrote png to \(imageURL.path)")
        }
        catch {
            print("Error writing file to \(imageURL.path)")
        }
    }
}
You could also save the image to the user's camera roll. It's been a while since I've done that so I'd have to dig up the steps for that.
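For reference, the camera roll route is a one-liner with UIImageWriteToSavedPhotosAlbum (your Info.plist needs an NSPhotoLibraryAddUsageDescription entry; the nil arguments skip the completion callback):
if let image = maskableView.image {
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}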
I am using the code below in Swift to capture a UIImageView into an image. It works, but the image is not the same quality as the one shown in the UIImageView. Is there a way to configure the quality when capturing this image?
private func getScreenshot(imageView: UIImageView) -> UIImage {
    UIGraphicsBeginImageContext(self.imageView.frame.size)
    let context = UIGraphicsGetCurrentContext()
    imageView.layer.render(in: context!)
    let screenShot = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    UIImageWriteToSavedPhotosAlbum(screenShot, nil, nil, nil)
    return screenShot
}
After some searching I figured out the issue. I used the code below to replace UIGraphicsBeginImageContext, and it works:
UIGraphicsBeginImageContextWithOptions(self.imageView.frame.size, true, 0)
This code looks pretty weird (why don’t you just use imageView.image?) but I don’t know the full context of your use case.
As you found, the reason for the loss of quality is you are ignoring the screen’s retina scale.
Read the documentation for UIGraphicsBeginImageContext and UIGraphicsBeginImageContextWithOptions and you’ll see the former uses a ‘scale factor of 1.0’.
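For what it's worth, the newer UIGraphicsImageRenderer API (iOS 10+) uses the screen's scale by default, so the equivalent capture avoids this pitfall entirely; roughly:
let renderer = UIGraphicsImageRenderer(size: imageView.frame.size)
let screenshot = renderer.image { context in
    // Rendered at the screen's native scale by default.
    imageView.layer.render(in: context.cgContext)
}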
I would like to make an app which lets you take a photo and then choose from a set of pre-made "pictures", as it were, to apply on top of that photo.
For example, you take a photo of someone and then apply a mustache, a chicken, and fake lips to it.
An app example is the Aokify app.
However, I've searched all corners of the internet but can't find an example that points me in the right direction.
Another, simpler implementation may be to use a UIImageView as a parent view, then add a UIImageView as a subview for any images you wish to overlay on top of the original.
let mainImage = UIImage(named: "main-pic")
let overlayImage = UIImage(named: "overlay")
var mainImageView = UIImageView(image: mainImage)
var overlayImageView = UIImageView(image: overlayImage)
self.view.addSubview(mainImageView)
mainImageView.addSubview(overlayImageView)
Edit: Since this has become the accepted answer, I feel it is worth mentioning that there are also different options for positioning the overlayImageView: you can add the overlay to the same parent after the first view has been added, or you can add the overlay as a subview of the main imageView as the example demonstrates.
The difference is the frame of reference when setting the coordinates for your overlay frame: whether you want them to have the same coordinate space, or whether you want the overlay coordinates to be relative to the main image rather than the parent.
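To illustrate the difference: with the overlay added as a subview of the main image view, its frame is interpreted in the main image view's coordinate space (the numbers here are arbitrary):
// The overlay's origin is measured from the main image view's top-left
// corner, not from the screen or the view controller's root view.
mainImageView.frame = CGRect(x: 0, y: 0, width: 300, height: 300)
overlayImageView.frame = CGRect(x: 100, y: 60, width: 100, height: 50)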
To answer the question properly and fulfill the requirement, you will need to add an option for moving and placing the overlay image at the proper position relative to the original image, but the code for drawing one image over another is the following.
For Swift 3:
extension UIImage {
    func overlayed(with overlay: UIImage) -> UIImage? {
        // End the context when returning, whichever branch we take.
        defer {
            UIGraphicsEndImageContext()
        }
        // Match the base image's size and scale so quality is preserved.
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        self.draw(in: CGRect(origin: CGPoint.zero, size: size))
        // Draw the overlay on top, stretched to cover the whole image.
        overlay.draw(in: CGRect(origin: CGPoint.zero, size: size))
        if let image = UIGraphicsGetImageFromCurrentImageContext() {
            return image
        }
        return nil
    }
}
Usage:
image.overlayed(with: overlayImage)
Also available here as a gist.
The code was originally written to answer this question.
Thanks to jesses.co.tt for providing the hint I needed.
The method is called UIGraphicsContext.
And the tutorial I finally found that did it: https://www.youtube.com/watch?v=m1QnT72I6f0 (it's by thenewboston).
I found some source code for taking a view screenshot. I changed it a little and tried it, but the code has a problem: the screenshot resolution is really bad, and I need a high-resolution screenshot. I tried to add a comment, but I'm new on Stack Overflow. Anyway, what can I do about this?
Link: Screenshot in swift iOS?
My code:
func textViewSS() {
    // Create the UIImage
    UIGraphicsBeginImageContext(textView.frame.size)
    textView.layer.render(in: UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    // Save it to the camera roll
    UIImageWriteToSavedPhotosAlbum(image!, nil, nil, nil)
}
Sample result:
http://i60.tinypic.com/s4wdn4.png
Try modifying the first line in your code to pass the scale if you are not satisfied with the resolution:
UIGraphicsBeginImageContextWithOptions(textView.frame.size, false, UIScreen.main.scale)
I don't know the requirements in your case, but drawHierarchy(in:afterScreenUpdates:) is quicker/cheaper than render(in:). You may want to consider it if it is applicable.
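A minimal sketch of that alternative for the same textView (which must be on screen), using the modern renderer API:
let renderer = UIGraphicsImageRenderer(bounds: textView.bounds)
let image = renderer.image { _ in
    // Snapshots what is actually displayed, at screen scale.
    _ = textView.drawHierarchy(in: textView.bounds, afterScreenUpdates: true)
}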
How does the "move and scale screen" determine dimensions for its cropbox?
Basically I would like to set a fixed width and height for the "CropRect" and let the user move and scale his image to fit into that box as desired.
Does anyone know how to do this? (Or if it is even possible with the UIImagePickerController)
Thanks!
Not possible with UIImagePickerController unfortunately. The solution I recommend is to disable editing for the image picker and handle it yourself. For instance, I put the image in a scrollable, zoomable image view. On top of the image view is a fixed position "crop guide view" that draws the crop indicator the user sees. Assuming the guide view has properties for the visible rect (the part to keep) and edge widths (the part to discard) you can get the cropping rectangle like so. You can use the UIImage+Resize category to do the actual cropping.
CGRect cropGuide = self.cropGuideView.visibleRect;
UIEdgeInsets edges = self.cropGuideView.edgeWidths;
CGPoint cropGuideOffset = self.cropScrollView.contentOffset;
CGPoint origin = CGPointMake( cropGuideOffset.x + edges.left, cropGuideOffset.y + edges.top );
CGSize size = cropGuide.size;
CGRect crop = { origin, size };
crop.origin.x = crop.origin.x / self.cropScrollView.zoomScale;
crop.origin.y = crop.origin.y / self.cropScrollView.zoomScale;
crop.size.width = crop.size.width / self.cropScrollView.zoomScale;
crop.size.height = crop.size.height / self.cropScrollView.zoomScale;
photo = [photo croppedImage:crop];
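Since the rest of this thread is in Swift, here is a rough Swift equivalent of that final cropping step, using CGImage.cropping(to:) in place of the UIImage+Resize category (a sketch, not the category's actual implementation):
// `crop` is in points; CGImage works in pixels, so scale the rect first.
func cropping(_ photo: UIImage, to crop: CGRect) -> UIImage? {
    let pixelRect = CGRect(x: crop.origin.x * photo.scale,
                           y: crop.origin.y * photo.scale,
                           width: crop.size.width * photo.scale,
                           height: crop.size.height * photo.scale)
    guard let cgImage = photo.cgImage?.cropping(to: pixelRect) else { return nil }
    return UIImage(cgImage: cgImage, scale: photo.scale, orientation: photo.imageOrientation)
}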
Kinda late to the game but I think this may be what you are looking for: https://github.com/gekitz/GKImagePicker
Here is a solution for manual cropping by Ming Yang.
https://github.com/myang-git/iOS-Image-Crop-View
It offers a rectangular frame, which the user can slide or drag to fit the required portion of the image into the rectangle. Please note that this solution does the reverse of what the question asked: it lets the rectangle size vary, but it eventually brings the desired result.
It is coded in Objective-C. You may have to either code it in Swift or simply build a bridging header to connect the Objective-C code with Swift code.
It's now later than late, but this may be useful for someone. This is the library I've used for Swift (many thanks to Tim Oliver):
TOCropViewController
As described in the README file at the GitHub link above, this library lets you crop images to a user-defined rectangle and also in a circular mode, e.g. for updating a profile image.
Below is sample code from GitHub:
func presentCropViewController() {
    let image: UIImage = ... // Load an image
    let cropViewController = CropViewController(image: image)
    cropViewController.delegate = self
    present(cropViewController, animated: true, completion: nil)
}

func cropViewController(_ cropViewController: CropViewController, didCropToImage image: UIImage, withRect cropRect: CGRect, angle: Int) {
    // 'image' is the newly cropped version of the original image
}