I have a UIImageView inside a UIScrollView which can be zoomed in and out. After the user has zoomed in on the specific content they want, I want to crop that part of the image visible in the scroll view and get it as a UIImage.
For that I am using the following extension:
extension UIScrollView {
    var snapshotVisibleArea: UIImage? {
        UIGraphicsBeginImageContext(bounds.size)
        UIGraphicsGetCurrentContext()?.translateBy(x: -contentOffset.x, y: -contentOffset.y)
        layer.render(in: UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
But when I implement this, the quality of the image gets extremely degraded. Even if I use a 4K image, the final product looks like it is 360p.
This logic is just basic capturing of the screen content.
I know there must be a better way, but I am not able to find a solution.
Any help is highly appreciated.
You can try this:
let context: CGContext = UIGraphicsGetCurrentContext()!
context.interpolationQuality = .high
Also, I'm not sure, but image quality could be improved if you initialize the image context with this code: UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
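Putting those two suggestions together, a minimal sketch of the same extension might look like this (the body below is my own rewrite, not code from the question or the answer):

    extension UIScrollView {
        var snapshotVisibleArea: UIImage? {
            // Passing 0.0 as the scale uses the screen's native scale (2x/3x)
            // instead of the 1.0 that UIGraphicsBeginImageContext defaults to.
            UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0.0)
            defer { UIGraphicsEndImageContext() }
            guard let context = UIGraphicsGetCurrentContext() else { return nil }
            context.interpolationQuality = .high
            // Shift the context so the currently visible region lands at the origin.
            context.translateBy(x: -contentOffset.x, y: -contentOffset.y)
            layer.render(in: context)
            return UIGraphicsGetImageFromCurrentImageContext()
        }
    }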
I have a view (blue background...) which I'll call "main" here. On main I added a UIImageView that I then rotate, pan, and scale. On main I also have another subview that shows the cropping area; anything outside of it, under the darker area, needs to be cropped.
I am trying to figure out how to properly create a cropped image from this state. I want the resulting image to look like this:
I want to make sure to keep the resolution of the image.
Any idea?
I have tried to figure out how to use the layer.mask property of the UIImageView. After some feedback, I think I could add another view (B) on the blue view, add the image view to B, and then make sure B's frame matches the rect of the cropping mask overlay. I think that could work? The only thing is I want to make sure I don't lose resolution.
So, earlier I tried this:
maskShape.frame = imageView.bounds
maskShape.path = UIBezierPath(rect: CGRect(x: 20, y: 20, width: 200, height: 200)).cgPath
imageView.layer.mask = maskShape
The rect was just a test rect, and the image would be cropped to that path, but I wasn't sure how to get a UIImage from all this that keeps the large resolution of the original image.
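As a side note, one way to keep the full resolution is to crop the underlying UIImage itself rather than snapshotting the view hierarchy. A minimal sketch, assuming the crop rect has already been converted into the image's pixel coordinate space and ignoring the rotation applied in the view:

    // Hypothetical helper, not from the original post: crop a UIImage to a rect
    // given in the image's own pixel coordinates, preserving native resolution.
    func crop(_ image: UIImage, toPixelRect pixelRect: CGRect) -> UIImage? {
        guard let croppedCGImage = image.cgImage?.cropping(to: pixelRect) else { return nil }
        return UIImage(cgImage: croppedCGImage, scale: image.scale, orientation: image.imageOrientation)
    }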
So, I have implemented the method suggested by marco; it all works with the exception of keeping the resolution.
I use this call to take a screenshot of the view that contains the image, and I have it clip to bounds:
public func renderToImage(afterScreenUpdates: Bool = false) -> UIImage {
    let rendererFormat = UIGraphicsImageRendererFormat.default()
    rendererFormat.opaque = isOpaque
    let renderer = UIGraphicsImageRenderer(size: bounds.size, format: rendererFormat)
    let snapshotImage = renderer.image { _ in
        drawHierarchy(in: bounds, afterScreenUpdates: afterScreenUpdates)
    }
    return snapshotImage
}
The image I get is correct, but it is not as sharp as the image I am cropping.
How can I keep the resolution high?
In the view which holds the image you must set clipsToBounds to true. I'm not sure I understood correctly, but I suppose that's your "cropping area".
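For what it's worth, if the softness comes from snapshotting at screen scale, one thing that may help (this is my own assumption, not part of the original answer) is to raise the renderer format's scale, for example to the ratio between the source image's pixel width and its on-screen width:

    public func renderToImage(afterScreenUpdates: Bool = false, scale: CGFloat = UIScreen.main.scale) -> UIImage {
        let rendererFormat = UIGraphicsImageRendererFormat.default()
        rendererFormat.opaque = isOpaque
        // Rendering at a higher scale keeps more pixels in the snapshot;
        // the default here is the screen scale, but a larger value can be passed in.
        rendererFormat.scale = scale
        let renderer = UIGraphicsImageRenderer(size: bounds.size, format: rendererFormat)
        return renderer.image { _ in
            drawHierarchy(in: bounds, afterScreenUpdates: afterScreenUpdates)
        }
    }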
I am working on face swapping, and I am resizing and rotating a UIView. I know how to take regular rectangular screenshots of the iPhone/iPad screen (code below). However, I'm looking for a way to get a screenshot that is rotated by an arbitrary number of degrees.
UIGraphicsBeginImageContextWithOptions(userResizeView.bounds.size, false, 0)
let ctx: CGContext = UIGraphicsGetCurrentContext()!
ctx.translateBy(x: -userResizeView.frame.origin.x, y: -userResizeView.frame.origin.y)
imageView.layer.render(in: ctx)
let viewImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
userResizeView is an instance of my custom third-party class (ResizableView).
This is the output which I am getting:
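No answer was posted here, but for reference, a minimal sketch of one common approach, assuming angle holds the view's current rotation in radians (that variable is my own placeholder, not something from the original code):

    // Sketch only: `angle` is assumed to be the current rotation in radians.
    UIGraphicsBeginImageContextWithOptions(userResizeView.bounds.size, false, 0)
    if let ctx = UIGraphicsGetCurrentContext() {
        // Rotate the drawing around the centre of the context before rendering.
        ctx.translateBy(x: userResizeView.bounds.midX, y: userResizeView.bounds.midY)
        ctx.rotate(by: angle)
        ctx.translateBy(x: -userResizeView.bounds.midX, y: -userResizeView.bounds.midY)
        imageView.layer.render(in: ctx)
    }
    let rotatedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()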
I am building a simple motivational app, my pet project. It prints a random motivational message when a button is pressed.
I would like the user to be able to press a button, crop the motivational message itself on the screen, and save it to the camera roll.
I found a tutorial that does what I wanted, but it takes a FULL screenshot AND a PARTIAL screenshot.
I'm trying to modify the code so it takes ONLY a partial screenshot.
Here's the code:
print("SchreenShot")
// Start full screenshot
UIGraphicsBeginImageContext(view.frame.size)
view.layer.renderInContext(UIGraphicsGetCurrentContext()!)
var sourceImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
UIImageWriteToSavedPhotosAlbum(sourceImage,nil,nil,nil)
//partial Screen Shot
print("partial ss")
UIGraphicsBeginImageContext(view.frame.size)
sourceImage.drawAtPoint(CGPointMake(0, -100))
var croppedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
UIImageWriteToSavedPhotosAlbum(croppedImage,nil,nil,nil)
Also, in the PARTIAL screenshot, it takes a snapshot of the "page" from 100 pixels below the top down to the bottom. How can I make it take a snapshot of the contents of the page from, say, 100 pixels below the top of the page to 150 pixels above the bottom of the page?
Many, many, many thanks!
Your sample code draws the view into a graphics context (the snapshot), crops it, and saves it. I am altering it a little and adding some extra comments, since it looks like you are new to this API.
// Declare the snapshot boundaries
let top: CGFloat = 100
let bottom: CGFloat = 150
// The size of the cropped image
let size = CGSize(width: view.frame.size.width, height: view.frame.size.height - top - bottom)
// Start the context
UIGraphicsBeginImageContext(size)
// we are going to use context in a couple of places
let context = UIGraphicsGetCurrentContext()!
// Transform the context so that anything drawn into it is displaced "top" pixels up
// Something drawn at coordinate (0, 0) will now be drawn at (0, -top)
// This will result in the "top" pixels being cut off
// The bottom pixels are cut off because of the size of the context
context.translateBy(x: 0, y: -top)
// Draw the view into the context (this is the snapshot)
view.layer.render(in: context)
let snapshot = UIGraphicsGetImageFromCurrentImageContext()!
// End the context (this is required to not leak resources)
UIGraphicsEndImageContext()
// Save to photos
UIImageWriteToSavedPhotosAlbum(snapshot, nil, nil, nil)
I would like to make an app which lets you take a photo and then choose from a set of pre-made "pictures", if you will, to apply on top of that photo.
For example, you take a photo of someone and then apply a mustache, a chicken, and fake lips to it.
An example is the Aokify app.
However, I have searched all corners of the internet and can't find an example that points me in the right direction.
Another, simpler implementation may be to use a UIImageView as the parent view, then add a UIImageView as a subview for any images you wish to overlay on top of the original.
let mainImage = UIImage(named: "main-pic")
let overlayImage = UIImage(named: "overlay")
let mainImageView = UIImageView(image: mainImage)
let overlayImageView = UIImageView(image: overlayImage)
self.view.addSubview(mainImageView)
mainImageView.addSubview(overlayImageView)
Edit: Since this has become the accepted answer, I feel it is worth mentioning that there are also different options for positioning the overlayImageView: you can add the overlay to the same parent after the first view has been added, or you can add the overlay as a subview of the main imageView as the example demonstrates.
The difference is the frame of reference when setting the coordinates for your overlay frame: whether you want them to have the same coordinate space, or whether you want the overlay coordinates to be relative to the main image rather than the parent.
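As a small illustration of the two placements (the frames here are arbitrary, just to show the different coordinate spaces):

    // Option 1: overlay as a sibling, positioned in the parent view's coordinate space.
    self.view.addSubview(mainImageView)
    self.view.addSubview(overlayImageView)
    overlayImageView.frame = CGRect(x: 120, y: 80, width: 100, height: 60)

    // Option 2: overlay as a subview of the main image view,
    // positioned relative to the main image view's own coordinate space.
    mainImageView.addSubview(overlayImageView)
    overlayImageView.frame = CGRect(x: 20, y: 10, width: 100, height: 60)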
To answer the question properly and fulfill the requirement, you will need to add an option for moving and placing the overlay image at the proper position relative to the original image, but the code for drawing one image over another is the following.
For Swift 3:
extension UIImage {
    func overlayed(with overlay: UIImage) -> UIImage? {
        defer {
            UIGraphicsEndImageContext()
        }
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        self.draw(in: CGRect(origin: CGPoint.zero, size: size))
        overlay.draw(in: CGRect(origin: CGPoint.zero, size: size))
        if let image = UIGraphicsGetImageFromCurrentImageContext() {
            return image
        }
        return nil
    }
}
Usage:
image.overlayed(with: overlayImage)
Also available here as a gist.
The code was originally written to answer this question.
Thanks to jesses.co.tt for providing the hint I needed.
The approach is to use a UIGraphics image context (UIGraphicsBeginImageContext and friends).
And the tutorial I finally found that did it: https://www.youtube.com/watch?v=m1QnT72I6f0 (it's by thenewboston).
Okay, sorry if the title is a little confusing. Basically, I am trying to get the image and subviews of the image view and combine them into a single exportable UIImage.
Here is my current code; however, it has a large resolution loss.
func generateImage() -> UIImage {
    UIGraphicsBeginImageContext(environmentImageView.frame.size)
    let context: CGContext = UIGraphicsGetCurrentContext()!
    environmentImageView.layer.render(in: context)
    let img: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return img
}
You have to set the scale of the context to be retina.
UIGraphicsBeginImageContextWithOptions(environmentImageView.frame.size, false, 0)
Passing 0 means to use the scale of the screen, which will work for non-Retina devices as well.
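Applied to the snippet above, a sketch of the corrected function (keeping the original structure, but returning an optional instead of force-unwrapping) could look like this:

    func generateImage() -> UIImage? {
        // Passing 0 as the scale picks up the device's screen scale automatically.
        UIGraphicsBeginImageContextWithOptions(environmentImageView.frame.size, false, 0)
        defer { UIGraphicsEndImageContext() }
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        environmentImageView.layer.render(in: context)
        return UIGraphicsGetImageFromCurrentImageContext()
    }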