Email screenshot without saving to camera roll - iOS

I have the following code which allows a UIButton to capture a partial screenshot and save it to the camera roll - Thanks Lou Franco :)
// Declare the snapshot boundaries
let top: CGFloat = 100
let bottom: CGFloat = 60
// The size of the cropped image
let size = CGSize(width: view.frame.size.width, height: view.frame.size.height - top - bottom)
// Start the context
UIGraphicsBeginImageContext(size)
// we are going to use context in a couple of places
let context = UIGraphicsGetCurrentContext()!
// Transform the context so that anything drawn into it is displaced "top" pixels up
// Something drawn at coordinate (0, 0) will now be drawn at (0, -top)
// This will result in the "top" pixels being cut off
// The bottom pixels are cut off because of the size of the context
CGContextTranslateCTM(context, 0, -top)
// Draw the view into the context (this is the snapshot)
view.layer.renderInContext(context)
let snapshot = UIGraphicsGetImageFromCurrentImageContext()
// End the context (this is required to not leak resources)
UIGraphicsEndImageContext()
// Save to photos
UIImageWriteToSavedPhotosAlbum(snapshot, nil, nil, nil)
How would I amend the above code so that:
Another UIButton could capture the same partial screenshot and EMAIL it as an attachment; and
Another UIButton could capture the same partial screenshot and SMS it as an attachment?
I realise that my questions may make me look lazy, not so, I'm just very green at the moment :)
I've tried scouring the Web and modifying various snippets of code, but I really am stumped!
Many thanks in advance. Your time and effort is greatly appreciated!

The below code will set you up for emailing the image.
// Requires `import MessageUI` and conforming to MFMailComposeViewControllerDelegate
guard MFMailComposeViewController.canSendMail() else { return } // no mail account configured
let composer = MFMailComposeViewController()
composer.mailComposeDelegate = self // dismiss the composer in the delegate callback
composer.setToRecipients(["someemail@email.com"])
composer.setMessageBody("Body", isHTML: false)
composer.setSubject("Subject")
// UIImagePNGRepresentation returns an optional, so unwrap it before attaching
if let imageData = UIImagePNGRepresentation(snapshot) {
    composer.addAttachmentData(imageData, mimeType: "image/png", fileName: "myImage.png")
}
presentViewController(composer, animated: true, completion: nil)
To send it as an SMS, it's pretty much the same process, except this time you use MFMessageComposeViewController rather than MFMailComposeViewController.
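Below is a minimal sketch of that SMS variant (hedged: the recipient number, body text, and file name are placeholders, and the attachment part only works where MFMessageComposeViewController.canSendAttachments() returns true):

// Also requires `import MessageUI` and conforming to MFMessageComposeViewControllerDelegate
guard MFMessageComposeViewController.canSendText() &&
      MFMessageComposeViewController.canSendAttachments() else { return }
let composer = MFMessageComposeViewController()
composer.messageComposeDelegate = self // dismiss the composer in the delegate callback
composer.recipients = ["5551234567"]   // placeholder number
composer.body = "Body"
// MFMessageComposeViewController takes a UTI (not a MIME type) for attachments
if let imageData = UIImagePNGRepresentation(snapshot) {
    composer.addAttachmentData(imageData, typeIdentifier: "public.png", filename: "myImage.png")
}
presentViewController(composer, animated: true, completion: nil)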

Related

Get visible portion of image from UIImageView in UIScrollView

I have a UIImageView in a UIScrollView which can be zoomed in and out. Now, after the user has selected the specific content to zoom in on, I want to crop that part of the image present on the scroll view and get it in the form of a UIImage.
For that I am using:
extension UIScrollView {
    var snapshotVisibleArea: UIImage? {
        UIGraphicsBeginImageContext(bounds.size)
        UIGraphicsGetCurrentContext()?.translateBy(x: -contentOffset.x, y: -contentOffset.y)
        layer.render(in: UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
But when I implement this, the quality of the image gets extremely degraded. Even if I use a 4K image, the final product looks like 360p.
This logic is just basic capturing of the screen content.
I know there can be a better way but I am not able to find a solution.
Any help is highly appreciated.
You can try this:
let context:CGContext = UIGraphicsGetCurrentContext()!
context.interpolationQuality = .high
Also, I'm not sure, but image quality could be improved if you initialize the image context with this code: UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0) (passing 0.0 as the scale uses the device's screen scale rather than 1x).
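Putting those two suggestions together, a minimal sketch of the same snapshotVisibleArea extension, assuming the only changes needed are rendering at the device's screen scale and raising the interpolation quality:

extension UIScrollView {
    var snapshotVisibleArea: UIImage? {
        // A scale of 0.0 renders at the device's screen scale (2x/3x) instead of 1x
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0.0)
        defer { UIGraphicsEndImageContext() }
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        context.interpolationQuality = .high
        // Shift the context so the currently visible region lands at the origin
        context.translateBy(x: -contentOffset.x, y: -contentOffset.y)
        layer.render(in: context)
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}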

Cropping UIImage to custom path and keeping correct resolution?

I have a view (blue background...) which I'll call "main" here. On main I added a UIImageView that I then rotate, pan and scale. On main I also have another subview that shows the cropping area; anything outside of it, under the darker area, needs to be cropped.
I am trying to figure out how to properly create a cropped image from this state. I want the resulting image to look like this:
I want to make sure to keep the resolution of the image.
Any idea?
I have tried to figure out how to use the layer.mask property of the UIImageView. After some feedback, I think I could have another view (B) on the blue view, add the image view to B, and then make sure B's frame matches the rect of the cropping mask overlay. I think that could work? The only thing is I want to make sure I don't lose resolution.
So, earlier I tried this:
maskShape.frame = imageView.bounds
maskShape.path = UIBezierPath(rect: CGRect(x: 20, y: 20, width: 200, height: 200)).cgPath
imageView.layer.mask = maskShape
The rect was just a test rect, and the image was indeed cropped to that path, but I wasn't sure how to get a UIImage out of all this that keeps the full resolution of the original image.
So, I have implemented the method suggested by marco; it all works except for keeping the resolution.
I use this call to take a screenshot of the view that contains the image, and I have it clip to bounds:
public func renderToImage(afterScreenUpdates: Bool = false) -> UIImage {
    let rendererFormat = UIGraphicsImageRendererFormat.default()
    rendererFormat.opaque = isOpaque
    let renderer = UIGraphicsImageRenderer(size: bounds.size, format: rendererFormat)
    let snapshotImage = renderer.image { _ in
        drawHierarchy(in: bounds, afterScreenUpdates: afterScreenUpdates)
    }
    return snapshotImage
}
The image I get is correct, but it is not as sharp as the one I crop.
How can I keep the resolution high?
In the view that holds the image you must set clipsToBounds to true. I'm not sure I understood correctly, but I suppose that's your "cropping area".
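On the resolution point specifically, a hedged note: the renderToImage helper above uses UIGraphicsImageRendererFormat.default(), which renders at the screen's scale and at the view's on-screen point size, so detail beyond that is discarded. One possible tweak, sketched here with a hypothetical scale parameter, is to raise the format's scale:

public func renderToImage(scale: CGFloat, afterScreenUpdates: Bool = false) -> UIImage {
    let rendererFormat = UIGraphicsImageRendererFormat.default()
    rendererFormat.opaque = isOpaque
    rendererFormat.scale = scale // e.g. a multiple of UIScreen.main.scale for a denser bitmap
    let renderer = UIGraphicsImageRenderer(size: bounds.size, format: rendererFormat)
    return renderer.image { _ in
        drawHierarchy(in: bounds, afterScreenUpdates: afterScreenUpdates)
    }
}

Caveat: drawHierarchy(in:afterScreenUpdates:) captures what is already rasterized on screen, so a larger scale mostly upscales that capture; for a genuinely full-resolution result, cropping the original image's cgImage directly is likely the safer path.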

Cropping UI images

I have a SceneKit view that fills my screen. My goal is to let the user take snapshots of that scene, but the snapshots are not the whole screen, but an inset portion in a UIImageView which is slightly smaller than the screen. Ideally, the user should not notice, the image on top should be identical to the scene behind it.
I have coded this up using snapshot and cropped, but as you can see in the image, the scale ends up way off - see the width of the yellow line, and the size of the windows? It's also not positioned correctly, it's somewhat down and to the left from where it should be - the upper left should be below the line of windows, but you can see it is at the roofline above them. I can't see the original snapshot because the debugger QuickLook refuses to show it.
There's not much code to it; can anyone see the problem?
let background = sceneView.snapshot().cgImage!
let cropped = background.cropping(to: overlayView.frame)
UIGraphicsBeginImageContextWithOptions(overlayView.frame.size, false, 1.0)
let context = UIGraphicsGetCurrentContext()
context!.setAlpha(0.50)
context!.draw(cropped!, in: overlayView.bounds)
let transparent = context!.makeImage();
UIGraphicsEndImageContext()
overlayView.image = UIImage.init(cgImage: transparent!, scale: 1.0, orientation: .downMirrored)
I have tried various scales and rects to no avail. I assume this is something very easy.
UPDATE: after several tries I was able to get QuickLook to work. The snapshot is indeed the entire background, as I would expect. But it is much larger than I would expect too: it's 640×998, while the cropped version is 228×304. That explains the "zooming". This leads me to believe that the frame size of the inset view does NOT map directly to the image size. Does that ring any bells? Is there some other rect I should be using rather than overlayView.frame?
So I assume the problem is that the frame coordinates are in one set of units and the image coordinates are in another. I was able to solve the problem this way:
let croprect = CGRect(x: overlayView.frame.origin.x * 2, y: overlayView.frame.origin.y * 2 - 45, width: overlayView.frame.width * 2, height: overlayView.frame.height * 2)
let drawrect = CGRect(x: 0, y: 0, width: overlayView.frame.width * 2, height: overlayView.frame.height * 2)
let background = sceneView.snapshot()
let cropped = background.cgImage!.cropping(to: croprect)
UIGraphicsBeginImageContextWithOptions(drawrect.size, false, 0.0)
let context = UIGraphicsGetCurrentContext()
context!.setAlpha(0.50)
context!.draw(cropped!, in: drawrect)
let transparent = context!.makeImage();
UIGraphicsEndImageContext()
I'm extremely curious why I had to adjust the Y starting point to get them to line up; does anyone have an idea?
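A hedged note on the hard-coded factor of 2: sceneView.snapshot() returns a UIImage at the device's screen scale, while overlayView.frame is in points, so the crop rect has to be converted from points to pixels. A sketch that derives the factor from the snapshot itself instead of hard-coding it (the 45-point Y adjustment is left out because its origin isn't clear from the question; it may relate to a status or navigation bar offset):

let background = sceneView.snapshot()
// Convert the overlay's frame from points to pixels using the snapshot's own scale
let scale = background.scale
let cropRect = CGRect(x: overlayView.frame.origin.x * scale,
                      y: overlayView.frame.origin.y * scale,
                      width: overlayView.frame.width * scale,
                      height: overlayView.frame.height * scale)
let cropped = background.cgImage?.cropping(to: cropRect)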

How to render a complex UIView into a PDF Context with high resolution?

There are several questions on SO asking how to render a UIView into a PDF context, but they all use view.layer.renderInContext(pdfContext), which results in a 72 DPI image (and one that looks terrible when printed). What I'm looking for is a technique to somehow get the UIView to render at something like 300 DPI.
In the end, I was able to take hints from several prior posts and put together a solution. I'm posting this since it took me a long time to get working, and I really hope to save someone else time and effort doing the same.
This solution uses two basic techniques:
Render the UIView into a scaled bitmap context to produce a large image
Draw the image into a PDF Context which has been scaled down, so that the drawn image has a high resolution
Build your view:
let v = UIView()
... // then add subviews, constraints, etc
Create the PDF Context:
UIGraphicsBeginPDFContextToData(data, docRect, stats.headerDict) // zero == (612 by 792 points)
defer { UIGraphicsEndPDFContext() }
UIGraphicsBeginPDFPage();
guard let pdfContext = UIGraphicsGetCurrentContext() else { return nil }
// I tried 300.0/72.0 but was not happy with the results
let rescale: CGFloat = 4 // 288 DPI rendering of VIew
// You need to change the scale factor on all subviews, not just the top view!
// This is a vital step, and there may be other types of views that need to be excluded
Then create a large bitmap of the image with an expanded scale:
func scaler(v: UIView) {
    if !v.isKindOfClass(UIStackView.self) {
        v.contentScaleFactor = 8
    }
    for sv in v.subviews {
        scaler(sv)
    }
}
scaler(v)
// Create a large Image by rendering the scaled view
let bigSize = CGSize(width: v.frame.size.width*rescale, height: v.frame.size.height*rescale)
UIGraphicsBeginImageContextWithOptions(bigSize, true, 1)
let context = UIGraphicsGetCurrentContext()!
CGContextSetFillColorWithColor(context, UIColor.whiteColor().CGColor)
CGContextFillRect(context, CGRect(origin: CGPoint(x: 0, y: 0), size: bigSize))
// Must increase the transform scale
CGContextScaleCTM(context, rescale, rescale)
v.layer.renderInContext(context)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Now we have a large image with each point representing one pixel.
To get it drawn into the PDF at high resolution, we need to scale the PDF down while drawing the image at its large size:
CGContextSaveGState(pdfContext)
CGContextTranslateCTM(pdfContext, v.frame.origin.x, v.frame.origin.y) // where the view should be shown
CGContextScaleCTM(pdfContext, 1/rescale, 1/rescale)
let frame = CGRect(origin: CGPoint(x: 0, y: 0), size: bigSize)
image.drawInRect(frame)
CGContextRestoreGState(pdfContext)
... // Continue with adding other items
You can see that the left "S" contained in the cream-colored bitmap looks pretty nice compared to an "S" drawn by an attributed string:
When the same PDF is viewed with a simple rendering, without all the scaling, this is what you would see:

How do I take a PARTIAL screenshot & save to CameraRoll?

I am building a simple motivational app - my pet project. Pretty simple. It prints a random motivational message when a button is pressed.
I would like the user to be able to press a button and crop the motivational message itself on the screen and save it to the camera roll.
I found a tutorial that does what I wanted, but it takes a FULL screenshot AND a PARTIAL screenshot.
I'm trying to modify the code so it takes ONLY a partial screenshot.
Here's the code:
print("SchreenShot")
// Start full screenshot
UIGraphicsBeginImageContext(view.frame.size)
view.layer.renderInContext(UIGraphicsGetCurrentContext()!)
var sourceImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
UIImageWriteToSavedPhotosAlbum(sourceImage,nil,nil,nil)
//partial Screen Shot
print("partial ss")
UIGraphicsBeginImageContext(view.frame.size)
sourceImage.drawAtPoint(CGPointMake(0, -100))
var croppedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
UIImageWriteToSavedPhotosAlbum(croppedImage,nil,nil,nil)
Also, in the PARTIAL screenshot, it takes a snapshot of the "page" from 100 pixels from the top all the way down to the bottom. How can I make it take a snapshot of the contents of the page from, say, 100 pixels from the top of the page to 150 pixels from the bottom of the page?
Many, many, many thanks!
Your sample code draws the view into a graphics context (the snapshot), crops it, and saves it. I am altering it a little with some extra comments, because it looks like you are new to this API.
// Declare the snapshot boundaries
let top: CGFloat = 100
let bottom: CGFloat = 150
// The size of the cropped image
let size = CGSize(width: view.frame.size.width, height: view.frame.size.height - top - bottom)
// Start the context
UIGraphicsBeginImageContext(size)
// we are going to use context in a couple of places
let context = UIGraphicsGetCurrentContext()!
// Transform the context so that anything drawn into it is displaced "top" pixels up
// Something drawn at coordinate (0, 0) will now be drawn at (0, -top)
// This will result in the "top" pixels being cut off
// The bottom pixels are cut off because of the size of the context
CGContextTranslateCTM(context, 0, -top)
// Draw the view into the context (this is the snapshot)
view.layer.renderInContext(context)
let snapshot = UIGraphicsGetImageFromCurrentImageContext()
// End the context (this is required to not leak resources)
UIGraphicsEndImageContext()
// Save to photos
UIImageWriteToSavedPhotosAlbum(snapshot, nil, nil, nil)
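For reference, a hedged sketch of the same crop-and-save in current Swift syntax using UIGraphicsImageRenderer (assuming the same top and bottom insets; saving to Photos also requires the usual photo-library permission entry in Info.plist):

// Declare the snapshot boundaries
let top: CGFloat = 100
let bottom: CGFloat = 150
// The size of the cropped image
let size = CGSize(width: view.frame.size.width, height: view.frame.size.height - top - bottom)
let renderer = UIGraphicsImageRenderer(size: size)
let snapshot = renderer.image { rendererContext in
    // Shift the drawing up by `top` points so that strip is cut off;
    // the bottom strip falls outside the smaller context on its own
    rendererContext.cgContext.translateBy(x: 0, y: -top)
    view.layer.render(in: rendererContext.cgContext)
}
// Save to Photos
UIImageWriteToSavedPhotosAlbum(snapshot, nil, nil, nil)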
