How to mimic an actual 'Hardware Screenshot'? (AVCaptureSessionPreview not showing...) - ios

I need to capture the entire contents of my app's screen (a screenshot) with a UIButton on screen. There are labels, static images, etc., in addition to a live preview box showing what the camera sees (which is a sublayer of the main view).
I've already tried every version of the UIGraphicsGetImageFromCurrentImageContext and view.drawHierarchy() approaches (posted here and elsewhere) for capturing a screenshot programmatically, but no matter what I try, the AVCaptureVideoPreviewLayer I have in the middle of the screen NEVER shows up.
Does anyone know how to mimic what happens when a user presses the two hardware buttons to take a screenshot? When I do that, the resulting picture DOES contain the entire contents of the screen! If I try any other programmatic method, the PreviewLayer is always blank.
Here's an example of one of the methods I've used:
Create an extension:
extension UIView {
    func screenShot() -> UIImage? {
        let scale = UIScreen.main.scale
        let bounds = self.bounds
        UIGraphicsBeginImageContextWithOptions(bounds.size, true, scale)
        if let _ = UIGraphicsGetCurrentContext() {
            self.drawHierarchy(in: bounds, afterScreenUpdates: true)
            let screenshot = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return screenshot
        }
        return nil
    }
}
Then, a function to save the image:
func saveImage(screenshot: UIImage) {
    UIImageWriteToSavedPhotosAlbum(screenshot, nil, nil, nil)
}
Finally, call the functions:
guard let screenshot = self.view.screenShot() else {return}
saveImage(screenshot: screenshot)
I know many people have posted different variations of this, but nothing I try will include the PreviewLayer (which is a sublayer of the view).
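For what it's worth, the reason the usual snapshot APIs come back empty is that AVCaptureVideoPreviewLayer is composited by the GPU outside the Core Animation render path that drawHierarchy(in:) and layer.render(in:) use, so the preview region renders as a blank rectangle. A common workaround is to grab the latest camera frame yourself via an AVCaptureVideoDataOutput and composite it with a snapshot of the view. Below is a minimal sketch of that idea, not code from this thread; the class name, the attach(to:) helper, and the assumption that nothing overlaps the preview box are all placeholders:
import AVFoundation
import UIKit

final class CameraFrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    // Most recent camera frame; written on the capture queue, read on the main queue.
    // (Synchronization is omitted here for brevity.)
    private(set) var lastFrame: UIImage?
    private let ciContext = CIContext()

    // Attach a video-data output to the session that already drives the preview layer.
    func attach(to session: AVCaptureSession) {
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        if session.canAddOutput(output) {
            session.addOutput(output)
        }
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
        lastFrame = UIImage(cgImage: cgImage)
    }

    // Draw the UI first, then paint the latest camera frame into the preview layer's frame.
    // This assumes no controls overlap the preview box; rotation/aspect handling is omitted.
    func screenshot(of view: UIView, previewLayer: AVCaptureVideoPreviewLayer) -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
        return renderer.image { _ in
            view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
            lastFrame?.draw(in: previewLayer.frame)
        }
    }
}
With something like that in place, the existing saveImage(screenshot:) function can be fed the composited image instead of the plain view snapshot.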

Related

Can you convert uiview that is not currently being displayed to uiimage

I have found numerous examples of converting a UIView to a UIImage, and they work perfectly once the view has been laid out, etc. Even in my view controller with many rows, some of which are not yet displaying on the screen, I can do the conversion. Unfortunately, for some of the views that are too far down in the table (and hence have not yet been "drawn"), doing the conversion produces a blank UIImage.
I've tried calling setNeedsDisplay and layoutIfNeeded, but these don't work. I've even tried to automatically scroll through the table, but perhaps I'm not doing it in a way (using threads) that ensures the scroll happens first, allowing the views to update before the conversion takes place. I suspect this can't be done, because I have found various questions asking this and none have found a solution. Alternatively, can I just redraw my entire view into a UIImage, without requiring a UIView?
From Paul Hudson's website
This works for any UIView that is not showing on the screen (say, a row in a UITableView that is way down below the bottom of the screen):
let renderer = UIGraphicsImageRenderer(size: view.bounds.size)
let image = renderer.image { ctx in
    view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
}
You don't have to have a view in a window/on-screen to be able to render it into an image. I've done exactly this in PixelTest:
extension UIView {
    /// Creates an image from the view's contents, using its layer.
    ///
    /// - Returns: An image, or nil if an image couldn't be created.
    func image() -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        context.saveGState()
        layer.render(in: context)
        context.restoreGState()
        guard let image = UIGraphicsGetImageFromCurrentImageContext() else { return nil }
        UIGraphicsEndImageContext()
        return image
    }
}
This will render the view's layer into an image as it would currently look if it were rendered on screen. That is to say, if the view hasn't been laid out yet, it won't look the way you expect. PixelTest handles this by force-laying out the view beforehand when verifying a view for snapshot testing.
You can also accomplish this using UIGraphicsImageRenderer.
extension UIView {
    func image() -> UIImage {
        // Configure the format before creating the renderer; changing it afterwards has no effect.
        let format = UIGraphicsImageRendererFormat()
        format.opaque = true
        let imageRenderer = UIGraphicsImageRenderer(bounds: bounds, format: format)
        return imageRenderer.image { context in
            layer.render(in: context.cgContext)
        }
    }
}
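One caveat either way: if the view has never been on screen (the table-row case in the question above), give it a frame and force a layout pass before calling image(), otherwise the render comes out blank. A quick sketch with a throwaway view (the label and its size are just placeholders):
// Hypothetical off-screen view; the explicit frame and layout pass are the important parts.
let rowView = UILabel()
rowView.text = "Row far below the visible area"
rowView.backgroundColor = .white
rowView.frame = CGRect(x: 0, y: 0, width: 320, height: 44)

// Force layout so subviews and the layer tree are up to date before rendering.
rowView.setNeedsLayout()
rowView.layoutIfNeeded()

let rendered = rowView.image()   // the extension above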

How to save UIView into camera roll without displaying view

I want to implement a simple sharing function. In my app, if the user long-presses on a UITableViewCell, it presents a UIActionController with some buttons.
The first one allows the user to share content with friends (UIActivityViewController).
My goal is to save the cell text with a brand watermark in the right corner.
For now, I'm using this extension for converting a UIView to a UIImage:
extension UIView {
    func convertToImage() -> UIImage {
        if #available(iOS 10.0, *) {
            let renderer = UIGraphicsImageRenderer(bounds: bounds)
            return renderer.image { rendererContext in
                layer.render(in: rendererContext.cgContext)
            }
        } else {
            UIGraphicsBeginImageContext(self.frame.size)
            self.layer.render(in: UIGraphicsGetCurrentContext()!)
            let image = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return UIImage(cgImage: image!.cgImage!)
        }
    }
}
But I have some problems with it. It works only when the view is presented on screen. If I try to get an image from a UIView that is not shown on screen, I get an empty image.
This happens even when I create the view controller directly:
let vc = ViewController()
let image = vc.view.convertToImage()
//Image empty
I don't want the user to see the watermarked content on screen; I need the watermark to be added only to the image saved to the camera roll.
Can I do it?
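One way to approach it (a sketch, not an answer from this thread; watermarkLabel and the frame values are placeholders): load the controller's view without presenting it, lay it out off screen, add the watermark, then render and save.
let vc = ViewController()
vc.loadViewIfNeeded()                    // loads the view without putting it on screen
vc.view.frame = UIScreen.main.bounds     // give the off-screen view a concrete size
vc.view.layoutIfNeeded()                 // force a layout pass so render(in:) has content

// Hypothetical watermark added only to this off-screen copy.
let watermarkLabel = UILabel()
watermarkLabel.text = "© MyBrand"
watermarkLabel.sizeToFit()
watermarkLabel.frame.origin = CGPoint(x: vc.view.bounds.maxX - watermarkLabel.bounds.width - 8,
                                      y: vc.view.bounds.maxY - watermarkLabel.bounds.height - 8)
vc.view.addSubview(watermarkLabel)

let watermarked = vc.view.convertToImage()
UIImageWriteToSavedPhotosAlbum(watermarked, nil, nil, nil)   // goes straight to the camera roll
If ViewController comes from a storyboard, instantiate it from the storyboard instead of calling the initializer directly, or the loaded view hierarchy will be empty.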

Screenshot of ARSCNView

I need to take a screenshot of an ARSCNView; the class is defined as below:
public class SceneLocationView: ARSCNView, ARSCNViewDelegate { }
I am following the methods below to take the screenshot:
How do I take a full screen Screenshot in Swift?
but the SceneLocationView is not captured; everything else is.
Was anyone able to do this successfully?
There is also another way:
ARSCNView has a built in function: .snapshot()
This can be called easily like so:
let renderedImage = augmentedRealityView.snapshot()
It may also be more useful, as I believe it doesn't render any UIKit elements such as a UINavigationBar.
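If you need the UIKit elements as well, one option (a sketch; overlayView is a placeholder for whatever sits on top of the scene view as a direct subview of the main view) is to composite the snapshot with a hierarchy draw:
let sceneImage = augmentedRealityView.snapshot()            // AR content only
let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
let combined = renderer.image { _ in
    sceneImage.draw(in: view.bounds)                        // scene render as the backdrop
    overlayView.drawHierarchy(in: overlayView.frame,        // UIKit chrome on top
                              afterScreenUpdates: true)
}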
Try this extension,
extension UIView {
    var snapshot: UIImage? {
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0)
        defer { UIGraphicsEndImageContext() }
        drawHierarchy(in: bounds, afterScreenUpdates: true)
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}
and use
let screenShot = self.view?.snapshot
I was able to save a screenshot of an OpenGL view with this extension.

Convert GMSPanoramaView to UIImage?

For my application, I need to get a Google Street View image from GPS coordinates. I know how to get a full-screen GMSPanoramaView from coordinates, but I ultimately need it to be a UIImage.
let panoView = GMSPanoramaView(frame: .zero)
self.view = panoView
panoView.moveNearCoordinate(location.coordinate)
panoView.setAllGesturesEnabled(false)
// can this be converted to a UIImage?
var streetViewImage: UIImage?
streetViewImage = panoView
I'm seeing that other people have presented the GMSPanoramaView in a subview - is this a better option? Or are there any other ways to get static UIImages from Google Street View?
public extension GMSPanoramaView {
    @objc func toImage() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, UIScreen.main.scale)
        drawHierarchy(in: self.bounds, afterScreenUpdates: true)
        let image = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return image
    }
}
The above code will take a screenshot of your view. To have the right data loaded from the network (and not get a black screenshot), you will need to implement GMSPanoramaViewDelegate; panoramaView:didMoveToPanorama: will probably be called when the network request has completed and the imagery is visible. At that point you can call toImage() and store your image.
https://developers.google.com/maps/documentation/ios-sdk/reference/protocol_g_m_s_panorama_view_delegate-p
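A sketch of that delegate flow follows; the exact Swift spelling of the bridged delegate method can vary between SDK versions, so treat the signature as an assumption and check it against the header you are compiling with:
final class PanoramaCapturer: NSObject, GMSPanoramaViewDelegate {

    // Hand back the rendered image once the Street View tiles have loaded.
    var onImage: ((UIImage) -> Void)?

    func panoramaView(_ view: GMSPanoramaView, didMoveTo panorama: GMSPanorama?) {
        guard panorama != nil else { return }   // nil means the move failed
        onImage?(view.toImage())                // the extension above
    }
}

// Usage: keep a strong reference to the capturer and set it as the delegate
// before calling moveNearCoordinate(_:).
// panoView.delegate = capturer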

Snapshot of UIView not scaling correctly?

I have a UIView, canvas, of which I would like to save a screenshot, along with its subviews (the colorful shapes), to the camera roll when I press the UIBarButtonItem shareBarButton. This works in the simulator; however, it does not produce the image the way I would like.
First image: ideally, what I would like the snapshot to look like (except without the carrier, time, and battery status at the top of the screen).
Second image: what the snapshot in the camera roll actually looks like.
I want the snapshot to look exactly the way it looks on the iPhone screen, so if part of a shape goes beyond the screen, the snapshot should capture only the part of the shape that is visible on screen. I also want the snapshot to have the size of the canvas, which is basically the size of the view except with a slightly shorter height:
canvas = UIView(frame: CGRectMake(0, 0, view.bounds.height, view.bounds.height-toolbar.bounds.height))
If someone could tell what I'm doing wrong in creating the snapshot that would be greatly appreciated!
My code:
func share(sender: UIBarButtonItem) {
    let masterpiece = canvas.snapshotViewAfterScreenUpdates(true)
    let image = snapshot(masterpiece)
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}

func snapshot(masterpiece: UIView) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(masterpiece.bounds.size, false, UIScreen.mainScreen().scale)
    masterpiece.drawViewHierarchyInRect(masterpiece.bounds, afterScreenUpdates: true)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
In the first instance, I would try snapshotting the UIWindow to see if that solves your issue.
Here is a UIWindow extension I use (not specifically for camera work) - try that:
import UIKit

extension UIWindow {
    func capture() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(self.frame.size, self.opaque, UIScreen.mainScreen().scale)
        self.layer.renderInContext(UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
I call it like:
let window: UIWindow! = UIApplication.sharedApplication().keyWindow
let windowImage = window.capture()
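If the saved image needs to be only the size of the canvas (as the question asks), one option is to crop the window capture to the canvas's frame expressed in window coordinates. A sketch in current Swift syntax, assuming canvas is reachable from the calling scope:
let window: UIWindow! = UIApplication.shared.keyWindow
let windowImage = window.capture()

// Convert the canvas rect into window coordinates, then into pixels for CGImage cropping.
let rectInWindow = canvas.convert(canvas.bounds, to: window)
let scale = windowImage.scale
let cropRect = CGRect(x: rectInWindow.origin.x * scale,
                      y: rectInWindow.origin.y * scale,
                      width: rectInWindow.size.width * scale,
                      height: rectInWindow.size.height * scale)

if let croppedCGImage = windowImage.cgImage?.cropping(to: cropRect) {
    let canvasImage = UIImage(cgImage: croppedCGImage, scale: scale, orientation: .up)
    UIImageWriteToSavedPhotosAlbum(canvasImage, nil, nil, nil)
}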
