I have found numerous examples of converting a UIView to a UIImage, and they work perfectly once the view has been laid out, etc. Even in my view controller with many rows, some of which are not yet displayed on the screen, I can do the conversion. Unfortunately, for some of the views that are too far down in the table (and hence have not yet been "drawn"), doing the conversion produces a blank UIImage.
I've tried calling setNeedsDisplay and layoutIfNeeded, but these don't work. I've even tried to automatically scroll through the table, but perhaps I'm not doing it in a way (using threads) that ensures the scroll happens first, letting the views update before the conversion takes place. I suspect this can't be done, because I have found various questions asking this and none of them found a solution. Alternatively, can I just redraw my entire view into a UIImage, without requiring a UIView?
From Paul Hudson's website
For any UIView that is not showing on the screen (say, a row in a UITableView that is way down below the bottom of the screen):
let renderer = UIGraphicsImageRenderer(size: view.bounds.size)
let image = renderer.image { ctx in
    view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
}
You don't have to have a view in a window/on-screen to be able to render it into an image. I've done exactly this in PixelTest:
extension UIView {

    /// Creates an image from the view's contents, using its layer.
    ///
    /// - Returns: An image, or nil if an image couldn't be created.
    func image() -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        context.saveGState()
        layer.render(in: context)
        context.restoreGState()
        guard let image = UIGraphicsGetImageFromCurrentImageContext() else { return nil }
        UIGraphicsEndImageContext()
        return image
    }
}
This renders the view's layer into an image, exactly as the view would currently look if it were rendered on-screen. That is to say, if the view hasn't been laid out yet, it won't look the way you expect. PixelTest handles this by force-laying-out the view beforehand when verifying it for snapshot testing.
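For the original question (rows far down the table that haven't been laid out yet), here's a minimal sketch of that force-layout step, assuming you can decide the row's size yourself before rendering; the helper below is mine, not PixelTest's:

// Sketch: force a layout pass on an off-screen view before snapshotting it,
// so layer.render(in:) doesn't produce a blank image. `image()` is the
// extension defined above.
func renderOffscreen(_ view: UIView, size: CGSize) -> UIImage? {
    view.frame = CGRect(origin: .zero, size: size)
    view.setNeedsLayout()
    view.layoutIfNeeded()
    return view.image()
}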
You can also accomplish this using UIGraphicsImageRenderer.
extension UIView {

    func image() -> UIImage {
        // Configure an opaque format up front and pass it to the renderer's
        // initializer, rather than mutating the renderer's format afterwards.
        let format = UIGraphicsImageRendererFormat.default()
        format.opaque = true
        let imageRenderer = UIGraphicsImageRenderer(bounds: bounds, format: format)
        return imageRenderer.image { context in
            layer.render(in: context.cgContext)
        }
    }
}
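Usage is the same either way; for example, against a (hypothetical) table view cell that has already been laid out:

let snapshot = cell.contentView.image()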
Related
I need to capture the entire contents of my app's screen (a screenshot) by tapping a UIButton on screen. There are labels, static images, etc., in addition to a live preview box showing what the camera sees (which is a sublayer of the main view).
I've already tried every version of the UIGraphicsGetImageFromCurrentImageContext and view.drawHierarchy() methods (posted here and in other places) for capturing a screenshot programmatically, but no matter what I try, the AVCaptureVideoPreviewLayer I have in the middle of the screen NEVER shows up.
Does anyone know how to mimic what happens when a user presses the two hardware buttons to take a screenshot? When I do that, the resulting picture DOES have the entire contents of the screen! With any other programmatic method, the PreviewLayer is always blank.
Here's an example of one of the methods I've used:
Create the extension:
extension UIView {

    func screenShot() -> UIImage? {
        let scale = UIScreen.main.scale
        let bounds = self.bounds
        UIGraphicsBeginImageContextWithOptions(bounds.size, true, scale)
        if let _ = UIGraphicsGetCurrentContext() {
            self.drawHierarchy(in: bounds, afterScreenUpdates: true)
            let screenshot = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return screenshot
        }
        return nil
    }
}
Then, a function to save the image:
func saveImage(screenshot: UIImage) {
    UIImageWriteToSavedPhotosAlbum(screenshot, nil, nil, nil)
}
Finally, call the functions:
guard let screenshot = self.view.screenShot() else {return}
saveImage(screenshot: screenshot)
I know many people have different variations of this but nothing I try will include the PreviewLayer (which is a sublayer of the view).
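No fix was posted in this thread, but one workaround worth sketching: if you can grab the current camera frame yourself (e.g. by converting the latest sample buffer from an AVCaptureVideoDataOutput, which is an assumption, not something shown above), you can composite that frame into the snapshot, since, as observed above, drawHierarchy and layer.render never pick up the preview layer's contents.

// A sketch, not a verified fix. `cameraFrame` is a UIImage you captured yourself;
// `previewContainer` is the view hosting the AVCaptureVideoPreviewLayer.
func compositeScreenshot(rootView: UIView, previewContainer: UIView, cameraFrame: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: rootView.bounds)
    return renderer.image { _ in
        // Snapshot the regular UI first (the preview area will come out blank, as observed).
        rootView.drawHierarchy(in: rootView.bounds, afterScreenUpdates: true)
        // Then paint the camera frame into the rect the preview layer occupies.
        // Anything overlapping that rect will be covered.
        let previewRect = previewContainer.convert(previewContainer.bounds, to: rootView)
        cameraFrame.draw(in: previewRect)
    }
}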
I have a UIView that can be drawn on, like a finger-painting app, but sometimes it is not visible. I want to be able to take a screenshot of it when it is not visible. I also want a screenshot when it is visible, but without any of its subviews; I just want the UIView itself. These are the methods I have tried:
func snapshot() -> UIImage? {
    UIGraphicsBeginImageContext(self.frame.size)
    guard let context = UIGraphicsGetCurrentContext() else {
        return nil
    }
    self.layer.render(in: context)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
func snapshot() -> UIImage {
    UIGraphicsBeginImageContextWithOptions(bounds.size, self.isOpaque, UIScreen.main.scale)
    layer.render(in: UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image!
}
To get a view rendered as a UIImage, you could introduce a very simple protocol and extend UIView with it:
protocol Renderable {
    var render: UIImage { get }
}

extension UIView: Renderable {

    var render: UIImage {
        UIGraphicsImageRenderer(bounds: bounds).image { context in
            layer.render(in: context.cgContext)
        }
    }
}
and now it's super easy to get the image of any view
let image: UIImage = someView.render
then if you plan to share it or save it, you probably want to convert it to Data
let data: Data? = image.pngData()
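Or, to hand it straight to the standard share sheet (a sketch; `self` here is assumed to be the presenting view controller):

let activityVC = UIActivityViewController(activityItems: [image], applicationActivities: nil)
present(activityVC, animated: true)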
I am not sure what you mean by "when it is not visible", but this should work as long as the view is in the view hierarchy and properly laid out. I have been using this method in many apps for sharing, and it has never failed me.
And of course there is no need for the protocol; feel free to use only the render computed property. It's just a matter of preference.
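As for the "I don't want any subviews" part of the question, one simple option (a sketch, not something from the answer above) is to hide the subviews just for the duration of the render:

extension UIView {

    // Sketch: temporarily hide subviews, render only the view's own content, then restore.
    func renderWithoutSubviews() -> UIImage {
        let visibleSubviews = subviews.filter { !$0.isHidden }
        visibleSubviews.forEach { $0.isHidden = true }
        defer { visibleSubviews.forEach { $0.isHidden = false } }
        return UIGraphicsImageRenderer(bounds: bounds).image { context in
            layer.render(in: context.cgContext)
        }
    }
}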
Documentation:
UIGraphicsImageRenderer, image(actions:)
I'm trying to take a screenshot of an SCNView to display elsewhere. Since SCNView inherits from UIView, I thought I could use a UIGraphicsImageRenderer:
extension UIView {

    // Using a function since `var image` might conflict with an existing variable
    // (like on `UIImageView`)
    func asImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        return renderer.image { rendererContext in
            layer.render(in: rendererContext.cgContext)
        }
    }
}
When I use this code on an SCNView, it produces a blank (white) UIImage (with the correct bounds).
How do I take a screenshot of a SCNView correctly?
Instead of using a UIGraphicsImageRenderer to get a screenshot, just call the snapshot method on SCNView:
let image = sceneView.snapshot()
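For example, a hypothetical wiring of that into a button action (`sceneView` and `thumbnailImageView` are assumed outlets):

@IBAction func captureTapped(_ sender: UIButton) {
    thumbnailImageView.image = sceneView.snapshot()   // renders the current SceneKit frame
}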
I want to implement a simple sharing function. In my app, if the user long-presses a UITableViewCell, it presents an action sheet (UIAlertController) with some buttons.
The first one lets the user share content with friends (via UIActivityViewController).
My goal is to save the cell's text with a brand watermark in the right corner.
For now, I'm using this extension to convert a UIView to a UIImage:
extension UIView {

    func convertToImage() -> UIImage {
        if #available(iOS 10.0, *) {
            let renderer = UIGraphicsImageRenderer(bounds: bounds)
            return renderer.image { rendererContext in
                layer.render(in: rendererContext.cgContext)
            }
        } else {
            UIGraphicsBeginImageContext(self.frame.size)
            self.layer.render(in: UIGraphicsGetCurrentContext()!)
            let image = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return UIImage(cgImage: image!.cgImage!)
        }
    }
}
But I have some problems with it. It works only when the view is presented on screen. If I try to get an image from a UIView that is not shown on the screen, I get an empty image.
The same happens even with a view controller created in code:
let vc = ViewController()
let image = vc.view.convertToImage()
//Image empty
I don't want the user to see the watermarked content on screen; I need the watermark to be added only to the image saved to the camera roll.
Can I do that?
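One way to approach it (a sketch, not a tested answer; the watermark image and the margin below are placeholders): render the cell's content view into an image and draw the watermark into the corner in the same renderer pass, so only the exported image ever carries the watermark.

// Sketch: composite a watermark image onto a rendered view. Nothing here is
// shown on screen; only the returned image has the watermark.
func watermarkedImage(from view: UIView, watermark: UIImage, margin: CGFloat = 8) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    return renderer.image { context in
        view.layer.render(in: context.cgContext)
        let origin = CGPoint(x: view.bounds.maxX - watermark.size.width - margin,
                             y: view.bounds.maxY - watermark.size.height - margin)
        watermark.draw(in: CGRect(origin: origin, size: watermark.size))
    }
}

The result can then go straight to UIImageWriteToSavedPhotosAlbum without the watermarked view ever appearing on screen.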
For my application, I need to get a Google Street View image from GPS coordinates. I know how to get a full-screen GMSPanoramaView from coordinates, but I ultimately need it to be a UIImage.
let panoView = GMSPanoramaView(frame: .zero)
self.view = panoView
panoView.moveNearCoordinate(location.coordinate)
panoView.setAllGesturesEnabled(false)
// can this be converted to a UIImage?
var streetViewImage: UIImage?
streetViewImage = panoView
I'm seeing that other people have presented the GMSPanoramaView in a subview - is this a better option? Or are there any other ways to get static UIImages from Google Street View?
public extension GMSPanoramaView {

    @objc func toImage() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, UIScreen.main.scale)
        drawHierarchy(in: self.bounds, afterScreenUpdates: true)
        let image = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return image
    }
}
The above code will take a screenshot of your view. To make sure the right data has been loaded from the network (and you don't get a black screenshot), you will need to implement GMSPanoramaViewDelegate; panoramaView:didMoveToPanorama: will probably be called once the network request has completed and the imagery is visible. At that point you can call toImage() and store your image.
https://developers.google.com/maps/documentation/ios-sdk/reference/protocol_g_m_s_panorama_view_delegate-p
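A rough sketch of that delegate wiring; the Swift method below is my rendering of the panoramaView:didMoveToPanorama: selector mentioned above, so double-check it against the linked reference:

class StreetViewCapture: NSObject, GMSPanoramaViewDelegate {

    var onImage: ((UIImage) -> Void)?

    // Assumed Swift bridging of panoramaView:didMoveToPanorama:.
    func panoramaView(_ view: GMSPanoramaView, didMoveTo panorama: GMSPanorama?) {
        guard panorama != nil else { return }   // nil means no panorama was found
        onImage?(view.toImage())                // panorama data has arrived; capture it now
    }
}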