Snapshot of SwiftUI view is partially cut off - iOS

I tried to create a UIImage snapshot from a SwiftUI view, using the code from HWS: How to convert a SwiftUI view to an image.
I get the following result, which is obviously incorrect because the image is cut off.
Code:
struct ContentView: View {
    @State private var savedImage: UIImage?

    var textView: some View {
        Text("Hello, SwiftUI")
            .padding()
            .background(Color.blue)
            .foregroundColor(.white)
            .clipShape(Capsule())
    }

    var body: some View {
        ZStack {
            VStack(spacing: 100) {
                textView
                Button("Save to image") {
                    savedImage = textView.snapshot()
                }
            }
            if let savedImage = savedImage {
                Image(uiImage: savedImage)
                    .border(Color.red)
            }
        }
    }
}
extension View {
    func snapshot() -> UIImage {
        let controller = UIHostingController(rootView: self)
        let view = controller.view
        let targetSize = controller.view.intrinsicContentSize
        view?.bounds = CGRect(origin: .zero, size: targetSize)
        view?.backgroundColor = .clear
        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            view?.drawHierarchy(in: controller.view.bounds, afterScreenUpdates: true)
        }
    }
}
It looks like the view being snapshotted is rendered lower down than it should be, but I'm not sure. How do I fix this?
Edit:
We have discovered that this problem does not occur on iOS 14, only on iOS 15. So the question is: how can this be fixed for iOS 15?

I also recently noticed this issue. I tested on different Simulators (for example, iPhone 8 and iPhone 13 Pro) and realized that the offset always seems to be half the status bar height. So I suspect that when you call drawHierarchy(in:afterScreenUpdates:), SwiftUI internally takes the safe area insets into account.
Therefore, I modified the snapshot() function in your View extension by using the edgesIgnoringSafeArea(_:) view modifier, and it worked:
extension View {
    func snapshot() -> UIImage {
        let controller = UIHostingController(rootView: self.edgesIgnoringSafeArea(.all))
        let view = controller.view
        let targetSize = controller.view.intrinsicContentSize
        view?.bounds = CGRect(origin: .zero, size: targetSize)
        view?.backgroundColor = .clear
        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            view?.drawHierarchy(in: controller.view.bounds, afterScreenUpdates: true)
        }
    }
}
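A small side note: edgesIgnoringSafeArea(_:) has since been deprecated; from iOS 14 the same effect is available through ignoresSafeArea(_:edges:), so the hosting line can equivalently be written as:
let controller = UIHostingController(rootView: self.ignoresSafeArea())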

The key to avoiding the extra whitespace is to set the controller's additionalSafeAreaInsets to the negative of the original safe area insets, so that they cancel out.
This is my current implementation, and it works for rendering views that are not on screen. Tested on an iPhone 7 (iOS 15.7), an iPhone 13 (iOS 16.2 beta), and an iPad mini 6 (iOS 16.2 beta).
extension View {
    func asImage() -> UIImage {
        let controller = UIHostingController(rootView: self.edgesIgnoringSafeArea(.all))

        // Attach the hosting view to the key window, far off screen,
        // so that it gets laid out and can be rendered.
        let scenes = UIApplication.shared.connectedScenes
        let windowScene = scenes.first as? UIWindowScene
        let window = windowScene?.windows.first
        window?.rootViewController?.view.addSubview(controller.view)
        controller.view.frame = CGRect(x: 0, y: CGFloat(Int.max), width: 1, height: 1)

        // Cancel out the safe area insets so they don't show up as whitespace.
        controller.additionalSafeAreaInsets = UIEdgeInsets(
            top: -controller.view.safeAreaInsets.top,
            left: -controller.view.safeAreaInsets.left,
            bottom: -controller.view.safeAreaInsets.bottom,
            right: -controller.view.safeAreaInsets.right
        )

        let targetSize = controller.view.intrinsicContentSize
        controller.view.bounds = CGRect(origin: .zero, size: targetSize)
        controller.view.sizeToFit()

        let image = controller.view.asImage()
        controller.view.removeFromSuperview()
        return image
    }
}
extension UIView {
    func asImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        return renderer.image { rendererContext in
            layer.render(in: rendererContext.cgContext)
        }
    }
}
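For reference, a minimal usage sketch; the badge view and call site here are purely illustrative:
// Render a view that was never shown on screen.
// Must be called on the main thread with a connected scene.
let badge = Text("42")
    .padding(8)
    .background(Color.red)
    .clipShape(Circle())
let badgeImage = badge.asImage() // UIImage sized to the view's intrinsic content size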

I also noticed this annoying behaviour in iOS 15 and think I found a workaround until the issue with iOS 15 and drawHierarchy(in:afterScreenUpdates:) is solved.
This extension worked for me in my app (tested on the iOS 15.0 and iOS 14.5 simulators and on an iPhone XS Max with iOS 15.0.1); you can set the scale higher than 1 if you need a higher-resolution image:
extension View {
    func snapshotiOS15() -> UIImage {
        let controller = UIHostingController(rootView: self)
        let view = controller.view
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1
        format.opaque = true
        let targetSize = controller.view.intrinsicContentSize
        view?.bounds = CGRect(origin: .zero, size: targetSize)
        view?.backgroundColor = .clear
        let window = UIWindow(frame: view!.bounds)
        window.addSubview(controller.view)
        window.makeKeyAndVisible()
        let renderer = UIGraphicsImageRenderer(bounds: view!.bounds, format: format)
        return renderer.image { rendererContext in
            view?.layer.render(in: rendererContext.cgContext)
        }
    }
}
Edit
It turns out that the above solution only works if the view (to be saved as a UIImage) is embedded in a NavigationView to which the .statusBar(hidden: true) view modifier is applied. See the example code below to reproduce the result:
import SwiftUI

struct StartView: View {
    var body: some View {
        NavigationView {
            TestView()
        }.statusBar(hidden: true)
    }
}

struct TestView: View {
    var testView: some View {
        Text("Hello, World!")
            .font(.system(size: 42.0))
            .foregroundColor(.white)
            .frame(width: 200, height: 200, alignment: .center)
            .background(Color.blue)
    }

    var body: some View {
        VStack {
            testView
            Button(action: {
                let imageiOS15 = testView.snapshotiOS15()
            }, label: {
                Text("Take snapshot")
                    .font(.headline)
            })
        }
    }
}

From iOS 16 you can use ImageRenderer to export bitmap image data from a SwiftUI view.
Just keep in mind that you have to call it on the main thread; I used @MainActor here. In this example it is technically unnecessary, because the call fires from a Button action, which always runs on the main thread.
struct ContentView: View {
    @State private var renderedImage = Image(systemName: "photo.artframe")
    @Environment(\.displayScale) var displayScale

    var body: some View {
        VStack(spacing: 30) {
            renderedImage
                .frame(width: 300, height: 300)
                .background(.gray)
            Button("Render SampleView") {
                let randomNumber = Int.random(in: 0...100)
                let renderer = ImageRenderer(content: createSampleView(number: randomNumber))
                // The default value of the scale property is 1.0.
                renderer.scale = displayScale
                if let uiImage = renderer.uiImage {
                    renderedImage = Image(uiImage: uiImage)
                }
            }
        }
    }

    @MainActor func createSampleView(number: Int) -> some View {
        Text("Random Number: \(number)")
            .font(.title)
            .foregroundColor(.white)
            .padding()
            .background(.blue)
    }
}
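As a follow-up, the rendered UIImage can be persisted like any other image; a minimal sketch that could go inside the Button action above (the file name is arbitrary):
// Hypothetical follow-up: write the rendered image to a temporary PNG file.
if let uiImage = renderer.uiImage, let data = uiImage.pngData() {
    let url = FileManager.default.temporaryDirectory.appendingPathComponent("rendered.png")
    try? data.write(to: url)
}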


Using SwiftUI View as UIImage for rendering MKAnnotation error

Update:
Since I'm using iOS 16, could I do something with https://www.hackingwithswift.com/quick-start/swiftui/how-to-convert-a-swiftui-view-to-an-image?
I am experiencing some errors on my end, and I wanted to see if anyone can spot anything off with the way I am doing things.
This is the error I am getting:
2022-11-05 20:48:52.650233-0500 [2347:488099] [Snapshotting] Rendering a view (0x12c9d6800, TtGC7SwiftUI14_UIHostingViewGVS_15ModifiedContentGS1_GS1_GS1_VS_5ImageVS_18_AspectRatioLayout_GVS_11_ClipEffectVS_9Rectangle__GVS_30_EnvironmentKeyWritingModifierGSqVS_5Color___GVS_16_OverlayModifierGS1_GS1_VS_4TextVS_13_OffsetEffect_GS6_GSqS7______) that has not been committed to render server is not supported.
I am creating an image from a SwiftUI view and then using that image for an MKAnnotationView. By the looks of it, I must be doing something wrong, but I can't pinpoint what exactly.
So let's say that my view looks like this:
import SwiftUI

struct LandmarkPin: View {
    var isAnnotationSelected = false

    var body: some View {
        Image("map-pin-full-cluster-1")
            .renderingMode(.template)
            .resizable()
            .aspectRatio(contentMode: .fit)
            .clipped()
            .foregroundColor(self.isAnnotationSelected ? .red : Color("LandmarkAnnotation"))
            .overlay(
                Image(systemName: "building.columns.fill")
                    .font(.system(size: 10, weight: .bold))
                    .offset(y: -8)
                    .foregroundColor(Color.white)
            )
    }
}
That view gets called inside the MapView:
final class LandmarkAnnotationView: MKAnnotationView {
    var place: Place?

    override func prepareForDisplay() {
        super.prepareForDisplay()
        image = LandmarkPin(
            isAnnotationSelected: (place?.show ?? false)
        ).takeScreenshot(
            origin: CGPoint(x: 0, y: 0),
            size: CGSize(width: 35, height: 35)
        )
    }
}
Then I am using two helper extensions to help me achieve this:
extension View {
    func takeScreenshot(origin: CGPoint, size: CGSize) -> UIImage {
        let hosting = UIHostingController(rootView: self)
        let window = UIWindow(frame: CGRect(origin: origin, size: size))
        hosting.view.frame = window.frame
        window.addSubview(hosting.view)
        window.makeKeyAndVisible()
        hosting.view.backgroundColor = .clear
        return hosting.view.renderedImage
    }
}

extension UIView {
    var renderedImage: UIImage {
        let rect = self.bounds
        UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
        let ctx: CGContext = UIGraphicsGetCurrentContext()!
        UIColor.clear.set()
        ctx.fill(bounds)
        drawHierarchy(in: bounds, afterScreenUpdates: true)
        layer.backgroundColor = UIColor.clear.cgColor
        layer.render(in: ctx)
        let capturedImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return capturedImage
    }
}
Could anyone spot what the error might be hinting at?
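For context, the "has not been committed to render server" message comes from drawHierarchy(in:afterScreenUpdates:), which can only snapshot views that are attached to a window and have actually been rendered at least once. Since the question mentions iOS 16, one avenue worth trying is ImageRenderer, which rasterizes the SwiftUI view directly and needs no window at all. A sketch, untested against this exact code (the 35x35 size mirrors the original takeScreenshot call):

import SwiftUI
import MapKit

final class LandmarkAnnotationView: MKAnnotationView {
    var place: Place?

    override func prepareForDisplay() {
        super.prepareForDisplay()
        // ImageRenderer (iOS 16+) draws the view off screen, avoiding
        // drawHierarchy and its render-server requirement.
        let pin = LandmarkPin(isAnnotationSelected: place?.show ?? false)
            .frame(width: 35, height: 35)
        let renderer = ImageRenderer(content: pin)
        renderer.scale = traitCollection.displayScale // match the device scale
        image = renderer.uiImage
    }
}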

SwiftUI iOS, Generated Image from Canvas has a top margin

I'm trying to understand what the problem is in the code below.
The goal is to generate an image from a Canvas (the real code is more complex), and I'm having this issue at the last step.
When generating the image, the code always adds some space at the top of the generated image (shown in red). I was able to remove it by adding negative padding, but in my real code the value is different (here it's just 20, but it's 43 in my code).
The image shown is 800×499 pixels.
extension UIView {
    func asImage() -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1
        backgroundColor = .red
        return UIGraphicsImageRenderer(size: self.layer.frame.size, format: format).image { context in
            self.drawHierarchy(in: self.layer.bounds, afterScreenUpdates: true)
        }
    }
}

extension View {
    func asImage(size: CGSize) -> UIImage {
        let controller = UIHostingController(rootView: self)
        controller.view.bounds = CGRect(origin: .zero, size: size)
        let image = controller.view.asImage()
        return image
    }
}

struct TestView: View {
    var body: some View {
        VStack(spacing: 0.0) {
            let canvass = Canvas { context, size in
                context.draw(Image("empty_card"), at: .zero, anchor: .topLeading)
            }.frame(width: 800, height: 499, alignment: .topLeading)
            //.padding([.top], -20)
            ScrollView([.horizontal, .vertical]) {
                Image(uiImage: canvass.asImage(size: CGSize(width: 800, height: 499)))
            }.frame(width: 300, height: 600)
        }
    }
}

struct TestView_Previews: PreviewProvider {
    static var previews: some View {
        TestView()
    }
}
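For what it's worth, this top gap looks consistent with the safe-area inset behaviour discussed in the snapshot questions above: the hosting controller reserves the status bar inset when rendering. One thing worth trying (a sketch, untested against this exact code) is to ignore the safe area before hosting the view:

extension View {
    func asImage(size: CGSize) -> UIImage {
        // Wrapping the root view in ignoresSafeArea() stops the hosting
        // controller from reserving the status bar inset at the top.
        let controller = UIHostingController(rootView: self.ignoresSafeArea())
        controller.view.bounds = CGRect(origin: .zero, size: size)
        return controller.view.asImage()
    }
}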

UIGraphicsImageRenderer generates a strange top padding on iOS 15, working fine on iOS 14.x

iOS 15 generates a strange top padding when I export a SwiftUI view to an image using UIGraphicsImageRenderer. It works fine on iOS 14.x. Does anyone have an idea why?
It turns out to be a real issue with iOS 15.
I posted the same question on Twitter and received a reply with a working solution.
You can read more about the problem here.
The solution is in this gist.
I post the gist here for completeness, but all credit goes to its author (not me):
//
//  View+Snapshot.swift
//
//  Created by Vinzius on 2021-11-06.
//

import SwiftUI
import UIKit.UIImage
import UIKit.UIGraphicsImageRenderer

extension View {
    func snapshot() -> UIImage? {
        // Note: since iOS 15 it seems these two modifiers are required.
        let controller = UIHostingController(
            rootView: self.ignoresSafeArea()
                .fixedSize(horizontal: true, vertical: true)
        )
        guard let view = controller.view else { return nil }
        let targetSize = view.intrinsicContentSize
        if targetSize.width <= 0 || targetSize.height <= 0 { return nil }
        view.bounds = CGRect(origin: .zero, size: targetSize)
        view.backgroundColor = .clear
        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            view.drawHierarchy(in: controller.view.bounds, afterScreenUpdates: true)
        }
    }
}
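Since this version returns an optional, the call site needs to unwrap it; a minimal illustrative sketch (textView and savedImage are borrowed from the first question above):

Button("Save to image") {
    if let image = textView.snapshot() {
        savedImage = image
    }
}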

In SwiftUI, how do I continuously update a view while a gesture is being performed?

This is part of an ongoing attempt at teaching myself how to create a basic painting app in iOS, like MSPaint. I'm using SwiftUI and CoreImage to do this.
While I have my mind wrapped around the pixel manipulation in CoreImage (I've been looking at this), I'm not sure how to add a drag gesture to SwiftUI so that I can "paint".
With the drag gesture, I'd like to do this:
onBegin and onChanged:
send the current x,y position of my finger to the function handling the CoreImage manipulation;
receive and display the updated image;
repeat until gesture ends.
So in other words, continuously update the image as my finger moves.
UPDATE: I've taken a look at what Asperi below responded with, and added .gesture below .onAppear. However, this results in a warning "Modifying state during view update, this will cause undefined behavior."
struct ContentView: View {
    @State private var image: Image?
    @GestureState var location = CGPoint(x: 0, y: 0)

    var body: some View {
        VStack {
            image?
                .resizable()
                .scaledToFit()
        }
        .onAppear(perform: newPainting)
        .gesture(
            DragGesture()
                .updating($location) { (value, gestureState, transaction) in
                    gestureState = value.location
                    paint(location: location)
                }
        )
    }

    func newPainting() {
        guard let newPainting = createBlankCanvas(size: CGSize(width: 128, height: 128)) else {
            print("failed to create a blank canvas")
            return
        }
        image = Image(uiImage: newPainting)
    }

    func createBlankCanvas(size: CGSize, filledWithColor color: UIColor = UIColor.clear, scale: CGFloat = 0.0, opaque: Bool = false) -> UIImage? {
        let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        UIGraphicsBeginImageContextWithOptions(size, opaque, scale)
        color.set()
        UIRectFill(rect)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }

    func paint(location: CGPoint) {
        // do the CoreImage manipulation here
        // now, take the output of the CI manipulation and
        // attempt to get a CGImage from our CIImage
        if let cgimg = context.createCGImage(outputImage, from: outputImage.extent) {
            // convert that to a UIImage
            let uiImage = UIImage(cgImage: cgimg)
            // and convert that to a SwiftUI image
            image = Image(uiImage: uiImage) // <- Modifying state during view update, this will cause undefined behavior.
        }
    }
}
Where do I add the gesture and have it repeatedly call the paint() func?
How do I get view to update continuously as long as the gesture continues?
Thank you!
You shouldn't store SwiftUI views (like Image) inside @State variables. Instead, you should store a UIImage:
struct ContentView: View {
    @State private var uiImage: UIImage?
    @GestureState var location = CGPoint(x: 0, y: 0)

    var body: some View {
        VStack {
            uiImage.map { uiImage in
                Image(uiImage: uiImage)
                    .resizable()
                    .scaledToFit()
            }
        }
        .onAppear(perform: newPainting)
        .gesture(
            DragGesture()
                .updating($location) { (value, gestureState, transaction) in
                    gestureState = value.location
                    paint(location: location)
                }
        )
    }

    func newPainting() {
        guard let newPainting = createBlankCanvas(size: CGSize(width: 128, height: 128)) else {
            print("failed to create a blank canvas")
            return
        }
        uiImage = newPainting
    }

    func createBlankCanvas(size: CGSize, filledWithColor color: UIColor = UIColor.clear, scale: CGFloat = 0.0, opaque: Bool = false) -> UIImage? {
        let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        UIGraphicsBeginImageContextWithOptions(size, opaque, scale)
        color.set()
        UIRectFill(rect)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }

    func paint(location: CGPoint) {
        // do the CoreImage manipulation here
        // now, take the output of the CI manipulation and
        // attempt to get a CGImage from our CIImage
        if let cgimg = context.createCGImage(outputImage, from: outputImage.extent) {
            // convert that to a UIImage
            uiImage = UIImage(cgImage: cgimg)
        }
    }
}
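As a side note on the "Modifying state during view update" warning from the question: it appears because the .updating(_:body:) closure can run while SwiftUI is evaluating the view, so mutating other state from inside it is unsafe. One possible way around this (a sketch, not tested against the original code) is to drive the painting from .onChanged, which fires as an ordinary event callback:

.gesture(
    DragGesture(minimumDistance: 0)
        .onChanged { value in
            // Mutating @State here is safe; onChanged is a plain event callback.
            paint(location: value.location)
        }
)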

Is it possible to use UIKit extensions in SwiftUI?

I have a nice UIImage extension that renders circular images in high quality while using less memory. I want to either use this extension directly or re-create it in SwiftUI. The problem is that I am very new to SwiftUI and am not sure if it is even possible. Is there a way to use this?
Here's the extension:
extension UIImage {
    class func circularImage(from image: UIImage, size: CGSize) -> UIImage? {
        let scale = UIScreen.main.scale
        let circleRect = CGRect(x: 0, y: 0, width: size.width * scale, height: size.height * scale)
        UIGraphicsBeginImageContextWithOptions(circleRect.size, false, scale)
        defer { UIGraphicsEndImageContext() } // end the image context before returning
        let circlePath = UIBezierPath(roundedRect: circleRect, cornerRadius: circleRect.size.width / 2.0)
        circlePath.addClip()
        image.draw(in: circleRect)
        if let roundImage = UIGraphicsGetImageFromCurrentImageContext() {
            return roundImage
        }
        return nil
    }
}
You can create your UIImage as normal.
Then just convert it to a SwiftUI image with:
Image(uiImage: image)
Do not initialize your UIImage in the view body or in the initializer, as this can be quite expensive; instead, do it on appear with onAppear(perform:).
Example:
struct ContentView: View {
    @State private var circularImage: UIImage?

    var body: some View {
        VStack {
            Text("Hello world!")
            if let circularImage = circularImage {
                Image(uiImage: circularImage)
            }
        }
        .onAppear {
            guard let image: UIImage = UIImage(named: "background") else { return }
            circularImage = UIImage.circularImage(from: image, size: CGSize(width: 100, height: 100))
        }
    }
}
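If you only need the visual effect rather than an actual UIImage, SwiftUI can also clip any image to a circle directly; a minimal sketch (this trades the memory optimization of the UIKit helper for simplicity):

Image("background")
    .resizable()
    .scaledToFill()
    .frame(width: 100, height: 100)
    .clipShape(Circle())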
