I'm trying to understand what the problem is in the code below.
The purpose is to generate an image from a Canvas (the real code is more complex), and I'm having an issue in the last step.
When generating the image, the code always adds some space on top of the generated image (shown here in red). I was able to remove it by adding some negative padding, but the required value differs in my real code (here it's just 20, but it's 43 in my app).
The image shown is 800×499 pixels.
import SwiftUI

extension UIView {
    func asImage() -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1
        backgroundColor = .red
        return UIGraphicsImageRenderer(size: self.layer.frame.size, format: format).image { context in
            self.drawHierarchy(in: self.layer.bounds, afterScreenUpdates: true)
        }
    }
}

extension View {
    func asImage(size: CGSize) -> UIImage {
        let controller = UIHostingController(rootView: self)
        controller.view.bounds = CGRect(origin: .zero, size: size)
        let image = controller.view.asImage()
        return image
    }
}
struct TestView: View {
    var body: some View {
        VStack(spacing: 0.0) {
            let canvass = Canvas { context, size in
                context.draw(Image("empty_card"), at: .zero, anchor: .topLeading)
            }.frame(width: 800, height: 499, alignment: .topLeading)
            //.padding([.top], -20)
            ScrollView([.horizontal, .vertical]) {
                Image(uiImage: canvass.asImage(size: CGSize(width: 800, height: 499)))
            }.frame(width: 300, height: 600)
        }
    }
}

struct TestView_Previews: PreviewProvider {
    static var previews: some View {
        TestView()
    }
}
Update:
Since I'm using iOS 16, could I do something with ImageRenderer, as described at https://www.hackingwithswift.com/quick-start/swiftui/how-to-convert-a-swiftui-view-to-an-image?
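For what it's worth, a minimal ImageRenderer sketch for this Canvas might look like the following. This is an assumption-laden sketch (iOS 16+, the same "empty_card" asset), not a verified fix for the top-spacing issue; ImageRenderer lays out the view itself, so no hosting controller or safe-area insets are involved, which may well be what avoids the extra space.

```swift
import SwiftUI

// Sketch only: render the 800×499 canvas with ImageRenderer (iOS 16+).
@MainActor func canvasImage() -> UIImage? {
    let canvas = Canvas { context, size in
        context.draw(Image("empty_card"), at: .zero, anchor: .topLeading)
    }
    .frame(width: 800, height: 499, alignment: .topLeading)

    let renderer = ImageRenderer(content: canvas)
    renderer.scale = 1 // match the format.scale = 1 used above
    return renderer.uiImage
}
```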
I am experiencing some errors on my end, and I wanted to see if anyone could spot anything off in the way I am doing things.
This is the error I am getting:
2022-11-05 20:48:52.650233-0500 [2347:488099] [Snapshotting] Rendering
a view (0x12c9d6800,
TtGC7SwiftUI14_UIHostingViewGVS_15ModifiedContentGS1_GS1_GS1_VS_5ImageVS_18_AspectRatioLayout_GVS_11_ClipEffectVS_9Rectangle__GVS_30_EnvironmentKeyWritingModifierGSqVS_5Color___GVS_16_OverlayModifierGS1_GS1_VS_4TextVS_13_OffsetEffect_GS6_GSqS7______)
that has not been committed to render server is not supported.
I am rendering a SwiftUI view as an image and then using that image in an MKAnnotationView. By the looks of it, I must be doing something wrong, but I can't pinpoint exactly what.
So let's say that my view looks like this:
import SwiftUI

struct LandmarkPin: View {
    var isAnnotationSelected = false

    var body: some View {
        Image("map-pin-full-cluster-1")
            .renderingMode(.template)
            .resizable()
            .aspectRatio(contentMode: .fit)
            .clipped()
            .foregroundColor(self.isAnnotationSelected ? .red : Color("LandmarkAnnotation"))
            .overlay(
                Image(systemName: "building.columns.fill")
                    .font(.system(size: 10, weight: .bold))
                    .offset(y: -8)
                    .foregroundColor(Color.white)
            )
    }
}
That view gets called inside the MapView:
import MapKit

final class LandmarkAnnotationView: MKAnnotationView {
    var place: Place?

    override func prepareForDisplay() {
        super.prepareForDisplay()
        image = LandmarkPin(
            isAnnotationSelected: (place?.show ?? false)
        ).takeScreenshot(
            origin: CGPoint(x: 0, y: 0),
            size: CGSize(width: 35, height: 35)
        )
    }
}
Then I am using two helper extensions to help me achieve this:
extension View {
    func takeScreenshot(origin: CGPoint, size: CGSize) -> UIImage {
        let hosting = UIHostingController(rootView: self)
        let window = UIWindow(frame: CGRect(origin: origin, size: size))
        hosting.view.frame = window.frame
        window.addSubview(hosting.view)
        window.makeKeyAndVisible()
        hosting.view.backgroundColor = .clear
        return hosting.view.renderedImage
    }
}
extension UIView {
    var renderedImage: UIImage {
        let rect = self.bounds
        UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
        let ctx: CGContext = UIGraphicsGetCurrentContext()!
        UIColor.clear.set()
        ctx.fill(bounds)
        drawHierarchy(in: bounds, afterScreenUpdates: true)
        layer.backgroundColor = UIColor.clear.cgColor
        layer.render(in: ctx)
        let capturedImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return capturedImage
    }
}
Could anyone spot what the error might be hinting at?
This is part of an ongoing attempt at teaching myself how to create a basic painting app in iOS, like MSPaint. I'm using SwiftUI and CoreImage to do this.
While I have my mind wrapped around the pixel manipulation in CoreImage (I've been looking at this), I'm not sure how to add a drag gesture to SwiftUI so that I can "paint".
With the drag gesture, I'd like to do this:
onBegin and onChanged:
send the current x,y position of my finger to the function handling the CoreImage manipulation;
receive and display the updated image;
repeat until gesture ends.
So in other words, continuously update the image as my finger moves.
UPDATE: I've taken a look at what Asperi responded with below and added .gesture below .onAppear. However, this results in the warning "Modifying state during view update, this will cause undefined behavior."
struct ContentView: View {
    @State private var image: Image?
    @GestureState var location = CGPoint(x: 0, y: 0)

    var body: some View {
        VStack {
            image?
                .resizable()
                .scaledToFit()
        }
        .onAppear(perform: newPainting)
        .gesture(
            DragGesture()
                .updating($location) { (value, gestureState, transaction) in
                    gestureState = value.location
                    paint(location: location)
                }
        )
    }

    func newPainting() {
        guard let newPainting = createBlankCanvas(size: CGSize(width: 128, height: 128)) else {
            print("failed to create a blank canvas")
            return
        }
        image = Image(uiImage: newPainting)
    }

    func createBlankCanvas(size: CGSize, filledWithColor color: UIColor = UIColor.clear, scale: CGFloat = 0.0, opaque: Bool = false) -> UIImage? {
        let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        UIGraphicsBeginImageContextWithOptions(size, opaque, scale)
        color.set()
        UIRectFill(rect)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }

    func paint(location: CGPoint) {
        // do the CoreImage manipulation here
        // now, take the output of the CI manipulation and
        // attempt to get a CGImage from our CIImage
        if let cgimg = context.createCGImage(outputImage, from: outputImage.extent) {
            // convert that to a UIImage
            let uiImage = UIImage(cgImage: cgimg)
            // and convert that to a SwiftUI image
            image = Image(uiImage: uiImage) // <- Modifying state during view update, this will cause undefined behavior.
        }
    }
}
Where do I add the gesture and have it repeatedly call the paint() func?
How do I get view to update continuously as long as the gesture continues?
Thank you!
You shouldn't store SwiftUI views (like Image) inside @State variables. Instead, you should store a UIImage:
struct ContentView: View {
    @State private var uiImage: UIImage?
    @GestureState var location = CGPoint(x: 0, y: 0)

    var body: some View {
        VStack {
            uiImage.map { uiImage in
                Image(uiImage: uiImage)
                    .resizable()
                    .scaledToFit()
            }
        }
        .onAppear(perform: newPainting)
        .gesture(
            DragGesture()
                .updating($location) { (value, gestureState, transaction) in
                    gestureState = value.location
                    paint(location: location)
                }
        )
    }

    func newPainting() {
        guard let newPainting = createBlankCanvas(size: CGSize(width: 128, height: 128)) else {
            print("failed to create a blank canvas")
            return
        }
        uiImage = newPainting
    }

    func createBlankCanvas(size: CGSize, filledWithColor color: UIColor = UIColor.clear, scale: CGFloat = 0.0, opaque: Bool = false) -> UIImage? {
        let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        UIGraphicsBeginImageContextWithOptions(size, opaque, scale)
        color.set()
        UIRectFill(rect)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }

    func paint(location: CGPoint) {
        // do the CoreImage manipulation here
        // now, take the output of the CI manipulation and
        // attempt to get a CGImage from our CIImage
        if let cgimg = context.createCGImage(outputImage, from: outputImage.extent) {
            // convert that to a UIImage
            uiImage = UIImage(cgImage: cgimg)
        }
    }
}
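Note that paint(location:) above is a stub: context and outputImage are undefined placeholders from the question. Purely as an illustration of how the pieces could fit together (the dot-stamping approach and the Core Image setup are my assumptions, not part of the original answer), a concrete version might keep a CIImage as the working canvas and composite a dot at the touch point:

```swift
import CoreImage
import UIKit

// Illustrative sketch: stamp a small white dot onto a working CIImage.
let ciContext = CIContext()
var canvasImage = CIImage(color: .clear)
    .cropped(to: CGRect(x: 0, y: 0, width: 128, height: 128))

func paintedImage(at location: CGPoint) -> UIImage? {
    // Core Image's origin is bottom-left, so flip the y coordinate.
    let flipped = CGPoint(x: location.x, y: canvasImage.extent.height - location.y)
    let dot = CIImage(color: .white)
        .cropped(to: CGRect(x: flipped.x - 2, y: flipped.y - 2, width: 4, height: 4))
    // Composite the dot over the existing canvas and keep the result.
    canvasImage = dot.composited(over: canvasImage)
    guard let cgimg = ciContext.createCGImage(canvasImage, from: canvasImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgimg)
}
```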
I tried to create a UIImage from a SwiftUI view (a snapshot) with the code from HWS: How to convert a SwiftUI view to an image.
I get the following result, which is obviously incorrect because the image is cut off.
Code:
struct ContentView: View {
    @State private var savedImage: UIImage?

    var textView: some View {
        Text("Hello, SwiftUI")
            .padding()
            .background(Color.blue)
            .foregroundColor(.white)
            .clipShape(Capsule())
    }

    var body: some View {
        ZStack {
            VStack(spacing: 100) {
                textView
                Button("Save to image") {
                    savedImage = textView.snapshot()
                }
            }
            if let savedImage = savedImage {
                Image(uiImage: savedImage)
                    .border(Color.red)
            }
        }
    }
}

extension View {
    func snapshot() -> UIImage {
        let controller = UIHostingController(rootView: self)
        let view = controller.view
        let targetSize = controller.view.intrinsicContentSize
        view?.bounds = CGRect(origin: .zero, size: targetSize)
        view?.backgroundColor = .clear
        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            view?.drawHierarchy(in: controller.view.bounds, afterScreenUpdates: true)
        }
    }
}
It looks like the snapshotted view is rendered lower than it should be, but I'm not sure. How do I fix this?
Edits
We have discovered this problem does not occur on iOS 14, only iOS 15. So the question is... how can this be fixed for iOS 15?
I also recently noticed this issue. I tested on different Simulators (for example, iPhone 8 and iPhone 13 Pro) and realized that the offset seems always half the status bar height. So I suspect that when you call drawHierarchy(in:afterScreenUpdates:), internally SwiftUI always takes safe area insets into account.
Therefore, I modified the snapshot() function in your View extension by using the edgesIgnoringSafeArea(_:) view modifier, and it worked:
extension View {
    func snapshot() -> UIImage {
        let controller = UIHostingController(rootView: self.edgesIgnoringSafeArea(.all))
        let view = controller.view
        let targetSize = controller.view.intrinsicContentSize
        view?.bounds = CGRect(origin: .zero, size: targetSize)
        view?.backgroundColor = .clear
        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            view?.drawHierarchy(in: controller.view.bounds, afterScreenUpdates: true)
        }
    }
}
The key to avoiding the extra whitespace is that the additional safe area insets must be the negative of the original safe area insets, so that the two cancel out.
This is my current implementation; it also works for rendering views that are not on screen. Tested on an iPhone 7 with iOS 15.7, an iPhone 13 with iOS 16.2 beta, and an iPad mini 6 with iOS 16.2 beta.
extension View {
    func asImage() -> UIImage {
        let controller = UIHostingController(rootView: self.edgesIgnoringSafeArea(.all))
        let scenes = UIApplication.shared.connectedScenes
        let windowScene = scenes.first as? UIWindowScene
        let window = windowScene?.windows.first
        window?.rootViewController?.view.addSubview(controller.view)
        controller.view.frame = CGRect(x: 0, y: CGFloat(Int.max), width: 1, height: 1)
        controller.additionalSafeAreaInsets = UIEdgeInsets(top: -controller.view.safeAreaInsets.top,
                                                           left: -controller.view.safeAreaInsets.left,
                                                           bottom: -controller.view.safeAreaInsets.bottom,
                                                           right: -controller.view.safeAreaInsets.right)
        let targetSize = controller.view.intrinsicContentSize
        controller.view.bounds = CGRect(origin: .zero, size: targetSize)
        controller.view.sizeToFit()
        let image = controller.view.asImage()
        controller.view.removeFromSuperview()
        return image
    }
}

extension UIView {
    func asImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        return renderer.image { rendererContext in
            layer.render(in: rendererContext.cgContext)
        }
    }
}
I also noticed this annoying behaviour in iOS 15 and think I found a workaround until the issue with iOS 15 and drawHierarchy(in:afterScreenUpdates:) is resolved.
This extension worked for me in my app (tested on the iOS 15.0 and iOS 14.5 simulators, and on an iPhone XS Max running iOS 15.0.1). You can set the scale higher than 1 if you need a higher-resolution image:
extension View {
    func snapshotiOS15() -> UIImage {
        let controller = UIHostingController(rootView: self)
        let view = controller.view
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1
        format.opaque = true
        let targetSize = controller.view.intrinsicContentSize
        view?.bounds = CGRect(origin: .zero, size: targetSize)
        view?.backgroundColor = .clear
        let window = UIWindow(frame: view!.bounds)
        window.addSubview(controller.view)
        window.makeKeyAndVisible()
        let renderer = UIGraphicsImageRenderer(bounds: view!.bounds, format: format)
        return renderer.image { rendererContext in
            view?.layer.render(in: rendererContext.cgContext)
        }
    }
}
Edit
It turns out that the above solution only works if the view (to be saved as a UIImage) is embedded in a NavigationView to which the .statusBar(hidden: true) view modifier is applied. See the example code below to reproduce the result:
import SwiftUI

struct StartView: View {
    var body: some View {
        NavigationView {
            TestView()
        }.statusBar(hidden: true)
    }
}
struct TestView: View {
    var testView: some View {
        Text("Hello, World!")
            .font(.system(size: 42.0))
            .foregroundColor(.white)
            .frame(width: 200, height: 200, alignment: .center)
            .background(Color.blue)
    }

    var body: some View {
        VStack {
            testView
            Button(action: {
                let imageiOS15 = testView.snapshotiOS15()
            }, label: {
                Text("Take snapshot")
                    .font(.headline)
            })
        }
    }
}
As of iOS 16 you can use ImageRenderer to export bitmap image data from a SwiftUI view.
Just keep in mind that you have to call it on the main thread; I used @MainActor here. In this example, however, the @MainActor is unnecessary because the rendering fires from the Button action, which always runs on the main thread.
struct ContentView: View {
    @State private var renderedImage = Image(systemName: "photo.artframe")
    @Environment(\.displayScale) var displayScale

    var body: some View {
        VStack(spacing: 30) {
            renderedImage
                .frame(width: 300, height: 300)
                .background(.gray)
            Button("Render SampleView") {
                let randomNumber = Int.random(in: 0...100)
                let renderer = ImageRenderer(content: createSampleView(number: randomNumber))
                /// The default value of the scale property is 1.0
                renderer.scale = displayScale
                if let uiImage = renderer.uiImage {
                    renderedImage = Image(uiImage: uiImage)
                }
            }
        }
    }

    @MainActor func createSampleView(number: Int) -> some View {
        Text("Random Number: \(number)")
            .font(.title)
            .foregroundColor(.white)
            .padding()
            .background(.blue)
    }
}
I have a nice UIImage extension that renders circular images in high quality using less memory. I want to either use this extension or re-create it in SwiftUI so I can use it. The problem is I am very new to SwiftUI and am not sure if it is even possible. Is there a way to use this?
Here's the extension:
extension UIImage {
    class func circularImage(from image: UIImage, size: CGSize) -> UIImage? {
        let scale = UIScreen.main.scale
        let circleRect = CGRect(x: 0, y: 0, width: size.width * scale, height: size.height * scale)
        UIGraphicsBeginImageContextWithOptions(circleRect.size, false, scale)
        // Ensure the context is always ended (the original code leaked it).
        defer { UIGraphicsEndImageContext() }
        let circlePath = UIBezierPath(roundedRect: circleRect, cornerRadius: circleRect.size.width / 2.0)
        circlePath.addClip()
        image.draw(in: circleRect)
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}
You can create your UIImage as normal.
Then just convert it to a SwiftUI Image with:
Image(uiImage: image)
Do not initialize your UIImage in the view body or in an initializer, as this can be quite expensive; instead, do it on appear with onAppear(perform:).
Example:
struct ContentView: View {
    @State private var circularImage: UIImage?

    var body: some View {
        VStack {
            Text("Hello world!")
            if let circularImage = circularImage {
                Image(uiImage: circularImage)
            }
        }
        .onAppear {
            guard let image: UIImage = UIImage(named: "background") else { return }
            circularImage = UIImage.circularImage(from: image, size: CGSize(width: 100, height: 100))
        }
    }
}
I am building a widget that will hold some text that is a list of short words and phrases. Something like this:
Because it's a list of short items it would work best if it could wrap into two columns.
Here's the current simple code (with font and spacings removed):
struct WidgetEntryView: View {
    var entry: Provider.Entry

    var body: some View {
        ZStack {
            Color(entry.color)
            VStack(alignment: .leading) {
                Text(entry.name)
                Text("Updated in 6 hours")
                Text(entry.content)
            }
        }
    }
}
I found this guide to tell me whether or not the text is truncated, but what I need is to know which text has been truncated, so that I can add another Text view to the right with the remaining characters, or ideally use some native method to flow the text between two Text views.
This is certainly not ideal, but here's what I came up with. The gist is that I use the truncated-text paradigm linked in the question to get the available height, then use the width of the widget minus padding to iterate through the text until it no longer fits in half the width.
Some downsides are that (1) the left column must be half or less of the widget's width, when in reality it could sometimes fit more content if it were wider, (2) it is difficult to be 100% certain the spacings are all accounted for, and (3) I had to hardcode the dimensions of the widget.
In any case, hope this helps anyone looking for a similar solution!
Here's the code with spacings and colors removed for clarity:
struct SizePreferenceKey: PreferenceKey {
    static var defaultValue: CGSize = .zero
    static func reduce(value: inout CGSize, nextValue: () -> CGSize) {}
}

extension View {
    func readSize(onChange: @escaping (CGSize) -> Void) -> some View {
        background(
            GeometryReader { geometryProxy in
                Color.clear
                    .preference(key: SizePreferenceKey.self, value: geometryProxy.size)
            })
            .onPreferenceChange(SizePreferenceKey.self, perform: onChange)
    }
}

struct TruncableText: View {
    let text: Text
    @State private var intrinsicSize: CGSize = .zero
    @State private var truncatedSize: CGSize = .zero
    let isTruncatedUpdate: (_ isTruncated: Bool, _ truncatedSize: CGSize) -> Void

    var body: some View {
        text
            .readSize { size in
                truncatedSize = size
                isTruncatedUpdate(truncatedSize != intrinsicSize, size)
            }
            .background(
                text
                    .fixedSize(horizontal: false, vertical: true)
                    .hidden()
                    .readSize { size in
                        intrinsicSize = size
                        if truncatedSize != .zero {
                            isTruncatedUpdate(truncatedSize != intrinsicSize, truncatedSize)
                        }
                    })
    }
}
/**
 - Parameter text: The entire contents of the note
 - Parameter size: The size of the text area that was used to initially render the first note
 - Parameter widgetWidth: Exact width of the widget for the current family/screen size
 */
func partitionText(_ text: String, size: CGSize, widgetWidth: CGFloat) -> (String, String)? {
    var part1 = ""
    var part2 = text
    let colWidth = widgetWidth / 2 - 32 // padding
    let colHeight = size.height
    // Shouldn't happen, but just block against infinite loops
    for i in 0...100 {
        // Find the first line break, or failing that, the first space
        var splitAt = part2.firstIndex(of: "\n")
        if splitAt == nil {
            splitAt = part2.firstIndex(of: "\r")
            if splitAt == nil {
                splitAt = part2.firstIndex(of: " ")
            }
        }
        // We have a block of letters remaining. Let's not split it.
        if splitAt == nil {
            if i == 0 {
                // If we haven't split anything yet, just show the text as a single block
                return nil
            } else {
                // Divide what we had
                break
            }
        }
        let part1Test = String(text[...text.index(splitAt!, offsetBy: part1.count)])
        let part1TestSize = part1Test
            .trimmingCharacters(in: .newlines)
            .boundingRect(with: CGSize(width: colWidth, height: .infinity),
                          options: .usesLineFragmentOrigin,
                          attributes: [.font: UIFont.systemFont(ofSize: 12)],
                          context: nil)
        if part1TestSize.height > colHeight {
            // We exceeded the limit! Return what we have
            break
        }
        part1 = part1Test
        part2 = String(part2[part2.index(splitAt!, offsetBy: 1)...])
    }
    return (part1.trimmingCharacters(in: .newlines), part2.trimmingCharacters(in: .newlines))
}
func getWidgetWidth(_ family: WidgetFamily) -> CGFloat {
    switch family {
    case .systemLarge, .systemMedium:
        switch UIScreen.main.bounds.size {
        case CGSize(width: 428, height: 926): return 364
        case CGSize(width: 414, height: 896): return 360
        case CGSize(width: 414, height: 736): return 348
        case CGSize(width: 390, height: 844): return 338
        case CGSize(width: 375, height: 812): return 329
        case CGSize(width: 375, height: 667): return 321
        case CGSize(width: 360, height: 780): return 329
        case CGSize(width: 320, height: 568): return 292
        default: return 330
        }
    default:
        switch UIScreen.main.bounds.size {
        case CGSize(width: 428, height: 926): return 170
        case CGSize(width: 414, height: 896): return 169
        case CGSize(width: 414, height: 736): return 159
        case CGSize(width: 390, height: 844): return 158
        case CGSize(width: 375, height: 812): return 155
        case CGSize(width: 375, height: 667): return 148
        case CGSize(width: 360, height: 780): return 155
        case CGSize(width: 320, height: 568): return 141
        default: return 155
        }
    }
}
struct NoteWidgetEntryView: View {
    @State var isTruncated: Bool = false
    @State var colOneText: String = ""
    @State var colTwoText: String = ""
    var entry: Provider.Entry
    @Environment(\.widgetFamily) var family: WidgetFamily

    var body: some View {
        ZStack {
            Color(entry.color)
            VStack {
                Text(entry.name)
                Text("Updated 6 hours ago")
                if entry.twoColumn {
                    if isTruncated {
                        HStack {
                            Text(colOneText).font(.system(size: 12))
                            Text(colTwoText).font(.system(size: 12))
                        }
                    } else {
                        TruncableText(text: Text(entry.content).font(.system(size: 12))) {
                            let size = $1
                            if $0 && colTwoText == "" {
                                if let (part1, part2) = partitionText(entry.content, size: size, widgetWidth: getWidgetWidth(family)) {
                                    colOneText = part1
                                    colTwoText = part2
                                    // Only set this if we successfully partitioned the text
                                    isTruncated = true
                                }
                            }
                        }
                    }
                } else {
                    Text(entry.content).font(.system(size: 12))
                }
            }
        }
    }
}
Using the view's frame modifier
This can be done with a frame modifier.
Create an HStack and give each view in it the same frame modifier, .frame(minWidth: 0, maxWidth: .infinity). This will distribute the views equally.
Looking at your code, I think this could work:
struct WidgetEntryView: View {
    var entry: Provider.Entry

    var body: some View {
        ZStack {
            Color(entry.color)
            VStack(alignment: .leading) {
                Text(entry.name)
                Text("Updated in 6 hours")
                // your entry.content needs to be formatted into an HStack
                HStack {
                    Text(entry.content)
                        .frame(minWidth: 0, maxWidth: .infinity)
                    Text(entry.content)
                        .frame(minWidth: 0, maxWidth: .infinity)
                }
            }
        }
    }
}
See this article too:
SwiftUI: Two equal width columns
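As a further sketch, assuming the content can be split into an array of short items (which the question's entry.content doesn't show) and that LazyVGrid behaves as expected in your widget target, two equal-width columns can also be expressed with a LazyVGrid using flexible column items:

```swift
import SwiftUI

// Sketch: equal-width two-column layout via LazyVGrid (iOS 14+).
struct TwoColumnList: View {
    let items: [String] // hypothetical: entry.content split into items

    var body: some View {
        LazyVGrid(columns: [GridItem(.flexible()), GridItem(.flexible())],
                  alignment: .leading) {
            ForEach(items, id: \.self) { item in
                Text(item)
                    .font(.system(size: 12))
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
        }
    }
}
```

This wraps the items into rows automatically, so there is no need to measure truncation or partition the string by hand.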