I have a nice UIImage extension that renders circular images in high quality while using less memory. I want to either use this extension directly or re-create it in SwiftUI. The problem is that I am very new to SwiftUI and am not sure it is even possible. Is there a way to use this?
Here's the extension:
extension UIImage {
    class func circularImage(from image: UIImage, size: CGSize) -> UIImage? {
        let scale = UIScreen.main.scale
        let circleRect = CGRect(x: 0, y: 0, width: size.width * scale, height: size.height * scale)

        UIGraphicsBeginImageContextWithOptions(circleRect.size, false, scale)
        // End the context on every exit path so it is never leaked.
        defer { UIGraphicsEndImageContext() }

        // Clip to a circle, then draw the source image inside it.
        let circlePath = UIBezierPath(roundedRect: circleRect, cornerRadius: circleRect.size.width / 2.0)
        circlePath.addClip()
        image.draw(in: circleRect)

        return UIGraphicsGetImageFromCurrentImageContext()
    }
}
You can create your UIImage as normal.
Then, just convert it to a SwiftUI image with:
Image(uiImage: image)
Do not initialize your UIImage in the view body or initializer, as this can be quite expensive. Instead, do it on appear with onAppear(perform:).
Example:
struct ContentView: View {
    @State private var circularImage: UIImage?

    var body: some View {
        VStack {
            Text("Hello world!")
            if let circularImage = circularImage {
                Image(uiImage: circularImage)
            }
        }
        .onAppear {
            guard let image = UIImage(named: "background") else { return }
            circularImage = UIImage.circularImage(from: image, size: CGSize(width: 100, height: 100))
        }
    }
}
Update:
Since I'm using iOS 16, could I do something with https://www.hackingwithswift.com/quick-start/swiftui/how-to-convert-a-swiftui-view-to-an-image?
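For reference, on iOS 16 the UIKit drawing can be skipped entirely: do the circular clipping in SwiftUI and let ImageRenderer produce the UIImage. A minimal sketch, assuming iOS 16+ and the same "background" asset as in the example above:

import SwiftUI
import UIKit

@MainActor
func makeCircularImage() -> UIImage? {
    // Clip in SwiftUI, then render the view hierarchy to a UIImage.
    let content = Image("background")
        .resizable()
        .scaledToFill()
        .frame(width: 100, height: 100)
        .clipShape(Circle())

    let renderer = ImageRenderer(content: content)
    renderer.scale = UIScreen.main.scale // render at the device's pixel density
    return renderer.uiImage
}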
I am experiencing some errors on my end, and I wanted to see if anyone could spot anything off with the way I am doing things.
This is the error that I am getting:
2022-11-05 20:48:52.650233-0500 [2347:488099] [Snapshotting] Rendering
a view (0x12c9d6800,
TtGC7SwiftUI14_UIHostingViewGVS_15ModifiedContentGS1_GS1_GS1_VS_5ImageVS_18_AspectRatioLayout_GVS_11_ClipEffectVS_9Rectangle__GVS_30_EnvironmentKeyWritingModifierGSqVS_5Color___GVS_16_OverlayModifierGS1_GS1_VS_4TextVS_13_OffsetEffect_GS6_GSqS7______)
that has not been committed to render server is not supported.
I am creating a SwiftUI view as an image and then using that image to render an MKAnnotationView. By the looks of it, I must be doing something wrong, but I can't pinpoint what exactly.
So let's say that my view looks like this:
import SwiftUI

struct LandmarkPin: View {
    var isAnnotationSelected = false

    var body: some View {
        Image("map-pin-full-cluster-1")
            .renderingMode(.template)
            .resizable()
            .aspectRatio(contentMode: .fit)
            .clipped()
            .foregroundColor(self.isAnnotationSelected ? .red : Color("LandmarkAnnotation"))
            .overlay(
                Image(systemName: "building.columns.fill")
                    .font(.system(size: 10, weight: .bold))
                    .offset(y: -8)
                    .foregroundColor(Color.white)
            )
    }
}
That view gets called inside the MapView:
final class LandmarkAnnotationView: MKAnnotationView {
    var place: Place?

    override func prepareForDisplay() {
        super.prepareForDisplay()
        image = LandmarkPin(
            isAnnotationSelected: (place?.show ?? false)
        ).takeScreenshot(
            origin: CGPoint(x: 0, y: 0),
            size: CGSize(width: 35, height: 35)
        )
    }
}
Then I am using two helper extensions to help me achieve this:
extension View {
    func takeScreenshot(origin: CGPoint, size: CGSize) -> UIImage {
        let hosting = UIHostingController(rootView: self)
        let window = UIWindow(frame: CGRect(origin: origin, size: size))
        hosting.view.frame = window.frame
        window.addSubview(hosting.view)
        window.makeKeyAndVisible()
        hosting.view.backgroundColor = .clear
        return hosting.view.renderedImage
    }
}

extension UIView {
    var renderedImage: UIImage {
        let rect = self.bounds
        UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
        let ctx: CGContext = UIGraphicsGetCurrentContext()!
        UIColor.clear.set()
        ctx.fill(bounds)
        drawHierarchy(in: bounds, afterScreenUpdates: true)
        layer.backgroundColor = UIColor.clear.cgColor
        layer.render(in: ctx)
        let capturedImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return capturedImage
    }
}
Could anyone spot what the error might be hinting at?
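For what it's worth, that warning is emitted by drawHierarchy(in:afterScreenUpdates:), which snapshots through the render server and therefore expects a view that has actually been committed to the screen; the hosting view above is created and rendered in the same pass, so it never is. One possible workaround (a sketch, not a verified fix for this exact setup) is to rasterize the layer tree directly, which also removes the duplicate drawing in the code above, where the hierarchy is drawn once with drawHierarchy and again with layer.render:

extension UIView {
    var renderedImage: UIImage {
        // layer.render(in:) draws the layer tree straight into the context,
        // so the view never has to be committed to the render server.
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        return renderer.image { context in
            layer.render(in: context.cgContext)
        }
    }
}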
I'm trying to understand what the problem is in the code below.
The purpose is to generate an image from a canvas (the real code is more complex), and I'm having this issue in the last step.
When generating the image, the code always adds some space on top of the generated image (shown in red). I was able to remove it by adding some negative padding, but the value differs in my real code (here it's just 20, but it's 43 in my code).
The image shown is 800x499 pixels.
extension UIView {
    func asImage() -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1
        backgroundColor = .red
        return UIGraphicsImageRenderer(size: self.layer.frame.size, format: format).image { context in
            self.drawHierarchy(in: self.layer.bounds, afterScreenUpdates: true)
        }
    }
}

extension View {
    func asImage(size: CGSize) -> UIImage {
        let controller = UIHostingController(rootView: self)
        controller.view.bounds = CGRect(origin: .zero, size: size)
        let image = controller.view.asImage()
        return image
    }
}
struct TestView: View {
    var body: some View {
        VStack(spacing: 0.0) {
            let canvass = Canvas { context, size in
                context.draw(Image("empty_card"), at: .zero, anchor: .topLeading)
            }.frame(width: 800, height: 499, alignment: .topLeading)
            //.padding([.top], -20)

            ScrollView([.horizontal, .vertical]) {
                Image(uiImage: canvass.asImage(size: CGSize(width: 800, height: 499)))
            }.frame(width: 300, height: 600)
        }
    }
}

struct TestView_Previews: PreviewProvider {
    static var previews: some View {
        TestView()
    }
}
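The extra space at the top is consistent with the hosting controller applying a safe-area inset before drawing: 20 points matches the classic status bar, while larger values match notched devices. A sketch of a workaround, under the assumption that the inset is indeed the culprit, is to tell the root view to ignore the safe area before rendering:

extension View {
    func asImage(size: CGSize) -> UIImage {
        // ignoresSafeArea() keeps UIHostingController from offsetting the
        // content by the device's safe-area inset when laying out offscreen.
        let controller = UIHostingController(rootView: self.ignoresSafeArea())
        controller.view.frame = CGRect(origin: .zero, size: size)

        let format = UIGraphicsImageRendererFormat()
        format.scale = 1
        return UIGraphicsImageRenderer(size: size, format: format).image { _ in
            controller.view.drawHierarchy(in: controller.view.bounds, afterScreenUpdates: true)
        }
    }
}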
This is part of an ongoing attempt at teaching myself how to create a basic painting app in iOS, like MSPaint. I'm using SwiftUI and CoreImage to do this.
While I have my mind wrapped around the pixel manipulation in CoreImage (I've been looking at this), I'm not sure how to add a drag gesture to SwiftUI so that I can "paint".
With the drag gesture, I'd like to do this:
onBegin and onChanged:
send the current x,y position of my finger to the function handling the CoreImage manipulation;
receive and display the updated image;
repeat until the gesture ends.
So in other words, continuously update the image as my finger moves.
UPDATE: I've taken a look at what Asperi below responded with, and added .gesture below .onAppear. However, this results in a warning "Modifying state during view update, this will cause undefined behavior."
struct ContentView: View {
    @State private var image: Image?
    @GestureState var location = CGPoint(x: 0, y: 0)

    var body: some View {
        VStack {
            image?
                .resizable()
                .scaledToFit()
        }
        .onAppear(perform: newPainting)
        .gesture(
            DragGesture()
                .updating($location) { (value, gestureState, transaction) in
                    gestureState = value.location
                    paint(location: location)
                }
        )
    }

    func newPainting() {
        guard let newPainting = createBlankCanvas(size: CGSize(width: 128, height: 128)) else {
            print("failed to create a blank canvas")
            return
        }
        image = Image(uiImage: newPainting)
    }

    func createBlankCanvas(size: CGSize, filledWithColor color: UIColor = UIColor.clear, scale: CGFloat = 0.0, opaque: Bool = false) -> UIImage? {
        let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        UIGraphicsBeginImageContextWithOptions(size, opaque, scale)
        color.set()
        UIRectFill(rect)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }

    func paint(location: CGPoint) {
        // do the CoreImage manipulation here
        // (context and outputImage come from the elided CoreImage code)
        // now, take the output of the CI manipulation and
        // attempt to get a CGImage from our CIImage
        if let cgimg = context.createCGImage(outputImage, from: outputImage.extent) {
            // convert that to a UIImage
            let uiImage = UIImage(cgImage: cgimg)
            // and convert that to a SwiftUI image
            image = Image(uiImage: uiImage) // <- Modifying state during view update, this will cause undefined behavior.
        }
    }
}
Where do I add the gesture and have it repeatedly call the paint() func?
How do I get the view to update continuously as long as the gesture continues?
Thank you!
You shouldn't store SwiftUI views (like Image) inside @State variables. Instead, you should store a UIImage:
struct ContentView: View {
    @State private var uiImage: UIImage?
    @GestureState var location = CGPoint(x: 0, y: 0)

    var body: some View {
        VStack {
            uiImage.map { uiImage in
                Image(uiImage: uiImage)
                    .resizable()
                    .scaledToFit()
            }
        }
        .onAppear(perform: newPainting)
        .gesture(
            DragGesture()
                .updating($location) { (value, gestureState, transaction) in
                    gestureState = value.location
                    paint(location: location)
                }
        )
    }

    func newPainting() {
        guard let newPainting = createBlankCanvas(size: CGSize(width: 128, height: 128)) else {
            print("failed to create a blank canvas")
            return
        }
        uiImage = newPainting
    }

    func createBlankCanvas(size: CGSize, filledWithColor color: UIColor = UIColor.clear, scale: CGFloat = 0.0, opaque: Bool = false) -> UIImage? {
        let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        UIGraphicsBeginImageContextWithOptions(size, opaque, scale)
        color.set()
        UIRectFill(rect)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }

    func paint(location: CGPoint) {
        // do the CoreImage manipulation here
        // now, take the output of the CI manipulation and
        // attempt to get a CGImage from our CIImage
        if let cgimg = context.createCGImage(outputImage, from: outputImage.extent) {
            // convert that to a UIImage
            uiImage = UIImage(cgImage: cgimg)
        }
    }
}
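If the "Modifying state during view update" warning persists, it is likely because paint(location:) is being called from inside updating(_:body:), which runs during the view-update transaction. A sketch of an alternative (an assumption on my part, not part of the original answer): drive the painting from onChanged, which delivers gesture events outside the update pass.

.gesture(
    DragGesture(minimumDistance: 0)
        .onChanged { value in
            // onChanged runs outside the view-update transaction, so
            // mutating @State here avoids the undefined-behavior warning.
            paint(location: value.location)
        }
)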
I found an answer for this same question for Objective-C (Saving two UIImages side-by-side as one combined image?), but not for Swift.
I want to combine two UIImages side by side to create one UIImage. Both images would be merged at the border and maintain their original resolution (no screenshots).
How can I save the two images as one image?
ImageA + ImageB = ImageC
You can use the idea from the linked question of using a UIGraphics image context and UIImage.draw to draw the two images side by side and then create a new UIImage from the context.
extension UIImage {
    func mergedSideBySide(with otherImage: UIImage) -> UIImage? {
        let mergedWidth = self.size.width + otherImage.size.width
        let mergedHeight = max(self.size.height, otherImage.size.height)
        let mergedSize = CGSize(width: mergedWidth, height: mergedHeight)

        // Use the receiver's scale so the merged image keeps its resolution.
        UIGraphicsBeginImageContextWithOptions(mergedSize, false, self.scale)
        defer { UIGraphicsEndImageContext() }

        // Draw each image at its own size so neither is stretched across
        // the full merged width.
        self.draw(in: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
        otherImage.draw(in: CGRect(x: self.size.width, y: 0, width: otherImage.size.width, height: otherImage.size.height))

        return UIGraphicsGetImageFromCurrentImageContext()
    }
}
Usage (assuming leftImage and rightImage are both UIImages):
let mergedImage = leftImage.mergedSideBySide(with: rightImage)
Demo of the function (using SwiftUI for quicker display using previews):
struct Playground_Previews: PreviewProvider {
    static let leftImage = UIColor.blue.image(CGSize(width: 128, height: 128))
    static let rightImage = UIColor.red.image(CGSize(width: 128, height: 128))
    static let mergedImage = leftImage.mergedSideBySide(with: rightImage) ?? UIImage()

    static var previews: some View {
        VStack(spacing: 10) {
            Image(uiImage: leftImage)
            Image(uiImage: rightImage)
            Image(uiImage: mergedImage)
        }
    }
}
The UIColor.image function is from Create UIImage with solid color in Swift.
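For completeness, that helper is roughly the following (a sketch based on the linked answer; the exact signature there may differ):

extension UIColor {
    // Returns a solid-color image of the given size.
    func image(_ size: CGSize = CGSize(width: 1, height: 1)) -> UIImage {
        UIGraphicsImageRenderer(size: size).image { context in
            self.setFill()
            context.fill(CGRect(origin: .zero, size: size))
        }
    }
}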
I want to blend two images with darkenBlendMode and use a Slider to manipulate the alpha of the first and second image. Unfortunately, my method is laggy on an iPhone 6S. I know the problem is that the "loadImage" function is called on every Slider change, but I have no idea how to make it less taxing. Do you have any idea how to fix it?
extension UIImage {
    func alpha(_ value: CGFloat) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        draw(at: CGPoint.zero, blendMode: .normal, alpha: value)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
import SwiftUI
import CoreImage.CIFilterBuiltins

struct ContentView: View {
    @State private var image: Image?
    @State private var opacity: CGFloat = 0
    let context = CIContext()

    var body: some View {
        VStack(alignment: .center) {
            image?
                .resizable()
                .padding(14.0)
                .scaledToFit()
            Slider(value: Binding(
                get: {
                    self.opacity
                },
                set: { newValue in
                    self.opacity = newValue
                    self.loadImage()
                }
            ), in: -0.5...0.5)
        }
        .onAppear(perform: loadImage)
    }

    func loadImage() {
        let uiInputImage = UIImage(named: "photo1")!.alpha(0.5 + opacity)
        let uiBackgroundInputImage = UIImage(named: "photo2")!.alpha(0.5 - opacity)
        let ciInputImage = CIImage(image: uiInputImage)
        let ciBackgroundInputImage = CIImage(image: uiBackgroundInputImage)
        let currentFilter = CIFilter.darkenBlendMode()
        currentFilter.inputImage = ciInputImage
        currentFilter.backgroundImage = ciBackgroundInputImage
        guard let blendedImage = currentFilter.outputImage else { return }
        if let cgBlendedImage = context.createCGImage(blendedImage, from: blendedImage.extent) {
            let uiblendedImage = UIImage(cgImage: cgBlendedImage)
            image = Image(uiImage: uiblendedImage)
        }
    }
}
You could use the CIColorMatrix filter to change the alpha of the images without loading the bitmap data again:

let colorMatrixFilter = CIFilter.colorMatrix()
colorMatrixFilter.inputImage = ciInputImage // no need to load that every time
colorMatrixFilter.aVector = CIVector(x: 0, y: 0, z: 0, w: 0.5 + opacity)
let semiTransparentImage = colorMatrixFilter.outputImage
// perform blending
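A fuller sketch of how the pieces could fit together (assuming the CIImages are created once, e.g. in onAppear, and kept in properties rather than rebuilt on every slider change):

import UIKit
import CoreImage
import CoreImage.CIFilterBuiltins

func blend(_ foreground: CIImage, _ background: CIImage,
           opacity: CGFloat, context: CIContext) -> UIImage? {
    // Re-alpha each cached CIImage in the filter graph instead of
    // redrawing the UIImages from scratch on every change.
    let foregroundAlpha = CIFilter.colorMatrix()
    foregroundAlpha.inputImage = foreground
    foregroundAlpha.aVector = CIVector(x: 0, y: 0, z: 0, w: 0.5 + opacity)

    let backgroundAlpha = CIFilter.colorMatrix()
    backgroundAlpha.inputImage = background
    backgroundAlpha.aVector = CIVector(x: 0, y: 0, z: 0, w: 0.5 - opacity)

    // Blend the two semi-transparent images.
    let blendFilter = CIFilter.darkenBlendMode()
    blendFilter.inputImage = foregroundAlpha.outputImage
    blendFilter.backgroundImage = backgroundAlpha.outputImage

    guard let output = blendFilter.outputImage,
          let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}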