I am really confused about how a CGRect is interpreted and drawn. I have the following routine to draw a rectangle:
let rectangle = CGRect(x: 0, y: 0, width: 100, height: 100)
ctx.cgContext.setFillColor(UIColor.red.cgColor)
ctx.cgContext.setStrokeColor(UIColor.green.cgColor)
ctx.cgContext.setLineWidth(1)
ctx.cgContext.addRect(rectangle)
ctx.cgContext.drawPath(using: .fillStroke)
Now I was expecting this to draw a rectangle at the origin of the screen with a width and height of 100 (pixels or points, I'm not sure which).
However, I can barely see the drawn rectangle. Most of it seems to lie outside the screen (on the left-hand side).
Changing this to:
let rectangle = CGRect(x: 50, y: 50, width: 100, height: 100)
Still, around half of the rectangle is outside the screen.
Is the origin of this CGRect at the bottom right? I am not sure how this is being interpreted.
If the origin were at the center of the rectangle, then surely the second call should show the whole rectangle?
[EDIT]
I am running this on an iPhone X running iOS 14.4.
[EDIT]
So, the full drawing code is as follows. This is part of a view; in the end, the rendered image is assigned to the view's image property.
func show(on frame: CGImage) {
    // This is of dimension 480 x 640
    let dstImageSize = CGSize(width: frame.width, height: frame.height)
    let dstImageFormat = UIGraphicsImageRendererFormat()
    dstImageFormat.scale = 1
    let renderer = UIGraphicsImageRenderer(size: dstImageSize,
                                           format: dstImageFormat)
    let dstImage = renderer.image { rendererContext in
        // Draw the current frame as the background for the new image.
        draw(image: frame, in: rendererContext.cgContext)
        // This is where I draw my rectangle
        let rectangle = CGRect(x: 0, y: 0, width: 100, height: 100)
        rendererContext.cgContext.setAlpha(0.3)
        rendererContext.cgContext.setFillColor(UIColor.red.cgColor)
        rendererContext.cgContext.setStrokeColor(UIColor.green.cgColor)
        // rendererContext.cgContext.setLineWidth(1)
        rendererContext.cgContext.addRect(rectangle)
        rendererContext.cgContext.drawPath(using: .fill)
    }
    image = dstImage
}
So, from what I can tell, it should draw the rectangle in the image's context and it should not go out of bounds with the parameters I gave.
I mean, the context is that of a bitmap initialized to 480 x 640. I am not sure why the rectangle appears out of bounds when I view it on the device. Should this bitmap/image not be shown correctly?
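For reference, UIKit and UIGraphicsImageRenderer put the origin at the top-left, with y increasing downward, so a rect at (0, 0) should land in the top-left corner of the bitmap. A minimal sketch to verify this in isolation (the 480 x 640 size just mirrors the dimensions above):
import UIKit

let format = UIGraphicsImageRendererFormat()
format.scale = 1
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 480, height: 640), format: format)
let testImage = renderer.image { ctx in
    // In UIKit coordinates this fills the top-left 100 x 100 corner.
    ctx.cgContext.setFillColor(UIColor.red.cgColor)
    ctx.cgContext.fill(CGRect(x: 0, y: 0, width: 100, height: 100))
}
// If the rectangle still looks shifted on the device, check how the
// UIImageView displays the result (e.g. its contentMode): a 480 x 640
// image shown aspect-filled in a differently sized view gets cropped.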
Related
I currently have a block of code that tries to add a text view on top of an image, with the ultimate goal of saving down the new image with the overlaid text. Here is the code to do that:
class func addText(label: UITextView, imageSize: CGSize, image: UIImage) -> UIImage {
    let scale = UIScreen.main.scale
    UIGraphicsBeginImageContextWithOptions(imageSize, false, scale)
    let currentView = UIView(frame: CGRect(origin: .zero, size: imageSize))
    let currentImage = UIImageView(image: image)
    currentImage.frame = CGRect(origin: .zero, size: imageSize)
    currentView.addSubview(currentImage)
    currentView.addSubview(label)
    currentView.layer.render(in: UIGraphicsGetCurrentContext()!)
    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img!
}
And it is called like below (The image is just a standard 1920x1080 image taken by the phone's camera):
self.imageToEdit.image = UIImage.addText(label: textView, imageSize: UIScreen.main.bounds.size, image: self.imageToEdit.image!)
This works great when I test on an iPhone 6s, but when I test on an iPhone X, it "squeezes" the sides of the image, so faces and other features become skinnier in the image returned by addText.
I have a hunch it is due to the image extending up through the notch of the iPhone X, which causes some type of scaling/aspect fill, but I'm not sure where to begin looking.
Does anyone know how to stop the "squeezing" from happening on the iPhone X? (I am guessing this also happens on the other iPhone models that have a notch.)
Thanks.
Just figured it out!
I needed to include this line:
currentImage.contentMode = .scaleAspectFill
in my addText func.
Because I was returning a new UIImageView, I needed to make sure it had the same content mode as the original view.
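For completeness, here is the relevant part of addText with the fix in place (clipsToBounds is an extra line I am assuming you want, since an aspect-filled image view can otherwise render outside its frame):
let currentImage = UIImageView(image: image)
currentImage.frame = CGRect(origin: .zero, size: imageSize)
// Match the content mode of the on-screen image view so the render
// crops the image to the new aspect ratio instead of stretching it.
currentImage.contentMode = .scaleAspectFill
currentImage.clipsToBounds = true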
I am trying to draw a "zoomed-in" portion of a UIImage and can't find a way to do that...
Say I have an image sized 1000x1000, and I zoom-in to the top left (x:0, y:0 width: 500, height: 500), and I want to draw that portion into another 1000x1000 image.
I am using UIGraphicsImageRenderer, but UIImage's draw method doesn't accept a source rect, only a destination rect (and it draws the entire image).
I was able to achieve what I want by specifying a larger rect in the draw method, but that crashes when the zoom-in level is big (out of memory).
This is what I tried:
let srcImg: UIImage = ... // some UIImage sized 1000x1000
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 1000, height: 1000))
let scaled = renderer.image { ctx in
    srcImg.draw(in: CGRect(origin: .zero, size: CGSize(width: 2000, height: 2000)))
}
Basically I am trying to achieve something like the drawImage API of HTML5 canvas, which gets both src and dst rectangles...
Thanks.
Use the UIImage extension from this answer: https://stackoverflow.com/a/28513086/6257435
If I understand your question correctly -- you want to clip the top-left 500x500 pixels from a 1000x1000 image, and scale that up to 1000x1000 -- here is a simple example using that extension:
if let img1000x1000 = UIImage(named: "img1000x1000") {
    if let topLeft500x500 = img1000x1000.cropped(to: CGRect(x: 0, y: 0, width: 500, height: 500)) {
        if let new1000x1000 = topLeft500x500.filled(to: CGSize(width: 1000, height: 1000)) {
            // do something with the new 1000x1000 image
        }
    }
}
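If you would rather not pull in the extension, a minimal sketch of the same crop-then-scale using CGImage.cropping(to:) directly (zoomedImage is a hypothetical helper name; note that cropping(to:) works in pixel coordinates, so this assumes a 1x source image):
func zoomedImage(from source: UIImage, srcRect: CGRect, dstSize: CGSize) -> UIImage? {
    // cropping(to:) is cheap: it references the same underlying pixel data.
    guard let croppedCG = source.cgImage?.cropping(to: srcRect) else { return nil }
    let cropped = UIImage(cgImage: croppedCG)
    // Draw only the cropped portion scaled to the destination size, so
    // memory is bounded by dstSize rather than by the zoom level.
    let renderer = UIGraphicsImageRenderer(size: dstSize)
    return renderer.image { _ in
        cropped.draw(in: CGRect(origin: .zero, size: dstSize))
    }
}

// For example, the top-left quarter of a 1000x1000 image rendered at 1000x1000:
// let zoomed = zoomedImage(from: srcImg,
//                          srcRect: CGRect(x: 0, y: 0, width: 500, height: 500),
//                          dstSize: CGSize(width: 1000, height: 1000))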
I am trying to draw with something like the .scaleAspectFill content mode.
I found how to do scaleAspectFit using AVFoundation, but I can't find an equivalent for scaleAspectFill.
If I draw a landscape image, I don't know what x value to use:
image.draw(in: CGRect(origin: CGPoint.init(x: ?, y: 0), size: CGSize(width: displayWidth*(image.size.width/image.size.height), height: displayWidth)))
Assuming you have an image called image, and you want to draw it inside a rectangle targetRect so that it fills the rect without being distorted, you can use the following code:
let aspect = image.size.width / image.size.height
let rect: CGRect
if targetRect.size.width / aspect > targetRect.size.height {
    let height = targetRect.size.width / aspect
    rect = CGRect(x: 0, y: (targetRect.size.height - height) / 2,
                  width: targetRect.size.width, height: height)
} else {
    let width = targetRect.size.height * aspect
    rect = CGRect(x: (targetRect.size.width - width) / 2, y: 0,
                  width: width, height: targetRect.size.height)
}
image.draw(in: rect)
Note: this doesn't clip the image, so it will draw outside the edges of the target rect. If you want to clip the image, call context.clip(to: targetRect) before drawing.
Note also that the Core Graphics vertical axis is flipped compared to UIKit, with zero starting at the bottom-left instead of the top-left, so you may need to flip the rect and clipping rect accordingly.
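Wrapped up as a helper with the clipping applied, as a sketch (drawAspectFill is a name I made up; it assumes it is called inside a UIGraphicsImageRenderer closure, since image.draw(in:) targets the current UIKit context):
func drawAspectFill(_ image: UIImage, in targetRect: CGRect, context: CGContext) {
    let aspect = image.size.width / image.size.height
    let rect: CGRect
    if targetRect.size.width / aspect > targetRect.size.height {
        // Filling the width overflows vertically; center the overflow.
        let height = targetRect.size.width / aspect
        rect = CGRect(x: targetRect.minX,
                      y: targetRect.minY + (targetRect.size.height - height) / 2,
                      width: targetRect.size.width, height: height)
    } else {
        // Filling the height overflows horizontally; center the overflow.
        let width = targetRect.size.height * aspect
        rect = CGRect(x: targetRect.minX + (targetRect.size.width - width) / 2,
                      y: targetRect.minY,
                      width: width, height: targetRect.size.height)
    }
    // Clip first so the overflowing dimension is cut off at the target rect.
    context.saveGState()
    context.clip(to: targetRect)
    image.draw(in: rect)
    context.restoreGState()
}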
I'm trying to determine the right scaling factor for my node tree to make it fit exactly in my presentation rectangle, so I'm trying to find the smallest bounding rectangle around all my nodes. Apple's docs say that calculateAccumulatedFrame "Calculates a rectangle in the parent’s coordinate system that contains the content of the node and all of its descendants." That sounds like what I need, but it's not giving me the tight fit that I expect. My complete playground code is:
import SpriteKit
import PlaygroundSupport
let view: SKView = SKView(frame: CGRect(x: 0, y: 0, width: 800, height: 800))
PlaygroundPage.current.liveView = view
let scene: SKScene = SKScene(size: CGSize(width: 1000, height: 800))
scene.scaleMode = SKSceneScaleMode.aspectFill
view.presentScene(scene)
let yellowBox = SKSpriteNode(color: .yellow, size:CGSize(width: 300, height: 300))
yellowBox.position = CGPoint(x: 400, y: 500)
yellowBox.zRotation = CGFloat.pi / 10
scene.addChild(yellowBox)
let greenCircle = SKShapeNode(circleOfRadius: 100)
greenCircle.fillColor = .green
greenCircle.position = CGPoint(x: 300, y: 50)
greenCircle.frame
yellowBox.addChild(greenCircle)
let uberFrame = yellowBox.calculateAccumulatedFrame()
let blueBox = SKShapeNode(rect: uberFrame)
blueBox.strokeColor = .blue
blueBox.lineWidth = 2
scene.addChild(blueBox)
And the results are:
The left and bottom edges of the blue rectangle look good, but why are there gaps between the blue rectangle and the green circle on the top and right?
The notion "frame" does funny things when you add a transform. The bounding box around the box and circle is a rectangle. You have rotated that rectangle. Therefore its corners stick out. The accumulated frame embraces that rotated rectangle, including the sticking-out corners. It does not magically hug the drawn appearance of the nodes (e.g. the circle).
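A quick way to convince yourself: compute the axis-aligned bounding box of the rotated 300 x 300 box from the playground above (a sketch using CGRect.applying(_:), which returns the smallest rect containing the transformed corners):
import CoreGraphics

// A 300 x 300 box centered at the origin, rotated by pi/10 as in the playground.
let box = CGRect(x: -150, y: -150, width: 300, height: 300)
let rotated = box.applying(CGAffineTransform(rotationAngle: .pi / 10))
// Roughly 378 x 378: wider than 300 because the rotated corners stick out,
// which is why the accumulated frame extends past the drawn content.
print(rotated.size)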
I am building a circle crop function in Swift. I pretty much have everything working however when I save the cropped photo it is slightly blurry:
Not sure if it is visible here, but on my iPhone I can see a slight blurring. I'm not zooming in beyond the actual size of the image. I am using a UIScrollView with the maximum zoom factor set to 1.0. The crop code is:
func didTapOk() {
    let scale = scrollView.zoomScale
    let newSize = CGSize(width: image!.size.width * scale, height: image!.size.height * scale)
    let offset = scrollView.contentOffset
    UIGraphicsBeginImageContext(CGSize(width: 240, height: 240))
    let circlePath = UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: 240, height: 240))
    circlePath.addClip()
    var sharpRect = CGRect(x: -offset.x, y: -offset.y, width: newSize.width, height: newSize.height)
    sharpRect = sharpRect.integral
    image?.draw(in: sharpRect)
    if let finalImage = UIGraphicsGetImageFromCurrentImageContext() {
        UIImageWriteToSavedPhotosAlbum(finalImage, nil, nil, nil)
    }
    // Balance the Begin call so the context isn't leaked.
    UIGraphicsEndImageContext()
}
Here I am snapping sharpRect to integral coordinates (CGRectIntegral / .integral) to improve the result, but it doesn't seem to make any difference. Any pointers on what I could do to improve this?
What's happening is that your crop is blurring because it hasn't accounted for the scale of your screen (i.e. whether you're on a Retina display).
Use UIGraphicsBeginImageContextWithOptions(CGSize(width: 240, height: 240), true, 0) to account for the Retina screen; passing 0 as the scale uses the device's main screen scale. Check here for more details on how to use this function.
I suspect that, although the accepted answer above is working for you, it is only working because the screen scale happens to match the image scale.
As you are not rendering your image from the screen itself (you're rendering from a UIImage to another UIImage), the screen scale should be irrelevant.
You should instead be passing in the scale of the image when you create your context, like so:
UIGraphicsBeginImageContextWithOptions(CGSize(width: 240, height: 240), false, image.scale)
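The same idea with the modern API, as a sketch (assumes iOS 10+, and reuses image and sharpRect from the question's code, with image non-optional here):
let format = UIGraphicsImageRendererFormat()
format.scale = image.scale // render at the image's own scale, not the screen's
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 240, height: 240), format: format)
let finalImage = renderer.image { _ in
    // addClip() applies to the renderer's current context inside this closure.
    UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: 240, height: 240)).addClip()
    image.draw(in: sharpRect)
}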
Here's a way in Swift 3 to disable any interpolation.
UIGraphicsBeginImageContextWithOptions(imageSize, false, UIScreen.main.scale)
let context = UIGraphicsGetCurrentContext()
context?.interpolationQuality = .none
Additionally, since this is often for pixel drawing purposes, you may want to add:
// Ensure the image is not drawn upside down
context?.saveGState()
context?.translateBy(x: 0.0, y: maxHeight)
context?.scaleBy(x: 1.0, y: -1.0)
// Make the drawing
context?.draw(yourImage, in: CGRect(x: 0, y: 0, width: maxWidth, height: maxHeight))
// Restore the GState of the context
context?.restoreGState()
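For completeness, a sketch of how the snippet might be closed out (assuming yourImage is a CGImage, which is what CGContext.draw(_:in:) expects):
// Retrieve the rendered image and balance the Begin call above.
let result = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()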