I am trying to draw a "zoomed-in" portion of a UIImage and can't find a way to do that...
Say I have an image sized 1000x1000, and I zoom in to the top-left quarter (x: 0, y: 0, width: 500, height: 500), and I want to draw that portion into another 1000x1000 image.
I am using UIGraphicsImageRenderer, but UIImage's draw method doesn't accept a source rect, only a destination rect (and it draws the entire image).
I was able to achieve what I want by specifying a larger destination rect in the draw method, but that crashes with an out-of-memory error when the zoom level is high.
This is what I tried:
let srcImg: UIImage = {some UIImage sized 1000x1000}
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 1000, height: 1000))
let scaled = renderer.image { ctx in
    srcImg.draw(in: CGRect(origin: .zero, size: CGSize(width: 2000, height: 2000)))
}
Basically I am trying to achieve something like the drawImage API of the HTML5 canvas, which takes both source and destination rectangles...
Thanks.
Use the UIImage extension from this answer: https://stackoverflow.com/a/28513086/6257435
If I understand your question correctly, you want to crop the top-left 500x500 pixels from a 1000x1000 image and scale that up to 1000x1000. Here is a simple example using that extension:
if let img1000x1000 = UIImage(named: "img1000x1000") {
    if let topLeft500x500 = img1000x1000.cropped(to: CGRect(x: 0, y: 0, width: 500, height: 500)) {
        if let new1000x1000 = topLeft500x500.filled(to: CGSize(width: 1000, height: 1000)) {
            // do something with the new 1000x1000 image
        }
    }
}
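If you'd rather not depend on that extension, the same crop-then-scale idea can be sketched directly with `CGImage.cropping(to:)` and `UIGraphicsImageRenderer`. This is a sketch only: it ignores the source image's scale and orientation for brevity, and the function name is my own.

```swift
import UIKit

/// Draws `srcRect` (in pixel coordinates) of `source` into a new image
/// of `targetSize`. Sketch only: assumes scale 1 and .up orientation.
func zoomed(_ source: UIImage, from srcRect: CGRect, to targetSize: CGSize) -> UIImage? {
    guard let croppedCG = source.cgImage?.cropping(to: srcRect) else { return nil }
    let cropped = UIImage(cgImage: croppedCG)
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        // Drawing the cropped piece into the full destination rect scales it up,
        // without ever allocating an oversized (e.g. 2000x2000) bitmap.
        cropped.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}

// e.g. top-left quarter of a 1000x1000 image, scaled back up to 1000x1000:
// let scaled = zoomed(srcImg, from: CGRect(x: 0, y: 0, width: 500, height: 500),
//                     to: CGSize(width: 1000, height: 1000))
```

Because the crop happens before drawing, only the destination-sized bitmap is ever allocated, which avoids the out-of-memory crash at high zoom levels.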
I am really confused about how a CGRect is interpreted and drawn. I have the following routine to draw a rectangle:
let rectangle = CGRect(x: 0, y: 0, width: 100, height: 100)
ctx.cgContext.setFillColor(UIColor.red.cgColor)
ctx.cgContext.setStrokeColor(UIColor.green.cgColor)
ctx.cgContext.setLineWidth(1)
ctx.cgContext.addRect(rectangle)
ctx.cgContext.drawPath(using: .fillStroke)
Now I was expecting this to draw a rectangle at the screen's origin with a width and height of 100 (pixels or points, I'm not sure which).
However, I can barely see the drawn rectangle. Most of it seems to lie outside the screen (on the left-hand side).
Changing this to:
let rectangle = CGRect(x: 50, y: 50, width: 100, height: 100)
Still, around half the rectangle is outside the screen.
Is the origin of this CGRect at the bottom right? I am not sure how it is all being interpreted.
I thought that if the origin were at the center of the rectangle, then surely the second call would show the whole rectangle?
[EDIT]
I am running this on an iPhone X running iOS 14.4.
[EDIT]
So, the full drawing code is as follows. This is part of a view; at the end, the view's image is assigned to the image we draw on.
func show(on frame: CGImage) {
    // This is of dimension 480 x 640
    let dstImageSize = CGSize(width: frame.width, height: frame.height)
    let dstImageFormat = UIGraphicsImageRendererFormat()
    dstImageFormat.scale = 1
    let renderer = UIGraphicsImageRenderer(size: dstImageSize, format: dstImageFormat)
    let dstImage = renderer.image { rendererContext in
        // Draw the current frame as the background for the new image.
        draw(image: frame, in: rendererContext.cgContext)
        // This is where I draw my rectangle
        let rectangle = CGRect(x: 0, y: 0, width: 100, height: 100)
        rendererContext.cgContext.setAlpha(0.3)
        rendererContext.cgContext.setFillColor(UIColor.red.cgColor)
        rendererContext.cgContext.setStrokeColor(UIColor.green.cgColor)
        //rendererContext.cgContext.setLineWidth(1)
        rendererContext.cgContext.addRect(rectangle)
        rendererContext.cgContext.drawPath(using: .fill)
    }
    image = dstImage
}
So, from what I can tell, it should draw on the image's context and should not go out of bounds with the parameters I gave.
I mean, the context is that of a bitmap initialized to 480 x 640. I am not sure why this is shown out of bounds when I view it on the device. Shouldn't this bitmap/image be shown correctly?
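For what it's worth, here is a minimal, self-contained sanity check (hypothetical, but matching the question's 480x640 size and scale 1). In UIKit a CGRect's origin is the top-left, so the red square should land in the top-left corner of the rendered image. If this renders correctly but the full routine doesn't, the offset likely comes from the custom `draw(image:in:)` helper (e.g. a CTM flip or scale left on the context), or from how the final image is scaled into its view.

```swift
import UIKit

let format = UIGraphicsImageRendererFormat()
format.scale = 1 // 1 point == 1 pixel, as in the question

let renderer = UIGraphicsImageRenderer(size: CGSize(width: 480, height: 640), format: format)
let testImage = renderer.image { ctx in
    // White background so the rectangle's position is obvious.
    UIColor.white.setFill()
    ctx.fill(CGRect(x: 0, y: 0, width: 480, height: 640))
    // Same rectangle as in the question; expected at the top-left corner.
    ctx.cgContext.setFillColor(UIColor.red.cgColor)
    ctx.cgContext.addRect(CGRect(x: 0, y: 0, width: 100, height: 100))
    ctx.cgContext.drawPath(using: .fill)
}
```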
I currently have a block of code that adds a text view on top of an image, with the ultimate goal of saving the new image with the overlaid text. Here is the code:
class func addText(label: UITextView, imageSize: CGSize, image: UIImage) -> UIImage {
    let scale = UIScreen.main.scale
    UIGraphicsBeginImageContextWithOptions(CGSize(width: imageSize.width, height: imageSize.height), false, scale)
    let currentView = UIView(frame: CGRect(x: 0, y: 0, width: imageSize.width, height: imageSize.height))
    let currentImage = UIImageView(image: image)
    currentImage.frame = CGRect(x: 0, y: 0, width: imageSize.width, height: imageSize.height)
    currentView.addSubview(currentImage)
    currentView.addSubview(label)
    currentView.layer.render(in: UIGraphicsGetCurrentContext()!)
    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img!
}
And it is called like below (The image is just a standard 1920x1080 image taken by the phone's camera):
self.imageToEdit.image = UIImage.addText(label: textView, imageSize: UIScreen.main.bounds.size, image: self.imageToEdit.image!)
This works great when I test on an iPhone 6s, but when I test on an iPhone X, it "squeezes" the sides of the image, so faces and other features become skinnier in the image returned by addText.
I have a hunch it is due to the image extending up through the notch of the iPhone X, which causes some kind of scaling/aspect fill, but I'm not sure where to begin looking.
Does anyone know how to stop the "squeezing" from happening on the iPhone X (I am guessing this also happens on all the other iPhone models that have a notch)?
Thanks.
Just figured it out!
I needed to include this line:
currentImage.contentMode = .scaleAspectFill
in my addText func.
Because I was creating a new UIImageView, I needed to make sure it had the same content mode as the original view.
I created an empty gray UIImage using the code below:
let size = CGSize(width: 212, height: 332)
UIGraphicsBeginImageContextWithOptions(size, true, 0)
UIColor.gray.setFill()
UIRectFill(CGRect(x: 0, y: 0, width: size.width, height: size.height))
let backgroundImage2: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
It produces the output shown in the screenshot.
Now I need to draw another UIImage into a specific area of this UIImage, as shown in the image below. The top, left, and right insets should be 30 pixels, and the bottom inset larger, say 200 pixels, while maintaining the inner image's aspect ratio.
Use two image views (either UIImageView or GLKView), making the "image" a subview of the "gray background" view. After positioning the "image" correctly, merge the two images into one.
Here's an extension to UIView that I use:
extension UIView {
    public func createImage() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(
            CGSize(width: self.frame.width, height: self.frame.height), true, 1)
        self.layer.render(in: UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image!
    }
}
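If you'd rather skip the intermediate views entirely, the composite can also be sketched by drawing both images into one renderer. The inset values (30/30/30/200) are the ones from the question, the function name is my own, and the aspect-fit math is spelled out inline:

```swift
import UIKit

/// Draws `inner` aspect-fit inside the inset region of a gray canvas.
/// Sketch only: insets and canvas size are taken from the question.
func composited(inner: UIImage, canvasSize: CGSize = CGSize(width: 212, height: 332)) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: canvasSize)
    return renderer.image { ctx in
        UIColor.gray.setFill()
        ctx.fill(CGRect(origin: .zero, size: canvasSize))
        // Region left for the inner image after applying the insets.
        let area = CGRect(origin: .zero, size: canvasSize)
            .inset(by: UIEdgeInsets(top: 30, left: 30, bottom: 200, right: 30))
        // Aspect fit: scale by the limiting dimension, then center in the area.
        let scale = min(area.width / inner.size.width, area.height / inner.size.height)
        let fitted = CGSize(width: inner.size.width * scale, height: inner.size.height * scale)
        let origin = CGPoint(x: area.midX - fitted.width / 2, y: area.midY - fitted.height / 2)
        inner.draw(in: CGRect(origin: origin, size: fitted))
    }
}
```

Note that `CGRect.inset(by:)` with `UIEdgeInsets` requires iOS 11; on earlier versions, compute the inset rect manually.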
Why does Xcode show an image bigger than as-is?
http://users.telenet.be/thomazz/ScreenShot4.png
http://users.telenet.be/thomazz/ScreenShot3.png
Scenario:
I got an image.
I resize this UIImage.
I export the resized UIImage.
I comment out my resize code.
I import the resized image in Xcode.
Problem 1: Xcode shows the image twice as big as normal.
Problem 2: when I run my app with the exported, resized image, it is twice as big.
See the screenshots above.
This depends entirely on the frame of your UIImageView, not on the image's dimensions.
So if you have a 1024x1024 image and place it in a 10x10 frame, it will render at 10x10 size, and vice versa.
If you want it bigger, make your UIImageView bigger.
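One more thing worth checking in this scenario: `UIImagePNGRepresentation` stores only pixels, not the `UIImage`'s scale factor. An image rendered at scale 2 (e.g. via `UIGraphicsBeginImageContextWithOptions(..., 0)` on a Retina device) therefore reloads at scale 1 with double the point size, which matches the "twice as big" symptom. A sketch of reapplying the scale on load (the function name is my own):

```swift
import UIKit

/// Round-trips an image through PNG data while preserving its scale.
/// Without the `scale:` argument the reloaded image defaults to scale 1,
/// so a @2x-rendered image would report twice the point size.
func reloadedPreservingScale(_ image: UIImage) -> UIImage? {
    guard let data = UIImagePNGRepresentation(image) else { return nil }
    return UIImage(data: data, scale: image.scale)
}
```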
Edit: so it is a Google Maps icon.
Set the resized image as the marker icon, i.e.:
marker.icon = self.imageWithImage(image: UIImage(named: "imageName")!, scaledToSize: CGSize(width: 3.0, height: 3.0))
Add this function
func imageWithImage(image: UIImage, scaledToSize newSize: CGSize) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
    image.draw(in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
Dear Mohammad Bashir Sidani, I have this code.
And this code works, but it creates a new UIImage.
Then I use UIImagePNGRepresentation(resizedImage) to export the image.
I disable the code below to use the programmatically resized image.
This new resized image is blown up by Xcode... :(
extension UIImage {
    func resizeImage(_ dimension: CGFloat, opaque: Bool, contentMode: UIViewContentMode = .scaleAspectFit) -> UIImage {
        var width: CGFloat
        var height: CGFloat
        var newImage: UIImage

        let size = self.size
        let aspectRatio = size.width / size.height

        switch contentMode {
        case .scaleAspectFit:
            if aspectRatio > 1 { // Landscape image
                width = dimension
                height = dimension / aspectRatio
            } else { // Portrait image
                height = dimension
                width = dimension * aspectRatio
            }
        default:
            fatalError("UIImage.resizeImage(): FATAL: Unimplemented ContentMode")
        }

        if #available(iOS 10.0, *) {
            let renderFormat = UIGraphicsImageRendererFormat.default()
            renderFormat.opaque = opaque
            let renderer = UIGraphicsImageRenderer(size: CGSize(width: width, height: height), format: renderFormat)
            newImage = renderer.image { (context) in
                self.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
            }
        } else {
            UIGraphicsBeginImageContextWithOptions(CGSize(width: width, height: height), opaque, 0)
            self.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
            newImage = UIGraphicsGetImageFromCurrentImageContext()!
            UIGraphicsEndImageContext()
        }
        return newImage
    }
}
I am building a circle crop function in Swift. I pretty much have everything working; however, when I save the cropped photo it is slightly blurry:
Not sure if the difference is visible here, but on my iPhone I can see slight blurring. I'm not zooming in beyond the actual size of the image; I am using a UIScrollView with the maximum zoom scale set to 1.0. The crop code is:
func didTapOk() {
    let scale = scrollView.zoomScale
    let newSize = CGSize(width: image!.size.width * scale, height: image!.size.height * scale)
    let offset = scrollView.contentOffset

    UIGraphicsBeginImageContext(CGSize(width: 240, height: 240))
    let circlePath = UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: 240, height: 240))
    circlePath.addClip()

    var sharpRect = CGRect(x: -offset.x, y: -offset.y, width: newSize.width, height: newSize.height)
    sharpRect = sharpRect.integral

    image?.draw(in: sharpRect)
    let finalImage = UIGraphicsGetImageFromCurrentImageContext()
    UIImageWriteToSavedPhotosAlbum(finalImage!, nil, nil, nil)
}
Here I am snapping the rect to integral coordinates, hoping to improve the result, but it doesn't seem to make any difference. Any pointers on what I could do to improve this?
What's happening is that your crop is blurry because it hasn't accounted for the scale of your screen (i.e. whether you're on a Retina display).
Use UIGraphicsBeginImageContextWithOptions(CGSize(width: 240, height: 240), true, 0) to account for the Retina screen. Check here for more details on how to use this function.
I suspect that, although the accepted answer above is working for you, it is only working because the screen scale happens to match the image scale.
As you are not rendering your image from the screen itself (you're rendering from a UIImage to another UIImage), the screen scale should be irrelevant.
You should instead be passing in the scale of the image when you create your context, like so:
UIGraphicsBeginImageContextWithOptions(CGSize(width: 240, height: 240), false, image.scale)
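On iOS 10+ the same idea reads a little cleaner with `UIGraphicsImageRenderer`, where the format's `scale` can be set to the image's scale up front. A sketch, with the question's `image`, zoom scale, and content offset passed in as parameters (the function name is my own):

```swift
import UIKit

func circleCropped(_ image: UIImage, zoomScale: CGFloat, offset: CGPoint) -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    format.scale = image.scale // render at the source image's pixel density
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: 240, height: 240), format: format)
    return renderer.image { _ in
        // Clip to the circle, then draw the zoomed/offset image through it.
        UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: 240, height: 240)).addClip()
        let newSize = CGSize(width: image.size.width * zoomScale,
                             height: image.size.height * zoomScale)
        image.draw(in: CGRect(x: -offset.x, y: -offset.y,
                              width: newSize.width, height: newSize.height))
    }
}
```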
Here's a way in Swift 3 to disable any interpolation:
UIGraphicsBeginImageContextWithOptions(imageSize, false, UIScreen.main.scale)
let context = UIGraphicsGetCurrentContext()
context?.interpolationQuality = CGInterpolationQuality.none
Additionally, since this is often for pixel drawing purposes, you may want to add:
// Ensure the image is not drawn upside down
context?.saveGState()
context?.translateBy(x: 0.0, y: maxHeight)
context?.scaleBy(x: 1.0, y: -1.0)
// Make the drawing
context?.draw(yourImage, in: CGRect(x: 0, y: 0, width: maxWidth, height: maxHeight))
// Restore the GState of the context
context?.restoreGState()