I'm doing some basic line drawing using UIGraphicsBeginImageContextWithOptions:
UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
let context = UIGraphicsGetCurrentContext()
context?.setLineWidth(1.5)
....
x = (az / 360) * Double(size.width)
y = halfheight - Double(alt / 90 * halfheight)
context?.addLine(to: CGPoint(x: x, y: y))
context?.strokePath()
....
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
//DEBUGGING - makes quick look work
let _ = image!.cgImage
return image!
The debugger reports the right size for the image, 1440x960. But when I use Quick Look, the image is 2880 × 1920 pixels. Is there an obvious reason for this? Is it something Quick Look is doing, or Preview perhaps?
The debugger is most likely giving you the size in points and Quick Look will be giving you the size in pixels.
The problem is that by passing 0 in the scale parameter of UIGraphicsBeginImageContextWithOptions, you're using the device's screen scale for the scale of the context. Thus if you're on a device with a 2x display, and input a size of 1440 x 960, you'll get a context with 2880 x 1920 pixels.
If you want to work with pixels instead of points, then simply pass 1 into the scale parameter:
UIGraphicsBeginImageContextWithOptions(size, false, 1)
Or use UIGraphicsBeginImageContext(_:), which always uses a scale of 1:
UIGraphicsBeginImageContext(size)
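To see the point/pixel relationship directly, you can compare the point size reported by the resulting UIImage with the pixel size of its backing CGImage (a sketch, assuming the same `size` and scale-0 context as in the question):

```swift
UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
let image = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()

print(image.size)            // point size, e.g. (1440.0, 960.0)
print(image.scale)           // 2.0 on a 2x device
print(image.cgImage!.width)  // pixel width, e.g. 2880 (what Quick Look reports)
```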
Related
I'm trying to use UIGraphicsImageRenderer to fix the orientation of images before uploading to my web server, and the following code works but the images it produces have a higher PPI than I want:
let imageRendererFormat = UIGraphicsImageRendererFormat()
let imageRenderer = UIGraphicsImageRenderer(size: image.size, format: imageRendererFormat)
let pngData = imageRenderer.pngData(actions: { context in
let rect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
image.draw(in: rect)
})
The images that are produced have the correct resolution, but the PPI is 216 instead of the expected 72. I observed that the original image's scale is 1.0, but the scale of context.currentImage is 3.0. I'm not sure where this 3.0 number is coming from. This probably explains the 216 PPI, though, since 3 x 72 = 216.
How can I fix the PPI of the images being rendered? Should I apply a scale factor to the context or something?
I figured it out on my own: UIGraphicsImageRendererFormat has a `scale` property, which defaults to the scale of the main screen.
I had to explicitly set the scale to 1.0 to get an image with the correct PPI.
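A minimal sketch of that fix, applied to the renderer code from the question:

```swift
let imageRendererFormat = UIGraphicsImageRendererFormat()
imageRendererFormat.scale = 1.0  // 1 pixel per point, so the PNG comes out at 72 PPI
let imageRenderer = UIGraphicsImageRenderer(size: image.size, format: imageRendererFormat)
let pngData = imageRenderer.pngData { context in
    image.draw(in: CGRect(origin: .zero, size: image.size))
}
```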
I am receiving a large image from the backend that I have to use both as a profile picture and as a 30x30 icon in the bottom tab bar. I tried to scale the image down in various ways, but nothing is working.
I tried Alamofire's methods, which also didn't work (the image appears blurred and distorted):
func resizeImageWithoutDistortion(image: UIImage, size: CGSize) -> UIImage {
    // 1. Scale image to size, disregarding aspect ratio
    let scaledImage = image.af_imageScaled(to: size)
    // 2. Scale image to fit within specified size while maintaining aspect ratio
    let aspectScaledToFitImage = image.af_imageAspectScaled(toFit: size)
    // 3. Scale image to fill specified size while maintaining aspect ratio
    let aspectScaledToFillImage = image.af_imageAspectScaled(toFill: size)
    return scaledImage.roundImage()
}
I also tried the following, which didn't work either:
func resizeImage(_ newWidth: CGFloat) -> UIImage {
    let ratio = size.width / size.height
    if ratio > 1 {
        UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newWidth))
        draw(in: CGRect(x: newWidth * ratio / 2 - newWidth, y: 0, width: newWidth * ratio, height: newWidth))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!.roundImage()
    } else {
        UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newWidth))
        draw(in: CGRect(x: 0, y: (newWidth / ratio - newWidth) / 2 * (-1), width: newWidth, height: newWidth / ratio))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!.roundImage()
    }
}
In the screenshot, the image at the bottom is very distorted.
Ishika, your problem is not quality loss. Your problem is that you don't take iOS scaling into consideration.
Points and pixels are not the same thing.
If you have a UIImageView of W: 30, H: 30 (points), then to display your image clearly without losing quality, you need an image with a pixel size of:
30 * UIScreen.main.scale = 60 pixels (on a 2x screen)
or
30 * UIScreen.main.scale = 90 pixels (on a 3x screen)
This is also the same reason why you need to provide iOS with #2x and #3x scaled images.
So if you want to resize your UIImage to a smaller size, you need to take scaling into consideration. Otherwise your images will be scaled up to fill your UIImageView and will become blurry, because the UIImageView is bigger than the UIImage.
A good way to see this is to set yourImageView.contentMode = .center; you will notice the UIImage is smaller than the UIImageView itself.
I don't code in Swift, so I can't provide you with direct code (too tired to translate), but if you look at other threads:
scale image to smaller size in swift3
you'll see that your UIGraphicsBeginImageContext call, for example, is missing the scale input:
scale
The scale factor to apply to the bitmap. If you specify a value
of 0.0, the scale factor is set to the scale factor of the device’s
main screen.
Edit:
In your case, something like this:
newWidth = newWidth / UIScreen.main.scale
UIGraphicsBeginImageContextWithOptions(CGSize(width: newWidth, height: newWidth), true, 0)
I have an SKSpriteNode:
private var btnSound = SKSpriteNode(imageNamed: "btnSound")
Now, I made this image in Adobe Illustrator at a size of 2048x2048 pixels (overkill, really), so it has good resolution. My problem is that when I set the size of the image, the lines in it become serrated or jagged, not smooth.
This is how I size it:
btnSound.position = CGPoint(x: self.frame.width * 1 / 5 , y: self.frame.height * (5.2 / 8))
btnSound.size.width = self.frame.width * 1 / 7
btnSound.size.height = btnSound.size.width
btnSound.zPosition = 1
self.addChild(btnSound)
This is the image in Illustrator (screenshot) and this is the image in the app (screenshot).
Things I have tried:
Making the image PDF
Making the image PNG
Making the PNG 72 DPI, making it 300 DPI
Run on simulator / device (iPhone7)
btnSound.setScale(preDetermineScale)
Using the following function, though I am not familiar with the UIGraphicsBeginImageContext method; the image just comes out blurry with this. Here's the code and the resulting image:
func resizeImage(image: UIImage, newWidth: CGFloat) -> UIImage? {
    let scale = newWidth / image.size.width
    let newHeight = image.size.height * scale
    UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
    image.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
func setup() {
    let btnSoundImg = UIImage(named: "btnSound")
    let resizeBtnSoundImage = resizeImage(image: btnSoundImg!, newWidth: self.frame.width * 1 / 7)
    let btnSoundTexture = SKTexture(image: resizeBtnSoundImage!)
    btnSound.texture = btnSoundTexture
    btnSound.position = CGPoint(x: self.frame.width * 1 / 5, y: self.frame.height * (5.2 / 8))
    btnSound.size.width = self.frame.width * 1 / 7
    btnSound.size.height = btnSound.size.width
    btnSound.zPosition = 1
    self.addChild(btnSound)
}
I am self taught and haven't done a whole lot of programming so I'd love to learn how to do this correctly as I'm only finding solutions for resizing UIImageViews.
Another thought I had was that maybe it shouldn't be a sprite node, as it's just used for a button?
First up, there's some primitive rules to follow, to get the best results.
Only scale by factors of 2, i.e. 50%, 25%, 12.5%, 6.25%, etc.
This way, any four pixels in your original image become 1 pixel in your scaled image, for each step down in scale size.
Make your original image a square whose sides are a power of two: 128x128, 256x256, 512x512, etc. You've covered this already with your 2048x2048 sizing.
Turn on mipmapping. This is off, by default, in SpriteKit, so you have to switch it on: https://developer.apple.com/reference/spritekit/sktexture/1519960-usesmipmaps
Play with the different filtering modes to get the best reduction of noise and banding in your image: https://developer.apple.com/reference/spritekit/sktexture/1519659-filteringmode (hint: linear will probably be better).
As has always been the case, judicious use of Photoshop for manual scaling will give you the best results, but the least flexibility.
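A minimal sketch of the mipmapping and filtering setup described above (texture name taken from the question):

```swift
let texture = SKTexture(imageNamed: "btnSound")
texture.usesMipmaps = true        // off by default in SpriteKit
texture.filteringMode = .linear   // compare with .nearest; linear usually smooths scaled-down edges
btnSound.texture = texture
```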
view.bounds gave me values of (0.0, 0.0, 375.0, 667.0) to help calculate the view dimensions, and CGImage gave me a bitmap size of 490 pixels wide by 751 pixels high. I don't understand why the UIView bounds size is smaller than the CGImage pixel width and height when the scale factor is 2. How can I calculate the number of points from these dimensions? How do I relate 375.0 and 667.0 to the pixel counts? The code below is what I used to get the view dimensions and scale factor.
let ourImage: UIImage? = imageView.image
let viewBounds = view.bounds
print("\(viewBounds)")
var scale: CGFloat = UIScreen.mainScreen().scale
print("\(scale)")
And this is the code I used to get the pixel width and height of 490 × 751.
public init?(image: UIImage) {
    guard let cgImage = image.CGImage else { return nil }
    // Redraw image for correct pixel format
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var bitmapInfo: UInt32 = CGBitmapInfo.ByteOrder32Big.rawValue
    bitmapInfo |= CGImageAlphaInfo.PremultipliedLast.rawValue & CGBitmapInfo.AlphaInfoMask.rawValue
    width = Int(image.size.width)
    height = Int(image.size.height)
    ....
}
Or can I use (pixelWidth * 2 + pixelHeight * 2) to calculate the number of points? I need to calculate or fix the number of points (n) to substitute into a further equation for image segmentation using the active contour method.
An image view does not resize to match the image property you set to it. Nor does the image you set resize to match the image view's frame.
Instead, the image view presents a representation of that image scaled to whatever size matches the image view (based on the rules you pick, aspect fit, aspect fill, scale to fill, others, etc).
So the actual size of the image is exactly whatever is returned from the image's size property.
To get the actual image's actual width, use:
image.size.width
Likewise, to get the actual image's actual height, use:
image.size.height
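The relationship between the numbers above can be sketched like this (assuming `image` is the UIImage in question; pixels = points × scale):

```swift
let pointWidth = image.size.width                 // width in points
let pixelWidth = image.size.width * image.scale   // width in pixels
let bitmapWidth = image.cgImage.map { $0.width }  // pixel width read straight from the bitmap
```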
I've been trying to shrink down an image to a smaller size for a while and cannot figure out why it loses quality even though I've come across tutorials saying it should not. First, I crop my image into a square and then use this code:
let newRect = CGRectIntegral(CGRectMake(0,0, newSize.width, newSize.height))
let imageRef = image.CGImage
UIGraphicsBeginImageContextWithOptions(newSize, false, 0)
let context = UIGraphicsGetCurrentContext()
// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(context, CGInterpolationQuality.High)
let flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height)
CGContextConcatCTM(context, flipVertical)
// Draw into the context; this scales the image
CGContextDrawImage(context, newRect, imageRef)
let newImageRef = CGBitmapContextCreateImage(context)! as CGImage
// Get the resized image from the context as a UIImage
let newImage = UIImage(CGImage: newImageRef)
UIGraphicsEndImageContext()
I've also tried this code with the same results:
let newSize: CGSize = CGSize(width: 30, height: 30)
let rect = CGRectMake(0, 0, newSize.width, newSize.height)
UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
UIBezierPath(
roundedRect: rect,
cornerRadius: 2
).addClip()
image.drawInRect(rect)
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
let imageData = UIImageJPEGRepresentation(newImage, 1.0)
sharedInstance.my.append(UIImage(data: imageData!)!)
I still get a blurry image after resizing. I compare it to an image view set to aspect fill/fit, where the image is much clearer and still smaller. That is the quality I'm trying to get, and I can't figure out what I'm missing. I put two pictures here: the first is the clearer image using an image view, and the second is a picture resized with my code. How can I manipulate an image to look as clear as it does in the image view?
You should use
let newSize:CGSize = CGSize(width: 30 * UIScreen.mainScreen().scale, height: 30 * UIScreen.mainScreen().scale)
This is because different iPhones have different screen scales.
select the image view, click "Size" inspector and change the "X",
"Y", "Width" and "Height" attributes.
X = 14
Y = 10
Width = 60
Height = 60
For the round radius you can implement this code:
cell.ImageView.layer.cornerRadius = 30.0
cell.ImageView.clipsToBounds = true
or
go to the Identity inspector, click the Add button (+) in the lower left of
the user defined runtime attributes editor.
Double click on the Key Path field of the new attribute to edit the key path for the attribute to layer.cornerRadius
Set the type to Number and
the value to 30. To make a circular image from a square image, the
radius is set to half the width of the image view.
Duncan gave you a good explanation: 30 by 30 is too small, which is why image quality is lost. I recommend you use 60 by 60.
In the sample images you show, you're drawing the "after" image larger than the starting image. If you reduce an image from some larger size to 30 pixels by 30 pixels, you are throwing away a lot of information. If you then draw the 30x30 image at a larger size, it's going to look bad. 30 by 30 is a tiny image, without much detail.
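Putting the answers together, one way to downscale while keeping the screen scale is to pass the device scale into the context (a sketch, not code from the question; the function name is made up):

```swift
func downscaledImage(_ image: UIImage, to targetSize: CGSize) -> UIImage? {
    // Scale 0 means "use the device's screen scale", so a 30x30 point
    // target becomes 60x60 or 90x90 pixels and stays sharp on screen.
    UIGraphicsBeginImageContextWithOptions(targetSize, false, 0)
    defer { UIGraphicsEndImageContext() }
    image.draw(in: CGRect(origin: .zero, size: targetSize))
    return UIGraphicsGetImageFromCurrentImageContext()
}
```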