iOS Wrong Image Size

I am getting the wrong image size in MB: the actual image is 7 MB, but my code reports 10 MB. I am using the following code:
let imgData = image.jpegData(compressionQuality: 1)!
let imageSize = imgData.count
print(String(format: "actual size of image in MB: %f", (Double(imageSize) / 1000.0) / 1000.0))
I have tried different approaches but always get the wrong size.
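A likely explanation, with a hedged sketch: jpegData(compressionQuality: 1) re-encodes the decoded bitmap at maximum quality, which usually yields more bytes than the original, already-compressed file. To report the size of the data you actually received, measure that Data directly (the helper name below is illustrative):
import UIKit

// Illustrative helper: measures the bytes you already have. Re-encoding with
// jpegData(compressionQuality: 1) produces a brand-new JPEG at maximum
// quality, which is usually larger than the original, already-compressed file.
func printSizeInMB(of data: Data) {
    let megabytes = Double(data.count) / 1_000_000.0
    print(String(format: "actual size of image in MB: %.2f", megabytes))
}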

Related

height and width of image is changed when converting image to data?

let profileImage = UIImage(named: "profile")!
let imageData = profileImage.pngData()
If the image size is 680 * 400, imageData now gives 2040 * 1800 or something like that. How can I keep the same size?
I tried compression but it didn't work.
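What is likely happening: UIImage.size is measured in points, while the encoded PNG stores pixels (points multiplied by the image's scale, e.g. 3x for an @3x asset). A sketch that forces a 1:1 point-to-pixel rendering before encoding (the helper name is illustrative):
import UIKit

// Re-renders the image at scale 1 so the encoded PNG's pixel dimensions
// match image.size in points (e.g. 680 x 400 stays 680 x 400).
func pngDataAtPointSize(_ image: UIImage) -> Data? {
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1  // 1 point == 1 pixel in the output bitmap
    let renderer = UIGraphicsImageRenderer(size: image.size, format: format)
    return renderer.pngData { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}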

Scale down a very large sized image to very small sized image without distortion and quality loss

I am receiving an image from the backend that is quite large. I have to use it as a profile picture and also show it in the bottom tab bar at a size of 30x30. I have tried to scale the image down in various ways, but nothing works.
I tried Alamofire's methods, which also didn't work (the image appears blurred and distorted):
func resizeImageWithoutDistortion(image: UIImage, size: CGSize) -> UIImage {
    // 1. Scale image to size disregarding aspect ratio
    let scaledImage = image.af_imageScaled(to: size)
    // 2. Scale image to fit within specified size while maintaining aspect ratio
    let aspectScaledToFitImage = image.af_imageAspectScaled(toFit: size)
    // 3. Scale image to fill specified size while maintaining aspect ratio
    let aspectScaledToFillImage = image.af_imageAspectScaled(toFill: size)
    return scaledImage.roundImage()
}
I also tried the following, which didn't work either:
func resizeImage(_ newWidth: CGFloat) -> UIImage {
    let ratio = size.width / size.height
    if ratio > 1 {
        UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newWidth))
        draw(in: CGRect(x: newWidth * ratio / 2 - newWidth, y: 0, width: newWidth * ratio, height: newWidth))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!.roundImage()
    } else {
        UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newWidth))
        draw(in: CGRect(x: 0, y: (newWidth / ratio - newWidth) / 2 * (-1), width: newWidth, height: newWidth / ratio))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!.roundImage()
    }
}
In the screenshot, the image at the bottom is very distorted.
Ishika, your problem is not quality loss. Your problem is that you don't take iOS scaling into consideration.
Points and pixels are not the same thing.
If you have a UIImageView at W: 30, H: 30 (points), then to display an image clearly without affecting quality you need an image with a pixel size of:
30 * UIScreen.main.scale = 60 pixels (if 2x scale)
or
30 * UIScreen.main.scale = 90 pixels (if 3x scale)
This is also the reason you provide iOS with @2x and @3x scaled images.
So if you want to resize your UIImage to a smaller size, you need to take scaling into consideration. Otherwise your images will be scaled up to fill the UIImageView and become blurry, because the UIImageView is bigger than the UIImage.
A good way to see this is to set yourImageView.contentMode = .center; you will notice the UIImage is smaller than the UIImageView itself.
I don't code in Swift, so I can't provide you with direct code (too tired to translate), but if you look at other threads:
scale image to smaller size in swift3
you'll see that your UIGraphicsBeginImageContext call is, for example, missing the scale input:
scale: The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device's main screen.
Edit:
In your case, something like this (using a local constant, since a Swift parameter can't be reassigned):
let scaledWidth = newWidth / UIScreen.main.scale
UIGraphicsBeginImageContextWithOptions(CGSize(width: scaledWidth, height: scaledWidth), true, 0)
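Putting that advice together, a minimal sketch of an aspect-fill square resize that respects screen scale (the extension method and its name are my own illustration, not code from the thread):
import UIKit

extension UIImage {
    // Aspect-fill into a square of `sidePoints` points. Passing 0 as the
    // context's scale uses the device's screen scale, so the result stays
    // sharp in a 30x30-point image view on 2x/3x displays.
    func squareThumbnail(sidePoints: CGFloat) -> UIImage? {
        let target = CGSize(width: sidePoints, height: sidePoints)
        let ratio = size.width / size.height
        // Scale so the shorter side fills the square, then center the overflow.
        let drawSize = ratio > 1
            ? CGSize(width: sidePoints * ratio, height: sidePoints)
            : CGSize(width: sidePoints, height: sidePoints / ratio)
        let origin = CGPoint(x: (target.width - drawSize.width) / 2,
                             y: (target.height - drawSize.height) / 2)
        UIGraphicsBeginImageContextWithOptions(target, false, 0)
        defer { UIGraphicsEndImageContext() }
        draw(in: CGRect(origin: origin, size: drawSize))
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}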

UIImage size is larger than what I set it to

I'm doing some basic line drawing using UIGraphicsBeginImageContextWithOptions:
UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
let path = UIGraphicsGetCurrentContext()
path!.setLineWidth(1.5)
....
x = (az / 360) * Double(size.width)
y = halfheight - Double(alt / 90 * halfheight)
path?.addLine(to: CGPoint(x:x, y:y))
path?.strokePath()
....
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
//DEBUGGING - makes quick look work
let _ = image!.cgImage
return image!
The debugger reports the right size for the image, 1440x960, but when I use Quick Look the image is 2880 × 1920 pixels. Is there an obvious reason for this? Is this something Quick Look is doing, or Preview perhaps?
The debugger is most likely giving you the size in points and Quick Look will be giving you the size in pixels.
The problem is that by passing 0 in the scale parameter of UIGraphicsBeginImageContextWithOptions, you're using the device's screen scale for the scale of the context. Thus if you're on a device with a 2x display, and input a size of 1440 x 960, you'll get a context with 2880 x 1920 pixels.
If you want to work with pixels instead of points, then simply pass 1 into the scale parameter:
UIGraphicsBeginImageContextWithOptions(size, false, 1)
Or use the equivalent UIGraphicsBeginImageContext(_:) call:
UIGraphicsBeginImageContext(size)
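To see the two measurements side by side, a small sketch (assuming a UIImage named image): size reports points, scale the points-to-pixels factor, and the backing cgImage the pixel dimensions:
import UIKit

func logDimensions(of image: UIImage) {
    // size is in points; stored pixels = points * scale
    print("points:", image.size)   // e.g. (1440.0, 960.0)
    print("scale:", image.scale)   // e.g. 2.0 for a context built at 2x
    if let cgImage = image.cgImage {
        print("pixels:", cgImage.width, "x", cgImage.height)  // e.g. 2880 x 1920
    }
}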

Colour of pixel is wrong only for iPhone 6 simulator

I need to read the colour of a pixel at a given point in an image. The code I have works in the simulator for all iPhones (including the iPhone 6 Plus) except the iPhone 6.
I don't know why, but my guess is that the pixel index is not correct, since the colour is detected at the wrong location.
I'd appreciate any help.
This is the code that I have.
UIGraphicsBeginImageContext(upperCaseView.frame.size)
upperCaseView.layer.renderInContext(UIGraphicsGetCurrentContext()!)
snapshotImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(snapshotImage.CGImage))
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let pixelInfo: Int = ((Int(snapshotImage.size.width) * Int(point.y)) + Int(point.x)) * 4
print(data[pixelInfo])
print(data[pixelInfo+1])
print(data[pixelInfo+2])
Thank you so much, Scott Thompson. I used CGImageGetBytesPerRow instead of the image width, and now it works.
The correct code is below:
UIGraphicsBeginImageContext(upperCaseView.frame.size)
upperCaseView.layer.renderInContext(UIGraphicsGetCurrentContext()!)
snapshotImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(snapshotImage.CGImage))
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let bytesPerPixel = (CGImageGetBitsPerPixel(snapshotImage.CGImage) / 8)
let bytesPerRow = CGImageGetBytesPerRow(snapshotImage.CGImage)
let pixelInfo: Int = (bytesPerRow * Int(point.y)) + (Int(point.x) * bytesPerPixel)
print(data[pixelInfo])
print(data[pixelInfo+1])
print(data[pixelInfo+2])
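For reference, the same fix in modern Swift (a sketch; it assumes an RGBA bitmap and a point already expressed in pixel coordinates):
import UIKit

// Reads the RGB bytes of one pixel. Indexing must use bytesPerRow rather
// than width * 4, because rows are often padded for alignment.
func pixelRGB(in image: UIImage, atPixel point: CGPoint) -> (UInt8, UInt8, UInt8)? {
    guard let cgImage = image.cgImage,
          let data = cgImage.dataProvider?.data,
          let bytes = CFDataGetBytePtr(data) else { return nil }
    let bytesPerPixel = cgImage.bitsPerPixel / 8
    let offset = cgImage.bytesPerRow * Int(point.y) + Int(point.x) * bytesPerPixel
    return (bytes[offset], bytes[offset + 1], bytes[offset + 2])
}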

Which is the right way to calculate number of points of an image?

view.bounds gave me (0.0, 0.0, 375.0, 667.0) for the view dimensions, and the CGImage bitmap gave me a pixel width of 490 and a pixel height of 751. I don't understand why the UIView bounds are smaller than the CGImage pixel width and height when the scale factor is 2. How can I calculate the number of points from these dimensions? How do I work with the 375.0 and 667.0 values? The code below is how I got the view dimensions and scale factor.
let ourImage: UIImage? = imageView.image
let viewBounds = view.bounds
print("\(viewBounds)")
var scale: CGFloat = UIScreen.mainScreen().scale
print("\(scale)")
And this is the code I used to get the pixel width and height of 490 * 751.
public init?(image: UIImage) {
    guard let cgImage = image.CGImage else { return nil }
    // Redraw image for correct pixel format
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var bitmapInfo: UInt32 = CGBitmapInfo.ByteOrder32Big.rawValue
    bitmapInfo |= CGImageAlphaInfo.PremultipliedLast.rawValue & CGBitmapInfo.AlphaInfoMask.rawValue
    width = Int(image.size.width)
    height = Int(image.size.height)
    ....
}
Or can I use (pixelWidth * 2 + pixelHeight * 2) to calculate the number of points? I need to calculate or fix a number of points (n) to substitute into a further equation for image segmentation using the active contour method.
An image view does not resize to match the image property you set on it, nor does the image you set resize to match the image view's frame.
Instead, the image view presents a representation of that image scaled to whatever size matches the image view (based on the content mode you pick: aspect fit, aspect fill, scale to fill, etc.).
So the actual size of the image is exactly whatever is returned by the image's size property.
To get the actual image's actual width, use:
image.size.width
Likewise, to get the actual image's actual height, use:
image.size.height
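As a sketch of the point/pixel arithmetic involved (the asset name is hypothetical; the 490 x 751 pixels and 2x scale come from the question):
import UIKit

let image = UIImage(named: "profile")!   // hypothetical asset
let screenScale = UIScreen.main.scale    // 2.0 in the question

// image.size is in points; multiply by image.scale for the stored pixels.
let pixelWidth = image.size.width * image.scale
let pixelHeight = image.size.height * image.scale

// Going the other way: pixels / scale = points.
// A 490 x 751-pixel image at scale 2 measures 245 x 375.5 points.
print("points:", image.size, "pixels:", pixelWidth, "x", pixelHeight)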
