I have a crash which only occurs on the iPhone 4S (not on the 3GS). I suspect it's because of @2x images. Basically I get the raw bytes of an image and manipulate them. Here's my question.
I load an image as shown in the sample code below. At the end, uiWidth should be 2000 and cgWidth should be 2000. Correct? (Would it still be true if the image is loaded from the camera roll? Or does autoscaling kick in, so that uiWidth will be 4000?)
// test.jpg is 2000 x 1500 pixels.
NSString *fileName = [[NSBundle mainBundle] pathForResource:@"test" ofType:@"jpg"];
UIImage *image = [UIImage imageWithContentsOfFile:fileName];
int uiWidth = image.size.width;
CGImageRef cgimg = image.CGImage;
int cgWidth = CGImageGetWidth(cgimg);
Thank you for your help.
The size reported by UIImage is in points, not pixels. You need to take into account the scale property of UIImage.
In other words, if test.jpg is 1000x1000 then UIImage.size will report 1000x1000. If test@2x.png is 2000x2000 then UIImage.size will also report 1000x1000. But in the 2nd case, UIImage.scale will report 2.
CGImageGetWidth reports its width in pixels, not points.
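For example (a small sketch reusing the fileName from the question's code), you can see both numbers side by side:
UIImage *image = [UIImage imageWithContentsOfFile:fileName];
// Point size as reported by UIImage (e.g. 1000 for a 2000-pixel-wide @2x image).
CGFloat pointWidth = image.size.width;
// Pixel size: multiply the point size by the scale, or ask the backing CGImage directly.
CGFloat pixelWidth = image.size.width * image.scale;
size_t cgPixelWidth = CGImageGetWidth(image.CGImage);
NSLog(@"points: %.0f, pixels: %.0f, CGImage pixels: %zu", pointWidth, pixelWidth, cgPixelWidth);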
I'm cropping UIImages with a UIBezierPath using UIGraphicsContext:
CGSize thumbnailSize = CGSizeMake(54.0f, 45.0f); // dimensions of UIBezierPath
UIGraphicsBeginImageContextWithOptions(thumbnailSize, NO, 0);
[path addClip];
[originalImage drawInRect:CGRectMake(0, originalImage.size.height/-3, thumbnailSize.width, originalImage.size.height)];
UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But for some reason my images are getting stretched vertically (everything looks slightly long and skinny), and this effect is stronger the bigger my originalImage is. I'm sure the originalImages are perfectly fine before I do these operations (I've checked).
My images are all 9:16 (say 72px wide by 128px tall) if that matters.
I've seen that UIGraphics creates a bitmap with an "ARGB 32-bit integer pixel format using host-byte order"; I'll admit a bit of ignorance when it comes to pixel formats, but I felt this MAY be relevant because I'm not sure whether that's the same pixel format I use to encode the picture data.
No idea how relevant this is but here is the FULL processing pipeline:
I'm capturing using AVFoundation and I set my photoSettings as
NSDictionary *photoSettings = @{AVVideoCodecKey : AVVideoCodecH264};
I capture using captureStillImageAsynchronouslyFromConnection:..., then turn the result into NSData using [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer]; then I downsize it into a thumbnail by creating a CGDataProviderRef with that CFData, converting it to a CGImageRef using CGImageSourceCreateThumbnailAtIndex, and getting a UIImage from that.
Later, I once again turn it into NSData using UIImageJPEGRepresentation(thumbnail, 0.7) so I can store it. And finally, when I'm ready to display it, I call my own method detailed above, [self maskImage:[UIImage imageWithData:imageData] toPath:_thumbnailPath], show it in a UIImageView and set contentMode = UIViewContentModeScaleAspectFit.
If the method I'm using to mask the UIImage with the UIBezierPath is fine, I may end up explicitly setting the photoOutput settings with [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil] and then I can probably use something like "how to convert a CVImageBufferRef to UIImage" and change a lot of my code... but I'd really rather not do that unless completely necessary since, as I've mentioned, I really don't know much about video encoding / all these graphical, low-level objects.
This line:
[originalImage drawInRect:CGRectMake(0, originalImage.size.height/-3, thumbnailSize.width, originalImage.size.height)];
is a problem. You are drawing originalImage, but you specify a width of thumbnailSize.width and a height of originalImage.size.height. This messes up the image's aspect ratio.
You need a width and a height based on the same image size. Pick one as needed to maintain the proper aspect ratio.
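For example (a sketch using the names from the question, assuming you want to fill the thumbnail's width and let the extra height overflow), derive both draw dimensions from the same scale factor:
// Scale both dimensions by the same factor so the aspect ratio is preserved.
CGFloat scale = thumbnailSize.width / originalImage.size.width;
CGFloat drawHeight = originalImage.size.height * scale;
// Centre the overflowing height inside the thumbnail.
CGRect drawRect = CGRectMake(0, (thumbnailSize.height - drawHeight) / 2.0, thumbnailSize.width, drawHeight);
[originalImage drawInRect:drawRect];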
If I have a UIImage and I convert it to NSData I can see how many bytes it is.
If I have a variable requiredSize, how do I resize that UIImage to a certain width and height so that, when it is rendered as PNG data with UIImagePNGRepresentation(), it is a certain byte size (requiredSize)?
I know how to get the current byte size using [NSData length];
And I know how to downscale a UIImage (If there's a better way please tell me)
//UIImage *tempImage = whateverTheImagePointerIs;
int tempWidth = tempImage.size.width/2;//50% width of original
int tempHeight = tempImage.size.height/2;//50% height of original
UIImageView *tempImageRender = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, tempWidth, tempHeight)];
tempImageRender.image = tempImage;
UIGraphicsBeginImageContextWithOptions(tempImageRender.bounds.size, tempImageRender.opaque, 1.0);
[tempImageRender.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *tempFinalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But when I scale it to 50% width and 50% height (25% of the pixels overall), the byte count of the newly rendered, scaled image (when converted to PNG using UIImagePNGRepresentation()) is not 25% of the original bytes... it seems essentially unpredictable (I'm sure this is happening because the PNG compression results vary with image quality/size.)
Is there no way to resize an image to a given byte size?
See this answer for how to scale an image fairly effectively:
As you've discovered, there really isn't a direct correlation between image size and data size when compressed; you'll just have to do it iteratively:
- (NSData *)pngRepresentationWithMaxSize:(NSInteger)maxSize
{
    UIImage *image = self;
    while (1)
    {
        NSData *data = UIImagePNGRepresentation(image);
        if (data.length < maxSize)
            return data;
        // Halve the dimensions and try again. -imageScaledToSize: is assumed to be
        // the referenced scaling code, reworked as a UIImage category instance method.
        CGSize size = image.size;
        image = [image imageScaledToSize:CGSizeMake(size.width / 2.0, size.height / 2.0)];
    }
}
Note that I assume you change the referenced scaling code to be a category method on UIImage and put the method above into a UIImage category as well.
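For reference, here is a minimal sketch of what such a scaling category might look like; the method name matches the call above, but the implementation is my assumption rather than the code from the linked answer:
#import <UIKit/UIKit.h>

@interface UIImage (Scaling)
- (UIImage *)imageScaledToSize:(CGSize)newSize;
@end

@implementation UIImage (Scaling)
- (UIImage *)imageScaledToSize:(CGSize)newSize
{
    // Draw the receiver into a new image context of the requested size.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, self.scale);
    [self drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}
@end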
I am debugging a piece of code where a UIImage may go through UIImageJPEGRepresentation multiple times. I thought that must be a bug and that the image quality would get worse, but surprisingly we can't see the difference visually.
So I did a test: I load an image and let it go through UIImageJPEGRepresentation 1000 times. Surprisingly, whether it is 1 time or 1000 times doesn't really make a visible difference in image quality. Why is that?
This is the testing code:
UIImage *image = [UIImage imageNamed:@"photo.jpeg"];
// Create a data reference here for the for loop later
// First JPEG compression here
// I would imagine the data here already has low image quality
NSData *data = UIImageJPEGRepresentation(image, 0);
for(int i=0; i<1000; i++)
{
// Convert the data with low image quality to UIImage
UIImage *image = [UIImage imageWithData:data];
// Compress the image into a low quality data again
// at this point I would imagine the image gets even lower quality, like resaving a JPEG twice in Photoshop
data = UIImageJPEGRepresentation(image, 0);
}
// up to this point I would imagine the "data" has gone through JPEG compression 1000 times
// like you resave a jpeg as a jpeg in photoshop 1000 times, it should look like a piece of crap
UIImage *imageFinal = [UIImage imageWithData:data];
UIImageView *view = [[UIImageView alloc] initWithImage:imageFinal];
[self.view addSubview:view];
// but it didn't, the final image looks like it has only gone through the jpeg compression once.
EDIT: my question can be summarised with simpler code. If you do this in Objective-C:
UIImage *image1 = an image..
NSData *data1 = UIImageJPEGRepresentation(image1, 0);
UIImage *image2 = [UIImage imageWithData:data1];
NSData *data2 = UIImageJPEGRepresentation(image2, 0);
UIImage *imageFinal = [UIImage imageWithData:data2];
Has imageFinal gone through JPEG compression twice?
As you know, JPG compression works by altering the image to produce a smaller file size. The reason why you don't see progressively worse quality is because you're using the same compression setting each time.
The algorithm alters the source image just enough to fit into the compression profile - in other words, compressing the result of 50% JPG again at 50% will produce the same image, because the image doesn't need to be altered any more.
You can test this in Photoshop - save a photo out at say 30% quality JPG. Reopen the file you just saved, and go to Save for Web - flip between PNG (uncompressed/original) and JPG 30% - there will be no difference.
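You can also check this in code; a rough sketch along the lines of the test in the question (assuming photo.jpeg is in the bundle) is to log the byte size after each pass and watch it settle after the first couple of compressions:
UIImage *image = [UIImage imageNamed:@"photo.jpeg"];
NSData *data = UIImageJPEGRepresentation(image, 0.5);
for (int i = 0; i < 10; i++)
{
    // Round-trip: decode the JPEG data and re-encode it at the same quality.
    UIImage *roundTripped = [UIImage imageWithData:data];
    data = UIImageJPEGRepresentation(roundTripped, 0.5);
    NSLog(@"pass %d: %lu bytes", i + 1, (unsigned long)data.length);
}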
Hope this helps.
All types of compression will ideally reduce the size of an image. There are two types of compression, which differ in how they affect images:
Lossy Compression:
Lossy compression reduces the size of the image by removing some data from it. This generally affects the quality of the image, which means it reduces your image quality.
Lossless Compression:
Lossless compression reduces the size of the image by changing the way in which the data is stored. Therefore this type of compression makes no change to the image quality.
Please check out the compression type you are using.
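For example (a rough sketch, assuming image is a UIImage you already have), you can compare the two encoders UIKit gives you; JPEG is lossy, PNG is lossless:
// JPEG is lossy: pixel data may change, the file is usually smaller.
NSData *jpegData = UIImageJPEGRepresentation(image, 0.7);
// PNG is lossless: pixel data is preserved exactly, the file is usually larger.
NSData *pngData = UIImagePNGRepresentation(image);
NSLog(@"JPEG: %lu bytes, PNG: %lu bytes", (unsigned long)jpegData.length, (unsigned long)pngData.length);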
This may help you decrease the image size. Set the loop count yourself to however many times you want to run it:
UIImage *image = [UIImage imageNamed:@"photo.jpeg"];
NSData *data = UIImageJPEGRepresentation(image, 1.0);
for (int i = 100; i > 0; i--)
{
    UIImage *compressed = [UIImage imageWithData:data];
    // The compression quality must be in the range 0.0 - 1.0.
    data = UIImageJPEGRepresentation(compressed, 0.01 * i);
    NSLog(@"%lu", (unsigned long)data.length);
}
Given a UIImage, I'm trying to make it into a square. Just chop some of the largest dimension off to make it 1:1 in aspect ratio.
UIImage *pic = [UIImage imageNamed:@"pic"];
CGFloat originalWidth = pic.size.width;
CGFloat originalHeight = pic.size.height;
float smallestDimension = fminf(originalWidth, originalHeight);
CGRect square = CGRectMake(0, 0, smallestDimension, smallestDimension);
CGImageRef imageRef = CGImageCreateWithImageInRect([pic CGImage], square);
UIImage *squareImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
UIImageView *imageView = [[UIImageView alloc] initWithImage:squareImage];
imageView.frame = CGRectMake(100, 100, imageView.bounds.size.width, imageView.bounds.size.height);
[self.view addSubview:imageView];
But this is what it results in:
When it should look like this, but just a little narrower.
Why is this? The images are pic (150x114) / pic@2x (300x228).
The problem is that you're mixing up logical and pixel sizes. On non-retina devices these two are the same, but on retina devices (like in your case) the pixel size is actually double the logical size.
Usually, when designing your GUI, you can always just think in logical sizes and coordinates, and iOS (or OS X) will make sure that everything is doubled on retina screens. However, in some cases, especially when creating images yourself, you have to explicitly specify which size you mean.
UIImage's size property returns the logical size, i.e. the resolution on non-retina screens. This is why CGImageCreateWithImageInRect only creates a new image from the upper-left portion of the image.
Multiply your logical size by the scale of the image (1 on non-retina devices, 2 on retina devices):
CGFloat originalWidth = pic.size.width * pic.scale;
CGFloat originalHeight = pic.size.height * pic.scale;
This will make sure that the new image is created from the full height (or width) of the original image. Now, one remaining problem is that when you create a new UIImage using
UIImage *squareImage = [UIImage imageWithCGImage:imageRef];
iOS will think this is a regular, non-retina image and it will display it twice as large as you would expect. To fix this, you have to specify the scale when you create the UIImage:
UIImage *squareImage = [UIImage imageWithCGImage:imageRef
scale:pic.scale
orientation:pic.imageOrientation];
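Putting both fixes together, a sketch of the full crop using the names from the question:
UIImage *pic = [UIImage imageNamed:@"pic"];
// Work in pixels: multiply the logical (point) size by the image's scale.
CGFloat pixelWidth = pic.size.width * pic.scale;
CGFloat pixelHeight = pic.size.height * pic.scale;
CGFloat smallestDimension = fminf(pixelWidth, pixelHeight);
CGRect square = CGRectMake(0, 0, smallestDimension, smallestDimension);
CGImageRef imageRef = CGImageCreateWithImageInRect(pic.CGImage, square);
// Pass the scale back so the cropped image is displayed at the right point size.
UIImage *squareImage = [UIImage imageWithCGImage:imageRef
                                           scale:pic.scale
                                     orientation:pic.imageOrientation];
CGImageRelease(imageRef);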
I am working on a universal app with hundreds of background images. To save disk space and prevent further duplication and disk spamming, I want to reuse the non-retina @1x iPad images as retina @2x iPhone images.
Example:
background125_iPad@2x.png
background125_iPad.png
iPhone 4 and 5 have a different aspect ratio so I will scale the 1024x768 images to fit.
But the problem is, if I use this on iPhone 5:
UIImage *img = [UIImage imageNamed:@"background125_iPad.png"];
then iOS will try to be smarter than me and pick the huge memory monster @"background125_iPad@2x.png" version.
Is there a way of saying: "iOS, look. I am smarter than you. I want that you load this file. And I really mean this file. THIS one. And treat it as if it was a @2x version with a scale factor of 2." such that it really loads the requested "background125_iPad.png" file, but then UIImageView acts as if it had 512 x 384 points (= 1024x768 px)?
I assume UIImage imageNamed is not the way to go then?
I don't think you can turn off that functionality.
But you can always do that:
UIImage *img = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"background125_iPad" ofType:@"png"]];
UIImage *scaledImage = [UIImage imageWithCGImage:[img CGImage]
scale:[[UIScreen mainScreen] scale]
orientation:img.imageOrientation];
That will not automatically add device-specific suffixes.
I would recommend encapsulating that in a UIImage category for simpler usage :)
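A possible shape for that category (the name and method are hypothetical, not an existing API):
#import <UIKit/UIKit.h>

@interface UIImage (ExactFile)
+ (UIImage *)imageWithExactResource:(NSString *)name ofType:(NSString *)type;
@end

@implementation UIImage (ExactFile)
+ (UIImage *)imageWithExactResource:(NSString *)name ofType:(NSString *)type
{
    // Load the file directly, bypassing the @2x / device-suffix lookup.
    NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:type];
    UIImage *raw = [UIImage imageWithContentsOfFile:path];
    if (!raw) return nil;
    // Re-tag the bitmap with the screen scale so it is treated as @2x on retina.
    return [UIImage imageWithCGImage:raw.CGImage
                               scale:[[UIScreen mainScreen] scale]
                         orientation:raw.imageOrientation];
}
@end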
To load the image exactly as specified and get the scale factor right, this should work:
UIImage *img = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"background125_iPad" ofType:@"png"]];
img = [UIImage imageWithCGImage:[img CGImage]
scale:[[UIScreen mainScreen] scale]
orientation:UIImageOrientationUp];
Thanks to Grzegorz for the -imageWithContentsOfFile: and [[UIScreen mainScreen] scale] hint.