Reading an image at 2x or 3x in iOS

I am fetching an image from my server based on the device scale, so I fetch something like:
http://myserver.com/image1.png
or http://myserver.com/image1@2x.png
However, once I initialize a UIImage with the contents of http://myserver.com/image1@2x.png, the scale on the UIImage says it is 1x and it gets rendered badly where I display it: it renders at full size instead of half the size with double the pixel density. How do I make this work correctly?

You can create a new UIImage with a scale factor of 2 using the code below:
UIImage *img = [UIImage imageNamed:@"myimagename"];
img = [UIImage imageWithCGImage:img.CGImage scale:2 orientation:img.imageOrientation];
To make the code device-independent, you should get the scale factor of the current device using the code below and replace the number 2 with it.
[UIScreen mainScreen].scale
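For the original question (an image downloaded from a server), a minimal sketch might look like this; imageWithData: always reports a scale of 1.0, so the image is rebuilt with the screen scale (the URL is the hypothetical one from the question, and the synchronous download is only for brevity):
NSURL *url = [NSURL URLWithString:@"http://myserver.com/image1@2x.png"];
NSData *data = [NSData dataWithContentsOfURL:url];   // synchronous; use NSURLSession in real code
UIImage *raw = [UIImage imageWithData:data];         // scale is 1.0 here, regardless of the @2x name
UIImage *scaled = [UIImage imageWithCGImage:raw.CGImage
                                      scale:[UIScreen mainScreen].scale
                                orientation:raw.imageOrientation];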

Related

UIImage Distortion from UIGraphicsBeginImageContext with larger files (pixel formats, codecs?)

I'm cropping UIImages with a UIBezierPath using UIGraphicsContext:
CGSize thumbnailSize = CGSizeMake(54.0f, 45.0f); // dimensions of UIBezierPath
UIGraphicsBeginImageContextWithOptions(thumbnailSize, NO, 0);
[path addClip];
[originalImage drawInRect:CGRectMake(0, originalImage.size.height/-3, thumbnailSize.width, originalImage.size.height)];
UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But for some reason my images are getting stretched vertically (everything looks slightly long and skinny), and this effect is stronger the bigger my originalImage is. I'm sure the originalImages are perfectly fine before I do these operations (I've checked).
My images are all 9:16 (say 72px wide by 128px tall) if that matters.
I've seen that UIGraphics creates a bitmap with an "ARGB 32-bit integer pixel format using host-byte order". I'll admit a bit of ignorance when it comes to pixel formats, but I felt this may be relevant because I'm not sure whether that's the same pixel format I use to encode the picture data.
No idea how relevant this is but here is the FULL processing pipeline:
I'm capturing using AVFoundation and I set my photoSettings as
NSDictionary *photoSettings = @{AVVideoCodecKey : AVVideoCodecH264};
I capture using captureStillImageAsynchronouslyFromConnection:, turn the result into NSData using [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer], then downsize it into a thumbnail by creating a CGDataProviderRef with that data, converting it to a CGImageRef using CGImageSourceCreateThumbnailAtIndex, and getting a UIImage from that.
Later, I once again turn it into NSData using UIImageJPEGRepresentation(thumbnail, 0.7) so I can store it. And finally, when I'm ready to display it, I call my own method detailed on top, [self maskImage:[UIImage imageWithData:imageData] toPath:_thumbnailPath], display it in a UIImageView, and set contentMode = UIViewContentModeScaleAspectFit.
If the method I'm using to mask the UIImage with the UIBezierPath is fine, I may end up explicitly setting the photoOutput settings with [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil], and then I can probably use something like "how to convert a CVImageBufferRef to UIImage" and change a lot of my code... but I'd really rather not do that unless completely necessary since, as I've mentioned, I really don't know much about video encoding or all these graphical, low-level objects.
This line:
[originalImage drawInRect:CGRectMake(0, originalImage.size.height/-3, thumbnailSize.width, originalImage.size.height)];
is a problem. You are drawing originalImage but you specify the width of thumbnailSize.width and the height of originalImage. This messes up the image's aspect ratio.
You need a width and a height derived from the same image size; scale both dimensions by the same factor to keep the proper aspect ratio, as in the sketch below.
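A minimal sketch, assuming the same 54 x 45 thumbnail context and the path and originalImage from the question, scales both dimensions by one factor (fitting the width) and centers the draw rect vertically:
CGSize thumbnailSize = CGSizeMake(54.0f, 45.0f);
CGFloat scale = thumbnailSize.width / originalImage.size.width;   // one factor for both dimensions
CGFloat drawnHeight = originalImage.size.height * scale;
UIGraphicsBeginImageContextWithOptions(thumbnailSize, NO, 0);
[path addClip];
// Center vertically inside the thumbnail so the clip shows the middle of the image.
[originalImage drawInRect:CGRectMake(0, (thumbnailSize.height - drawnHeight) / 2.0f, thumbnailSize.width, drawnHeight)];
UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();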

scale property of ALAssetRepresentation always return 1

I'm trying to get the original image from an ALAsset and I find that the scale property of ALAssetRepresentation always returns 1.0. So I wonder, is there a situation where the property will return another value, like 2.0?
ALAssetRepresentation *assetRepresentation = [asset defaultRepresentation];
CGImageRef imgRef = assetRepresentation.fullResolutionImage;
UIImage *image = [UIImage imageWithCGImage:imgRef];
After retina displays were introduced, the physical resolution was doubled but the coordinates used in API calls remained the same. So some methods and functions (see UIGraphicsBeginImageContextWithOptions, for example) gained an additional 'scale' argument. I do not know why the [ALAssetRepresentation scale] description is so poor:
Returns the representation's scale.
but you can look at UIScreen.scale description:
This value reflects the scale factor needed to convert from the default logical coordinate space into the device coordinate space of this screen. The default logical coordinate space is measured using points. For standard-resolution displays, the scale factor is 1.0 and one point equals one pixel. For Retina displays, the scale factor is 2.0 and one point is represented by four pixels.
I think [ALAssetRepresentation scale] should be 2.0 if you run this code on a device with a retina display.
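If you need the resulting UIImage measured in points that match the device, a workaround sketch is to pass the screen scale yourself when wrapping the CGImageRef (assuming you want screen-scale points rather than the 1.0 the representation reports; ALAssetOrientation values map one-to-one onto UIImageOrientation):
ALAssetRepresentation *assetRepresentation = [asset defaultRepresentation];
CGImageRef imgRef = assetRepresentation.fullResolutionImage;
// Tag the image with the screen scale instead of the 1.0 the representation reports.
UIImage *image = [UIImage imageWithCGImage:imgRef
                                     scale:[UIScreen mainScreen].scale
                               orientation:(UIImageOrientation)assetRepresentation.orientation];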

Trying to crop my UIImage to a 1:1 aspect ratio (square) but it keeps enlarging the image causing it to be blurry. Why?

Given a UIImage, I'm trying to make it into a square. Just chop some of the largest dimension off to make it 1:1 in aspect ratio.
UIImage *pic = [UIImage imageNamed:@"pic"];
CGFloat originalWidth = pic.size.width;
CGFloat originalHeight = pic.size.height;
float smallestDimension = fminf(originalWidth, originalHeight);
CGRect square = CGRectMake(0, 0, smallestDimension, smallestDimension);
CGImageRef imageRef = CGImageCreateWithImageInRect([pic CGImage], square);
UIImage *squareImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
UIImageView *imageView = [[UIImageView alloc] initWithImage:squareImage];
imageView.frame = CGRectMake(100, 100, imageView.bounds.size.width, imageView.bounds.size.height);
[self.view addSubview:imageView];
But the result comes out enlarged and blurry, when it should look like the original picture, just a little narrower.
Why is this? The images are pic (150x114) / pic@2x (300x228).
The problem is that you're mixing up logical and pixel sizes. On non-retina devices these two are the same, but on retina devices (as in your case) the pixel size is double the logical size.
Usually, when designing your GUI, you can just think in logical sizes and coordinates, and iOS (or OS X) will make sure that everything is doubled on retina screens. However, in some cases, especially when creating images yourself, you have to specify explicitly which size you mean.
UIImage's size property returns the logical size, that is, the resolution on non-retina screens. This is why CGImageCreateWithImageInRect will only create a new image from the upper-left part of the original.
Multiply your logical size by the scale of the image (1 on non-retina devices, 2 on retina devices):
CGFloat originalWidth = pic.size.width * pic.scale;
CGFloat originalHeight = pic.size.height * pic.scale;
This will make sure that the new image is created from the full height (or width) of the original image. Now, one remaining problem is that when you create a new UIImage using
UIImage *squareImage = [UIImage imageWithCGImage:imageRef];
iOS will think this is a regular, non-retina image and will display it twice as large as you would expect. To fix this, you have to specify the scale when you create the UIImage:
UIImage *squareImage = [UIImage imageWithCGImage:imageRef
                                            scale:pic.scale
                                      orientation:pic.imageOrientation];
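Putting the answer's pieces together, the whole crop might look like this (same variable names as the question):
UIImage *pic = [UIImage imageNamed:@"pic"];

// Work in pixels: multiply the logical size by the image's scale.
CGFloat pixelWidth  = pic.size.width  * pic.scale;
CGFloat pixelHeight = pic.size.height * pic.scale;
float smallestDimension = fminf(pixelWidth, pixelHeight);
CGRect square = CGRectMake(0, 0, smallestDimension, smallestDimension);

CGImageRef imageRef = CGImageCreateWithImageInRect(pic.CGImage, square);
// Keep the original scale so the cropped image is displayed at the right point size.
UIImage *squareImage = [UIImage imageWithCGImage:imageRef
                                           scale:pic.scale
                                     orientation:pic.imageOrientation];
CGImageRelease(imageRef);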

Retina image displayed too big in retina simulator

I display a retina image (with #2x.png extension) using:
myImage = [UIImage imageNamed:@"iPhoneBackground@2x.jpg"];
UIGraphicsBeginImageContext(myImage.size);
[myImage drawAtPoint: CGPointZero];
myImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageView = [[UIImageView alloc] initWithImage:myImage];
NSLog(@"Dimension: %f x %f", myImage.size.width, myImage.size.height);
[self.view addSubview:imageView];
However, the image is displayed at twice its size in the retina simulator. The image and the simulator both have a resolution of 640 x 960, so I would expect the image to fill the screen.
I know there are other ways than CGContext to display an image, but that's the way I need for other purposes in my code.
Any idea why I have this resolution issue?
Don't use the @2x suffix.
From the Apple documentation:
The UIImage class handles all of the work needed to load high-resolution images into your app. When creating new image objects, you use the same name to request both the standard and the high-resolution versions of your image. For example, if you have two image files, named Button.png and Button@2x.png, you would use the following code to request your button image:
UIImage *anImage = [UIImage imageNamed:@"Button"];
You do not need to explicitly load a retina image; @2x will automatically be appended to the image name if the device has a retina display.
Change your UIImage code to: myImage = [UIImage imageNamed:@"iPhoneBackground.jpg"];
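A sketch of the corrected snippet; it also switches the intermediate context to UIGraphicsBeginImageContextWithOptions with a scale of 0 (meaning the current screen scale), since plain UIGraphicsBeginImageContext always creates a 1x bitmap and would throw away the retina pixels:
myImage = [UIImage imageNamed:@"iPhoneBackground.jpg"];      // @2x version is picked up automatically
UIGraphicsBeginImageContextWithOptions(myImage.size, NO, 0); // 0 = use the screen scale
[myImage drawAtPoint:CGPointZero];
myImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageView = [[UIImageView alloc] initWithImage:myImage];
[self.view addSubview:imageView];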

Get picture from square AVCaptureVideoPreviewLayer

So what I am doing is creating a custom image picker, and I have a 320 x 320 AVCaptureVideoPreviewLayer that I am using. When I take a picture, I want to get a UIImage of what is actually seen in the preview layer, but what I get from captureStillImageAsynchronouslyFromConnection:completionHandler: is a normal image with size 2448 x 3264. What would be the best way to make this image into a 320 x 320 square image like what is seen in the preview layer, without messing it up? Is there a Right Way™ to do this? Also, I am using AVLayerVideoGravityResizeAspectFill for the videoGravity property of the AVCaptureVideoPreviewLayer, if that is relevant.
Have you tried to transform the image?
Try using CGAffineTransformMakeScale(<#sx: CGFloat#>, <#sy: CGFloat#>) to scale the image down. Transforms can do magic! If you have ever taken linear algebra, you should recall the standard transformation matrices. I have not used this for images, so I am not sure how well it would work with the pixels.
You could also try:
// grab the original image
UIImage *originalImage = [UIImage imageNamed:@"myImage.png"];
// a scale of 2.0 makes the image 1/2 the size
UIImage *scaledImage =
    [UIImage imageWithCGImage:[originalImage CGImage]
                        scale:(originalImage.scale * 2.0)
                  orientation:(originalImage.imageOrientation)];
where you can change the scale factor as needed.
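For the original 320 x 320 question, one alternative sketch (ignoring the camera-orientation metadata, which a real implementation would need to handle) is to crop the capture to the centered square that an AVLayerVideoGravityResizeAspectFill preview shows, then let a 320 x 320 image view scale it down. SquareCropForPreview is a hypothetical helper name:
// Hypothetical helper: crop a captured photo to the centered square that an
// AVLayerVideoGravityResizeAspectFill preview layer displays.
static UIImage *SquareCropForPreview(UIImage *captured)
{
    CGFloat pixelWidth  = captured.size.width  * captured.scale;
    CGFloat pixelHeight = captured.size.height * captured.scale;
    CGFloat side = MIN(pixelWidth, pixelHeight);
    CGRect cropRect = CGRectMake((pixelWidth  - side) / 2.0f,
                                 (pixelHeight - side) / 2.0f,
                                 side, side);

    CGImageRef cropped = CGImageCreateWithImageInRect(captured.CGImage, cropRect);
    UIImage *square = [UIImage imageWithCGImage:cropped
                                          scale:captured.scale
                                    orientation:captured.imageOrientation];
    CGImageRelease(cropped);
    return square;  // display in a 320x320 UIImageView; it will be scaled down for you
}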
