scale property of ALAssetRepresentation always returns 1 - iOS

I'm trying to get the original image from an ALAsset and find that the scale property of ALAssetRepresentation always returns 1.0. Is there a situation in which the property will return another value, such as 2.0?
ALAssetRepresentation *assetRepresentation = [asset defaultRepresentation];
CGImageRef imgRef = assetRepresentation.fullResolutionImage;
UIImage *image = [UIImage imageWithCGImage:imgRef];

When Retina displays were introduced, the physical resolution doubled, but the coordinate space used by API calls remained the same. To bridge the two, some methods and functions (see UIGraphicsBeginImageContextWithOptions, for example) gained an additional 'scale' argument. I do not know why the [ALAssetRepresentation scale] description is so poor:
Returns the representation's scale.
but you can look at UIScreen.scale description
This value reflects the scale factor needed to convert from the
default logical coordinate space into the device coordinate space of
this screen. The default logical coordinate space is measured using
points. For standard-resolution displays, the scale factor is 1.0 and
one point equals one pixel. For Retina displays, the scale factor is
2.0 and one point is represented by four pixels.
I think [ALAssetRepresentation scale] should return 2.0 if you run this code on a device with a Retina display.
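As an aside, when building the UIImage from the full-resolution CGImage it is probably safer to pass the representation's scale and orientation through rather than rely on the defaults. A minimal sketch (the cast works because ALAssetOrientation values mirror UIImageOrientation):
ALAssetRepresentation *assetRepresentation = [asset defaultRepresentation];
CGImageRef imgRef = assetRepresentation.fullResolutionImage;
// Pass the representation's own scale and orientation through instead of
// relying on the defaults (scale 1.0, UIImageOrientationUp).
UIImage *image = [UIImage imageWithCGImage:imgRef
                                     scale:assetRepresentation.scale
                               orientation:(UIImageOrientation)assetRepresentation.orientation];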

Related

UIImage Distortion from UIGraphicsBeginImageContext with larger files (pixel formats, codecs?)

I'm cropping UIImages with a UIBezierPath using UIGraphicsContext:
CGSize thumbnailSize = CGSizeMake(54.0f, 45.0f); // dimensions of UIBezierPath
UIGraphicsBeginImageContextWithOptions(thumbnailSize, NO, 0);
[path addClip];
[originalImage drawInRect:CGRectMake(0, originalImage.size.height/-3, thumbnailSize.width, originalImage.size.height)];
UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But for some reason my images are getting stretched vertically (everything looks slightly long and skinny), and this effect is stronger the bigger my originalImage is. I'm sure the originalImages are perfectly fine before I do these operations (I've checked).
My images are all 9:16 (say 72px wide by 128px tall) if that matters.
I've seen that UIGraphics creates a bitmap with an "ARGB 32-bit integer pixel format using host-byte order". I'll admit a bit of ignorance when it comes to pixel formats, but I felt this MAY be relevant because I'm not sure whether that's the same pixel format I use to encode the picture data.
No idea how relevant this is, but here is the FULL processing pipeline:
I'm capturing using AVFoundation and I set my photoSettings as
NSDictionary *photoSettings = @{AVVideoCodecKey : AVVideoCodecH264};
I capture using captureStillImageAsynchronouslyFromConnection:..., turn the result into NSData with [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer], then downsize it into a thumbnail by creating a CGDataProviderRef with that CFData, converting it to a CGImageRef via CGImageSourceCreateThumbnailAtIndex, and getting a UIImage from that.
Later, I once again turn it into NSData using UIImageJPEGRepresentation(thumbnail, 0.7) so I can store it. Finally, when I'm ready to display, I call my own method detailed at the top, [self maskImage:[UIImage imageWithData:imageData] toPath:_thumbnailPath], display it in a UIImageView, and set contentMode = UIViewContentModeScaleAspectFit.
If the method I'm using to mask the UIImage with the UIBezierPath is fine, I may end up explicitly setting the photoOutput settings with [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA] for (id)kCVPixelBufferPixelFormatTypeKey, and then I can probably use something like "how to convert a CVImageBufferRef to UIImage" and change a lot of my code... but I'd really rather not do that unless completely necessary since, as I've mentioned, I really don't know much about video encoding or all these low-level graphics objects.
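For reference, the downsizing step described above usually looks roughly like this (a sketch only; the 320-pixel maximum size is an assumed value, and CGImageSourceCreateWithData is used here in place of the CGDataProviderRef route mentioned above):
#import <ImageIO/ImageIO.h>

CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
NSDictionary *options = @{(id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
                          (id)kCGImageSourceCreateThumbnailWithTransform : @YES,   // respect EXIF orientation
                          (id)kCGImageSourceThumbnailMaxPixelSize : @320};         // assumed target size
CGImageRef thumbRef = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
UIImage *thumbnail = [UIImage imageWithCGImage:thumbRef];
CGImageRelease(thumbRef);
CFRelease(source);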
This line:
[originalImage drawInRect:CGRectMake(0, originalImage.size.height/-3, thumbnailSize.width, originalImage.size.height)];
is a problem. You are drawing originalImage, but you specify a width of thumbnailSize.width and a height of originalImage.size.height. This distorts the image's aspect ratio.
You need a width and a height based on the same image size. Pick one as needed to maintain the proper aspect ratio.
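One way to apply that advice, sketched under the assumption that the image should be fitted to the thumbnail's width (the -1/3 vertical offset is carried over from the original code):
CGFloat drawWidth  = thumbnailSize.width;
CGFloat drawHeight = drawWidth * (originalImage.size.height / originalImage.size.width);

UIGraphicsBeginImageContextWithOptions(thumbnailSize, NO, 0);
[path addClip];
// Both dimensions now come from the same image size, so the aspect ratio is preserved.
[originalImage drawInRect:CGRectMake(0, drawHeight / -3, drawWidth, drawHeight)];
UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();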

Reading an image at 2x or 3x in iOS

I am fetching an image from my server based on the scale, so I fetch something like:
http://myserver.com/image1.png
or http://myserver.com/image1@2x.png
However, once I initialize an image with the contents of http://myserver.com/image1@2x.png, the scale on the UIImage says it is 1x and it gets rendered badly where I want it to be rendered: it renders at full size instead of 1/2 the size with double the pixels. How do I make this work correctly?
You can create a new UIImage with a scale factor of 2 using the code below:
UIImage *img = [UIImage imageNamed:@"myimagename"];
img = [UIImage imageWithCGImage:img.CGImage scale:2 orientation:img.imageOrientation];
To make the code device-independent, you should get the scale factor of the current device using the code below and replace the number 2 with it.
[UIScreen mainScreen].scale
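Putting the two together, a minimal sketch (assuming the image data has already been downloaded; imageWithData:scale: is available from iOS 6):
NSURL *url = [NSURL URLWithString:@"http://myserver.com/image1@2x.png"];
NSData *data = [NSData dataWithContentsOfURL:url];   // for illustration only; use an asynchronous download in real code
// Creating the image with the screen's scale keeps the code device-independent.
UIImage *img = [UIImage imageWithData:data scale:[UIScreen mainScreen].scale];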

Lanczos scale not working when scaleKey greater than some value

I have this code
CIImage *input_ciimage = [CIImage imageWithCGImage:self.CGImage];
CIImage *output_ciimage =
    [[CIFilter filterWithName:@"CILanczosScaleTransform" keysAndValues:
        kCIInputImageKey, input_ciimage,
        kCIInputScaleKey, [NSNumber numberWithFloat:0.72], // [NSNumber numberWithFloat:800.0 / self.size.width],
        nil] outputImage];

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef output_cgimage = [context createCGImage:output_ciimage
                                           fromRect:[output_ciimage extent]];

UIImage *output_uiimage = [UIImage imageWithCGImage:output_cgimage
                                              scale:1.0
                                        orientation:self.imageOrientation];
CGImageRelease(output_cgimage);

return output_uiimage;
So, when scaleKey is greater than some value, output_uiimage is a black image.
In my case, if the value of the kCIInputScaleKey key is greater than @0.52, the result is a black image. When I rotate the image by 90 degrees I get the same result, but the threshold is 0.72 (not 0.52).
What's wrong: is it a bug in the library or a mistake in my code?
I have an iPhone 4, iOS 7.1.2, and Xcode 6.0, if that matters.
That's what Apple said:
This scenario exposes a bug in Core Image. The bug occurs when rendering requires an intermediate buffer that has a dimension greater than the GPU texture limits (4096) AND the input image fits into these limits. This happens with any filter that is performing a convolution (blur, lanczos) on an input image that has width or height close to the GL texture limit.
Note: the render is successful if one of the dimensions of the input image is increased to 4097.
Replacing CILanczosScaleTransform with CIAffineTransform (lower quality) or resizing the image with CG are possible workarounds for the provided sample code.
I've updated the bug report after a request from Apple's engineers. They answered:
We believe that the issue is with the Core Image Lanczos filter that
occurs at certain downsample scale factors. We hope to fix this issue
in the future.
The filter should work well with downsample factors that are powers of 2 (i.e.
1/2, 1/4, 1/8). So, we would recommend limiting your downsample to
these values and then using an AffineTransform to scale up or down
further if required.
We are now closing this bug report.
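A rough sketch of that recommendation (not Apple's code; the 0.72 target factor is just the value from the question): run Lanczos at the nearest power-of-2 factor, then apply an affine transform for the remainder. The CIContext/createCGImage:fromRect: code from the question can then be reused unchanged.
CGFloat targetScale  = 0.72f;                       // overall factor that originally produced a black image
CGFloat lanczosScale = 0.5f;                        // nearest power-of-2 downsample
CGFloat remainder    = targetScale / lanczosScale;  // applied with an affine transform afterwards

CIImage *input = [CIImage imageWithCGImage:self.CGImage];
CIImage *lanczosOutput = [[CIFilter filterWithName:@"CILanczosScaleTransform" keysAndValues:
                              kCIInputImageKey, input,
                              kCIInputScaleKey, @(lanczosScale),
                              nil] outputImage];
CIImage *output = [lanczosOutput imageByApplyingTransform:
                      CGAffineTransformMakeScale(remainder, remainder)];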

Objective-C: How to get properties of an object inside NSMutableArray?

I have an NSMutableArray of UIImage objects named camImages. I want to get the scale property of an image, but I can't seem to access it using
[[camImages objectAtIndex:i] scale] //doesn't return the desired scale property
[camImages.objectAtIndex:i].scale //doesn't work (Error: Property 'scale' not found on object of type 'id')
whereas it is possible to get the property if I have a single UIImage
UIImage *img;
img.scale //desired property
I am a newbie to iOS & Objective-C; how can I get the desired property? Thanks in advance!
EDIT:
[[camImages objectAtIndex:i] scale] will return the scale declared by NSDecimalNumberBehaviors:
- (short)scale
Returns the number of digits allowed after the decimal separator. (required)
Return Value: The number of digits allowed after the decimal separator.
whereas the desired scale is of CGFloat type (UIImage's scale property):
@property(nonatomic, readonly) CGFloat scale
The scale factor of the image. (read-only)
Discussion: If you load an image from a file whose name includes the @2x modifier, the scale is set to 2.0. You can also specify an explicit scale factor when initializing an image from a Core Graphics image. All other images are assumed to have a scale factor of 1.0.
If you multiply the logical size of the image (stored in the size property) by the value in this property, you get the dimensions of the image in pixels.
Make your code easier to read and debug:
UIImage *image = camImages[i];
CGFloat scale = image.scale;
If you have two scale methods with different signatures, the compiler may not be able to choose the correct one to use, so you have to tell it the type of the object so it can find the right signature.
If you really want a one-line solution:
[((UIImage *)[camImages objectAtIndex:i]) scale];
((UIImage *)[camImages objectAtIndex:i]).scale;
but use @rmaddy's answer for readability.

Get picture from square AVCaptureVideoPreviewLayer

So what I am doing is creating a custom image picker, and I have a 320 x 320 AVCaptureVideoPreviewLayer that I am using. When I take a picture, I want to get a UIImage of what is actually seen in the preview layer, but what I get when I take a photo with captureStillImageAsynchronouslyFromConnection:completionHandler: is a normal image with size 2448 x 3264. What would be the best way to make this image into a 320 x 320 square image like what is seen in the preview layer, without messing it up? Is there a Right Way™ to do this? Also, I am using AVLayerVideoGravityResizeAspectFill for the videoGravity property of AVCaptureVideoPreviewLayer, if that is relevant.
Have you tried to transform the image?
Try using CGAffineTransformMakeScale(<#sx: CGFloat#>, <#sy: CGFloat#>) to scale the image down. Transforms can do magic! If you have ever taken linear algebra, you should recall your standard transformation matrices. I have not used this for images, so I am not sure whether it would work well with the pixels.
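For instance, one way to try that (a sketch only, with an assumed 320-point target width) is to concatenate the transform onto a bitmap context before drawing:
CGFloat sx = 320.0f / image.size.width;               // assumed target width of 320 points
CGSize targetSize = CGSizeMake(image.size.width * sx, image.size.height * sx);

UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0);
// The scale transform applies to everything drawn into the context.
CGContextConcatCTM(UIGraphicsGetCurrentContext(), CGAffineTransformMakeScale(sx, sx));
[image drawAtPoint:CGPointZero];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();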
You could also try:
// grab the original image
UIImage *originalImage = [UIImage imageNamed:@"myImage.png"];
// setting scale to 2.0 makes the image 1/2 the size
UIImage *scaledImage =
    [UIImage imageWithCGImage:[originalImage CGImage]
                        scale:(originalImage.scale * 2.0)
                  orientation:(originalImage.imageOrientation)];
where you can change the scale factor as needed.
