UIImageJPEGRepresentation doesn't keep my scale settings - ios

When I use initWithCGImage with a certain scale and then UIImageJPEGRepresentation to get data from this image, it seems the system doesn't keep my scale settings. Any idea why?
Following is my code and the log I get:
ALAssetRepresentation *rep = [asset defaultRepresentation];
CGImageRef iref = [rep fullResolutionImage];
UIImageOrientation orientation = [self orientationForAsset:asset];
// Scale the image
UIImage* scaledImage = [[UIImage alloc] initWithCGImage:iref scale:2. orientation:orientation];
NSLog (#"Scaled image size %#", NSStringFromCGSize(scaledImage.size));
// Get data from image
NSData* scaledImageData = UIImageJPEGRepresentation(scaledImage, 0.8);
// Check the image size of the data
UIImage* buildedImage = [UIImage imageWithData:scaledImageData];
NSLog (#"Data image size of %#", NSStringFromCGSize (buildedImage.size));
Gives this log:
"Scaled image size {1944, 1296}"
"Data image size of {3888, 2592}"
That's really strange because the two images are supposed to be exactly the same.

You should use the +[UIImage imageWithData:scale:] method when rebuilding the image from the data; the JPEG data itself only stores pixels, so the scale has to be supplied again.
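For example, a minimal sketch reusing scaledImageData from the question:
// Re-attach the scale that the JPEG bytes themselves do not carry
UIImage *rebuiltImage = [UIImage imageWithData:scaledImageData scale:2.0];
NSLog(@"Rebuilt image size %@", NSStringFromCGSize(rebuiltImage.size)); // should log {1944, 1296}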

Related

Depth map from AVDepthData different from HEIC file depth data in Photoshop

I am using the following code to extract the depth map (following Apple's own example):
- (nullable AVDepthData *)depthDataFromImageData:(nonnull NSData *)imageData orientation:(CGImagePropertyOrientation)orientation {
    AVDepthData *depthData = nil;
    CGImageSourceRef imageSource = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
    if (imageSource) {
        NSDictionary *auxDataDictionary = (__bridge NSDictionary *)CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource, 0, kCGImageAuxiliaryDataTypeDisparity);
        if (auxDataDictionary) {
            depthData = [[AVDepthData depthDataFromDictionaryRepresentation:auxDataDictionary error:NULL] depthDataByApplyingExifOrientation:orientation];
        }
        CFRelease(imageSource);
    }
    return depthData;
}
And I call this from:
[[PHAssetResourceManager defaultManager] requestDataForAssetResource:[PHAssetResource assetResourcesForAsset:asset].firstObject
                                                             options:nil
                                                 dataReceivedHandler:^(NSData * _Nonnull data) {
    AVDepthData *depthData = [self depthDataFromImageData:data orientation:[self CGImagePropertyOrientationForUIImageOrientation:pickedUiImageOrientation]];
    CIImage *image = [CIImage imageWithDepthData:depthData];
    UIImage *uiImage = [UIImage imageWithCIImage:image];
    UIGraphicsBeginImageContext(uiImage.size);
    [uiImage drawInRect:CGRectMake(0, 0, uiImage.size.width, uiImage.size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSData *pngData = UIImagePNGRepresentation(newImage);
    UIImage *pngImage = [UIImage imageWithData:pngData]; // rewrap
    UIImageWriteToSavedPhotosAlbum(pngImage, nil, nil, nil);
} completionHandler:^(NSError * _Nullable error) {
}];
Here is the result: a low-quality (and rotated, but let's put orientation aside for now) image.
Then I transferred the original HEIC file, opened it in Photoshop, went to Channels, and selected the depth map there.
The result in Photoshop is a higher-resolution, higher-quality, correctly oriented depth map. Why does the code (actually Apple's own code at https://developer.apple.com/documentation/avfoundation/avdepthdata/2881221-depthdatafromdictionaryrepresent?language=objc) produce a lower-quality result?
I've found the issue. Actually, it was hiding in plain sight. What +[AVDepthData depthDataFromDictionaryRepresentation:error:] returns is disparity data. I converted it to depth using the following code:
if (depthData.depthDataType != kCVPixelFormatType_DepthFloat32) {
    depthData = [depthData depthDataByConvertingToDepthDataType:kCVPixelFormatType_DepthFloat32];
}
(I haven't tried it, but 16-bit depth, kCVPixelFormatType_DepthFloat16, should also work.)
After converting disparity to depth, the image is exactly the same as in Photoshop. I should have noticed sooner: I was calling CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource, 0, kCGImageAuxiliaryDataTypeDisparity); (note the "Disparity" at the end), while Photoshop clearly said "depth map", so it was converting disparity to depth on the fly (or just reading it as depth; I honestly don't know the physical encoding, and maybe iOS converted depth to disparity when I copied the aux data in the first place).
Side note: I also solved the orientation issue by creating the image source directly via the -[PHAsset requestContentEditingInputWithOptions:completionHandler:] method and passing contentEditingInput.fullSizeImageURL into CGImageSourceCreateWithURL. That took care of the orientation; a sketch of the approach follows.
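A rough sketch of that side note, assuming asset is the same PHAsset as above and that the Photos and ImageIO frameworks are imported (error handling elided):
PHContentEditingInputRequestOptions *options = [PHContentEditingInputRequestOptions new];
[asset requestContentEditingInputWithOptions:options
                           completionHandler:^(PHContentEditingInput *contentEditingInput, NSDictionary *info) {
    // The full-size image URL points at the original file, whose metadata
    // already carries the correct orientation.
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)contentEditingInput.fullSizeImageURL, NULL);
    if (source) {
        NSDictionary *aux = (__bridge NSDictionary *)CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeDisparity);
        if (aux) {
            // Build AVDepthData from `aux` and convert it to DepthFloat32 as shown above.
        }
        CFRelease(source);
    }
}];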

Get thumbnail from ALAssetsRepresentation as NSData

I am writing NSData to a file and saving it in the device's app Documents folder. For that, is it possible to get the thumbnail from an ALAssetsRepresentation object as NSData? If so, any helpful links?
I couldn't find anything similar, other than getting a CGImageRef from the ALAssetsRepresentation. I don't want the CGImageRef format, as I would have to use UIImageJPEGRepresentation or UIImagePNGRepresentation to convert it to NSData.
Try this one:
CGImageRef iref = [myasset thumbnail];
if (iref)
{
    UIImage *theThumbnail = [UIImage imageWithCGImage:iref];
    NSData *thumbnailData = UIImagePNGRepresentation(theThumbnail);
}
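Since the goal is a file in the app's Documents folder, the data can then be written out directly. A sketch, assuming thumbnailData from the snippet above is in scope (the file name "thumbnail.png" is just an example):
// Build a path inside the app's Documents directory and write the PNG data there
NSString *documentsDir = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES).firstObject;
NSString *filePath = [documentsDir stringByAppendingPathComponent:@"thumbnail.png"];
[thumbnailData writeToFile:filePath atomically:YES];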

How to get UIImage(ImageURL) height and width without converting to NSData

In my project I need to show images of different sizes in a zig-zag fashion. So I convert the image URLs coming from the service to NSData and then get the UIImage. My code is:
NSURL *url = [NSURL URLWithString:[[_result objectAtIndex:i] valueForKey:@"PImage"]];
NSData *data = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:data];
This gives me the image size (width and height), which I need because I create a UIView according to the image size. The code works fine for me, but it takes too much time (almost 25 seconds) to load 8 images. I figured the conversion to NSData is what takes the time. Is there any way to get the image size (width and height) without converting it into NSData?
Thanks for spending time on this.
You can get image properties without actually loading the whole image data, using the ImageIO framework:
@import ImageIO;
...
NSURL *imageURL = … // Init URL somehow
CGImageSourceRef imgSource = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
NSDictionary *imageProps = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(imgSource, 0, NULL);
NSLog(@"%@", imageProps);
CFRelease(imgSource);
The image width and height are stored in the dictionary under the PixelHeight and PixelWidth keys (tested with a PNG image; other image formats may use different keys).
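For example, something like this should work on top of the snippet above, using the standard ImageIO property keys (the values come back as NSNumbers):
NSNumber *pixelWidth = imageProps[(__bridge NSString *)kCGImagePropertyPixelWidth];
NSNumber *pixelHeight = imageProps[(__bridge NSString *)kCGImagePropertyPixelHeight];
CGSize imageSize = CGSizeMake(pixelWidth.doubleValue, pixelHeight.doubleValue);
NSLog(@"%.0f x %.0f", imageSize.width, imageSize.height);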
Instead of converting the URL to data and then to a UIImage, use EGOImageView or AsyncImageView. You can simply pass the URL to them, and then set the frame based on the size of the image.

Crop image before showing it in a UIImageView

In my app I need to crop an image downloaded from the internet. I download the image using this method:
- (void)loadImageFromWeb {
    NSURL *url = [NSURL URLWithString:self.imageURL];
    NSURLRequest *request = [NSURLRequest requestWithURL:url];
    [NSURLConnection sendAsynchronousRequest:request
                                       queue:[NSOperationQueue mainQueue]
                           completionHandler:^(NSURLResponse *response,
                                               NSData *data,
                                               NSError *error) {
        if (!error) {
            UIImage *image = [[UIImage alloc] initWithData:data];
            [self.imageViewEpisode setImage:image];
        }
    }];
}
How can I crop it?
Define a rect; this rect will be the crop area of your image.
CGRect croprect = CGRectMake(x, y, width, height);
CGImageRef subImage = CGImageCreateWithImageInRect(yourimage, croprect);
Here we have used Core Graphics to create a subimage from your image (yourimage is a CGImageRef; if you start from a UIImage, pass its .CGImage).
Creates a bitmap image using the data contained within a subregion of an existing bitmap image.
CGImageRef CGImageCreateWithImageInRect(
    CGImageRef image,
    CGRect rect
);
Parameters
image
The image to extract the subimage from.
rect
A rectangle whose coordinates specify the area to create an image from.
Return Value
A CGImage object that specifies a subimage of the image. If the rect parameter defines an area that is not in the image, returns NULL.
Finally, generate the image:
UIImage *newImage = [UIImage imageWithCGImage:subImage];
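Putting those pieces together, a small helper might look like this (a sketch; CGImageCreateWithImageInRect works in pixel coordinates and the returned CGImage has to be released):
UIImage *croppedImage(UIImage *sourceImage, CGRect cropRect) {
    CGImageRef subImage = CGImageCreateWithImageInRect(sourceImage.CGImage, cropRect);
    if (!subImage) {
        return nil; // cropRect was entirely outside the image
    }
    UIImage *result = [UIImage imageWithCGImage:subImage
                                          scale:sourceImage.scale
                                    orientation:sourceImage.imageOrientation];
    CGImageRelease(subImage); // CGImageCreateWithImageInRect follows the Create rule
    return result;
}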
There are multiple libraries that can do that for you. You can either pick one and work with it, or study a couple and understand how it's done.

How can I save a compressed JPEG from an ALAsset

I have an ALAsset object retrieved from the ALAssetsLibrary, and I want to extract a compressed JPEG from it in order to send it to a web service.
Any suggestions on where to start?
Edit:
I've found a way to get NSData out of the ALAsset:
ALAssetRepresentation *representation = [asset defaultRepresentation];
Byte *buffer = (Byte *)malloc((size_t)representation.size);
NSError *err = nil;
NSUInteger buffered = [representation getBytes:buffer fromOffset:0 length:(NSUInteger)representation.size error:&err];
NSData *data = [NSData dataWithBytesNoCopy:buffer length:buffered freeWhenDone:YES]; // wrap the raw bytes so they can be used below
But I can't find a way to reduce the size of the image by resizing and compressing it.
My idea was to have something like:
UIImage *myImage = [UIImage imageWithData:data];
//resize image
NSData *compressedData = UIImageJPEGRepresentation(myImage, 0.5);
But, first of all, even without resizing, just using these two lines of code makes compressedData bigger than data.
And second, I'm not sure what the best way to resize the UIImage is.
You can use the thumbnail:
[theAsset thumbnail]
Or:
Compressing might result in bigger files after some point; you need to resize the image:
+ (UIImage *)imageWithCGImage:(CGImageRef)cgImage scale:(CGFloat)scale orientation:(UIImageOrientation)orientation
It's easy to get the CGImage from an ALAssetRepresentation:
ALAssetRepresentation *repr = [asset defaultRepresentation];
// use the asset representation's orientation and scale in order to set the UIImage
// up correctly
UIImage *image = [UIImage imageWithCGImage:[repr fullResolutionImage] scale:[repr scale] orientation:(UIImageOrientation)[repr orientation]];
// then do whatever you want with the UIImage instance
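For the resizing step the question asks about, one common approach is to draw the full-resolution image into a smaller context and then JPEG-encode the result. A sketch, where image is the UIImage from the snippet above and the 1024-point target width and 0.7 quality are arbitrary examples:
CGFloat targetWidth = 1024.0;
CGSize targetSize = CGSizeMake(targetWidth, targetWidth * image.size.height / image.size.width);
UIGraphicsBeginImageContextWithOptions(targetSize, YES, 1.0);
[image drawInRect:CGRectMake(0.0, 0.0, targetSize.width, targetSize.height)];
UIImage *resizedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// JPEG-encode the smaller image; this is what gets sent to the web service
NSData *compressedData = UIImageJPEGRepresentation(resizedImage, 0.7);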
