CGImageSourceCreateThumbnailAtIndex() giving different results in iOS 11 and iOS 12

CGImageRef thumbnailImage = NULL;
CGImageSourceRef imageSource = NULL;
CFDictionaryRef createOptions = NULL;
CFStringRef createKeys[3];
CFTypeRef createValues[3];
CFNumberRef thumbnailSize = 0;
UIImage * thumbnail;
NSData * squareData = UIImagePNGRepresentation(sourceImage);
NSData * thumbnailData = nil;
imageSource = CGImageSourceCreateWithData((__bridge CFDataRef)squareData, NULL);
if (imageSource)
{
    thumbnailSize = CFNumberCreate(NULL, kCFNumberIntType, &imageSize);
    if (thumbnailSize)
    {
        createKeys[0] = kCGImageSourceCreateThumbnailWithTransform;
        createValues[0] = (CFTypeRef)kCFBooleanTrue;
        createKeys[1] = kCGImageSourceCreateThumbnailFromImageIfAbsent;
        createValues[1] = (CFTypeRef)kCFBooleanTrue;
        createKeys[2] = kCGImageSourceThumbnailMaxPixelSize;
        createValues[2] = (CFTypeRef)thumbnailSize;
        createOptions = CFDictionaryCreate(NULL, (const void **)createKeys,
                                           createValues, sizeof(createValues) / sizeof(createValues[0]),
                                           &kCFTypeDictionaryKeyCallBacks,
                                           &kCFTypeDictionaryValueCallBacks);
        if (createOptions)
        {
            thumbnailImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, createOptions);
            if (thumbnailImage)
            {
                thumbnail = [UIImage imageWithCGImage:thumbnailImage];
                if (thumbnail)
                {
                    thumbnailData = UIImagePNGRepresentation(thumbnail);
                }
            }
        }
    }
}
I am getting a different thumbnailData.length value for the same image on iOS 12. I am trying to create a thumbnail using CGImageSourceCreateThumbnailAtIndex(), passing sourceImage as a parameter. Is this an iOS 12 bug? Is there a workaround for it? I'm using iOS 12 beta 4.

The data size is different, but the resulting image is fine. They’ve clearly made some very modest changes to the algorithm. But there is no bug here.
Personally, I notice two changes:
In non-square images, the algorithm for determining the size of the thumbnail has obviously changed. E.g., with my sample 3500×2335px image, when I created a 100px thumbnail, it resulted in a 100×67px image in iOS 12.2, but was 100×66px in iOS 11.0.1 (2335 / 3500 × 100 ≈ 66.7, so it looks as if iOS 12 now rounds the short side up where iOS 11 rounded it down).
In square images, both iOS versions generated suitably square thumbnails. As for the image itself, I could not see any observable difference with the naked eye. When I dropped the two into Photoshop and analyzed the differences (where black == no difference), the result at first seemed to suggest no change at all:
Only when I started to really pixel-peep could I detect the very modest changes. The individual channels rarely differed by more than 1 or 2 (in these UInt8 values). Here is the same delta image, this time with the levels blown out so you can see the differences:
Bottom line, there clearly is some change to the algorithm, but I wouldn’t characterize it as a bug. It is just different, but it works fine.
In an unrelated observation, your code has some leaks. If a Core Foundation function has Create or Copy in its name, you are responsible for releasing the returned object (or, with bridgeable types, transferring ownership to ARC, which isn’t an option here). The static analyzer (shift+command+B) is excellent at identifying these issues.
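For example, applied to the variable names in the question’s code, the cleanup would look roughly like this (a sketch; each release belongs after the corresponding object is no longer needed):
if (thumbnailImage) CGImageRelease(thumbnailImage);  // from CGImageSourceCreateThumbnailAtIndex
if (createOptions)  CFRelease(createOptions);        // from CFDictionaryCreate
if (thumbnailSize)  CFRelease(thumbnailSize);        // from CFNumberCreate
if (imageSource)    CFRelease(imageSource);          // from CGImageSourceCreateWithData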
FWIW, here’s my rendition:
- (UIImage * _Nullable)resizedImage:(UIImage *)sourceImage to:(NSInteger)imageSize {
    NSData *squareData = UIImagePNGRepresentation(sourceImage);
    UIImage *thumbnail = nil;

    CGImageSourceRef imageSource = CGImageSourceCreateWithData((__bridge CFDataRef)squareData, NULL);
    if (imageSource) {
        NSDictionary *createOptions = @{
            (id)kCGImageSourceCreateThumbnailWithTransform: @YES,
            (id)kCGImageSourceCreateThumbnailFromImageIfAbsent: @YES,
            (id)kCGImageSourceThumbnailMaxPixelSize: @(imageSize)
        };
        CGImageRef thumbnailImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, (__bridge CFDictionaryRef)createOptions);
        if (thumbnailImage) {
            thumbnail = [UIImage imageWithCGImage:thumbnailImage];
            if (thumbnail) {
                NSData *data = UIImagePNGRepresentation(thumbnail);
                // do something with `data` if you want
            }
            CFRelease(thumbnailImage);
        }
        CFRelease(imageSource);
    }
    return thumbnail;
}
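A hypothetical call site might look like the following (self.sourceImage is just a placeholder for whatever image you are resizing):
UIImage *thumb = [self resizedImage:self.sourceImage to:100];
if (thumb) {
    // self.sourceImage is a placeholder; use your own source image here
    NSData *thumbnailData = UIImagePNGRepresentation(thumb);
    NSLog(@"thumbnail is %.0f x %.0f, %lu bytes as PNG",
          thumb.size.width, thumb.size.height, (unsigned long)thumbnailData.length);
}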

Related

Depth map from AVDepthData different from HEIC file depth data in Photoshop

I am using the following code to extract the depth map (by following Apple's own example):
- (nullable AVDepthData *)depthDataFromImageData:(nonnull NSData *)imageData orientation:(CGImagePropertyOrientation)orientation {
    AVDepthData *depthData = nil;

    CGImageSourceRef imageSource = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
    if (imageSource) {
        NSDictionary *auxDataDictionary = (__bridge NSDictionary *)CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource, 0, kCGImageAuxiliaryDataTypeDisparity);
        if (auxDataDictionary) {
            depthData = [[AVDepthData depthDataFromDictionaryRepresentation:auxDataDictionary error:NULL] depthDataByApplyingExifOrientation:orientation];
        }
        CFRelease(imageSource);
    }
    return depthData;
}
And I call this from:
[[PHAssetResourceManager defaultManager] requestDataForAssetResource:[PHAssetResource assetResourcesForAsset:asset].firstObject
                                                             options:nil
                                                 dataReceivedHandler:^(NSData * _Nonnull data) {
    AVDepthData *depthData = [self depthDataFromImageData:data orientation:[self CGImagePropertyOrientationForUIImageOrientation:pickedUiImageOrientation]];
    CIImage *image = [CIImage imageWithDepthData:depthData];
    UIImage *uiImage = [UIImage imageWithCIImage:image];

    UIGraphicsBeginImageContext(uiImage.size);
    [uiImage drawInRect:CGRectMake(0, 0, uiImage.size.width, uiImage.size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    NSData *pngData = UIImagePNGRepresentation(newImage);
    UIImage *pngImage = [UIImage imageWithData:pngData]; // rewrap
    UIImageWriteToSavedPhotosAlbum(pngImage, nil, nil, nil);
} completionHandler:^(NSError * _Nullable error) {
}];
Here is the result: it's a low-quality (and rotated, but let's put orientation aside for now) image:
Then I transferred the original HEIC file, opened it in Photoshop, went to Channels, and selected the depth map, as below:
Here is the result:
It's a higher-resolution/quality, correctly oriented depth map. Why is the code (actually Apple's own code at https://developer.apple.com/documentation/avfoundation/avdepthdata/2881221-depthdatafromdictionaryrepresent?language=objc) resulting in a lower-quality result?
I've found the issue. Actually, it was hiding in plain sight. The +[AVDepthData depthDataFromDictionaryRepresentation:error:] method returns disparity data. I converted it to depth using the following code:
if (depthData.depthDataType != kCVPixelFormatType_DepthFloat32) {
    depthData = [depthData depthDataByConvertingToDepthDataType:kCVPixelFormatType_DepthFloat32];
}
(I haven't tried it, but 16-bit depth, kCVPixelFormatType_DepthFloat16, should also work well.)
After converting disparity to depth, the image is exactly the same as in Photoshop. I should have caught this sooner, since I was calling CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource, 0, kCGImageAuxiliaryDataTypeDisparity); (note the "Disparity" at the end), while Photoshop was clearly saying "depth map" and converting disparity to depth (or just somehow reading it as depth; I honestly don't know the physical encoding, maybe iOS was converting depth to disparity when I copied the aux data in the first place) on the fly.
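For context, a sketch of where that conversion slots into the depthDataFromImageData:orientation: method shown above, right after the depth data is created:
depthData = [[AVDepthData depthDataFromDictionaryRepresentation:auxDataDictionary error:NULL] depthDataByApplyingExifOrientation:orientation];
// Convert disparity to 32-bit depth before using it, so it matches what Photoshop shows.
if (depthData.depthDataType != kCVPixelFormatType_DepthFloat32) {
    depthData = [depthData depthDataByConvertingToDepthDataType:kCVPixelFormatType_DepthFloat32];
}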
Side note: I also solved the orientation issue by creating the image source directly from the -[PHAsset requestContentEditingInputWithOptions:completionHandler:] method and passing contentEditingInput.fullSizeImageURL into CGImageSourceCreateWithURL(). That took care of the orientation.
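A minimal sketch of that side note, assuming asset is the PHAsset in question (the rest of the aux-data extraction is the same as above):
PHContentEditingInputRequestOptions *options = [PHContentEditingInputRequestOptions new];
[asset requestContentEditingInputWithOptions:options completionHandler:^(PHContentEditingInput * _Nullable contentEditingInput, NSDictionary * _Nonnull info) {
    CGImageSourceRef imageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)contentEditingInput.fullSizeImageURL, NULL);
    if (imageSource) {
        // extract the auxiliary (disparity) data and build the AVDepthData exactly as in
        // depthDataFromImageData:orientation: above; a source created from the file URL
        // carries the orientation metadata, which took care of the rotation for me
        CFRelease(imageSource);
    }
}];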

How to save and retrieve UIImages using "imageWithData" maintaining proper image format?

I'm working on creating and storing OpenGL ES1 3D models, and want to include image files, to be used as textures, within the same file as the 3D model data. I am having trouble loading the image data in a usable format. I'm using UIImageJPEGRepresentation to convert the image data and store it in an NSData object. I then append it to an NSMutableData object, along with all the 3D data, and write it out to a file.
The data seems to write and read without error, but I encounter problems when trying to use the image data to create a CGImageRef, which I use to generate the texture data for the 3D model. The image data seems to be in an unrecognized format after it is loaded from the file, because it generates the error "CGContextDrawImage: invalid context 0x0." when I attempt to create the CGImageRef. I suspect that the image data is getting misaligned somehow, causing it to be rejected. I appreciate any help; I'm stumped at this point. All of the data sizes and offsets add up and look fine, and saves and loads happen without error. The image data just seems off a bit, but I don't know why.
Here's my code:
//======================================================
- (BOOL)save3DFile:(NSString *)filePath {
    // load TEST IMAGE into UIImage
    UIImage *image = [UIImage imageNamed:@"testImage.jpg"];

    // convert image to JPEG-encoded NSData
    NSData *imageData = UIImageJPEGRepresentation(image, 1.0);

    // Save length of imageData to global "imDataLen" to use later in "load3DFile"
    imDataLen = [imageData length];

    // TEST: this works fine for CGImageRef creation in "loadTexture"
    // traceView.image = [UIImage imageWithData:[imageData subdataWithRange:NSMakeRange(0, imageDataLen)]];
    // [self loadTexture];

    // TEST: this also works fine for CGImageRef creation in "loadTexture"
    // traceView.image = [UIImage imageWithData:txImData];
    // [self loadTexture];

    fvoh.fileVersion  = FVO_VERSION;
    fvoh.obVertDatLen = obVertDatLen;
    fvoh.obFaceDatLen = obFaceDatLen;
    fvoh.obNormDatLen = obNormDatLen;
    fvoh.obTextDatLen = obTextDatLen;
    fvoh.obCompCount  = obCompCount;
    fvoh.obVertCount  = obVertCount;
    fvoh.obElemCount  = obElemCount;
    fvoh.obElemSize   = obElemSize;
    fvoh.obElemType   = obElemType;

    NSMutableData *obSvData;
    obSvData = [NSMutableData dataWithBytes:&fvoh length:(sizeof(fvoh))];
    [obSvData appendBytes:obElem length:obFaceDatLen];
    [obSvData appendBytes:mvElem length:obVertDatLen];
    [obSvData appendBytes:mvNorm length:obNormDatLen];
    [obSvData appendBytes:obText length:obTextDatLen];
    [obSvData appendBytes:&ds length:(sizeof(ds))];

    // next, we append image data, and write all data to a file
    // seems to work fine, no errors, at this point
    [obSvData appendBytes: imageData length:[imageData length]];

    BOOL success = [obSvData writeToFile:filePath atomically:YES];
    return success;
}
//======================================================
- (void)load3DFile:(NSString *)filePath {
    NSData *fvoData;
    NSUInteger offSet, fiLen, fhLen, dsLen;

    [[FileList sharedFileList] setCurrFile:(NSString *)filePath];

    fvoData = [NSData dataWithContentsOfFile:filePath];
    fiLen = [fvoData length];
    fhLen = sizeof(fvoh);
    dsLen = sizeof(ds);

    memcpy(&fvoh, [fvoData bytes], fhLen); offSet = fhLen;
    //+++++++++++++++++++++++++++++++
    obVertDatLen = fvoh.obVertDatLen;
    obFaceDatLen = fvoh.obFaceDatLen;
    obNormDatLen = fvoh.obNormDatLen;
    obTextDatLen = fvoh.obTextDatLen;
    obCompCount  = fvoh.obCompCount;
    obVertCount  = fvoh.obVertCount;
    obElemCount  = fvoh.obElemCount;
    obElemSize   = fvoh.obElemSize;
    obElemType   = fvoh.obElemType;
    //+++++++++++++++++++++++++++++++
    memcpy(obElem, [fvoData bytes] + offSet, obFaceDatLen); offSet += obFaceDatLen;
    memcpy(mvElem, [fvoData bytes] + offSet, obVertDatLen); offSet += obVertDatLen;
    memcpy(mvNorm, [fvoData bytes] + offSet, obNormDatLen); offSet += obNormDatLen;
    memcpy(obText, [fvoData bytes] + offSet, obTextDatLen); offSet += obTextDatLen;
    memcpy(&ds, [fvoData bytes] + offSet, dsLen); offSet += dsLen;

    // the following seem to read the data into "imageData" just fine, no errors
    // NSData *imageData = [fvoData subdataWithRange:NSMakeRange(offSet, imDataLen)];
    // NSData *imageData = [fvoData subdataWithRange:NSMakeRange((fiLen-imDataLen), imDataLen)];
    // NSData *imageData = [NSData dataWithBytes:[fvoData bytes]+offSet length:imDataLen];
    NSData *imageData = [NSData dataWithBytes:[fvoData bytes]+(fiLen-imDataLen) length:imDataLen];

    // but the contents of imageData seem to end up in an unexpected format, causing error:
    // "CGContextDrawImage: invalid context 0x0." during CGImageRef creation in "loadTexture"
    traceView.image = [UIImage imageWithData:imageData];
    [self loadTexture];
}
//======================================================
- (void)loadTexture {
    CGImageRef image = traceView.image.CGImage;
    CGContextRef texContext;
    GLubyte *bytes = nil;
    GLsizei width, height;

    if (image) {
        width  = (GLsizei)CGImageGetWidth(image);
        height = (GLsizei)CGImageGetHeight(image);
        bytes = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));
        texContext = CGBitmapContextCreate(bytes, width, height, 8, width * 4, CGImageGetColorSpace(image),
                                           kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(texContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), image);
        CGContextRelease(texContext);
    }
    if (bytes) {
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, bytes);
        free(bytes);
    }
}
//======================================================
I failed to receive any answers to this question. I finally stumbled across the answer myself. In the save3DFile code, instead of adding the image data to NSMutableData *obSvData using 'appendBytes', as illustrated below:
[obSvData appendBytes: imageData length:[imageData length]];
I instead use 'appendData' as shown here:
[obSvData appendData: imageData];
where imageData was previously filled with the contents of a UIImage and converted to JPEG format in the process as follows:
NSData *imageData = UIImageJPEGRepresentation(image,1.0);
See the complete code listing above for context. Anyway, using 'appendData' instead of 'appendBytes' made all the difference, allowing me to store the image data in the same file along with all the other 3D model data (vertices, indices, normals, et cetera), reload all that data without problem, and successfully create 3D models with textures from a single file. In hindsight, the reason is that appendBytes:length: expects a pointer to raw bytes, so passing the NSData object itself appended memory starting at the object pointer rather than the JPEG payload.
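For clarity, a minimal sketch of the difference:
// Broken: appendBytes:length: treats its first argument as a raw byte pointer,
// so this copies memory starting at the NSData object itself, not the JPEG payload.
[obSvData appendBytes:imageData length:[imageData length]];

// Correct: append the data object's contents.
[obSvData appendData:imageData];

// Also correct: pass a pointer to the actual bytes.
[obSvData appendBytes:imageData.bytes length:imageData.length];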

Image uploaded rotated the wrong way

let image_data = UIImageJPEGRepresentation(self.imagetoadd.image!,0.0)
The image in iOS (I am using Swift 3 to do this) is being uploaded rotated. How can I solve this?
JPEG images usually contain an EXIF dictionary, which stores a lot of information about how the image was taken; image rotation is one of them.
UIImage instances keep this information (if the original image has it) as well, inside a specific property called imageOrientation.
As far as I remember, this information is stripped off by the method UIImageJPEGRepresentation.
To create a correct data instance with the above information, you must use Core Graphics methods, or normalize the rotation before sending the image.
To normalize the image, something like this should be enough:
CGImageRef cgRef = imageToSave.CGImage;
UIImage * fixImage = [[UIImage alloc] initWithCGImage:cgRef scale:imageToSave.scale orientation:UIImageOrientationUp];
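In my experience, recreating the UIImage with UIImageOrientationUp only relabels the orientation without rotating the underlying pixels, so if the relabeled image still comes out rotated on upload, an alternative is to redraw it so the bitmap itself is upright (a sketch, assuming imageToSave is the image being normalized):
UIGraphicsBeginImageContextWithOptions(imageToSave.size, NO, imageToSave.scale);
[imageToSave drawInRect:CGRectMake(0, 0, imageToSave.size.width, imageToSave.size.height)];
UIImage *uprightImage = UIGraphicsGetImageFromCurrentImageContext();   // pixels are now baked in upright
UIGraphicsEndImageContext();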
To keep the rotation information:
CFURLRef url = (__bridge_retained CFURLRef)[NSURL fileURLWithPath:path]; // save data path
NSDictionary *metadataDictionary = [self imageMetadataForPath:pathToOriginalImage];
CFMutableDictionaryRef metadataImage = (__bridge_retained CFMutableDictionaryRef)metadataDictionary;
CGImageDestinationRef destination = CGImageDestinationCreateWithURL(url, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(destination, image, metadataImage);
if (!CGImageDestinationFinalize(destination)) {
    DLog(@"Failed to write image to %@", path);
}
Where -imageMetadataForPath: is:
- (NSDictionary *)imageMetadataForPath:(NSString *)imagePath {
    NSURL *imageURL = [NSURL fileURLWithPath:imagePath];
    CGImageSourceRef mySourceRef = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
    NSDictionary *dict = (NSDictionary *)CFBridgingRelease(CGImageSourceCopyPropertiesAtIndex(mySourceRef, 0, NULL));
    CFRelease(mySourceRef);
    return dict;
}
This is copied and pasted from a project of mine, so you will probably need to do a fair amount of refactoring, also because it uses manual memory management in Core Foundation and you are using Swift. Of course, if you use this last set of instructions, the backend code must be prepared to deal with image orientation too.
If you want to know more about rotation, here is a link.

Converting NSData to CGImage and then back to NSData makes the file too big

I have built a camera using AVFoundation.
Once my AVCaptureStillImageOutput has completed its captureStillImageAsynchronouslyFromConnection:completionHandler: method, I create an NSData object like this:
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
Once I have the NSData object, I would like to rotate the image -without- converting to a UIImage. I have found out that I can convert to a CGImage to do so.
After I have the imageData, I start the process of converting to CGImage, but I have found that the CGImageRef ends up being THIRTY times larger than the NSData object.
Here is the code I use to convert to CGImage from NSData:
CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)(imageData));
CGImageRef imageRef = CGImageCreateWithJPEGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault);
If I try to NSLog out the size of the image, it comes to 30 megabytes when the NSData was a 1.5-2 megabyte image!
size_t imageSize = CGImageGetBytesPerRow(imageRef) * CGImageGetHeight(imageRef);
NSLog(#"cgimage size = %zu",imageSize);
I thought that maybe when you go from NSData to CGImage, the image decompresses, and then maybe if I converted back to NSData, that it might go back to the right file size.
imageData = (NSData *) CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
The above NSData has the same length as the CGImageRef object.
If I try to save the image, the image is a 30mb image that cannot be opened.
I am totally new to using CGImage, so I am not sure if I am converting from NSData to CGImage and back incorrectly, or if I need to call some method to decompress again.
Thanks in advance,
Will
I was doing some image manipulation and came across your question on SO. Seems like no one else came up with an answer, so here's my theory.
While it's theoretically possible to convert a CGImageRef back to NSData in the manner that you described, the data itself is invalid and not a real JPEG or PNG, as you discovered by it not being readable. (What that copy gives you is the decompressed bitmap, roughly width × height × 4 bytes, which is why it is so much larger than the compressed JPEG.) So I don't think that the NSData.length is correct. You have to actually jump through a number of steps to recreate an NSData representation of a CGImageRef:
// incoming image data
NSData *image;
// create the image ref
CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef) image);
CGImageRef imageRef = CGImageCreateWithJPEGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault);
// image metadata properties (EXIF, GPS, TIFF, etc)
NSDictionary *properties;
// create the new output data
CFMutableDataRef newImageData = CFDataCreateMutable(NULL, 0);
// my code assumes JPEG type since the input is from the iOS device camera
CFStringRef type = UTTypeCreatePreferredIdentifierForTag(kUTTagClassMIMEType, (__bridge CFStringRef)@"image/jpeg", kUTTypeImage);
// create the destination
CGImageDestinationRef destination = CGImageDestinationCreateWithData(newImageData, type, 1, NULL);
// add the image to the destination
CGImageDestinationAddImage(destination, imageRef, (__bridge CFDictionaryRef) properties);
// finalize the write
CGImageDestinationFinalize(destination);
// memory cleanup
CGDataProviderRelease(imgDataProvider);
CGImageRelease(imageRef);
CFRelease(type);
CFRelease(destination);
NSData *newImage = (__bridge_transfer NSData *)newImageData;
With these steps, the newImage.length should be the same as image.length. I haven't tested since I actually do cropping between the input and the output, but based on the crop, the size is roughly what I expected (the output is roughly half the pixels of the input and thus the output length roughly half the size of the input length).
If somebody is looking for a Swift version of "Convert CGImage to Data", here it is:
import ImageIO
import MobileCoreServices   // for kUTTypeJPEG

extension CGImage {
    var jpegData: Data? {
        guard let mutableData = CFDataCreateMutable(nil, 0),
              let destination = CGImageDestinationCreateWithData(mutableData, kUTTypeJPEG, 1, nil)
        else {
            return nil
        }
        CGImageDestinationAddImage(destination, self, nil)
        guard CGImageDestinationFinalize(destination) else { return nil }
        return mutableData as Data
    }
}

Generating custom thumbnail from ALAssetRepresentation

My main problem is I need to obtain a thumbnail for an ALAsset object.
I tried a lot of solutions and searched Stack Overflow for days; none of the solutions I found work for me, due to these constraints:
I can't use the default thumbnail because it's too small;
I can't use the fullScreen or fullResolution image because I have a lot of images on screen;
I can't use UIImage or UIImageView for resizing because those load the fullResolution image;
I can't load the image into memory; I'm working with 20 Mpx images;
I need to create a 200×200 px version of the original asset to load on screen.
This is the last iteration of the code I came up with:
#import <AssetsLibrary/ALAsset.h>
#import <ImageIO/ImageIO.h>
// ...
ALAsset *asset;
// ...
ALAssetRepresentation *assetRepresentation = [asset defaultRepresentation];
NSDictionary *thumbnailOptions = [NSDictionary dictionaryWithObjectsAndKeys:
(id)kCFBooleanTrue, kCGImageSourceCreateThumbnailWithTransform,
(id)kCFBooleanTrue, kCGImageSourceCreateThumbnailFromImageAlways,
(id)[NSNumber numberWithFloat:200], kCGImageSourceThumbnailMaxPixelSize,
nil];
CGImageRef generatedThumbnail = [assetRepresentation CGImageWithOptions:thumbnailOptions];
UIImage *thumbnailImage = [UIImage imageWithCGImage:generatedThumbnail];
The problem is, the resulting CGImageRef is neither transformed by orientation nor of the specified max pixel size.
I also tried to find a way of resizing using CGImageSource, but:
the asset URL can't be used with CGImageSourceCreateWithURL();
I can't extract a CGDataProviderRef from ALAsset or ALAssetRepresentation to use with CGImageSourceCreateWithDataProvider();
CGImageSourceCreateWithData() requires me to store the fullResolution or fullScreen asset in memory in order to work.
Am I missing something?
Is there another way of obtaining a custom thumbnail from ALAsset or ALAssetRepresentation that I'm missing?
Thanks in advance.
You can use CGImageSourceCreateThumbnailAtIndex to create a small image from a potentially-large image source. You can load your image from disk using the ALAssetRepresentation's getBytes:fromOffset:length:error: method, and use that to create a CGImageSourceRef.
Then you just need to pass the kCGImageSourceThumbnailMaxPixelSize and kCGImageSourceCreateThumbnailFromImageAlways options to CGImageSourceCreateThumbnailAtIndex with the image source you've created, and it will create a smaller version for you without loading the huge version into memory.
I've written a blog post and gist with this technique fleshed out in full.
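Condensed, the idea looks roughly like this (a sketch; source stands for a CGImageSourceRef backed by a CGDataProvider that reads from the asset representation, as fleshed out in the answer below):
NSDictionary *options = @{
    (NSString *)kCGImageSourceCreateThumbnailFromImageAlways: @YES,
    (NSString *)kCGImageSourceCreateThumbnailWithTransform: @YES,
    (NSString *)kCGImageSourceThumbnailMaxPixelSize: @200
};
CGImageRef thumbnailRef = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
UIImage *thumbnail = thumbnailRef ? [UIImage imageWithCGImage:thumbnailRef] : nil;
if (thumbnailRef) {
    CGImageRelease(thumbnailRef);
}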
There is a problem with the approach mentioned by Jesse Rusak. Your app will crash with the following stack trace if the asset is too large:
0 CoreGraphics 0x2f602f1c x_malloc + 16
1 libsystem_malloc.dylib 0x39fadd63 malloc + 52
2 CoreGraphics 0x2f62413f CGDataProviderCopyData + 178
3 ImageIO 0x302e27b7 CGImageReadCreateWithProvider + 156
4 ImageIO 0x302e2699 CGImageSourceCreateWithDataProvider + 180
...
Link Register Analysis:
Symbol: malloc + 52
Description: We have determined that the link register (lr) is very likely to contain the return address of frame #0's calling function, and have inserted it into the crashing thread's backtrace as frame #1 to aid in analysis. This determination was made by applying a heuristic to determine whether the crashing function was likely to have created a new stack frame at the time of the crash.
Type: 1
It is very easy to simulate the crash. Let's read data from the ALAssetRepresentation in getAssetBytesCallback in small chunks. The particular size of the chunk is not important. The only thing that matters is calling the callback about 20 times.
static size_t getAssetBytesCallback(void *info, void *buffer, off_t position, size_t count) {
    static int i = 0; ++i;
    ALAssetRepresentation *rep = (__bridge id)info;
    NSError *error = nil;
    NSLog(@"%d: off:%lld len:%zu", i, position, count);
    const size_t countRead = [rep getBytes:(uint8_t *)buffer fromOffset:position length:128 error:&error];
    return countRead;
}
Here are the tail lines of the log:
2014-03-21 11:21:14.250 MRCloudApp[3461:1303] 20: off:2432 len:2156064
MRCloudApp(3461,0x701000) malloc: *** mach_vm_map(size=217636864) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
I introduced a counter to prevent this crash. You can see my fix below:
typedef struct {
    void *assetRepresentation;
    int decodingIterationCount;
} ThumbnailDecodingContext;

static const int kThumbnailDecodingContextMaxIterationCount = 16;

static size_t getAssetBytesCallback(void *info, void *buffer, off_t position, size_t count) {
    ThumbnailDecodingContext *decodingContext = (ThumbnailDecodingContext *)info;
    ALAssetRepresentation *assetRepresentation = (__bridge ALAssetRepresentation *)decodingContext->assetRepresentation;
    if (decodingContext->decodingIterationCount == kThumbnailDecodingContextMaxIterationCount) {
        NSLog(@"WARNING: Image %@ is too large for thumbnail extraction.", [assetRepresentation url]);
        return 0;
    }
    ++decodingContext->decodingIterationCount;

    NSError *error = nil;
    size_t countRead = [assetRepresentation getBytes:(uint8_t *)buffer fromOffset:position length:count error:&error];
    if (countRead == 0 || error != nil) {
        NSLog(@"ERROR: Failed to decode image %@: %@", [assetRepresentation url], error);
        return 0;
    }
    return countRead;
}
- (UIImage *)thumbnailForAsset:(ALAsset *)asset maxPixelSize:(CGFloat)size {
    NSParameterAssert(asset);
    NSParameterAssert(size > 0);

    ALAssetRepresentation *representation = [asset defaultRepresentation];
    if (!representation) {
        return nil;
    }

    CGDataProviderDirectCallbacks callbacks = {
        .version = 0,
        .getBytePointer = NULL,
        .releaseBytePointer = NULL,
        .getBytesAtPosition = getAssetBytesCallback,
        .releaseInfo = NULL
    };

    ThumbnailDecodingContext decodingContext = {
        .assetRepresentation = (__bridge void *)representation,
        .decodingIterationCount = 0
    };

    CGDataProviderRef provider = CGDataProviderCreateDirect((void *)&decodingContext, [representation size], &callbacks);
    NSParameterAssert(provider);
    if (!provider) {
        return nil;
    }

    CGImageSourceRef source = CGImageSourceCreateWithDataProvider(provider, NULL);
    NSParameterAssert(source);
    if (!source) {
        CGDataProviderRelease(provider);
        return nil;
    }

    CGImageRef imageRef = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef) @{(NSString *)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
                                                                                                      (NSString *)kCGImageSourceThumbnailMaxPixelSize : [NSNumber numberWithFloat:size],
                                                                                                      (NSString *)kCGImageSourceCreateThumbnailWithTransform : @YES});
    UIImage *image = nil;
    if (imageRef) {
        image = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
    }

    CFRelease(source);
    CGDataProviderRelease(provider);
    return image;
}
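A hypothetical call site, assuming asset is the ALAsset whose thumbnail is needed and self is the class implementing the method above (imageView here is just a placeholder):
UIImage *thumbnail = [self thumbnailForAsset:asset maxPixelSize:200];
if (thumbnail) {
    imageView.image = thumbnail;   // e.g., assign to a cell's image view on the main thread
} else {
    NSLog(@"Could not generate a thumbnail for %@", [[asset defaultRepresentation] url]);
}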
