My question is quite straightforward: I want to get the orientation of an image, but I don't want to use [UIImage imageWithData:] because it consumes memory and is potentially slow. So, what would be the solution? The images are saved in the app's Documents folder rather than in the ALAssetsLibrary.
PS: happy new year guys!
You need to use the lower-level Quartz/ImageIO functions. Read the first 2K or 4K of the image into an NSData object, then pass that data to an incremental image source and ask it for the orientation. If you don't get it, read in a larger chunk. JPEGs almost always have the metadata in the first 2K of data (maybe 4K; it's been a while since I wrote this code):
{
    CGImageSourceRef imageSourceRef = CGImageSourceCreateIncremental(NULL);
    CGImageSourceUpdateData(imageSourceRef, (__bridge CFDataRef)data, NO);
    CFDictionaryRef dict = CGImageSourceCopyPropertiesAtIndex(imageSourceRef, 0, NULL);
    if (dict) {
        //CFShow(dict);
        self.properties = CFBridgingRelease(dict);
        if (!self.orientation) {
            self.orientation = [[self.properties objectForKey:(NSString *)kCGImagePropertyOrientation] integerValue];
        }
    }
    CFRelease(imageSourceRef);
}
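For context, here is a sketch of how the pieces might fit together, assuming a hypothetical imagePath inside the Documents folder and using NSFileHandle to read only the first chunk of the file (the snippet above shows just the ImageIO part):

#import <Foundation/Foundation.h>
#import <ImageIO/ImageIO.h>

// Sketch: read only the first chunkLength bytes of the file and ask an
// incremental image source for the orientation. Returns 0 if the chunk
// was too small or the property is missing.
static NSInteger OrientationForImageAtPath(NSString *imagePath, NSUInteger chunkLength)
{
    NSFileHandle *handle = [NSFileHandle fileHandleForReadingAtPath:imagePath];
    NSData *data = [handle readDataOfLength:chunkLength];
    [handle closeFile];
    if (data.length == 0) return 0;

    CGImageSourceRef source = CGImageSourceCreateIncremental(NULL);
    // NO tells ImageIO this is not yet the complete file.
    CGImageSourceUpdateData(source, (__bridge CFDataRef)data, NO);

    NSInteger orientation = 0;
    CFDictionaryRef dict = CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
    if (dict) {
        NSDictionary *properties = CFBridgingRelease(dict);
        orientation = [properties[(NSString *)kCGImagePropertyOrientation] integerValue];
    }
    CFRelease(source);
    return orientation;
}

Calling it with 4096 first and, if that returns 0, again with a larger value mirrors the "read a larger chunk" advice above.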
What is the fastest way on iOS 7+ to convert raw bytes of BGRA / UIImage data to a CVPixelBufferRef? The bytes are 4 bytes per pixel in BGRA order.
Is there any chance of a direct cast here versus copying the data into secondary storage?
I've considered CVPixelBufferCreateWithBytes but I have a hunch it is copying memory...
You have to use CVPixelBufferCreate because CVPixelBufferCreateWithBytes will not allow fast conversion to an OpenGL texture using the Core Video texture cache. I'm not sure why this is the case, but that's the way things are at least as of iOS 8. I tested this with the profiler, and CVPixelBufferCreateWithBytes causes a texSubImage2D call to be made every time a Core Video texture is accessed from the cache.
CVPixelBufferCreate will do funny things if the width is not a multiple of 16. So if you plan on doing CPU operations on the memory returned by CVPixelBufferGetBaseAddress, and you want it laid out like a CGImage or CGBitmapContext, you will need to pad the width up to the next multiple of 16, or make sure you use CVPixelBufferGetRowBytes and pass that to any CGBitmapContext you create.
I tested all combinations of dimensions of width and height from 16 to 2048, and as long as they were padded to the next highest multiple of 16, the memory was laid out properly.
+ (NSInteger)alignmentForPixelBufferDimension:(NSInteger)dim
{
    static const NSInteger modValue = 16;
    NSInteger mod = dim % modValue;
    return (mod == 0 ? dim : (dim + (modValue - mod)));
}
+ (NSDictionary *)pixelBufferSurfaceAttributesOfSize:(CGSize)size
{
    return @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
              (NSString *)kCVPixelBufferWidthKey : @(size.width),
              (NSString *)kCVPixelBufferHeightKey : @(size.height),
              (NSString *)kCVPixelBufferBytesPerRowAlignmentKey : @(size.width * 4),
              (NSString *)kCVPixelBufferExtendedPixelsLeftKey : @(0),
              (NSString *)kCVPixelBufferExtendedPixelsRightKey : @(0),
              (NSString *)kCVPixelBufferExtendedPixelsTopKey : @(0),
              (NSString *)kCVPixelBufferExtendedPixelsBottomKey : @(0),
              (NSString *)kCVPixelBufferPlaneAlignmentKey : @(0),
              (NSString *)kCVPixelBufferCGImageCompatibilityKey : @(YES),
              (NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey : @(YES),
              (NSString *)kCVPixelBufferOpenGLESCompatibilityKey : @(YES),
              (NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{ @"IOSurfaceCGBitmapContextCompatibility" : @(YES),
                                                                    @"IOSurfaceOpenGLESFBOCompatibility" : @(YES),
                                                                    @"IOSurfaceOpenGLESTextureCompatibility" : @(YES) } };
}
Interestingly enough, if you ask for a texture from the Core Video cache with dimensions smaller than the padded dimensions, it will return a texture immediately. Somehow underneath it is able to reference the original texture, but with a smaller width and height.
To sum up, you cannot wrap existing memory with a CVPixelBufferRef using CVPixelBufferCreateWithBytes and use the Core Video texture cache efficiently. You must use CVPixelBufferCreate and use CVPixelBufferGetBaseAddress.
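A minimal sketch of that recommendation (assuming hypothetical srcBytes/srcBytesPerRow inputs and the attribute dictionary from +pixelBufferSurfaceAttributesOfSize: above): create the buffer, lock it, and copy row by row, because CVPixelBufferGetRowBytes can report a stride different from the source's.

#import <Foundation/Foundation.h>
#import <CoreVideo/CoreVideo.h>

// Sketch: copy existing BGRA bytes into a buffer created with CVPixelBufferCreate.
// srcBytes, srcBytesPerRow, width, height and attributes are assumed inputs.
static CVPixelBufferRef CreateBGRAPixelBuffer(const uint8_t *srcBytes,
                                              size_t srcBytesPerRow,
                                              size_t width,
                                              size_t height,
                                              CFDictionaryRef attributes)
{
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                          kCVPixelFormatType_32BGRA, attributes, &pixelBuffer);
    if (status != kCVReturnSuccess) return NULL;

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t dstBytesPerRow = CVPixelBufferGetRowBytes(pixelBuffer);
    // Copy row by row: the buffer's stride may not match the source's.
    for (size_t row = 0; row < height; row++) {
        memcpy(dst + row * dstBytesPerRow,
               srcBytes + row * srcBytesPerRow,
               MIN(srcBytesPerRow, dstBytesPerRow));
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return pixelBuffer; // caller releases with CVPixelBufferRelease()
}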
I'm currently working on a project that must generate a 9000x6000 pixel collage from 15 photos. The problem I'm facing is that when I finish drawing I get an empty image (the 15 images are not drawn into the context).
This problem only appears on devices with 512MB of RAM, like the iPhone 4/4S or iPad 2, and I think it is caused by the system not being able to allocate enough memory for the app. When I run this line: UIGraphicsBeginImageContextWithOptions(outputSize, opaque, 1.0f); the app's memory usage rises by 216MB and the total memory usage reaches ~240MB of RAM.
What I cannot understand is why the images I'm trying to draw inside the for loop are not always rendered into currentContext. I emphasize always, because in only one out of 30 tests were the images rendered (without changing a single line of code).
Question 2: If this is caused by the system not being able to allocate enough memory, is there any other way to generate this image, such as a CGContextRef backed by a file output stream, so that it doesn't keep the whole image in memory?
This is the code:
CGSize outputSize = CGSizeMake(9000, 6000);
BOOL opaque = YES;
UIGraphicsBeginImageContextWithOptions(outputSize, opaque, 1.0f);
CGContextRef currentContext = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(currentContext, [UIColor blackColor].CGColor);
CGContextFillRect(currentContext, CGRectMake(0, 0, outputSize.width, outputSize.height));
for (NSUInteger i = 0; i < strongSelf.images.count; i++)
{
    @autoreleasepool
    {
        AGAutoCollageImageData *imageData = (AGAutoCollageImageData *)strongSelf.layout.images[i];
        CGRect destinationRect = CGRectMake(floorf(imageData.destinationRectangle.origin.x * scaleXRatio),
                                            floorf(imageData.destinationRectangle.origin.y * scaleYRatio),
                                            floorf(imageData.destinationRectangle.size.width * scaleXRatio),
                                            floorf(imageData.destinationRectangle.size.height * scaleYRatio));
        CGRect sourceRect = imageData.sourceRectangle;
        // Draw the clipped image into its destination rectangle
        CGImageRef clippedImageRef = CGImageCreateWithImageInRect(((ALAsset *)strongSelf.images[i]).defaultRepresentation.fullScreenImage, sourceRect);
        CGContextDrawImage(currentContext, destinationRect, clippedImageRef);
        CGImageRelease(clippedImageRef);
    }
}
// Pull the image from our context
strongSelf.result = UIGraphicsGetImageFromCurrentImageContext();
// Pop the context
UIGraphicsEndImageContext();
P.S.: The console doesn't show anything but memory warnings, which are to be expected.
Sounds like a cool project.
Tactical: also try releasing imageData at the end of every loop iteration (explicitly, after releasing clippedImageRef).
Strategic:
If you really need to support such "low" RAM limits with such "high" input, consider two alternative options:
Compress (obviously): even minimal JPEG compression, invisible to the naked eye, can go a long way.
Split: never "really" merge the image. Keep an array-backed data structure that represents the big image and build utilities for the presentation logic; a tiled-rendering sketch follows below.
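As a rough illustration of the split idea (a sketch only, with a hypothetical drawBand block standing in for the actual collage drawing), the 9000x6000 output could be rendered as 9000x1000 bands, each written to disk before the next one is drawn, so only one band's bitmap lives in memory at a time:

#import <UIKit/UIKit.h>

// Sketch: render a 9000x6000 collage in six horizontal bands. drawBand is a
// hypothetical block that draws whatever source images intersect bandRect,
// using full-collage coordinates.
static void RenderCollageInBands(void (^drawBand)(CGContextRef ctx, CGRect bandRect))
{
    CGSize outputSize = CGSizeMake(9000, 6000);
    CGFloat bandHeight = 1000;
    for (CGFloat y = 0; y < outputSize.height; y += bandHeight) {
        @autoreleasepool {
            UIGraphicsBeginImageContextWithOptions(CGSizeMake(outputSize.width, bandHeight), YES, 1.0f);
            CGContextRef ctx = UIGraphicsGetCurrentContext();
            // Shift the context so the caller can keep drawing in collage coordinates.
            CGContextTranslateCTM(ctx, 0, -y);
            drawBand(ctx, CGRectMake(0, y, outputSize.width, bandHeight));
            UIImage *band = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();

            NSString *name = [NSString stringWithFormat:@"band_%d.jpg", (int)(y / bandHeight)];
            NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:name];
            [UIImageJPEGRepresentation(band, 0.8f) writeToFile:path atomically:YES];
        }
    }
}

Each 9000x1000 band needs about 36MB of bitmap memory instead of the 216MB required by the full-size context.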
Since iOS 6, Apple has provided a way to create a CIImage directly from native YUV data through this call:
initWithCVPixelBuffer:options:
The Core Image Programming Guide mentions this feature:
Take advantage of the support for YUV images in iOS 6.0 and later.
Camera pixel buffers are natively YUV, but most image processing
algorithms expect RGBA data. There is a cost to converting between the
two. Core Image supports reading YUV from CVPixelBuffer objects and
applying the appropriate color transform.
options = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                 @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };
But I am unable to use it properly. I have raw YUV data, so this is what I did:
void *YUV[3] = {data[0], data[1], data[2]};
size_t planeWidth[3] = {width, width/2, width/2};
size_t planeHeight[3] = {height, height/2, height/2};
size_t planeBytesPerRow[3] = {stride, stride/2, stride/2};

CVPixelBufferRef pixelBuffer = NULL;
CVReturn ret = CVPixelBufferCreateWithPlanarBytes(kCFAllocatorDefault,
                                                  width,
                                                  height,
                                                  kCVPixelFormatType_420YpCbCr8PlanarFullRange,
                                                  nil,
                                                  width*height*1.5,
                                                  3,
                                                  YUV,
                                                  planeWidth,
                                                  planeHeight,
                                                  planeBytesPerRow,
                                                  nil,
                                                  nil, nil, &pixelBuffer);

NSDictionary *opt = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                           @(kCVPixelFormatType_420YpCbCr8PlanarFullRange) };

CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:opt];
I am getting nil for image. Any idea what I am missing?
EDIT:
I added base-address lock and unlock calls around the init. I also dumped the contents of the pixel buffer to make sure it properly holds the data, so it looks like something is wrong with the init call itself. The CIImage object still comes back nil.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:opt];
CVPixelBufferUnlockBaseAddress(pixelBuffer,0);
There should be an error message in the console: initWithCVPixelBuffer failed because the CVPixelBufferRef is not IOSurface backed. See Apple's Technical Q&A QA1781 for how to create an IOSurface-backed CVPixelBuffer.
Calling CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() will result in CVPixelBuffers that are not IOSurface-backed...
...To do that, you must specify kCVPixelBufferIOSurfacePropertiesKey in the pixelBufferAttributes dictionary when creating the pixel buffer using CVPixelBufferCreate().
NSDictionary *pixelBufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
[NSDictionary dictionary], (id)kCVPixelBufferIOSurfacePropertiesKey,
nil];
// you may add other keys as appropriate, e.g. kCVPixelBufferPixelFormatTypeKey, kCVPixelBufferWidthKey, kCVPixelBufferHeightKey, etc.
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(... (CFDictionaryRef)pixelBufferAttributes, &pixelBuffer);
Alternatively, you can make IOSurface-backed CVPixelBuffers using CVPixelBufferPoolCreatePixelBuffer() from an existing pixel buffer pool, if the pixelBufferAttributes dictionary provided to CVPixelBufferPoolCreate() includes kCVPixelBufferIOSurfacePropertiesKey.
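Applied to the planar YUV question above, a hedged sketch of that advice (reusing the asker's data, width, height and stride as assumed inputs) is to create the IOSurface-backed buffer with CVPixelBufferCreate and then copy each plane into it:

#import <CoreVideo/CoreVideo.h>
#import <CoreImage/CoreImage.h>

// Sketch: create an IOSurface-backed planar buffer, copy the three source
// planes into it, then wrap it in a CIImage.
static CIImage *CreateCIImageFromPlanarYUV(void *data[3], size_t width, size_t height, size_t stride)
{
    NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                       kCVPixelFormatType_420YpCbCr8PlanarFullRange,
                                       (__bridge CFDictionaryRef)attrs, &pixelBuffer);
    if (ret != kCVReturnSuccess) return nil;

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    size_t srcStride[3] = { stride, stride / 2, stride / 2 };
    size_t planeRows[3] = { height, height / 2, height / 2 };
    for (size_t plane = 0; plane < 3; plane++) {
        uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, plane);
        size_t dstStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, plane);
        for (size_t row = 0; row < planeRows[plane]; row++) {
            memcpy(dst + row * dstStride,
                   (uint8_t *)data[plane] + row * srcStride[plane],
                   MIN(srcStride[plane], dstStride));
        }
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CVPixelBufferRelease(pixelBuffer); // CIImage retains what it needs
    return image;
}

The copy costs something, but it is what makes the buffer IOSurface backed, which is what initWithCVPixelBuffer:options: requires.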
I am working on a similar problem and kept finding that same quote from Apple without any further information on how to work in a YUV color space. I came upon the following:
By default, Core Image assumes that processing nodes are 128 bits-per-pixel, linear light, premultiplied RGBA floating-point values that use the GenericRGB color space. You can specify a different working color space by providing a Quartz 2D CGColorSpace object. Note that the working color space must be RGB-based. If you have YUV data as input (or other data that is not RGB-based), you can use ColorSync functions to convert to the working color space. (See Quartz 2D Programming Guide for information on creating and using CGColorspace objects.)
With 8-bit YUV 4:2:2 sources, Core Image can process 240 HD layers per gigabyte. Eight-bit YUV is the native color format for video source such as DV, MPEG, uncompressed D1, and JPEG. You need to convert YUV color spaces to an RGB color space for Core Image.
I note that there are no YUV color spaces, only Gray and RGB; and their calibrated cousins. I'm not sure how to convert the color space yet, but will certainly report here if I find out.
In an iOS app, I have an NSData object that is a JPEG file that needs to be resized to a given resolution (2048x2048) and needs the JPEG quality set to 75%. These need to be set while retaining EXIF data in the file. The photo is not in the camera roll -- it was pulled over the network from a DSLR camera and is just temporarily stored with the app. If the image takes a trip through UIImage, the EXIF data is lost. How can I perform a resize and set quality without losing EXIF? Or is there a way to strip the EXIF data before the conversion and a way to add it back when done?
You may try CGImageSourceCreateThumbnailAtIndex and CGImageSourceCopyPropertiesAtIndex to resize an NSData object containing a JPEG image without losing EXIF data.
I was inspired by "Problem setting exif data for an image".
The following is my code for the same purpose.
Cheers
+ (NSData *)JPEGRepresentationSavedMetadataWithImage:(NSData *)imageData compressionQuality:(CGFloat)compressionQuality maxSize:(CGFloat)maxSize
{
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
    CFDictionaryRef options = (__bridge CFDictionaryRef)@{(id)kCGImageSourceCreateThumbnailWithTransform: (id)kCFBooleanTrue,
                                                          (id)kCGImageSourceCreateThumbnailFromImageIfAbsent: (id)kCFBooleanTrue,
                                                          (id)kCGImageSourceThumbnailMaxPixelSize: [NSNumber numberWithDouble:maxSize], // the maximum width and height in pixels of the thumbnail
                                                          (id)kCGImageDestinationLossyCompressionQuality: [NSNumber numberWithDouble:compressionQuality]};
    CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(source, 0, options); // create the scaled image

    CFStringRef UTI = kUTTypeJPEG;
    NSMutableData *destData = [NSMutableData data];
    CGImageDestinationRef destination = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)destData, UTI, 1, NULL);
    if (!destination) {
        NSLog(@"Failed to create image destination");
    }

    CFDictionaryRef sourceProperties = CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
    CGImageDestinationAddImage(destination, thumbnail, sourceProperties); // copy all metadata from source to destination
    if (!CGImageDestinationFinalize(destination)) {
        NSLog(@"Failed to create data from image destination");
    }

    CFRelease(sourceProperties);
    CFRelease(destination);
    CFRelease(source);
    CFRelease(thumbnail);
    return [destData copy];
}
I faced the same problem. Now I can upload files with EXIF data, and you can also compress the photo if you need to; this solved the issue for me:
// Get your image.
NSURL *url = [NSURL URLWithString:@"http://somewebsite.com/path/to/some/image.jpg"];
NSData *loDataFotoOriginal = [NSData dataWithContentsOfURL:url];
UIImage *loImgPhoto = [UIImage imageWithData:loDataFotoOriginal];

// Get your metadata (includes the EXIF data).
CGImageSourceRef loImageOriginalSource = CGImageSourceCreateWithData((__bridge CFDataRef)loDataFotoOriginal, NULL);
NSDictionary *loDicMetadata = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(loImageOriginalSource, 0, NULL);

// Set your compression quality (0.0 to 1.0).
NSMutableDictionary *loDicMutableMetadata = [loDicMetadata mutableCopy];
[loDicMutableMetadata setObject:@(lfCompressionQualityValue) forKey:(__bridge NSString *)kCGImageDestinationLossyCompressionQuality];

// Create an image destination.
NSMutableData *loNewImageDataWithExif = [NSMutableData data];
CGImageDestinationRef loImgDestination = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)loNewImageDataWithExif, CGImageSourceGetType(loImageOriginalSource), 1, NULL);

// Add your image to the destination.
CGImageDestinationAddImage(loImgDestination, loImgPhoto.CGImage, (__bridge CFDictionaryRef)loDicMutableMetadata);

// Finalize the destination.
if (CGImageDestinationFinalize(loImgDestination))
{
    NSLog(@"Successful image creation.");
    // Process the image rendering, adjustment data creation and finalize the asset edit.
    // Upload the photo with its EXIF metadata.
    [self myUploadMethod:loNewImageDataWithExif];
}
else
{
    NSLog(@"Error -> failed to finalize the image.");
}

CFRelease(loImageOriginalSource);
CFRelease(loImgDestination);
I upvoted Greener Chen's answer, but this is how I implemented it in Swift 3.
A few comments:
My function takes a JPEG buffer and returns a JPEG buffer. It also checks whether any resizing is required (by checking the maximum dimension of the source) and simply returns nil if there is nothing to do. You might want to adapt this to your use case.
I don't think you can pass a destination property like kCGImageDestinationLossyCompressionQuality to CGImageSourceCreateThumbnailAtIndex(), which is a CGImageSource function, but I never tried it.
Please note that compression is applied to the destination image by adding the relevant key (kCGImageDestinationLossyCompressionQuality) to the retained set of source metadata, which is then passed as an option to CGImageDestinationAddImage.
What confused me initially was that, although we provide the full source-image metadata to the resized destination image, CGImageDestinationAddImage() is smart enough to ignore any dimension data (width and height) from the source and replace it automatically with the correct, resized dimensions. So PixelHeight and PixelWidth ("root" metadata), as well as PixelXDimension and PixelYDimension (EXIF), are not carried over from the source and are set properly to the resized image's dimensions.
class func resizeImage(imageData: Data, maxResolution: Int, compression: CGFloat) -> Data? {
    // create an image source from the jpeg data
    if let myImageSource = CGImageSourceCreateWithData(imageData as CFData, nil) {
        // get source properties so we retain metadata (EXIF) for the downsized image
        if var metaData = CGImageSourceCopyPropertiesAtIndex(myImageSource, 0, nil) as? [String: Any],
            let width = metaData[kCGImagePropertyPixelWidth as String] as? Int,
            let height = metaData[kCGImagePropertyPixelHeight as String] as? Int {
            let srcMaxResolution = max(width, height)
            // if the max resolution is exceeded, scale the image down to the new resolution
            if srcMaxResolution >= maxResolution {
                let scaleOptions = [kCGImageSourceThumbnailMaxPixelSize as String: maxResolution,
                                    kCGImageSourceCreateThumbnailFromImageAlways as String: true] as [String: Any]
                if let scaledImage = CGImageSourceCreateThumbnailAtIndex(myImageSource, 0, scaleOptions as CFDictionary) {
                    // add the compression ratio to the destination options
                    metaData[kCGImageDestinationLossyCompressionQuality as String] = compression
                    // create the new jpeg
                    let newImageData = NSMutableData()
                    if let cgImageDestination = CGImageDestinationCreateWithData(newImageData, "public.jpeg" as CFString, 1, nil) {
                        CGImageDestinationAddImage(cgImageDestination, scaledImage, metaData as CFDictionary)
                        CGImageDestinationFinalize(cgImageDestination)
                        return newImageData as Data
                    }
                }
            }
        }
    }
    return nil
}
You can use a utility like ExifTool to strip and restore the EXIF. This is a cross-platform command-line utility. Below are the appropriate commands to do what you want.
To remove the EXIF:
exiftool -exif= image.jpg
To restore the EXIF again after editing the image:
exiftool -tagsfromfile image.jpg_original -exif image.jpg
In this example I have taken advantage of the "_original" backup automatically generated by ExifTool.
I'm working on a photo app for iPhone/iPod.
I'd like to get the raw data from a large image in an iPhone app, perform some pixel manipulation on it, and write it back to disk/the gallery.
So far I've been converting the UIImage obtained from the image picker to an unsigned char pointer using the following technique:
CGImageRef imageBuff = [imageBuffer CGImage]; // imageBuffer is a UIImage *
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(imageBuff));
unsigned char *input_image = (unsigned char *)CFDataGetBytePtr(pixelData);

// height & width represent the dimensions of the input image
unsigned char *resultant = (unsigned char *)malloc(height * 4 * width);

for (int i = 0; i < height; i++)
{
    for (int j = 0; j < 4 * width; j += 4)
    {
        resultant[i * 4 * width + j + 0] = input_image[i * 4 * width + j + 0];
        resultant[i * 4 * width + j + 1] = input_image[i * 4 * width + j + 1];
        resultant[i * 4 * width + j + 2] = input_image[i * 4 * width + j + 2];
        resultant[i * 4 * width + j + 3] = 255;
    }
}
CFRelease(pixelData);
I'm doing all operations on resultant and writing it back to disk in the original resolution using:
NSData* data = UIImagePNGRepresentation(image);
[data writeToFile:path atomically:YES];
I'd like to know:
is the transformation actually lossless?
if there's a 20-22 MP image at hand... is it wise to do this operation in a background thread? (chances of crashing etc... I'd like to know the best practice for doing this).
is there a better method for implementing this (getting the pixel data is a necessity here)?
Yes, the method is lossless, but I am not sure about 20-22 MP images. I think the iPhone is not a suitable choice at all if you want to edit images that big!
I have been successful in capturing and editing images up to 22 MP using this technique.
I tested this on an iPhone 4S and it worked fine. However, some of the effects I'm using require Core Image filters, and it seems CIFilters do not support images larger than 16 MP; the filters return a blank image when used on anything bigger.
I'd still like people to comment on lossless large-image editing strategies in iOS.
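On the background-thread part of the question, one common pattern (a sketch only, not specific to any particular filter chain) is to do the byte-level work on a background queue inside an autorelease pool and come back to the main queue only to touch the UI:

#import <UIKit/UIKit.h>

// Sketch: run heavy pixel manipulation off the main thread. processImage is a
// hypothetical block that does the byte-level work and returns the result.
static void ProcessLargeImageInBackground(UIImage *input,
                                          UIImage *(^processImage)(UIImage *),
                                          void (^completion)(UIImage *))
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        @autoreleasepool {
            UIImage *output = processImage(input);
            dispatch_async(dispatch_get_main_queue(), ^{
                completion(output); // UI updates must happen on the main queue
            });
        }
    });
}

This does not reduce peak memory, so memory warnings are still possible with 20-22 MP images; it only keeps the main thread responsive while the work runs.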