iOS lossless image editing

I'm working on a photo app for iPhone/iPod.
I'd like to get the raw pixel data from a large image in an iPhone app, perform some pixel manipulation on it, and write it back to disk/the gallery.
So far I've been converting the UIImage obtained from the image picker to an unsigned char buffer using the following technique:
CGImageRef imageBuff = [imageBuffer CGImage]; // imageBuffer is a UIImage *
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(imageBuff));
unsigned char *input_image = (unsigned char *)CFDataGetBytePtr(pixelData);
// height & width are the dimensions of the input image
unsigned char *resultant = (unsigned char *)malloc(height * 4 * width);
for (int i = 0; i < height; i++)
{
    for (int j = 0; j < 4 * width; j += 4)
    {
        // copy the three colour components unchanged, force alpha to opaque
        resultant[i * 4 * width + j + 0] = input_image[i * 4 * width + j + 0];
        resultant[i * 4 * width + j + 1] = input_image[i * 4 * width + j + 1];
        resultant[i * 4 * width + j + 2] = input_image[i * 4 * width + j + 2];
        resultant[i * 4 * width + j + 3] = 255;
    }
}
CFRelease(pixelData);
I'm doing all operations on resultant and writing it back to disk in the original resolution using:
NSData* data = UIImagePNGRepresentation(image);
[data writeToFile:path atomically:YES];
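For completeness, the edited bytes go back into a UIImage roughly like this (a sketch of the step not shown above; the RGBA, alpha-last layout is an assumption and has to match the source image's actual byte order):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(resultant, width, height, 8, 4 * width,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef cgImage = CGBitmapContextCreateImage(context); // wraps the edited pixels
UIImage *image = [UIImage imageWithCGImage:cgImage];      // the image passed to UIImagePNGRepresentation
CGImageRelease(cgImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);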
I'd like to know:
Is the transformation actually lossless?
If there's a 20-22 MP image at hand, is it wise to do this operation on a background thread? (Chances of crashing, etc. I'd like to know the best practice here.)
Is there a better method for implementing this? (Getting the pixel data is a necessity here.)

Yes, the method is lossless, but I am not sure about 20-22 MP images. I don't think the iPhone is a suitable choice at all if you want to edit images that large!

I have been successful in capturing and editing images up to 22 MP using this technique.
I tested this on an iPhone 4S and it worked fine. However, some of the effects I'm using require Core Image filters, and it seems CIFilters do not support images larger than 16 MP: the filters return a blank image when used on anything bigger.
I'd still like people to comment on lossless large-image editing strategies in iOS.
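On the background-thread part of the question, a general sketch (not from this thread; writeEditedImageAsync, editedImage and path are hypothetical names) is to do the decode / pixel loop / encode on a background queue inside an autorelease pool and touch UIKit only on the main queue, so the large temporaries a 20-22 MP edit produces are released promptly:
#import <UIKit/UIKit.h>

static void writeEditedImageAsync(UIImage *editedImage, NSString *path,
                                  void (^completion)(BOOL ok)) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        BOOL ok = NO;
        @autoreleasepool {
            // Encode off the main thread; for a 20-22 MP image this allocates tens of MB.
            NSData *data = UIImagePNGRepresentation(editedImage);
            ok = [data writeToFile:path atomically:YES];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) completion(ok); // update the UI here
        });
    });
}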

Related

Issue in computing the histogram of an image using vImageHistogramCalculation

I am using vImageHistogramCalculation in my current application to calculate the histogram of an image, and I am getting EXC_BAD_ACCESS in some cases. The cases are described below.
- (void)histogramForImage:(UIImage *)image {
    vImage_Buffer inBuffer;

    CGImageRef img = image.CGImage;
    CGDataProviderRef inProvider = CGImageGetDataProvider(img);
    CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);

    inBuffer.width = CGImageGetWidth(img);
    inBuffer.height = CGImageGetHeight(img);
    inBuffer.rowBytes = CGImageGetBytesPerRow(img);
    inBuffer.data = (void *)CFDataGetBytePtr(inBitmapData);

    vImagePixelCount histogram[4][8] = {{0}};
    vImagePixelCount *histogramPointers[4] = { &histogram[0][0], &histogram[1][0], &histogram[2][0], &histogram[3][0] };

    vImage_Error error = vImageHistogramCalculation_ARGBFFFF(&inBuffer, histogramPointers, 8, 0, 255, kvImageNoFlags);
    if (error) {
        NSLog(@"error %ld", error);
    }

    CFRelease(inBitmapData); // release the copied data; the provider is not owned by us
}
When I use an image from the iPhone Camera Roll in PNG form, manually put into the bundle, the above code works fine.
When I run the same code on an image from the iPhone Camera Roll in JPG format, I get an EXC_BAD_ACCESS error.
I also tried getting an image from the Camera Roll using the Photos framework and passing it to the same code, and again I get EXC_BAD_ACCESS.
What I actually need is to compute the histogram of every image in the iPhone Camera Roll, so I can't work out why the code works for one image format and fails for another. Is there any other reason for the crash?
EDIT 1: It turns out it's not about the image format; it works fine for some JPG images too, but still crashes in some cases. How should I figure that out?
Reference:
https://developer.apple.com/library/mac/documentation/Performance/Reference/vImage_histogram/#//apple_ref/c/func/vImageHistogramCalculation_ARGBFFFF
Compute the histogram of an image using vImageHistogramCalculation
vImageHistogramCalculation_ARGBFFFF is for four-channel floating-point data. The chances are extremely high that the data you are getting out of the data provider is 8-bit integer RGB or RGBA data. Check the CGImageRef for the storage format of the image data.
If you want a specific data format out of a CGImageRef, you can call vImageBuffer_InitWithCGImage().
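For example, a minimal sketch of that route (requires iOS 8+; converting to 8-bit ARGB and using the matching integer histogram call is my assumption, not code from this thread):
#import <Accelerate/Accelerate.h>

- (void)histogramForImage:(UIImage *)image {
    // Ask vImage to convert whatever the CGImage holds into a known 8-bit ARGB layout.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    vImage_CGImageFormat format = {
        .bitsPerComponent = 8,
        .bitsPerPixel = 32,
        .colorSpace = colorSpace,
        .bitmapInfo = (CGBitmapInfo)kCGImageAlphaFirst, // ARGB8888
        .version = 0,
        .decode = NULL,
        .renderingIntent = kCGRenderingIntentDefault,
    };

    vImage_Buffer buffer;
    vImage_Error err = vImageBuffer_InitWithCGImage(&buffer, &format, NULL,
                                                    image.CGImage, kvImageNoFlags);
    CGColorSpaceRelease(colorSpace);
    if (err != kvImageNoError) { return; }

    // 256 bins per channel for 8-bit data; the integer variant matches the buffer format.
    vImagePixelCount histogram[4][256] = {{0}};
    vImagePixelCount *channels[4] = { histogram[0], histogram[1], histogram[2], histogram[3] };
    err = vImageHistogramCalculation_ARGB8888(&buffer, channels, kvImageNoFlags);
    if (err != kvImageNoError) {
        NSLog(@"histogram error %ld", err);
    }

    free(buffer.data); // vImageBuffer_InitWithCGImage allocated the pixel storage
}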

CImg in embedded hardware

I load a JPG into embedded system memory on an STM32 board with assembly code via .incbin and copy the data to an alternate buffer via std::copy. The image is displayed on an attached LCD screen and is decompressed with picoimage, and all is well. I want to apply image effects beforehand, so I use CImg, which seems small and portable; compared to other libraries I simply have to place the header in the working directory. I have the grayscale code below, but I hit the same issue as when I altered the code by hand: the screen appears black, and I can't seem to find a proper fix. Are there any suggestions? For some reason I suspect CImg is not aware it is a JPG file and loads and operates on the whole compressed data. Is there a workaround?
CImg<uint8_t> image(_buffer, _panel->getWidth(), _panel->getHeight(), 1, 1, true);
int width  = image.width();
int height = image.height();
//int depth = image.depth();
//New grayscale images.
//CImg<unsigned char> gray1(width,height,depth,1);
//CImg<unsigned char> gray2(width,height,depth,1);
unsigned char r, g, b;
unsigned char gr1 = 0;
unsigned char gr2 = 0;
/* Convert RGB image to grayscale image */
for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
        //Return a pointer to a located pixel value.
        r = image(i, j, 0, 0); // First channel RED
        g = image(i, j, 0, 1); // Second channel GREEN
        b = image(i, j, 0, 2); // Third channel BLUE
        //PAL and NTSC
        //Y = 0.299*R + 0.587*G + 0.114*B
        gr1 = round(0.299 * ((double)r) + 0.587 * ((double)g) + 0.114 * ((double)b));
        //HDTV
        //Y = 0.2126*R + 0.7152*G + 0.0722*B
        gr2 = round(0.2126 * ((double)r) + 0.7152 * ((double)g) + 0.0722 * ((double)b));
        image(i, j, 0, 0) = gr1;
        //image(i,j,0,0) = gr2;
    }
}
CImg does not decompress JPEG images itself; it uses your system's JPEG library. If you are using a Debian derivative, for example, you'll need apt-get install libjpeg-turbo8-dev before compiling. Have a look through CImg and make sure it's picking up the headers and linking correctly.

Fastest way on iOS 7+ to get CVPixelBufferRef from BGRA bytes

What is the fastest way on iOS 7+ to convert raw bytes of BGRA / UIImage data to a CVPixelBufferRef? The bytes are 4 bytes per pixel in BGRA order.
Is there any chance of a direct cast here vs. copying data into a secondary storage?
I've considered CVPixelBufferCreateWithBytes but I have a hunch it is copying memory...
You have to use CVPixelBufferCreate because CVPixelBufferCreateWithBytes will not allow fast conversion to an OpenGL texture using the Core Video texture cache. I'm not sure why this is the case, but that's the way things are at least as of iOS 8. I tested this with the profiler, and CVPixelBufferCreateWithBytes causes a texSubImage2D call to be made every time a Core Video texture is accessed from the cache.
CVPixelBufferCreate will do funny things if the width is not a multiple of 16, so if you plan on doing CPU operations on the memory returned by CVPixelBufferGetBaseAddress, and you want it laid out like a CGImage or CGBitmapContext, you will need to pad your width up to a multiple of 16, or make sure you use CVPixelBufferGetBytesPerRow and pass that to any CGBitmapContext you create.
I tested all combinations of dimensions of width and height from 16 to 2048, and as long as they were padded to the next highest multiple of 16, the memory was laid out properly.
+ (NSInteger)alignmentForPixelBufferDimension:(NSInteger)dim
{
    static const NSInteger modValue = 16;
    NSInteger mod = dim % modValue;
    return (mod == 0 ? dim : (dim + (modValue - mod)));
}

+ (NSDictionary *)pixelBufferSurfaceAttributesOfSize:(CGSize)size
{
    return @{ (NSString *)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
              (NSString *)kCVPixelBufferWidthKey: @(size.width),
              (NSString *)kCVPixelBufferHeightKey: @(size.height),
              (NSString *)kCVPixelBufferBytesPerRowAlignmentKey: @(size.width * 4),
              (NSString *)kCVPixelBufferExtendedPixelsLeftKey: @(0),
              (NSString *)kCVPixelBufferExtendedPixelsRightKey: @(0),
              (NSString *)kCVPixelBufferExtendedPixelsTopKey: @(0),
              (NSString *)kCVPixelBufferExtendedPixelsBottomKey: @(0),
              (NSString *)kCVPixelBufferPlaneAlignmentKey: @(0),
              (NSString *)kCVPixelBufferCGImageCompatibilityKey: @(YES),
              (NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey: @(YES),
              (NSString *)kCVPixelBufferOpenGLESCompatibilityKey: @(YES),
              (NSString *)kCVPixelBufferIOSurfacePropertiesKey: @{ @"IOSurfaceCGBitmapContextCompatibility": @(YES),
                                                                   @"IOSurfaceOpenGLESFBOCompatibility": @(YES),
                                                                   @"IOSurfaceOpenGLESTextureCompatibility": @(YES) } };
}
Interestingly enough, if you ask for a texture from the Core Video cache with dimensions smaller than the padded dimensions, it will return a texture immediately. Somehow underneath it is able to reference the original texture, but with a smaller width and height.
To sum up, you cannot wrap existing memory with a CVPixelBufferRef using CVPixelBufferCreateWithBytes and use the Core Video texture cache efficiently. You must use CVPixelBufferCreate and use CVPixelBufferGetBaseAddress.
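For reference, a minimal sketch of that approach (the function name PixelBufferFromBGRABytes is made up, and the attribute set is trimmed to the essentials):
#import <CoreVideo/CoreVideo.h>

static CVPixelBufferRef PixelBufferFromBGRABytes(const uint8_t *bytes,
                                                 size_t width, size_t height,
                                                 size_t srcBytesPerRow) {
    NSDictionary *attrs = @{ (NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{},
                             (NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES };
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                          kCVPixelFormatType_32BGRA,
                                          (__bridge CFDictionaryRef)attrs, &pixelBuffer);
    if (status != kCVReturnSuccess) { return NULL; }

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    uint8_t *dst = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    for (size_t row = 0; row < height; row++) {
        // Copy row by row: the buffer's own stride may be padded past width * 4.
        memcpy(dst + row * dstBytesPerRow, bytes + row * srcBytesPerRow, width * 4);
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return pixelBuffer; // the caller releases it with CVPixelBufferRelease
}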

How to get Image Orientation without loading the UIImage?

My question is quite straightforward: I want to get the orientation of an image, but I don't want to use [UIImage imageWithData:] because it consumes memory and is potentially slow. So, what would be the solution? The images are saved in the app's documents folder rather than ALAssetsLibrary.
PS: happy new year guys!
You need to use the lower-level Quartz functions. Read the first 2K or 4K of the image file into an NSData object, pass that data to an incremental image source, and ask it for the orientation. If you don't get it, read in a larger chunk. JPGs almost always have the metadata in the first 2K of data (maybe 4K, it's been a while since I wrote this code):
{
    CGImageSourceRef imageSourceRef = CGImageSourceCreateIncremental(NULL);
    CGImageSourceUpdateData(imageSourceRef, (__bridge CFDataRef)data, NO);

    CFDictionaryRef dict = CGImageSourceCopyPropertiesAtIndex(imageSourceRef, 0, NULL);
    if (dict) {
        //CFShow(dict);
        self.properties = CFBridgingRelease(dict);
        if (!self.orientation) {
            self.orientation = [[self.properties objectForKey:@"Orientation"] integerValue];
        }
    }
    CFRelease(imageSourceRef);
}
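Hypothetical setup for the snippet above (path is assumed): read only the first few kilobytes from disk instead of decoding the whole file, and retry with a larger chunk if the Orientation key is missing:
NSFileHandle *handle = [NSFileHandle fileHandleForReadingAtPath:path];
NSData *data = [handle readDataOfLength:4096]; // usually enough for JPEG EXIF headers
[handle closeFile];
// ...then feed `data` to CGImageSourceUpdateData as shown above.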

iOS 6: How to use the YUV to RGB conversion from CVPixelBufferRef to CIImage

From iOS 6, Apple has provided support for creating a CIImage from native YUV data through this call:
initWithCVPixelBuffer:options:
In the Core Image Programming Guide, they mention this feature:
Take advantage of the support for YUV images in iOS 6.0 and later. Camera pixel buffers are natively YUV but most image processing algorithms expect RGBA data. There is a cost to converting between the two. Core Image supports reading YUV from CVPixelBuffer objects and applying the appropriate color transform.
options = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                 @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };
But I am unable to use it properly. I have raw YUV data, so this is what I did:
void *YUV[3] = { data[0], data[1], data[2] };
size_t planeWidth[3] = { width, width / 2, width / 2 };
size_t planeHeight[3] = { height, height / 2, height / 2 };
size_t planeBytesPerRow[3] = { stride, stride / 2, stride / 2 };

CVPixelBufferRef pixelBuffer = NULL;
CVReturn ret = CVPixelBufferCreateWithPlanarBytes(kCFAllocatorDefault,
                                                  width,
                                                  height,
                                                  kCVPixelFormatType_420YpCbCr8PlanarFullRange,
                                                  nil,
                                                  width * height * 1.5,
                                                  3,
                                                  YUV,
                                                  planeWidth,
                                                  planeHeight,
                                                  planeBytesPerRow,
                                                  nil,
                                                  nil, nil, &pixelBuffer);

NSDictionary *opt = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                           @(kCVPixelFormatType_420YpCbCr8PlanarFullRange) };
CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:opt];
I am getting nil for image. Any idea what I am missing?
EDIT:
I added lock and unlock of the base address around the call. I also dumped the contents of the pixel buffer to ensure it properly holds the data. It looks like something is wrong with the init call only; the CIImage object is still nil.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:opt];
CVPixelBufferUnlockBaseAddress(pixelBuffer,0);
There should be an error message in the console: initWithCVPixelBuffer failed because the CVPixelBufferRef is not IOSurface backed. See Apple's Technical Q&A QA1781 for how to create an IOSurface-backed CVPixelBuffer.
Calling CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() will result in CVPixelBuffers that are not IOSurface-backed...
...To do that, you must specify kCVPixelBufferIOSurfacePropertiesKey in the pixelBufferAttributes dictionary when creating the pixel buffer using CVPixelBufferCreate().
NSDictionary *pixelBufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                       [NSDictionary dictionary], (id)kCVPixelBufferIOSurfacePropertiesKey,
                                       nil];
// you may add other keys as appropriate, e.g. kCVPixelBufferPixelFormatTypeKey, kCVPixelBufferWidthKey, kCVPixelBufferHeightKey, etc.
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(... (CFDictionaryRef)pixelBufferAttributes, &pixelBuffer);
Alternatively, you can make IOSurface-backed CVPixelBuffers using CVPixelBufferPoolCreatePixelBuffer() from an existing pixel buffer pool, if the pixelBufferAttributes dictionary provided to CVPixelBufferPoolCreate() includes kCVPixelBufferIOSurfacePropertiesKey.
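A sketch of what QA1781 describes, applied to the question's three planes (whether Core Video will hand out an IOSurface-backed buffer in the 3-plane full-range format is worth verifying; the bi-planar kCVPixelFormatType_420YpCbCr8BiPlanarFullRange is what the camera uses and may be a safer target for CIImage):
NSDictionary *attrs = @{ (NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer = NULL;
CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                   kCVPixelFormatType_420YpCbCr8PlanarFullRange,
                                   (__bridge CFDictionaryRef)attrs, &pixelBuffer);
if (ret == kCVReturnSuccess) {
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    for (size_t plane = 0; plane < 3; plane++) {
        // Copy each of the question's YUV planes into the buffer, honouring its stride.
        uint8_t *dst = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, plane);
        size_t dstStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, plane);
        size_t rows = CVPixelBufferGetHeightOfPlane(pixelBuffer, plane);
        for (size_t row = 0; row < rows; row++) {
            memcpy(dst + row * dstStride,
                   (uint8_t *)YUV[plane] + row * planeBytesPerRow[plane],
                   MIN(dstStride, planeBytesPerRow[plane]));
        }
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer];
}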
I am working on a similar problem and kept finding that same quote from Apple without any further information on how to work in a YUV color space. I came upon the following:
By default, Core Image assumes that processing nodes are 128 bits-per-pixel, linear light, premultiplied RGBA floating-point values that use the GenericRGB color space. You can specify a different working color space by providing a Quartz 2D CGColorSpace object. Note that the working color space must be RGB-based. If you have YUV data as input (or other data that is not RGB-based), you can use ColorSync functions to convert to the working color space. (See Quartz 2D Programming Guide for information on creating and using CGColorspace objects.)
With 8-bit YUV 4:2:2 sources, Core Image can process 240 HD layers per gigabyte. Eight-bit YUV is the native color format for video source such as DV, MPEG, uncompressed D1, and JPEG. You need to convert YUV color spaces to an RGB color space for Core Image.
I note that there are no YUV color spaces, only Gray and RGB; and their calibrated cousins. I'm not sure how to convert the color space yet, but will certainly report here if I find out.
