Error when accessing raw pixel data of resized image - iOS

I have some problems accessing camera images (or even images from the photo album).
After resizing the UIImage (I tested several different resize methods; they all lead to the same error), I want to access every individual pixel in order to hand it over to a complex algorithm.
The problem is that there is often a bytesPerRow value that doesn't match the image width (e.g. width * 4) when accessing the raw pixel data via CGImageGetDataProvider, resulting in an EXC_BAD_ACCESS error.
Maybe we have an iOS bug here…
Nonetheless, here is the code:
// UIImage capturedImage from Camera
CGImageRef capturedImageRef = capturedImage.CGImage;
// getting bits per component from capturedImage
size_t bitsPerComponentOfCapturedImage = CGImageGetBitsPerComponent(capturedImageRef);
CGImageAlphaInfo alphaInfoOfCapturedImage = CGImageGetAlphaInfo(capturedImageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// calculate new size from interface data.
// with respect to aspect ratio
// ...
// newWidth = XYZ;
// newHeight = XYZ;
CGContextRef context = CGBitmapContextCreate(NULL, newWidth, newHeight, bitsPerComponentOfCapturedImage, 0, colorSpace, alphaInfoOfCapturedImage);
// I also tried passing getBytesPerRow to CGBitmapContextCreate, resulting in the same error
// if image was rotated
if (capturedImage.imageOrientation == UIImageOrientationRight) {
    CGContextRotateCTM(context, -M_PI_2);
    CGContextTranslateCTM(context, -newHeight, 0.0f);
}
// draw on new context with new size
CGContextDrawImage(context, CGRectMake(0, 0, newWidth, newHeight), capturedImage.CGImage);
CGImageRef scaledImage = CGBitmapContextCreateImage(context);
// release
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
theImage = [UIImage imageWithCGImage: scaledImage];
CGImageRelease(scaledImage);
After that, I want to access the scaled image with:
CGImageRef imageRef = theImage.CGImage;
NSData *data = (NSData *) CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
unsigned char *pixels = (unsigned char *)[data bytes];
// create a new image from the modified pixel data
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, [data length], NULL);
NSLog(@"bytesPerRow: %f ", (float)bytesPerRow);
NSLog(@"Image width: %f ", (float)width);
NSLog(@"Image height: %f ", (float)height);
// manipulate the individual pixels
for (int i = 0; i < [data length]; i += 4) {
    // accessing (float) pixels[i];
    // accessing (float) pixels[i+1];
    // accessing (float) pixels[i+2];
}
So, for example, when I access an image of 511x768 pixels and scale it down to 290x436, I get the following output:
Image width: 290.000000
Image height: 436.000000
bitsPerComponent: 8.000000
bitsPerPixel: 32.000000
bytesPerRow: 1184.000000
and you can clearly see that the bytesPerRow (although chosen automatically by Cocoa) does not match the image width: 290 * 4 = 1160 bytes, not 1184.
I would appreciate any help.
Using iOS SDK 4.3 on Xcode 4

You are ignoring possible line padding, hence receiving invalid results. Add the following code and replace your loop:
size_t bytesPerPixel = bitsPerPixel / bitsPerComponent;
// calculate the padding just to see what is happening
size_t padding = bytesPerRow - (width * bytesPerPixel);
size_t offset = 0;
// manipulate the individual pixels
while (offset < [data length])
{
    for (size_t x = 0; x < width * bytesPerPixel; x += bytesPerPixel)
    {
        // accessing (float) pixels[offset+x];
        // accessing (float) pixels[offset+x+1];
        // accessing (float) pixels[offset+x+2];
    }
    offset += bytesPerRow;
}
Addendum: The underlying reason for the row padding is aligning each row in memory so it can be accessed faster; in the example above, 290 * 4 = 1160 bytes is rounded up to 1184, the next multiple of 32 bytes. It is indeed very common and done purely for optimization purposes.
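If the algorithm downstream really needs a tightly packed buffer, a minimal sketch (reusing the variables from the answer above) is to copy each row while dropping its padding:
unsigned char *packed = malloc(width * height * bytesPerPixel);
for (size_t row = 0; row < height; row++) {
    // copy only the visible pixels of each row, skipping the padding bytes
    memcpy(packed + row * width * bytesPerPixel,
           pixels + row * bytesPerRow,
           width * bytesPerPixel);
}
// packed can now be indexed with a stride of width * bytesPerPixel; free(packed) when done
After the copy, the original i += 4 loop from the question works unchanged on packed.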

Related

Create CVPixelBuffer with pixel data, but the final image is distorted

I get pixels via the OpenGL ES method glReadPixels (or another way), then create a CVPixelBuffer (with or without a CGImage) for video recording, but the final picture is distorted. This happens on the iPhone 6 when I test on the iPhone 5c, 5s, and 6.
Here is the code:
CGSize viewSize=self.glView.bounds.size;
NSInteger myDataLength = viewSize.width * viewSize.height * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, viewSize.width, viewSize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for (int y = 0; y < viewSize.height; y++)
{
    for (int x = 0; x < viewSize.width * 4; x++)
    {
        buffer2[(int)((viewSize.height - 1 - y) * viewSize.width * 4 + x)] = buffer[(int)(y * 4 * viewSize.width + x)];
    }
}
free(buffer);
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * viewSize.width;
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGImageRef imageRef = CGImageCreate(viewSize.width , viewSize.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
//UIImage *photo = [UIImage imageWithCGImage:imageRef];
int width = CGImageGetWidth(imageRef);
int height = CGImageGetHeight(imageRef);
CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, _recorder.pixelBufferAdaptor.pixelBufferPool, &pixelBuffer);
NSAssert((status == kCVReturnSuccess && pixelBuffer != NULL), @"create pixel buffer failed.");
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pixelBuffer);
NSParameterAssert(pxdata != NULL);//CGContextRef
CGContextRef context = CGBitmapContextCreate(pxdata,
width,
height,
CGImageGetBitsPerComponent(imageRef),
CGImageGetBytesPerRow(imageRef),
colorSpaceRef,
kCGImageAlphaPremultipliedLast);
NSParameterAssert(context);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CGColorSpaceRelease(colorSpaceRef);
CGContextRelease(context);
CGImageRelease(imageRef);
free(buffer2);
//CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
// ...
CVPixelBufferRelease(pixelBuffer);
NOTE: this answer addresses the overall problem with the image, not the specific code.
This sort of problem is usually the 'stride', and relates to the memory layout used to hold the image, where each row of pixels is not packed tightly together.
As an example, the source image may be 240 pixels wide.
The CVPixelBuffer may allocate 320 pixels for each row, where the first 240 pixels hold the image and the extra 80 pixels are padding.
In this case the width is 240 pixels and the stride is 320 pixels.
Strides usually mean you have to copy each row of pixels one at a time, in a loop.
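Such a row-by-row copy could look roughly like the sketch below (srcPixels and srcBytesPerRow are placeholder names for the tightly packed source data, not variables from the question):
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *dst = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t dstStride = CVPixelBufferGetBytesPerRow(pixelBuffer); // may be larger than width * 4
for (size_t row = 0; row < height; row++) {
    // copy one row of visible pixels, leaving the buffer's padding untouched
    memcpy(dst + row * dstStride, srcPixels + row * srcBytesPerRow, width * 4);
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);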
Use this size everywhere in your code:
int width_16 = (int)yourImage.size.width - (int)yourImage.size.width % 16;
int height_ = (int)(yourImage.size.height / yourImage.size.width * width_16);
CGSize video_size_ = CGSizeMake(width_16, height_);
I had the same problem, and I think the solution is the following:
Try changing CGImageGetBytesPerRow(imageRef) to CVPixelBufferGetBytesPerRow(pixelBuffer) in the CGBitmapContextCreate call. The reason is that your context is backed by the raw data of the pixel buffer you have created, not by the CGImage you are drawing. A CVPixelBuffer's bytes-per-row count may be greater than bytes per pixel * pixel buffer width.
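Applied to the question's code, the suggested change would look roughly like this sketch:
CGContextRef context = CGBitmapContextCreate(pxdata, width, height,
                           CGImageGetBitsPerComponent(imageRef),
                           CVPixelBufferGetBytesPerRow(pixelBuffer), // the buffer's own stride
                           colorSpaceRef,
                           kCGImageAlphaPremultipliedLast);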

Fastest and most efficient way to find a non-transparent pixel of a UIImage on iOS

I want to ask about an image-processing mechanism. I'm developing an iOS app that uses OpenGL ES for handwriting on a view. I have a save function that converts the view, with all its drawing, to an image and saves it to the Photo Library.
I can convert the content of the view to an image easily using the code below.
(Note: the following code is not the problem. Its purpose is just to convert the content of the view to an image, and it works perfectly; I show it here for reference.)
// Get the size of the backing CAEAGLLayer
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != &UIGraphicsBeginImageContextWithOptions) {
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // Set the scale parameter to your OpenGL ES view's contentScaleFactor
    // so that you get a high-resolution snapshot when its value is greater than 1.0
    CGFloat scale = self.contentScaleFactor;
    widthInPoints = width / scale;
    heightInPoints = height / scale;
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
} else {
    // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
    widthInPoints = width;
    heightInPoints = height;
    UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
The problem is that I want to determine whether the view contains any drawing. If there is no drawing, saving a blank image is useless, so my idea is to check whether the image has any non-transparent pixels.
My solution:
Convert my drawing view to an image (its pixels have an alpha channel).
Check whether the image has any pixel with a non-zero alpha value.
If yes, the user has drawn something -> save.
If no, the user has not drawn anything, or has erased everything -> don't save.
I know the brute-force approach of walking through all pixels, but it seems like the worst way, to be used only if there is no more efficient alternative. So, is there any efficient way to check this?
I found that the brute-force algorithm is not as slow as I thought: it takes less than about 200 milliseconds to walk through all the pixel data of a screen-sized image on both an iPad Pro and an iPad mini 2. So I think brute force is acceptable.
Here is the code I use to check:
CGImageRef imageRef = [selfImage CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSUInteger total = width * height * 4;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char*) calloc(total, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef tempContext = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(tempContext, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(tempContext);
// Now your rawData contains the image data in the RGBA8888 pixel format
BOOL empty = YES;
for (NSUInteger i = 0; i < total; i += bytesPerPixel) {
    CGFloat alpha = ((CGFloat) rawData[i + 3]) / 255.0f;
    // CGFloat red   = ((CGFloat) rawData[i]) / alpha;
    // CGFloat green = ((CGFloat) rawData[i + 1]) / alpha;
    // CGFloat blue  = ((CGFloat) rawData[i + 2]) / alpha;
    if (alpha != 0) {
        empty = NO;
        break;
    }
}
free(rawData);
if (empty) {
    // Do something
} else {
    // Do other thing
}
If there are any improvements or other, more efficient algorithms, please post them here; I'd really appreciate it.
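One possible improvement, sketched below under the assumption that only the alpha channel matters: render the image into an 8-bit alpha-only bitmap context (a pixel format Core Graphics supports with a NULL color space), so the scan touches a quarter of the memory of the RGBA version. selfImage is the same UIImage as in the code above.
CGImageRef imageRef = [selfImage CGImage];
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
// one alpha byte per pixel instead of four RGBA bytes
unsigned char *alphaData = (unsigned char *)calloc(width * height, sizeof(unsigned char));
CGContextRef ctx = CGBitmapContextCreate(alphaData, width, height,
                                         8, width, NULL, kCGImageAlphaOnly);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(ctx);
BOOL empty = YES;
for (size_t i = 0; i < width * height; i++) {
    if (alphaData[i] != 0) { empty = NO; break; }
}
free(alphaData);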

AIR Native extension and UIViews

I'm experimenting with a native extension to access the device camera on iOS. The goal is to stream a UIView into a BitmapData in AS3.
mView=[[UIView alloc]initWithFrame:[UIScreen mainScreen].bounds];
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:mSession];
previewLayer.frame = mView.bounds;
[mView.layer addSublayer:previewLayer];
That's the part of the code where I add a sublayer with the camera preview to the UIView. Then, in the ANE controller:
FREObject drawViewToBitmap(FREContext ctx, void* funcData, uint32_t argc, FREObject argv[]) {
// grab the AS3 bitmapData object for writing to
FREBitmapData bmd;
int32_t _id;
//get CCapture object that contains the camera interface...
CCapture* cap;
FREGetObjectAsInt32(argv[0], &_id);
cap = active_cams[_id];
// when the capture has started
if(cap && captureCheckNewFrame(cap))
{
UIView* myView = getView(cap); //<--- Here I get the mView from the code above
FREAcquireBitmapData(argv[1], &bmd);
// Draw the UIView to a UIImage object. myView is a UIView object
// that exists somewhere in our code. It can be any view.
UIGraphicsBeginImageContext(myView.bounds.size);
[myView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Now we'll pull the raw pixels values out of the image data
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Pixel color values will be written here
unsigned char *rawData = (unsigned char*)malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
// Pixels are now in rawData in the format RGBA8888
// We'll now loop over each pixel write them into the AS3 bitmapData memory
int x, y;
// There may be extra pixels in each row due to the value of
// bmd.lineStride32, we'll skip over those as needed
int offset = bmd.lineStride32 - bmd.width;
int offset2 = bytesPerRow - bmd.width*4;
int byteIndex = 0;
uint32_t *bmdPixels = bmd.bits32;
// NOTE: In this example we are assuming that our AS3 bitmapData and our
// native UIView are the same dimensions to keep things simple.
for (y = 0; y < bmd.height; y++) {
    for (x = 0; x < bmd.width; x++, bmdPixels++, byteIndex += 4) {
        // Values are currently in RGBA8888, so each colour
        // value is currently a separate number
        int red   = (rawData[byteIndex]);
        int green = (rawData[byteIndex + 1]);
        int blue  = (rawData[byteIndex + 2]);
        int alpha = (rawData[byteIndex + 3]);
        // Combine values into ARGB32
        *bmdPixels = (alpha << 24) | (red << 16) | (green << 8) | blue;
    }
    bmdPixels += offset;
    byteIndex += offset2;
}
// free the memory we allocated
free(rawData);
// Tell Flash which region of the bitmapData changed (all of it here)
FREInvalidateBitmapDataRect(argv[1], 0, 0, bmd.width, bmd.height);
// Release our control over the bitmapData
FREReleaseBitmapData(argv[1]);
}
return NULL;
The problem is on this line: image = UIGraphicsGetImageFromCurrentImageContext();. The image comes back with width/height = 0, and the rest of the code then fails on the line int red = (rawData[byteIndex]);. Does anyone know where the problem could be?
The function drawViewToBitmap is code from Tyler Egeto that I'm trying to use with an ANE from github/inspirit, in order to stream a screen-sized UIView instead of the big still image, which is very expensive to resize on the AS3 side.
Thanks!

OpenGL ES: Screenshot size is different in retina and non-retina devices

I am new to OpenGL ES 2.0 development. The UIImage I get from a screenshot looks good on non-retina devices (iPhone 4 and iPad), but a screenshot taken on a retina device seems enlarged. Here is the code I used.
-(UIImage *) glToUIImage {
CGSize size = self.view.frame.size;
// the reason I set the height and width up-side-down is because my
// screenshot captured in landscape mode.
int image_height = (int)size.width;
int image_width = (int)size.height;
NSInteger myDataLength = image_width * image_height * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, image_width, image_height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for (int y = 0; y < image_height; y++)
{
    for (int x = 0; x < image_width * 4; x++)
    {
        buffer2[(image_height - 1 - y) * image_width * 4 + x] = buffer[y * 4 * image_width + x];
    }
}
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * image_width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(image_width, image_height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
return myImage;
}
// screenshot function, combined my opengl image with background image and
// saved into Photos.
-(UIImage*)screenshot
{
UIImage *image = [self glToUIImage];
CGRect pos = CGRectMake(0, 0, image.size.width, image.size.height);
UIGraphicsBeginImageContext(image.size);
[image drawInRect:pos];
[self.background.image drawInRect:pos];
UIImage* final = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// final picture I saved into Photos.
return final;
}
The function works, but on retina devices the OpenGL image only shows part of the scene. How can I solve this problem? Thanks!
Your code assumes the view size is measured in pixels, but it isn't: it is measured in points. You need to convert to the actual pixel size per device; UIScreen has a scale property for this.
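Concretely, the fix could look like the sketch below, which converts the view's size in points to pixels before calling glReadPixels (keeping the landscape width/height swap from the question):
CGFloat scale = [UIScreen mainScreen].scale; // 2.0 on retina devices, 1.0 otherwise
CGSize size = self.view.frame.size;          // measured in points
int image_height = (int)(size.width * scale);  // now measured in pixels
int image_width = (int)(size.height * scale);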

OpenGL ES 2.0 Losing Image Quality

I followed along with this tutorial (http://www.bit-101.com/blog/?p=1861) and noticed that, upon saving the same image multiple times, the quality slowly degraded.
Aside from the memory leaks, what's going wrong here? It should be pulling 4 bytes (RGBA) for each pixel. Where's the loss if every pixel is accounted for?
----------------- EDIT -----------------
I'm saving a new image from the pixel data each time there is a vertex position transformation, then loading this altered image into my texture buffer and resetting the vertex/index buffers. That way I can keep my changes persistent and ultimately produce a less choppy warp. See my other SO question: OpenGL ES 2.0 Vertex Transformation Algorithms
----------------- EDIT -----------------
(Before and after screenshots omitted.)
Here's the code from the tutorial:
-(UIImage *) glToUIImage {
NSInteger myDataLength = 320 * 480 * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for (int y = 0; y < 480; y++)
{
    for (int x = 0; x < 320 * 4; x++)
    {
        buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
    }
}
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * 320;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
return myImage;
}
-(void)captureToPhotoAlbum {
UIImage *image = [self glToUIImage];
UIImageWriteToSavedPhotosAlbum(image, self, nil, nil);
}
Every time you render the altered image, it is (necessarily) being resampled, that is, converted to a bitmap in which the original pixels (texels) align with the screen grid on other than a 1:1 basis. This is inherently lossy: you lose some of the detail of the original image, so you will get worse results if you distort that image again, compared to transforming the original image once with the combined parameters. For example, rotating an image by a small angle many times in succession blurs it far more than applying the single equivalent rotation to the original.
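In Core Graphics terms (an illustrative sketch, not the questioner's OpenGL pipeline; t1, t2, context, rect and originalImage are hypothetical names), the idea is to accumulate the transform and resample the original only once:
// lossy: draw(img, t1), then draw(result, t2), ... resamples at every step
// better: concatenate the transforms, then draw the ORIGINAL image once
CGAffineTransform total = CGAffineTransformIdentity;
total = CGAffineTransformConcat(total, t1);
total = CGAffineTransformConcat(total, t2);
CGContextConcatCTM(context, total);
CGContextDrawImage(context, rect, originalImage);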
