AIR native extension and UIViews - iOS

I'm experimenting with a native extension to access the device camera on iOS. The goal is to stream a UIView into a BitmapData in AS3.
mView=[[UIView alloc]initWithFrame:[UIScreen mainScreen].bounds];
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:mSession];
previewLayer.frame = mView.bounds;
[mView.layer addSublayer:previewLayer];
That's the part of the code where I add a sublayer with the camera preview to the UIView. Then, in the ANE controller:
FREObject drawViewToBitmap(FREContext ctx, void* funcData, uint32_t argc, FREObject argv[]) {
// grab the AS3 bitmapData object for writing to
FREBitmapData bmd;
int32_t _id;
//get CCapture object that contains the camera interface...
CCapture* cap;
FREGetObjectAsInt32(argv[0], &_id);
cap = active_cams[_id];
//When it starts capturing
if(cap && captureCheckNewFrame(cap))
{
UIView* myView = getView(cap); //<--- Here I get the mView from the code above
FREAcquireBitmapData(argv[1], &bmd);
// Draw the UIView to a UIImage object. myView is a UIView object
// that exists somewhere in our code. It can be any view.
UIGraphicsBeginImageContext(myView.bounds.size);
[myView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Now we'll pull the raw pixels values out of the image data
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Pixel color values will be written here
unsigned char *rawData = (unsigned char*)malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
// Pixels are now in rawData in the format RGBA8888
// We'll now loop over each pixel write them into the AS3 bitmapData memory
int x, y;
// There may be extra pixels in each row due to the value of
// bmd.lineStride32, we'll skip over those as needed
int offset = bmd.lineStride32 - bmd.width;
int offset2 = bytesPerRow - bmd.width*4;
int byteIndex = 0;
uint32_t *bmdPixels = bmd.bits32;
// NOTE: In this example we are assuming that our AS3 bitmapData and our
// native UIView are the same dimensions to keep things simple.
for(y=0; y<bmd.height; y++) {
for(x=0; x<bmd.width; x++, bmdPixels ++, byteIndex += 4) {
// Values are currently in RGBA8888, so each colour
// value is currently a separate number
int red = (rawData[byteIndex]);
int green = (rawData[byteIndex + 1]);
int blue = (rawData[byteIndex + 2]);
int alpha = (rawData[byteIndex + 3]);
// Combine values into ARGB32
* bmdPixels = (alpha << 24) | (red << 16) | (green << 8) | blue;
}
bmdPixels += offset;
byteIndex += offset2;
}
// free the memory we allocated
free(rawData);
// Tell Flash which region of the bitmapData changed (all of it here)
FREInvalidateBitmapDataRect(argv[1], 0, 0, bmd.width, bmd.height);
// Release our control over the bitmapData (the object we acquired above)
FREReleaseBitmapData(argv[1]);
}
return NULL;
}
The problem is on this line: image = UIGraphicsGetImageFromCurrentImageContext();. The image comes back with width/height = 0, and the rest of the code then fails at the line "int red = (rawData[byteIndex]);". Does anyone know where the problem might be?
The drawViewToBitmap function is code from Tyler Egeto that I am trying to use with an ANE from github/inspirit, in order to stream a screen-sized UIView instead of the big still image, which is very expensive to resize on the AS3 side.
Thanks!
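One thing worth checking (this is only an assumption about the cause, not a confirmed fix): UIGraphicsBeginImageContext needs a non-zero size, so if the FREFunction runs before the view has been laid out, myView.bounds.size can still be CGSizeZero and UIGraphicsGetImageFromCurrentImageContext then has no valid context to read from, which would match the zero width/height. A minimal guard, using the same variable names as the function above and replacing the UIGraphicsBeginImageContext call, might look like this:
// sketch: bail out for this frame if the view has no size yet
CGSize snapshotSize = myView.bounds.size;
if (snapshotSize.width < 1.0f || snapshotSize.height < 1.0f) {
    // the bitmapData was already acquired above, so release it before returning
    FREReleaseBitmapData(argv[1]);
    return NULL; // skip this frame and try again on the next call
}
UIGraphicsBeginImageContext(snapshotSize);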

Related

Fastest and most efficient way to find a non-transparent pixel of a UIImage on iOS

I want to ask about an image-processing mechanism. I'm developing an iOS app that uses OpenGL ES for hand-writing on a view. I have a save function that converts the view, with all of its drawing, to an image and saves it to the Photo Library.
I can properly convert the content of the view to an image using the code below.
(Note: the following code is not the problem. Its purpose is just to convert the view's content to an image, and it works perfectly; I show it here only for reference.)
// Get the size of the backing CAEAGLLayer
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != &UIGraphicsBeginImageContextWithOptions) {
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = self.contentScaleFactor;
widthInPoints = width / scale;
heightInPoints = height / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
} else {
// On iOS prior to 4, fall back to UIGraphicsBeginImageContext
widthInPoints = width;
heightInPoints = height;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
The problem is that I want to determine whether the view has any drawing on it. If there is no drawing, it shouldn't save, because saving a blank image is useless, so my idea is to check whether the image has any non-transparent pixel.
My solution
Convert my drawing view to an image (its pixels have an alpha channel)
Check whether the image has any pixel with a non-zero alpha value
If yes, the user has actually drawn something -> save
If no, the user hasn't drawn anything, or has erased everything -> don't save
I know the brute-force approach of going through every pixel, but it seems like the worst way and should only be used if there is no more efficient one.
So, is there an efficient way to check this?
I found that the brute-force algorithm is not as slow as I thought. It takes less than about 200 milliseconds to go through all the pixel data of an image the size of an iPad Pro screen, as well as an iPad mini 2.
So I think using brute force is acceptable.
The following is the code I use to check:
CGImageRef imageRef = [selfImage CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSUInteger total = width * height * 4;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char*) calloc(total, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef tempContext = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(tempContext, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(tempContext);
// Now your rawData contains the image data in the RGBA8888 pixel format
BOOL empty = YES;
for (int i = 0 ; i < total ;) {
CGFloat alpha = ((CGFloat) rawData[i + 3] ) / 255.0f;
// CGFloat red = ((CGFloat) rawData[i] ) / alpha;
// CGFloat green = ((CGFloat) rawData[i + 1] ) / alpha;
// CGFloat blue = ((CGFloat) rawData[i + 2] ) / alpha;
i += bytesPerPixel;
if (alpha != 0) {
empty = NO;
break;
}
}
if (empty) {
//Do something
} else {
//Do other thing
}
If there is any improvement or another, more efficient algorithm, please post it here; I would really appreciate it.

How do I display a self-made bitmap at native resolution in iOS 8

I'm developing a universal app for iOS which will dynamically generate its own full-screen bitmaps (a pointer to 32-bit pixel data in a byte buffer). It reacts to touch events and needs to draw responsively as the user touches (e.g. zooms/pans). At the start of my app, I can see that the display is scaled by 2x on my retina iPad and iPod touch. My code currently creates and displays bitmaps correctly, but at half the native resolution of the display. I can see the native resolution using the nativeBounds of the view, but I would like to create and display my bitmaps at native resolution without any scaling. I tried changing the transform scale in the drawRect: method, but it didn't work correctly. Below is my drawRect: code:
- (void)drawRect:(CGRect)rect
{
UInt32 * pixels;
pixels = (UInt32 *)[TheFile thePointer];
NSUInteger width = [TheFile iScreenWidth];
NSUInteger height = [TheFile iScreenHeight];
NSUInteger borderX = [TheFile iBorderX];
NSUInteger borderY = [TheFile iBorderY];
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = UIGraphicsGetCurrentContext();
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(pixels, width-borderX*2, height-borderY*2, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big);
CGImageRef myimage = CGBitmapContextCreateImage(gtx);
CGContextSetInterpolationQuality(gtx, kCGInterpolationNone); // does this speed it up?
// Create a rect to display
CGRect imageRect = CGRectMake(borderX, borderY, width - borderX*2, height - borderY * 2);
// Need to repaint the background that would show through (black)
if (borderX != 0 || borderY != 0)
{
[[UIColor blackColor] set];
UIRectFill(rect);
}
// Transform image (flip right side up)
CGContextTranslateCTM(context, 0, height);
CGContextScaleCTM(context, 1.0, -1.0);
// Draw the image
CGContextDrawImage(context, imageRect, myimage); //image.CGImage);
CGColorSpaceRelease(colorSpace);
CGContextRelease(gtx);
CGImageRelease(myimage);
} /* drawRect() */
Edit: The answer below fixes both the performance issue, by using a UIImageView, and the scaling issue, by setting the proper display scale when initializing the UIImage. When the UIImage scale matches the display scale, bitmaps are displayed 1:1 at the native resolution of the device.
The problem with your code is that after the result image is created, the code draws the image into the current graphics context configured for drawRect:. It's the CPU that draws the image; that's why it takes 70 ms. Rendering the image with a UIImageView, or setting it as the contents of a layer, is not handled by the CPU; it's handled by the GPU. The GPU is good at things like this, so it's much faster in this case. Since drawRect: causes Core Animation to create a backing bitmap, which is useless in this case, you should create the image without using drawRect: at all:
- (UIImage *)drawBackground {
UIImage *output;
UInt32 * pixels;
pixels = (UInt32 *)[TheFile thePointer];
NSUInteger borderX = [TheFile iBorderX];
NSUInteger borderY = [TheFile iBorderY];
NSUInteger width = [TheFile iScreenWidth] - 2 * borderX;
NSUInteger height = [TheFile iScreenHeight] - 2 * borderY;
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGFloat scale = [UIScreen mainScreen].scale;
// create bitmap graphics context
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(pixels,
width,
height,
bitsPerComponent,
bytesPerRow,
colorSpace,
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrder32Big);
// create image
CGImageRef myimage = CGBitmapContextCreateImage(gtx);
output = [UIImage imageWithCGImage:myimage
scale:[UIScreen mainScreen].scale
orientation:UIImageOrientationUp];
// clean up
CGColorSpaceRelease(colorSpace);
CGContextRelease(gtx);
CGImageRelease(myimage);
return output;
}
When the user triggers an event (suppose you use a gesture recognizer):
- (IBAction)handleTap:(UITapGestureRecognizer *)tap {
UIImage *background = [self drawBackground];
// when background view is an UIImageView
self.backgroundView.image = background;
// self.backgroundView should have already set up in viewDidLoad
// based on your code snippet, you may need to configure background color
// self.backgroundView.backgroundColor = [UIColor blackColor];
// do other configuration if needed...
// when background view is an UIView or subclass of UIView
self.backgroundView.layer.contents = (id)background.CGImage;
// unlike UIImageView, the size of the background view must exactly equal
// the size of the background image; otherwise the image will be scaled (see the sketch below)
}
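If you go the layer.contents route, a minimal sketch of that size constraint (an assumption about your layout, not part of the original snippet) would be:
// size the view to the image's point size so layer.contents is shown 1:1
self.backgroundView.frame = CGRectMake(0.0, 0.0,
                                       background.size.width,
                                       background.size.height);
self.backgroundView.layer.contents = (id)background.CGImage;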
I wrote this function in a test project:
static UIImage *drawBackground() {
// allocate bitmap buffer
CGFloat scale = [UIScreen mainScreen].scale;
CGRect screenBounds = [UIScreen mainScreen].bounds;
NSUInteger borderWidth = 1 * 2; // width of border is 1 pixel
NSUInteger width = scale * CGRectGetWidth(screenBounds) - borderWidth;
NSUInteger height = scale * CGRectGetHeight(screenBounds) - borderWidth;
NSUInteger bytesPerPixel = 4;
// test resolution begin
// tested on an iPhone 4 (320 x 480 points, 640 x 960 pixels), iOS 7.1
// the image is rendered by an UIImageView which covers whole screen.
// the content mode of UIImageView is center, which doesn't cause scaling.
width = scale * 310;
height = scale * 240;
// test resolution end
UInt32 *pixels = malloc((size_t)(width * height * bytesPerPixel));
// manipulate bitmap buffer
NSUInteger count = width * height;
unsigned char *byte = (unsigned char *)pixels;
for (int i = 0; i < count; i = i + 1) {
byte[0] = 100;
byte = byte + 1;
byte[0] = 100;
byte = byte + 1;
byte[0] = 0;
byte = byte + 2;
}
// create bitmap graphics context
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(pixels,
width,
height,
bitsPerComponent,
bytesPerRow,
colorSpace,
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrder32Big);
// create image
CGImageRef myimage = CGBitmapContextCreateImage(gtx);
UIImage *output = [UIImage imageWithCGImage:myimage
scale:scale
orientation:UIImageOrientationUp];
// clean up
CGColorSpaceRelease(colorSpace);
CGContextRelease(gtx);
CGImageRelease(myimage);
free(pixels);
return output;
}
I tested it on an iPhone 4 device and it seems OK to me (the screenshot is not included here).

How to get the pixel color values of a custom image inside an image view in iOS?

I know similar questions have been asked before.
What I want is to get the RGB pixel value of the image inside the image view, so it can be any image whose pixel values we want to get.
This is what I have used to get the point where the image is tapped:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
if ([touch tapCount] == 2) {
//self.imageView.image = nil;
return;
}
CGPoint lastPoint = [touch locationInView:self.imageViewGallery];
NSLog(@"%f", lastPoint.x);
NSLog(@"%f", lastPoint.y);
}
And to get the pixel values from the image, I have pasted this code:
+ (NSArray*)getRGBAsFromImage:(UIImage *)image atX:(int)xx andY:(int)yy count:(int)count
{
NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];
/** It requires getting the image into your data buffer, so how can we get the `ImageViewImage`? **/
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char*) calloc(height * width * 4, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
// Now your rawData contains the image data in the RGBA8888 pixel format.
int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
for (int ii = 0 ; ii < count ; ++ii)
{
CGFloat red = (rawData[byteIndex] * 1.0) / 255.0;
CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
CGFloat blue = (rawData[byteIndex + 2] * 1.0) / 255.0;
CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
byteIndex += 4;
UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
[result addObject:acolor];
}
free(rawData);
return result;
}
I am new to iOS, so please explain it; suggesting some tutorial would also be great.
Use this example article. It talks about a color picker that uses images; you can pick up the required info from it very easily. It helped me in my app. Let me know if any help/suggestions are needed. :)
EDIT:
Update your getPixelColorAtLocation: like this. It will then give you the correct color.
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
UIColor* color = nil;
CGImageRef inImage = self.image.CGImage;
// Create off screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpha, Red, Green, Blue
CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
if (cgctx == NULL) { return nil; /* error */ }
size_t w = CGImageGetWidth(inImage);
size_t h = CGImageGetHeight(inImage);
CGRect rect = {{0,0},{w,h}};
/** Extra Added code for Resized Images ****/
float xscale = w / self.frame.size.width;
float yscale = h / self.frame.size.height;
point.x = point.x * xscale;
point.y = point.y * yscale;
/** ****************************************/
/** Extra Code Added for Resolution ***********/
CGFloat x = 1.0;
if ([self.image respondsToSelector:@selector(scale)]) x = self.image.scale;
/*********************************************/
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(cgctx, rect, inImage);
// Now we can get a pointer to the image data associated with the bitmap
// context.
unsigned char* data = CGBitmapContextGetData (cgctx);
if (data != NULL) {
//offset locates the pixel in the data from x,y.
//4 for 4 bytes of data per pixel, w is width of one row of data.
// int offset = 4*((w*round(point.y))+round(point.x));
int offset = 4*((w*round(point.y))+round(point.x))*x; //Replacement for Resolution
int alpha = data[offset];
int red = data[offset+1];
int green = data[offset+2];
int blue = data[offset+3];
NSLog(@"offset: %i colors: RGB A %i %i %i %i", offset, red, green, blue, alpha);
color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
}
// When finished, release the context
CGContextRelease(cgctx);
// Free image data memory for the context
if (data) { free(data); }
return color;
}
Let me know if this fix does not work. :)
Here is the GitHub repo for my code. Use it to implement the image picker. Let me know if more info is needed.
Just use this method; it works for me:
- (UIColor*) getPixelColorAtLocation:(CGPoint)point
{
UIColor* color = nil;
CGImageRef inImage;
inImage = imgZoneWheel.image.CGImage;
// Create off screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpha, Red, Green, Blue
CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
if (cgctx == NULL) { return nil; /* error */ }
size_t w = CGImageGetWidth(inImage);
size_t h = CGImageGetHeight(inImage);
CGRect rect = {{0,0},{w,h}};
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(cgctx, rect, inImage);
// Now we can get a pointer to the image data associated with the bitmap
// context.
unsigned char* data = CGBitmapContextGetData (cgctx);
if (data != NULL) {
//offset locates the pixel in the data from x,y.
//4 for 4 bytes of data per pixel, w is width of one row of data.
int offset = 4*((w*round(point.y))+round(point.x));
int alpha = data[offset];
int red = data[offset+1];
int green = data[offset+2];
int blue = data[offset+3];
color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
}
// When finished, release the context
CGContextRelease(cgctx);
// Free image data memory for the context
if (data) { free(data); }
return color;
}
Just use it like this:
UIColor *color = [self getPixelColorAtLocation:lastPoint];
If you try to get a color from an image through CGBitmapContextGetData and set it, for example, as the background of a view, it will come out as a different color on iPhone 6 and later (on iPhone 5 everything will be OK). More information about this: Getting the right colors in your iOS app.
This solution, which goes through UIImage, gives you the right color:
- (UIColor *)getColorFromImage:(UIImage *)image pixelPoint:(CGPoint)pixelPoint {
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], CGRectMake(pixelPoint.x, pixelPoint.y, 1.f, 1.f));
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return [UIColor colorWithPatternImage:croppedImage];
}
You want to know how to get the image from an image view?
UIImageView has an image property. Simply use that property, for example:
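A hypothetical snippet tying the pieces in this question together (PixelHelper stands in for whichever class declares getRGBAsFromImage:atX:andY:count: above, and lastPoint is the tap location from touchesBegan:):
UIImage *sourceImage = self.imageViewGallery.image; // the image currently shown in the image view
// note: if the image is scaled to fit the view, convert lastPoint to image
// pixel coordinates first (see the xscale/yscale adjustment above)
NSArray *colors = [PixelHelper getRGBAsFromImage:sourceImage
                                              atX:(int)lastPoint.x
                                             andY:(int)lastPoint.y
                                            count:1];
UIColor *tappedColor = [colors firstObject];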

I am trying to create a partial Grayscale image

I am trying to create a partial grayscale image: I read each and every pixel in the image and replace the pixel data with a gray value, and if the pixel color matches the desired color I skip it, so that that specific pixel's color doesn't change. I don't know where I am going wrong; it changes the whole image to grayscale and rotates the image 90 degrees. Can someone help me out with this issue? Thanks in advance.
-(UIImage *) toPartialGrayscale{
const int RED = 1;
const int GREEN = 2;
const int BLUE = 3;
initialR=255.0;
initialG=0.0;
initialB=0.0;//218-112-214//0-191-255
float r;
float g;
float b;
tollerance=50;
// Create image rectangle with current image width/height
CGRect imageRect = CGRectMake(0, 0, originalImageView.image.size.width * scale, originalImageView.image.size.height * scale);
int width = imageRect.size.width;
int height = imageRect.size.height;
// the pixels will be painted to this array
uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));
// clear the pixels so any transparency is preserved
memset(pixels, 0, width * height * sizeof(uint32_t));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// create a context with RGBA pixels
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
// paint the bitmap to our context which will fill in the pixels array
CGContextDrawImage(context, CGRectMake(0, 0, width, height), [originalImageView.image CGImage]);
for(int y = 0; y < height; y++)
{
for(int x = 0; x < width; x++)
{
uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];
// convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
uint8_t gray = (uint8_t) ((30 * rgbaPixel[RED] + 59 * rgbaPixel[GREEN] + 11 * rgbaPixel[BLUE]) / 100);
// set the pixels to gray
r= initialR-rgbaPixel[RED];
g= initialG-rgbaPixel[GREEN];
b= initialB-rgbaPixel[BLUE];
if ((r<tollerance&&r>-tollerance)&&(g<tollerance&&g>-tollerance)&&(b<tollerance&&b>-tollerance))
{
rgbaPixel[RED] = (uint8_t)r;
rgbaPixel[GREEN] = (uint8_t)g;
rgbaPixel[BLUE] = (uint8_t)b;
}
else
{
rgbaPixel[RED] = gray;
rgbaPixel[GREEN] = gray;
rgbaPixel[BLUE] = gray;
}
}
}
// create a new CGImageRef from our context with the modified pixels
CGImageRef image = CGBitmapContextCreateImage(context);
// we're done with the context, color space, and pixels
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
free(pixels);
// make a new UIImage to return
UIImage *resultUIImage = [UIImage imageWithCGImage:image
scale:scale
orientation:UIImageOrientationUp];
// we're done with image now too
CGImageRelease(image);
return resultUIImage;
}
This is the code I am using; any kind of help will be appreciated. Thanks again in advance.
Hopefully the orientation piece is easy enough to resolve by playing with the UIImageOrientationUp constant that you're passing in when you create the final image. Try left or right until you get what you need.
As for the threshold not working, can you verify that your "tollerance" really is behaving as you expect? Change it to 255 and see if the entire image retains its color (it should). If it's still grey, then you know that your conditional statement is where the problem lies.
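To make those two checks concrete, here is a small sketch using the constants and variables from the question's code (this is only an illustration of the suggestions above):
// 1) Orientation: experiment with the orientation constant of the result image
UIImage *resultUIImage = [UIImage imageWithCGImage:image
                                             scale:scale
                                       orientation:UIImageOrientationLeft]; // or UIImageOrientationRight
// 2) Threshold: temporarily widen the tolerance to isolate the conditional
tollerance = 255; // every pixel should now keep its original color;
                  // if the whole image is still grey, the if-condition is the bug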

Error when accessing raw pixel data of resized image

I have some problems with accessing camera images (or even images from the photo album).
After resizing the UIImage (I tested several different resize methods; they all lead to the same error), I want to access every individual pixel in order to hand it over to a complex algorithm.
The problem is that there is often a bytesPerRow value that doesn't match the image size (e.g. width*4) when accessing the raw pixel data with CGImageGetDataProvider, resulting in an EXC_BAD_ACCESS error.
Maybe we have an iOS bug here...
Nonetheless, here is the code:
// UIImage capturedImage from Camera
CGImageRef capturedImageRef = capturedImage.CGImage;
// getting bits per component from capturedImage
size_t bitsPerComponentOfCapturedImage = CGImageGetBitsPerComponent(capturedImageRef);
CGImageAlphaInfo alphaInfoOfCapturedImage = CGImageGetAlphaInfo(capturedImageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// calculate new size from interface data.
// with respect to aspect ratio
// ...
// newWidth = XYZ;
// newHeight = XYZ;
CGContextRef context = CGBitmapContextCreate(NULL, newWidth, newHeight, bitsPerComponentOfCapturedImage,0 , colorSpace, alphaInfoOfCapturedImage);
// I also tried to make use of getBytesPerRow for CGBitmapContextCreate, resulting in the same error
// if image was rotated
if(capturedImage.imageOrientation == UIImageOrientationRight) {
CGContextRotateCTM(context, -M_PI_2);
CGContextTranslateCTM(context, -newHeight, 0.0f);
}
// draw on new context with new size
CGContextDrawImage(context, CGRectMake(0, 0, newWidth, newHeight), capturedImage.CGImage);
CGImageRef scaledImage=CGBitmapContextCreateImage(context);
// release
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
theImage = [UIImage imageWithCGImage: scaledImage];
CGImageRelease(scaledImage);
After that, I want to access the scaled image with:
CGImageRef imageRef = theImage.CGImage;
NSData *data = (NSData *) CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
unsigned char *pixels = (unsigned char *)[data bytes];
// create a new image from the modified pixel data
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, [data length], NULL);
NSLog(@"bytesPerRow: %f ", (float)bytesPerRow);
NSLog(@"Image width: %f ", (float)width);
NSLog(@"Image height: %f ", (float)height);
// manipulate the individual pixels
for(int i = 0; i < [data length]; i += 4) {
// accessing (float) pixels[i];
// accessing (float) pixels[i+1];
// accessing (float) pixels[i+2];
}
So, for example, when I access an image of 511x768 pixels and scale it down to 290x436, I get the following output:
Image width: 290.000000
Image height: 436.000000
bitsPerComponent: 8.000000
bitsPerPixel: 32.000000
bytesPerRow: 1184.000000
and you can clearly see that bytesPerRow (although chosen automatically by Cocoa) does not match the image width.
I would love any help.
Using iOS SDK 4.3 on Xcode 4.
You are ignoring possible line padding, hence receiving invalid results. Add the following code in place of your loop:
size_t bytesPerPixel = bitsPerPixel / bitsPerComponent;
//calculate the padding just to see what is happening
size_t padding = bytesPerRow - (width * bytesPerPixel);
size_t offset = 0;
// manipulate the individual pixels
while (offset < [data length])
{
for (size_t x = 0; x < width * bytesPerPixel; x += bytesPerPixel)
{
// accessing (float) pixels[offset+x];
// accessing (float) pixels[offset+x+1];
// accessing (float) pixels[offset+x+2];
}
offset += bytesPerRow;
}
Addendum: the underlying reason for the row padding is to align memory access for individual rows to 32-bit boundaries. It is indeed very common and is done for optimization purposes.
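As a concrete illustration of that padding (using the names from the snippet above, the RGBA layout discussed in this thread, and pixel coordinates x and y rather than the byte index x used in the loop): the pixel at column x and row y is addressed with bytesPerRow, not with width * bytesPerPixel:
// bytesPerRow >= width * bytesPerPixel; the difference is the per-row padding
size_t pixelOffset = (size_t)y * bytesPerRow + (size_t)x * bytesPerPixel;
unsigned char red   = pixels[pixelOffset];
unsigned char green = pixels[pixelOffset + 1];
unsigned char blue  = pixels[pixelOffset + 2];
unsigned char alpha = pixels[pixelOffset + 3];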
