Is this UIImage data reader thread safe?

Or, can this code be executed safely on a background thread?
CGImageRef cgImage;
CGContextRef context;
CGColorSpaceRef colorSpace;
// Gets the Core Graphics image to work on.
cgImage = [uiImage CGImage];
// Reads the image's size.
_width = CGImageGetWidth(cgImage);
_height = CGImageGetHeight(cgImage);
// Extracts the pixel information and places it into _data.
colorSpace = CGColorSpaceCreateDeviceRGB();
_data = malloc(_width * _height * 4);
context = CGBitmapContextCreate(_data, _width, _height, 8, 4 * _width, colorSpace,
                                kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
// Adjusts the position and inverts the image.
// OpenGL expects the image data upside-down compared to common image files.
CGContextTranslateCTM(context, 0, _height);
CGContextScaleCTM(context, 1.0, -1.0);
// Clears and redraws the image into the context.
CGContextClearRect(context, CGRectMake(0, 0, _width, _height));
CGContextDrawImage(context, CGRectMake(0, 0, _width, _height), cgImage);
// Releases the context.
CGContextRelease(context);
If not, how can I achieve the same result?
(My problem is that I can't see my OpenGL textures based on this method's output buffer when it runs in the background.)

I think you may have trouble running this code on a thread separate from the GL thread like this. Even if it works, you might end up with half-drawn images/textures. You can avoid this by creating a double buffer:
Your "_data" should be allocated only once and should hold two raw image data buffers. Then just create two pointers defined as the foreground and background buffer (void *fg = _data[0], void *bg = _data[1] to begin with). When your method has collected the data from the CGImage into bg, just swap the pointers (so that void *fg = _data[1], void *bg = _data[0], or the other way around).
Your GL thread should then fill your texture with the data in fg (on the same thread that does the drawing).
Also, you might need some locking mechanisms (see the sketch at the end of this answer):
Before you push data to the texture, lock the "buffer swap", and unlock it after the push.
You will probably want to know whether the buffer has been swapped, and only push fg data to the texture in that case.
Also note that calling GL methods from more than one thread will cause trouble in most cases.
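A minimal sketch of that pointer-swap idea (the buffer size and the names publishFrame and uploadIfNeeded are illustrative, not from the question's code; it assumes a texture of matching size is already created and bound on the GL thread):
// Requires <Foundation/Foundation.h> and <OpenGLES/ES2/gl.h>.
size_t width = 1024, height = 1024;            // illustrative fixed size
void *buffers[2] = { malloc(width * height * 4), malloc(width * height * 4) };
__block void *fg = buffers[0];                 // read by the GL thread
__block void *bg = buffers[1];                 // written by the decoder thread
__block BOOL hasNewFrame = NO;
NSLock *swapLock = [[NSLock alloc] init];

// Decoder thread: after drawing the CGImage into bg, publish it by swapping.
void (^publishFrame)(void) = ^{
    [swapLock lock];
    void *tmp = fg; fg = bg; bg = tmp;
    hasNewFrame = YES;
    [swapLock unlock];
};

// GL thread: only re-upload the texture when a swap has actually happened.
void (^uploadIfNeeded)(void) = ^{
    [swapLock lock];
    if (hasNewFrame) {
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, (GLsizei)width, (GLsizei)height,
                        GL_RGBA, GL_UNSIGNED_BYTE, fg);
        hasNewFrame = NO;
    }
    [swapLock unlock];
};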

That looks OK to me, assuming that uiImage, _width, _height and _data aren't being manipulated from another thread at the same time. (Assuming you're using iOS 4 and above.)
Are you uploading the texture to OpenGL on the background thread? If so, that's probably the problem (since a given OpenGL context should only be accessed from a single thread at a time).
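If the texture upload is the part you moved to the background, one common arrangement is to do the Core Graphics work off the GL thread and hop back for the GL calls. A hedged sketch, assuming the EAGLContext lives on the main thread and using hypothetical names (-fillPixelBuffer wrapping the Core Graphics code above, _textureName holding an existing texture):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Core Graphics decoding/drawing is fine here (no UIKit, no GL calls).
    [self fillPixelBuffer];   // hypothetical: runs the CGBitmapContext code above into _data

    dispatch_async(dispatch_get_main_queue(), ^{
        // Back on the thread that owns the EAGLContext: do all GL work here.
        glBindTexture(GL_TEXTURE_2D, _textureName);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)_width, (GLsizei)_height,
                     0, GL_RGBA, GL_UNSIGNED_BYTE, _data);
    });
});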

As long as you don't access UIKit (or similar frameworks), directly or indirectly, and as long as you don't access the variables in your code from multiple threads, it's OK.

Related

fast method to get RGB data from UIImage (photo library)

I would like to get a data array containing the RGB representation of a picture stored in the photo library (an ALAsset) on iOS (iOS 8 SDK).
I have already tried this method:
get a CGImage from the ALAsset with [ALAssetRepresentation fullScreenImage]
draw the CGImage into a CGContext.
That method works, and I get a pointer to RGB data, but it is really slow (there are two conversions). The final goal is to load the image quickly into an OpenGL texture.
My code to get an image from the photo library:
ALAsset* currentPhotoAsset = (ALAsset*) [self.photoAssetList objectAtIndex:_currentPhotoAssetIndex];
ALAssetRepresentation *representation = [currentPhotoAsset defaultRepresentation];
//-> REALLY SLOW
UIImage *currentPhoto = [UIImage imageWithCGImage:[representation fullScreenImage]];
My code to draw into the CGContext:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * textureWidth;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(textureData, textureWidth, textureHeight,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
//--> THAT'S REALLY SLOW
CGContextDrawImage(context, CGRectMake(0, 0, textureWidth, textureHeight), cgimage);
CGContextRelease(context);
There is not much you can do, but if you find a way I would be happy to hear about it.
The thing is, you need to decompress the image (JPEG, PNG, ...), which is usually done by creating a CGImage (UIImage is just a wrapper around it). You are not allowed to grab the data pointer directly from the CGImage, so you need to copy the data (the really slow draw call). Then again, if the target size and format are the same as the source, this operation should be quite fast, since the data is more or less just copied. If, on the other hand, your textureWidth and textureHeight differ from the image dimensions, the pixels need to be interpolated and the call can become several times slower.
The only way around this that I see is to use some library that decompresses the image directly from the file and gives you the data pointer. But I have never had a performance issue loading image textures (use a background thread).
Anyway, in case you are not already doing something similar: what I do is get the image size, then find the POT (power-of-two) width and height that contains it. I create an empty texture with those POT dimensions and use a sub-image upload to pass the original image data into it (see the sketch below). I use a custom texture class for this, which also generates texture coordinates so that the correct part of the texture is drawn to the frame buffer. That class is then extended to support atlasing, which is generally what you want when dealing with many images (textures).
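A minimal sketch of that POT-plus-sub-image idea in plain OpenGL ES (not the texture class mentioned above; imageWidth, imageHeight, and imagePixels are illustrative names for image-sized RGBA data obtained from a CGBitmapContext draw):
// Round a dimension up to the next power of two.
static size_t nextPOT(size_t n) {
    size_t pot = 1;
    while (pot < n) pot <<= 1;
    return pot;
}

size_t potWidth  = nextPOT(imageWidth);
size_t potHeight = nextPOT(imageHeight);

GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Allocate the full POT-sized texture with no data...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)potWidth, (GLsizei)potHeight,
             0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// ...then upload only the actual image into its corner.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, (GLsizei)imageWidth, (GLsizei)imageHeight,
                GL_RGBA, GL_UNSIGNED_BYTE, imagePixels);

// Texture coordinates for the drawn quad then run from 0 to imageWidth/potWidth
// and 0 to imageHeight/potHeight, so only the image portion is displayed.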
I hope this info helps you in any way...

Remove UIBezierPath lines drawn on UIImage

I am trying to erase lines drawn on a UIImage. I have successfully erased lines drawn on an empty canvas.
What would be the trick to erasing lines drawn on a UIImage? Below are some things I have tried, but I am unable to get the correct eraser effect.
Use the touch point, get the RGB of the image at that point, and stroke with that colour.
colorWithPatternImage is too slow.
Kindly suggest a better solution.
What I usually do is draw the image to an offscreen buffer (say a CGBitmapContext, for example), draw the Bezier curves over it, and copy the result to the screen.
To remove one of the Beziers, I draw the image to the offscreen buffer, draw all the Bezier curves, except the one (or ones) I don't want, and then copy the result to the screen.
This also has the advantage that it avoids flicker that can be caused by erasing an element that's already onscreen. And it works properly if the curves overlap, whereas drawing with the image as a pattern would likely erase any overlap points.
EDIT: Here's some pseudo-code (never compiled - just from memory) to demonstrate what I mean:
-(UIImage*)drawImageToOffscreenBuffer:(UIImage*)inputImage
{
    size_t width = (size_t)inputImage.size.width;
    size_t height = (size_t)inputImage.size.height;
    CGContextRef offscreen = CGBitmapContextCreate(...width, height...);
    CGImageRef cgImage = [inputImage CGImage];
    CGRect bounds = CGRectMake(0, 0, width, height);
    CGContextDrawImage(offscreen, bounds, cgImage);
    // Now iterate through the Beziers you want to draw (skipping the erased ones)
    for (int i = 0; i < numBeziers; i++)
    {
        if (drawBezier(i))
        {
            CGContextMoveToPoint(offscreen, ...);
            CGContextAddCurveToPoint(offscreen, ...); // fill in your bezier info here
            CGContextStrokePath(offscreen);
        }
    }
    // Put the result into a CGImage
    size_t rowBytes = CGBitmapContextGetBytesPerRow(offscreen);
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, CGBitmapContextGetData(offscreen), rowBytes * height, NULL);
    CGColorSpaceRef colorSpace = CGBitmapContextGetColorSpace(offscreen); // obtained with a Get, so not owned -- don't release it
    CGImageRef cgResult = CGImageCreate(width, height, ..., dataProvider, NULL, false, kCGRenderingIntentDefault);
    CGDataProviderRelease(dataProvider);
    // Make a UIImage out of that CGImage
    UIImage* result = [UIImage imageWithCGImage:cgResult];
    // cgResult was created by CGImageCreate, so release it now that the UIImage holds its own reference
    CGImageRelease(cgResult);
    return result;
}

GLKView snapshot method: nil return value, getting an error

I can't figure out how to use GLKView's snapshot method.
I'm using a GLKView to render some OpenGL stuff. It all works; seems like I have it all set up correctly.
But, when I try to do a snapshot, it fails: I get a null return value, and the following log message:
Error: CGImageCreate: invalid image size: 0 x 0.
Seems like this would mean the view itself is invalid for some reason, but it's not -- everything is working, aside from this.
I've looked at a few code samples, and I'm not doing anything different.
So... anyone seen this before? Ideas?
I never figured out the above problem; however, I found an excellent workaround: this chunk of code simply reads the render buffer and saves it to a UIImage. Problem solved!
- (UIImage*)snapshotRenderBuffer {
// Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note: replace "viewRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
NSInteger dataLength = backingWidth * backingHeight * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(
backingWidth, backingHeight, 8, 32, backingWidth * 4, colorspace,
kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast,
ref, NULL, true, kCGRenderingIntentDefault);
// (sayeth abd)
// This creates a context with the device pixel dimensions -- not points.
// To be compatible with all devices, you're meant to keep everything as points and a scale factor; but,
// this gives us a scaled down image for purposes of saving. So, keep everything in device resolution,
// and worry about it later...
UIGraphicsBeginImageContextWithOptions(CGSizeMake(backingWidth, backingHeight), NO, 0.0f);
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, backingWidth, backingHeight), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
CGImageRelease(iref);
CGDataProviderRelease(ref);
CGColorSpaceRelease(colorspace);
free(data);
return image;
}
Maybe this doesn't apply in your case, but the docs for GLKView's snapshot method say:
Never call this method inside your drawing function.
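For example, calling it from an action or some other code path outside glkView:drawInRect: works (a small sketch; the glkView property and action name are illustrative):
- (IBAction)saveSnapshot:(id)sender
{
    // Safe here: we are not inside the GLKView's drawing callback.
    UIImage *snapshot = [self.glkView snapshot];
    if (snapshot) {
        UIImageWriteToSavedPhotosAlbum(snapshot, nil, NULL, NULL);
    }
}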

iOS Performance Tuning: fastest way to get pixel color for large images

There are a number of questions/answers regarding how to get the pixel color of an image at a given point. However, all of these approaches are really slow (100-500 ms) for large images (even ones as small as 1000 x 1300, for example).
Most of the code samples out there draw to an image context. All of them take time when the actual draw takes place:
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage)
Examining this in Instruments reveals that the draw spends its time copying the data from the source image.
I have even tried a different means of getting at the data, hoping that getting to the bytes themselves would actually prove much more efficient.
NSInteger pointX = trunc(point.x);
NSInteger pointY = trunc(point.y);
CGImageRef cgImage = CGImageCreateWithImageInRect(self.CGImage,
CGRectMake(pointX * self.scale,
pointY * self.scale,
1.0f,
1.0f));
CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
CFDataRef data = CGDataProviderCopyData(provider);
CGImageRelease(cgImage);
UInt8* buffer = (UInt8*)CFDataGetBytePtr(data);
CGFloat red = (float)buffer[0] / 255.0f;
CGFloat green = (float)buffer[1] / 255.0f;
CGFloat blue = (float)buffer[2] / 255.0f;
CGFloat alpha = (float)buffer[3] / 255.0f;
CFRelease(data);
UIColor *pixelColor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
return pixelColor;
This method spends its time on the data copy:
CFDataRef data = CGDataProviderCopyData(provider);
It would appear that it, too, is reading the data from disk rather than from the CGImage instance I am creating.
Now, in some informal testing this method does perform better, but it is still not as fast as I want it to be. Does anyone know of an even faster way to get at the underlying pixel data?
If it's possible for you to draw this image to the screen via OpenGL ES, you can get extremely fast random access to the underlying pixels in iOS 5.0 via the texture caches introduced in that version. They allow for direct memory access to the underlying BGRA pixel data stored in an OpenGL ES texture (where your image would be residing), and you could pick out any pixel from that texture almost instantaneously.
I use this to read back the raw pixel data of even large (2048x2048) images, and the read times are at worst in the range of 10-20 ms to pull down all of those pixels. Again, random access to a single pixel there takes almost no time, because you're just reading from a location in a byte array.
Of course, this means that you'll have to parse and upload your particular image to OpenGL ES, which will involve the same reading from disk and interactions with Core Graphics (if going through a UIImage) that you'd see if you tried to read pixel data from a random PNG on disk, but it sounds like you just need to render once and sample from it multiple times. If so, OpenGL ES and the texture caches on iOS 5.0 would be the absolute fastest way to read back this pixel data for something also displayed onscreen.
I encapsulate these processes in the GPUImagePicture (image upload) and GPUImageRawData (fast raw data access) classes within my open source GPUImage framework, if you want to see how something like that might work.
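For reference, a rough sketch of the texture-cache route with plain Core Video / OpenGL ES calls (not the GPUImage classes mentioned above; eaglContext, width, height, x, and y are assumed to exist, and error handling is omitted):
#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

// Create a texture cache tied to the EAGLContext that will render the image.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);

// An IOSurface-backed pixel buffer: rendering into its texture makes the
// pixels readable from the CPU without a glReadPixels round trip.
NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA,
                    (__bridge CFDictionaryRef)attrs, &pixelBuffer);

CVOpenGLESTextureRef texture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer,
                                             NULL, GL_TEXTURE_2D, GL_RGBA,
                                             (GLsizei)width, (GLsizei)height,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

// ...attach CVOpenGLESTextureGetName(texture) to a framebuffer and draw the image into it...

// Afterwards the BGRA bytes are directly addressable:
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
uint8_t *pixel = base + y * bytesPerRow + x * 4;   // pixel[0..3] = B, G, R, A
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);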
I have yet to find a way to get access to the drawn (in frame buffer) pixels. The fastest method I've measured is:
Indicate you want the image to be cached by specifying kCGImageSourceShouldCache when creating it.
(optional) Precache the image by forcing it to render.
Draw the image into a 1x1 bitmap context.
The cost of this method is the cached bitmap, which may have a lifetime as long as the CGImage it is associated with. The code ends up looking something like this:
Create image w/ ShouldCache flag
NSDictionary *options = @{ (id)kCGImageSourceShouldCache: @(YES) };
CGImageSourceRef imageSource = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
CGImageRef cgimage = CGImageSourceCreateImageAtIndex(imageSource, 0, (__bridge CFDictionaryRef)options);
UIImage *image = [UIImage imageWithCGImage:cgimage];
CGImageRelease(cgimage);
CFRelease(imageSource);
Precache image
UIGraphicsBeginImageContext(CGSizeMake(1, 1));
[image drawAtPoint:CGPointZero];
UIGraphicsEndImageContext();
Draw image to a 1x1 bitmap context
unsigned char pixelData[] = { 0, 0, 0, 0 };
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixelData, 1, 1, 8, 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef cgimage = image.CGImage;
int imageWidth = CGImageGetWidth(cgimage);
int imageHeight = CGImageGetHeight(cgimage);
CGContextDrawImage(context, CGRectMake(-testPoint.x, testPoint.y - imageHeight, imageWidth, imageHeight), cgimage);
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
pixelData has the R, G, B, and A values of the pixel at testPoint.
A CGImage may well be nearly empty and contain no actual pixel data until you try to read the first pixel or draw it, so trying to speed up getting pixels out of the image might not get you anywhere: there is nothing to get yet.
Are you trying to read pixels from a PNG file? You could try going directly after the file, mmap'ing it, and decoding the PNG format yourself. It will still take a while to pull the data from storage.
- (BOOL)isWallPixel:(UIImage *)image :(int)x :(int)y {
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    const UInt8 *data = CFDataGetBytePtr(pixelData);
    size_t bytesPerRow = CGImageGetBytesPerRow(image.CGImage);
    size_t pixelInfo = (bytesPerRow * y) + (x * 4); // assumes 4 bytes per pixel (the image is an RGBA PNG)
    //UInt8 red = data[pixelInfo]; // If you need this info, enable it
    //UInt8 green = data[pixelInfo + 1]; // If you need this info, enable it
    //UInt8 blue = data[pixelInfo + 2]; // If you need this info, enable it
    UInt8 alpha = data[pixelInfo + 3]; // I need only this info for my maze game
    CFRelease(pixelData);
    //UIColor *color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f]; // The pixel color info
    return alpha != 0;
}

Converting Images from camera buffer iOS. Capture still image using AVFoundation

I'm using this well known sample code from Apple to convert camera buffer still images into UIImages.
-(UIImage*) getUIImageFromBuffer:(CMSampleBufferRef) imageSampleBuffer{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
if (imageBuffer==NULL) {
NSLog(#"No buffer");
}
// Lock the base address of the pixel buffer
if((CVPixelBufferLockBaseAddress(imageBuffer, 0))==kCVReturnSuccess){
NSLog(#"Buffer locked successfully");
}
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
NSLog(#"bytes per row %zu",bytesPerRow );
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
NSLog(#"width %zu",width);
size_t height = CVPixelBufferGetHeight(imageBuffer);
NSLog(#"height %zu",height);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image= [UIImage imageWithCGImage:quartzImage scale:SCALE_IMAGE_RATIO orientation:UIImageOrientationRight];
// Release the Quartz image
CGImageRelease(quartzImage);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
return image;
}
The problem is that the image you obtain is usually rotated 90°. Using +imageWithCGImage:scale:orientation: I'm able to rotate it, but before settling on that method I was trying to rotate and scale the image using CTM functions before passing it to a UIImage. The problem was that the CTM transformation didn't affect the image.
I'm asking myself why... Is it because I'm locking the buffer? Or because the context is created with the image data already inside it, so the changes only affect further drawing?
Thank you
The answer is that the CTM affects only further drawing; it has nothing to do with the buffer locking.
As you can read in this answer, modifications to the context are applied at draw time: Vertical flip of CGContext
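For reference, a minimal sketch of that ordering, reusing width, height, and quartzImage from the code above (flipped and flippedImage are illustrative names; the same pattern applies to a rotation instead of a flip):
// Draw the captured frame into a new bitmap context, flipping it vertically.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef flipped = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                             kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);

// Set the CTM first: it only affects drawing that happens after it.
CGContextTranslateCTM(flipped, 0, height);
CGContextScaleCTM(flipped, 1.0, -1.0);
CGContextDrawImage(flipped, CGRectMake(0, 0, width, height), quartzImage);

CGImageRef flippedImage = CGBitmapContextCreateImage(flipped);
UIImage *result = [UIImage imageWithCGImage:flippedImage];
CGImageRelease(flippedImage);
CGContextRelease(flipped);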
