Looking for a simple pixel drawing method in iOS (iPhone, iPad)

I have a simple drawing issue. I have prepared a two-dimensional array which holds an animated wave motion. The array is updated every 1/10th of a second (this interval can be changed by the user). After the array is updated I want to display it as a two-dimensional image, with each array value drawn as a pixel with a color value in the range 0 to 255.
Any pointers on how to do this most efficiently...
Appreciate any help on this...
KAS

If it's just greyscale then the following (coded as I type, so probably worth checking for errors) should work:
CGDataProviderRef dataProvider =
    CGDataProviderCreateWithData(NULL, pointerToYourData, width * height, NULL);

CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceGray();

CGImageRef inputImage = CGImageCreate(width, height,
                                      8, 8, width,
                                      colourSpace,
                                      kCGBitmapByteOrderDefault,
                                      dataProvider,
                                      NULL, NO,
                                      kCGRenderingIntentDefault);

CGDataProviderRelease(dataProvider);
CGColorSpaceRelease(colourSpace);

UIImage *image = [UIImage imageWithCGImage:inputImage];
CGImageRelease(inputImage);

someImageView.image = image;
That'd be for a one-shot display, assuming you didn't want to write a custom UIView subclass (which is probably worth the effort only if performance becomes a problem).
My understanding from the docs is that the data provider can be created just once for the lifetime of your C buffer. I don't think that's true of the image, but if you created a CGBitmapContext to wrap your buffer rather than a provider and an image, that would safely persist and you could use CGBitmapContextCreateImage to get a CGImageRef to be moving on with. It's probably worth benchmarking both ways around if it's an issue.
EDIT: so the alternative way around would be:
// get a context from your C buffer; this is now something
// CoreGraphics could draw to...
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceGray();
CGContextRef context =
    CGBitmapContextCreate(pointerToYourData,
                          width, height,
                          8, width,
                          colourSpace,
                          kCGBitmapByteOrderDefault);
CGColorSpaceRelease(colourSpace);

// get an image of the context, which is something
// CoreGraphics can draw from...
CGImageRef image = CGBitmapContextCreateImage(context);

/* wrap in a UIImage, push to a UIImageView, as before; remember
   to clean up 'image' */
CoreGraphics copies things about very lazily, so neither of these solutions should be as costly as the multiple steps imply.
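To tie this to the 1/10th-of-a-second update in the question, here's a minimal sketch of the redraw loop. The names waveBuffer, updateWave and imageFromWaveBuffer are illustrative assumptions, with imageFromWaveBuffer standing in for the CGDataProvider/CGImageCreate snippet above:

// Illustrative sketch only: drive the redraw from a timer.
- (void)startAnimating
{
    [NSTimer scheduledTimerWithTimeInterval:0.1  // the user-adjustable interval
                                     target:self
                                   selector:@selector(redrawWave:)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)redrawWave:(NSTimer *)timer
{
    updateWave(waveBuffer, width, height);            // advance the simulation (assumed helper)
    someImageView.image = [self imageFromWaveBuffer]; // rebuild the UIImage as shown above
}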


Is there a way to read data from CGImage without internal caching?

I am fighting with internal caching (about 90 MB for a 15-megapixel image) in the CGContextDrawImage/CGDataProviderCopyData functions.
According to the profiler's stack trace, in all cases an IOSurface is created as a "cache" and isn't cleaned up after the @autoreleasepool is drained. This leaves the app very little chance of surviving.
Caching doesn't depend on image size: I tried to render 512x512, as well as 4500x512 and 4500x2500 (full-size) image chunks.
I use @autoreleasepool, and CFGetRetainCount returns 1 for all CG objects before I release them.
The code which manipulates the data:
+ (void)render11:(CIImage *)ciImage fromRect:(CGRect)roi toBitmap:(unsigned char *)bitmap
{
    @autoreleasepool
    {
        int w = CGRectGetWidth(roi), h = CGRectGetHeight(roi);

        CIContext *ciContext = [CIContext contextWithOptions:nil];
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef cgContext = CGBitmapContextCreate(bitmap, w, h,
                                                       8, w * 4, colorSpace,
                                                       kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGImageRef cgImage = [ciContext createCGImage:ciImage
                                             fromRect:roi
                                               format:kCIFormatRGBA8
                                           colorSpace:colorSpace
                                             deferred:YES];
        CGContextDrawImage(cgContext, CGRectMake(0, 0, w, h), cgImage);

        assert(CFGetRetainCount(cgImage) == 1);
        CGColorSpaceRelease(colorSpace);
        CGContextRelease(cgContext);
        CGImageRelease(cgImage);
    }
}
What I know about IOSurface: it comes from the formerly private IOSurface framework.
CIContext has a render: ... toIOSurface: method.
I've created my own IOSurfaceRef and passed it to this function, but the internal implementation still creates its own surface and doesn't clean it up.
So, do you know (or can you guess):
1. Are there other ways to read a CGImage's data buffer besides CGContextDrawImage/CGDataProviderCopyData?
2. Is there a way to disable caching at render time?
3. Why does the caching happen?
4. Can I use some lower-level (but non-private) API to manually clean up system memory?
Any suggestions are welcome.
To answer your second question,
Is there a way to disable caching at render?
setting the environment variable CI_SURFACE_CACHE_CAPACITY to 0 will more-or-less disable the CIContext surface cache. Moreover, you can specify a custom (approximate) cache limit by setting that variable to a given value in bytes. For example, setting CI_SURFACE_CACHE_CAPACITY to 2147483648 specifies a 2 GiB surface cache limit.
Note it appears that all of a process's CIContext instances share a single surface cache. It does not appear to be possible to use separate caches per CIContext.
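As a minimal sketch (an assumption on my part: the variable must be set before the first CIContext is created, e.g. at the top of main(); setting it in the Xcode scheme's environment variables works too):

#include <stdlib.h>

// Disable the surface cache entirely...
setenv("CI_SURFACE_CACHE_CAPACITY", "0", 1);
// ...or cap it instead, e.g. at 256 MiB:
// setenv("CI_SURFACE_CACHE_CAPACITY", "268435456", 1);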
If you just need to manipulate CIImage data, consider using CIImageProcessorKernel to feed the data into a CPU or GPU computation without extracting it.
I noticed that with
[ciContext render:image
         toBitmap:bitmap
         rowBytes:w * 4
           bounds:image.extent
           format:kCIFormatRGBA8
       colorSpace:colorSpace];
there is no such 90 MB cache. Maybe that's what you want.

Getting pixel data from CGImageRef contains extra bytes?

I'm looking at optimizing a routine that fetches the pixel data from a CGImage. The way this is currently done (very inefficiently) is to create a new CGContext, draw the CGImage into the context, and then get the data from the context.
I have the following optimized routine to handle this:
CGImageRef imageRef = image.CGImage;
uint8_t *pixelData = NULL;
CGDataProviderRef imageDataProvider = CGImageGetDataProvider(imageRef);
CFDataRef imageData = CGDataProviderCopyData(imageDataProvider);
pixelData = (uint8_t *)malloc(CFDataGetLength(imageData));
CFDataGetBytes(imageData, CFRangeMake(0, CFDataGetLength(imageData)), pixelData);
CFRelease(imageData);
This almost works. After viewing and comparing hex dumps of the pixel data obtained through both methods, I found that in the above case there are 8 bytes of 0's every 6360 bytes; otherwise, the data is identical. After the 8 bytes of 0's, the correct pixel data continues. (The hex dump screenshots comparing the two versions are omitted here.) Does anyone know why this is happening?
UPDATE:
Here is the routine I am optimizing (the snipped code just gets size info and other unimportant things; the relevant part is the pixel data returned):
CGContextRef context = NULL;
CGImageRef imageRef = image.CGImage;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst;
// ... SNIP ...
context = CGBitmapContextCreate(...);
CGColorSpaceRelease(colorSpace);
// ... SNIP ...
CGContextDrawImage(context, rect, imageRef);
uint8_t *pixelData = (uint8_t *)CGBitmapContextGetData(context);
CGContextRelease(context);
Obviously this is an excessive amount of work just to get the underlying pixel data: creating a context, then drawing into it. The first routine is between 5 and 10 times as fast. But as I pointed out, the pixel data returned by both routines is almost identical, except for the insertion of the 8 zero bytes every 6360 bytes in the optimized version (highlighted in the omitted hex dumps). Otherwise, everything else is the same: color values, byte order, etc.
The bitmap data has padding at the end of each row of pixels, to round the number of bytes per row up to a larger value. (In this case, a multiple of 16 bytes.)
This padding is added to make it faster to process and draw the image.
You should use CGImageGetBytesPerRow() to find out how many bytes each row takes. Don't assume that it's the same as CGImageGetWidth() * CGImageGetBitsPerPixel() / 8; the bytes per row may be larger.
Keep in mind that the data behind an arbitrary CGImage may not be in the format that you expect. You cannot assume that all images are 32-bit-per-pixel ARGB with no padding. You should either use the CG functions to figure out what format the data might be, or redraw the image into a bitmap context that's in the exact format you expect. The latter is typically much easier -- let CG do the conversions for you.
(You don't show what parameters you're passing to CGBitmapContextCreate. Are you calculating an exact bytesPerRow or are you passing in 0? If you pass in 0, CG may add padding for you, and you may find that drawing into the context is faster.)
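For completeness, here's a minimal sketch of copying the pixels out while stripping the per-row padding (assuming you've already confirmed the pixel format with the inspection calls above; the variable names are illustrative):

size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);        // may include padding
size_t bytesPerPixel = CGImageGetBitsPerPixel(imageRef) / 8;
size_t packedRowBytes = width * bytesPerPixel;               // row length without padding

CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
const uint8_t *src = CFDataGetBytePtr(imageData);
uint8_t *dst = malloc(packedRowBytes * height);
for (size_t y = 0; y < height; y++) {
    // copy only the meaningful bytes of each row, skipping the tail padding
    memcpy(dst + y * packedRowBytes, src + y * bytesPerRow, packedRowBytes);
}
CFRelease(imageData);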

How to release CGImageRef if required to return it?

I have a method to resize a CGImageRef and return a CGImageRef. The issue is in the last few lines, where I need to somehow release it but still return it afterwards. Any ideas? Thanks
-(CGImageRef)resizeImage:(CGImageRef *)anImage width:(CGFloat)width height:(CGFloat)height
{
    CGImageRef imageRef = *anImage;
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;

    CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height,
                                                CGImageGetBitsPerComponent(imageRef), 4 * width,
                                                CGImageGetColorSpace(imageRef), alphaInfo);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);

    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    CGContextRelease(bitmap);
    CGImageRelease(ref); // issue here
    return ref;
}
The Cocoa memory management naming policy states that you own an object created from methods whose names begin with alloc, copy or new.
These rules are also respected by the Clang Static Analyzer.
Note that there are slightly different conventions for Core Foundation. Details can be found in Apple's Advanced Memory Management Programming Guide.
I modified your method above to conform to those naming conventions. I also removed the asterisk when passing in anImage, as a CGImageRef is already a pointer. (Or was this on purpose?)
Note that you own the returned CGImage and have to CGImageRelease it later.
-(CGImageRef)newResizedImageWithImage:(CGImageRef)anImage width:(CGFloat)width height:(CGFloat)height
{
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(anImage);
    if (alphaInfo == kCGImageAlphaNone)
    {
        alphaInfo = kCGImageAlphaNoneSkipLast;
    }

    CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height,
                                                CGImageGetBitsPerComponent(anImage), 4 * width,
                                                CGImageGetColorSpace(anImage), alphaInfo);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), anImage);

    CGImageRef image = CGBitmapContextCreateImage(bitmap);
    CGContextRelease(bitmap);
    return image;
}
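A hypothetical call site would then look like this (sketch only; the variable names are made up):

CGImageRef resized = [self newResizedImageWithImage:original width:100 height:100];
// ... use resized ...
CGImageRelease(resized); // the "new" prefix signals that the caller owns it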
You could also operate on the pointer anImage (after removing the asterisk, as @weichsel suggested) and return void.
Still, you should read your code and think about these questions:
Who owns anImage? (clearly not your method, as it does neither retain nor copy it)
What happens if it is released by the owner while you're in your method? (or other things that might happen to it while your code runs)
What happens to it after your method finishes? (aka: did you remember to release it in the calling code)
So, I would strongly encourage you not to mix Core Foundation, which works with functions, pointers and "classic" data structures, and Foundation, which works with objects and messages.
If you want to operate on CF structures, you should write a C function that does it. If you want to operate on Foundation objects, you should write (sub)classes with methods. If you want to mix both or provide a bridge, you should know exactly what you are doing and write wrapper classes that expose a Foundation API and handle all the CF stuff internally (thus leaving it to you when to release structures).
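A minimal sketch of that wrapper idea (the class name and API here are invented for illustration):

// MPImage owns its CGImageRef internally; callers never touch CF memory rules.
@interface MPImage : NSObject
- (instancetype)initWithCGImage:(CGImageRef)image;
@end

@implementation MPImage {
    CGImageRef _image;
}

- (instancetype)initWithCGImage:(CGImageRef)image
{
    if ((self = [super init])) {
        _image = CGImageRetain(image); // the wrapper takes its own reference
    }
    return self;
}

- (void)dealloc
{
    CGImageRelease(_image); // released exactly once, when the wrapper goes away
}

@end

Operations such as resizing then become ordinary methods that return new MPImage instances, and calling code never sees a CGImageRelease.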

Variable size of CGContext

I'm currently using UIGraphicsBeginImageContext(resultingImageSize); to create an image.
But when I call this function, I don't yet know the exact width of resultingImageSize.
Indeed, I have developed some video processing that consumes lots of memory, and I cannot process everything first and draw afterwards: I must draw during the video processing.
If I set, for example, UIGraphicsBeginImageContext(CGSizeMake(300, 400));, anything drawn beyond 400 is lost.
So is there a way to give a CGContext a variable size, or to resize a CGContext while consuming very little memory?
I found a solution: create a new, larger context each time a resize is needed. Here's the magic function:
void MPResizeContextWithNewSize(CGContextRef *c, CGSize s) {
    size_t bitsPerComponent = CGBitmapContextGetBitsPerComponent(*c);
    size_t numberOfComponents = CGBitmapContextGetBitsPerPixel(*c) / bitsPerComponent;
    CGContextRef newContext = CGBitmapContextCreate(NULL, s.width, s.height,
                                                    bitsPerComponent,
                                                    sizeof(UInt8) * s.width * numberOfComponents,
                                                    CGBitmapContextGetColorSpace(*c),
                                                    CGBitmapContextGetBitmapInfo(*c));

    // Copy the old context's content into the new one.
    CGImageRef im = CGBitmapContextCreateImage(*c);
    CGContextDrawImage(newContext,
                       CGRectMake(0, 0, CGBitmapContextGetWidth(*c), CGBitmapContextGetHeight(*c)),
                       im);
    CGImageRelease(im);

    CGContextRelease(*c);
    *c = newContext;
}
I wonder if it could be optimized, for example with memcpy, as suggested here. I tried that, but it makes my code crash.
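For reference, here's an untested sketch of the memcpy variant. My guess is that the crash comes from copying the whole buffer in one go even though the two contexts have different sizes (and possibly different bytes-per-row values); copying row by row avoids that:

#include <string.h>

static size_t minSize(size_t a, size_t b) { return a < b ? a : b; }

void MPResizeContextByCopyingRows(CGContextRef *c, CGSize s) {
    CGContextRef newContext = CGBitmapContextCreate(NULL, s.width, s.height,
                                                    CGBitmapContextGetBitsPerComponent(*c),
                                                    0, // let CG choose (and possibly pad) bytesPerRow
                                                    CGBitmapContextGetColorSpace(*c),
                                                    CGBitmapContextGetBitmapInfo(*c));
    const uint8_t *src = CGBitmapContextGetData(*c);
    uint8_t *dst = CGBitmapContextGetData(newContext);
    size_t srcRowBytes = CGBitmapContextGetBytesPerRow(*c);
    size_t dstRowBytes = CGBitmapContextGetBytesPerRow(newContext);
    size_t rows = minSize(CGBitmapContextGetHeight(*c), (size_t)s.height);
    size_t rowBytes = minSize(srcRowBytes, dstRowBytes);
    for (size_t y = 0; y < rows; y++) {
        memcpy(dst + y * dstRowBytes, src + y * srcRowBytes, rowBytes);
    }
    CGContextRelease(*c);
    *c = newContext;
}

One behavioral difference: row 0 in memory is the top scanline, so this anchors the old content at the top-left, whereas the CGContextDrawImage version anchors it at the bottom-left (Core Graphics' origin).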

iPad, SDK, CGBitmapContextCreate

Right now I'm working on an application that accepts a CGImageRef, processes the pixels of the image, and returns the processed CGImageRef. To check how it works, I simply wrote code that passes in the CGImageRef and expects the same image to be returned without being processed. But the problem is that I'm not getting back the exact image; the resulting image's colors are completely changed. If this is not the right way to do this, then please suggest a better way. Here is the code:
- (CGImageRef)invertImageColor:(CGImageRef)imageRef {
    CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
    UInt8 *m_PixelBuf = (UInt8 *)CFDataGetBytePtr(dataRef);

    // my editing code goes here, but for testing this part is omitted. I will add it later
    // when this issue is solved.

    CGContextRef ctx = CGBitmapContextCreate(m_PixelBuf,
                                             CGImageGetWidth(imageRef),
                                             CGImageGetHeight(imageRef),
                                             CGImageGetBitsPerComponent(imageRef),
                                             CGImageGetBytesPerRow(imageRef),
                                             CGImageGetColorSpace(imageRef),
                                             kCGImageAlphaPremultipliedFirst);
    CGImageRef newImageRef = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    CFRelease(dataRef); // note: the original code leaked this copied data
    return newImageRef;
}
You're assuming that the input image has its alpha premultiplied and stored before the color components. Don't assume that. Get the image's bitmap info and pass that to CGBitmapContextCreate.
Note that CGBitmapContext doesn't work with all possible pixel formats. If your input image is in a pixel format that CGBitmapContext doesn't like, you're just going to need to use a separate buffer and draw the input image into the context.
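As a sketch, the fix would look something like this (untested; it just swaps the hard-coded constant for the image's own bitmap info, reusing the names from the question's code):

CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef); // includes the alpha info
CGContextRef ctx = CGBitmapContextCreate(m_PixelBuf,
                                         CGImageGetWidth(imageRef),
                                         CGImageGetHeight(imageRef),
                                         CGImageGetBitsPerComponent(imageRef),
                                         CGImageGetBytesPerRow(imageRef),
                                         CGImageGetColorSpace(imageRef),
                                         bitmapInfo);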
