I'm working on an application that accepts a CGImageRef, processes the image's pixels, and returns the processed CGImageRef. To check that this works, I wrote code that simply passes the CGImageRef through and should return the same image without any processing. The problem is that I'm not getting back the exact image: the colors of the resulting image are completely changed. If this isn't the right approach, please suggest a better one. Here is the code:
- (CGImageRef)invertImageColor:(CGImageRef)imageRef {
    CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
    UInt8 *m_PixelBuf = (UInt8 *)CFDataGetBytePtr(dataRef);
    // My editing code goes here, but it is omitted for testing.
    // I will add it back once this issue is solved.
    CGContextRef ctx = CGBitmapContextCreate(m_PixelBuf,
                                             CGImageGetWidth(imageRef),
                                             CGImageGetHeight(imageRef),
                                             CGImageGetBitsPerComponent(imageRef),
                                             CGImageGetBytesPerRow(imageRef),
                                             CGImageGetColorSpace(imageRef),
                                             kCGImageAlphaPremultipliedFirst);
    CGImageRef newImageRef = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return newImageRef;
}
You're assuming that the input image has its alpha premultiplied and stored before the color components. Don't assume that. Get the image's bitmap info and pass that to CGBitmapContextCreate.
Note that CGBitmapContext doesn't work with all possible pixel formats. If your input image is in a pixel format that CGBitmapContext doesn't like, you're just going to need to use a separate buffer and draw the input image into the context.
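For illustration, here is a rough sketch of that suggestion applied to the method above (the fallback branch is my own addition, not part of the answer): reuse the source image's bitmap info, and if CGBitmapContextCreate rejects that format, draw the image into a context with a known-good format instead.
// Use the source image's own bitmap info rather than hard-coding kCGImageAlphaPremultipliedFirst.
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGContextRef ctx = CGBitmapContextCreate(m_PixelBuf,
                                         CGImageGetWidth(imageRef),
                                         CGImageGetHeight(imageRef),
                                         CGImageGetBitsPerComponent(imageRef),
                                         CGImageGetBytesPerRow(imageRef),
                                         CGImageGetColorSpace(imageRef),
                                         bitmapInfo);
if (ctx == NULL) {
    // The pixel format isn't supported by CGBitmapContext: create a context in a
    // format it does support and let CGContextDrawImage convert the image into it.
    // Note: the pixels to edit are then CGBitmapContextGetData(ctx), not m_PixelBuf.
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    ctx = CGBitmapContextCreate(NULL,
                                CGImageGetWidth(imageRef),
                                CGImageGetHeight(imageRef),
                                8,
                                0, // 0 lets Core Graphics pick the bytes per row
                                rgb,
                                (CGBitmapInfo)kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
    CGColorSpaceRelease(rgb);
    CGContextDrawImage(ctx,
                       CGRectMake(0, 0, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef)),
                       imageRef);
}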
Sorry for this question; I know there is a similar question, but I cannot get its answer to work. It's probably some dumb error on my side ;-)
I want to overlay two images with alpha on iOS. The images are taken from two videos, read by an AVAssetReader and stored in two CVPixelBuffers. I know that the alpha channel is not stored in the video, so I get it from a third file. All the data looks fine. The problem is the overlay: if I do it on-screen with [CIContext drawImage], everything is fine!
But if I do it off-screen, because the format of the video is not identical to the screen format, I can't get it to work:
1. drawImage does work, but only on-screen
2. render:toCVPixelBuffer works, but ignores Alpha
3. CGContextDrawImage seems to do nothing at all (not even an error message)
So can somebody give me an idea of what is wrong?
Init:
...(a lot of code before)
// Set up the color space and bitmap context
if (outputContext)
{
    CGContextRelease(outputContext);
    CGColorSpaceRelease(outputColorSpace);
}
outputColorSpace = CGColorSpaceCreateDeviceRGB();
outputContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                      videoFormatSize.width,
                                      videoFormatSize.height,
                                      8,
                                      CVPixelBufferGetBytesPerRow(pixelBuffer),
                                      outputColorSpace,
                                      (CGBitmapInfo)kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
...
(a lot of code after)
Drawing:
CIImage *backImageFromSample;
CGImageRef frontImageFromSample;
CVImageBufferRef nextImageBuffer = myPixelBufferArray[0];
CMSampleBufferRef sampleBuffer = NULL;
CMSampleTimingInfo timingInfo;

// Draw the frame
CGRect toRect;
toRect.origin.x = 0;
toRect.origin.y = 0;
toRect.size = videoFormatSize;

// Background image, always full size; this part seems to work
if (drawBack)
{
    CVPixelBufferLockBaseAddress(backImageBuffer, kCVPixelBufferLock_ReadOnly);
    backImageFromSample = [CIImage imageWithCVPixelBuffer:backImageBuffer];
    [coreImageContext render:backImageFromSample
             toCVPixelBuffer:nextImageBuffer
                      bounds:toRect
                  colorSpace:rgbSpace];
    CVPixelBufferUnlockBaseAddress(backImageBuffer, kCVPixelBufferLock_ReadOnly);
}
else
{
    [self clearBuffer:nextImageBuffer];
}

// Front image doesn't seem to do anything
if (drawFront)
{
    unsigned long int numBytes = CVPixelBufferGetBytesPerRow(frontImageBuffer) * CVPixelBufferGetHeight(frontImageBuffer);
    CVPixelBufferLockBaseAddress(frontImageBuffer, kCVPixelBufferLock_ReadOnly);
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                              CVPixelBufferGetBaseAddress(frontImageBuffer),
                                                              numBytes,
                                                              NULL);
    frontImageFromSample = CGImageCreate(CVPixelBufferGetWidth(frontImageBuffer),
                                         CVPixelBufferGetHeight(frontImageBuffer),
                                         8,
                                         32,
                                         CVPixelBufferGetBytesPerRow(frontImageBuffer),
                                         outputColorSpace,
                                         (CGBitmapInfo)kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst,
                                         provider,
                                         NULL,
                                         NO,
                                         kCGRenderingIntentDefault);
    CGContextDrawImage(outputContext, inrect, frontImageFromSample);
    CVPixelBufferUnlockBaseAddress(frontImageBuffer, kCVPixelBufferLock_ReadOnly);
    CGImageRelease(frontImageFromSample);
}
Any ideas, anyone?
So obviously I should stop asking questions on Stack Overflow; every time I do, I find the answer myself shortly afterwards, after hours of debugging. Sorry for that. The problem is in the initialisation: you can't call CVPixelBufferGetBaseAddress without locking the base address first O_o. The address comes back NULL, and that is apparently allowed, with the result that nothing is drawn. So the correct code is:
if (outputContext)
{
    CGContextRelease(outputContext);
    CGColorSpaceRelease(outputColorSpace);
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0); // lock flags are required; 0 means read/write
outputColorSpace = CGColorSpaceCreateDeviceRGB();
outputContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                      videoFormatSize.width,
                                      videoFormatSize.height,
                                      8,
                                      CVPixelBufferGetBytesPerRow(pixelBuffer),
                                      outputColorSpace,
                                      (CGBitmapInfo)kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
I'm looking at optimizing a routine that fetches the pixel data from a CGImage. The way this currently is done (very inefficiently) is to create a new CGContext, draw the CGImage into the context, and then get the data from the context.
I have the following optimized routine to handle this:
CGImageRef imageRef = image.CGImage;
uint8_t *pixelData = NULL;
CGDataProviderRef imageDataProvider = CGImageGetDataProvider(imageRef);
CFDataRef imageData = CGDataProviderCopyData(imageDataProvider);
pixelData = (uint8_t *)malloc(CFDataGetLength(imageData));
CFDataGetBytes(imageData, CFRangeMake(0, CFDataGetLength(imageData)), pixelData);
CFRelease(imageData);
This almost works. After viewing and comparing hex dumps of the pixel data obtained through both methods, I found that in the above case there are 8 bytes of zeros every 6360 bytes; otherwise, the data is identical (screenshots of the hex dumps from the optimized and unoptimized versions omitted here). After the 8 bytes of zeros, the correct pixel data continues. Does anyone know why this is happening?
UPDATE:
Here is the routine I am optimizing (the snipped code just gets size info and other unimportant things; the relevant bit is the pixel data returned):
CGContextRef context = NULL;
CGImageRef imageRef = image.CGImage;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst;
// ... SNIP ...
context = CGBitmapContextCreate(...);
CGColorSpaceRelease(colorSpace);
// ... SNIP ...
CGContextDrawImage(context, rect, imageRef);
uint8_t *pixelData = (uint8_t *)CGBitmapContextGetData(context);
CGContextRelease(context);
Obviously this is an excessive amount of work just to get the underlying pixel data: creating a context and then drawing into it. The first routine is between 5 and 10 times as fast. But as I pointed out, the pixel data returned by both routines is almost identical, except for the insertion of 8 zero bytes every 6360 bytes in the optimized version (the runs highlighted in the screenshots).
Otherwise, everything else is the same -- color values, byte order, etc.
The bitmap data has padding at the end of each row of pixels, to round the number of bytes per row up to a larger value. (In this case, a multiple of 16 bytes.)
This padding is added to make it faster to process and draw the image.
You should use CGImageGetBytesPerRow() to find out how many bytes each row takes. Don't assume that it's the same as CGImageGetWidth() * CGImageGetBitsPerPixel() / 8; the bytes per row may be larger.
Keep in mind that the data behind an arbitrary CGImage may not be in the format that you expect. You cannot assume that all images are 32-bit-per-pixel ARGB with no padding. You should either use the CG functions to figure out what format the data might be, or redraw the image into a bitmap context that's in the exact format you expect. The latter is typically much easier -- let CG do the conversions for you.
(You don't show what parameters you're passing to CGBitmapContextCreate. Are you calculating an exact bytesPerRow or are you passing in 0? If you pass in 0, CG may add padding for you, and you may find that drawing into the context is faster.)
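To make the padding point concrete, here is a hedged sketch (variable names are mine) of walking the copied pixel data using the image's actual row stride rather than assuming width * 4 bytes per row:
// bytesPerRow may be larger than width * bytesPerPixel because of row padding.
size_t width         = CGImageGetWidth(imageRef);
size_t height        = CGImageGetHeight(imageRef);
size_t bytesPerRow   = CGImageGetBytesPerRow(imageRef);
size_t bytesPerPixel = CGImageGetBitsPerPixel(imageRef) / 8;

for (size_t y = 0; y < height; y++) {
    const uint8_t *row = pixelData + y * bytesPerRow; // any padding sits past width * bytesPerPixel
    for (size_t x = 0; x < width; x++) {
        const uint8_t *pixel = row + x * bytesPerPixel;
        // ... interpret the components according to the image's bitmap info ...
    }
}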
I have a method that resizes a CGImageRef and returns a CGImageRef. The issue is in the last few lines, where I need to somehow release the image but still return it. Any ideas? Thanks.
- (CGImageRef)resizeImage:(CGImageRef *)anImage width:(CGFloat)width height:(CGFloat)height
{
    CGImageRef imageRef = *anImage;
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;

    CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height,
                                                CGImageGetBitsPerComponent(imageRef),
                                                4 * width,
                                                CGImageGetColorSpace(imageRef),
                                                alphaInfo);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    CGContextRelease(bitmap);
    CGImageRelease(ref); // issue here
    return ref;
}
The Cocoa memory management naming policy states that you own an object returned by a method whose name begins with alloc, copy, or new.
These rules are also respected by the Clang Static Analyzer.
Note that there are slightly different conventions for Core Foundation. Details can be found in Apple's Advanced Memory Management Programming Guide.
I modified your method above to conform to those naming conventions. I also removed the asterisk when passing in anImage, as CGImageRef is already a pointer. (Or was that on purpose?)
Note that you own the returned CGImage and have to CGImageRelease it later.
- (CGImageRef)newResizedImageWithImage:(CGImageRef)anImage width:(CGFloat)width height:(CGFloat)height
{
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(anImage);
    if (alphaInfo == kCGImageAlphaNone)
    {
        alphaInfo = kCGImageAlphaNoneSkipLast;
    }
    CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height,
                                                CGImageGetBitsPerComponent(anImage),
                                                4 * width,
                                                CGImageGetColorSpace(anImage),
                                                alphaInfo);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), anImage);
    CGImageRef image = CGBitmapContextCreateImage(bitmap);
    CGContextRelease(bitmap);
    return image;
}
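For completeness, a possible call site (the variable names are assumptions of mine), just to show the ownership rule in practice:
// The "new" prefix tells the caller (and the static analyzer) that it owns the result.
CGImageRef resized = [self newResizedImageWithImage:originalImage width:200 height:200];
UIImage *thumbnail = [UIImage imageWithCGImage:resized]; // UIImage keeps its own reference
CGImageRelease(resized); // balance the ownership taken from the new... method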
You could also operate on the pointer anImage (after removing the asterisk, as @weichsel suggested) and return void.
Still, you should look at your code and think about these questions:
Who owns anImage? (Clearly not your method, as it neither retains nor copies it.)
What happens if it is released by its owner while you're inside your method? (Or if other things happen to it while your code runs?)
What happens to it after your method finishes? (That is: did you remember to release it in the calling code?)
So I would strongly encourage you not to mix Core Foundation, which works with functions, pointers, and "classic" data structures, with Foundation, which works with objects and messages.
If you want to operate on CF structures, write a C function that does it. If you want to operate on Foundation objects, write (sub)classes with methods. If you want to mix the two or provide a bridge, you should know exactly what you are doing and write wrapper classes that expose a Foundation API and handle all the CF details internally (thus leaving it to the wrapper to decide when to release the underlying structures).
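As a rough illustration of that last suggestion (purely a sketch of mine, not code from the answer), such a wrapper could own the CGImageRef internally and release it when the object goes away, so calling code never touches CF memory management directly:
// Minimal Foundation-style wrapper around a CF/CG structure.
@interface ImageWrapper : NSObject
- (instancetype)initWithCGImage:(CGImageRef)image;
- (CGImageRef)CGImage;
@end

@implementation ImageWrapper {
    CGImageRef _image;
}

- (instancetype)initWithCGImage:(CGImageRef)image {
    if ((self = [super init])) {
        _image = CGImageRetain(image); // the wrapper takes its own reference
    }
    return self;
}

- (CGImageRef)CGImage {
    return _image;
}

- (void)dealloc {
    CGImageRelease(_image); // CF cleanup stays hidden inside the wrapper
    // [super dealloc]; // only needed under manual reference counting
}
@end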
I'm looking to use CATiledLayer to display a huge PNG image on an iOS device. For this to work, I need to split the larger image into tiles (at 100%, 50%, 25% and 12.5%) on the client (creating tiles at the server side is not an option).
I can see that there are libraries such as libjpeg-turbo that may work; however, these are for JPEGs, and I need to work with PNGs.
Does anyone know of a way to take a large PNG (~20 MB) and generate tiles from it on the device?
Any pointers or suggestions would be appreciated!
Thank you!
You can use the built-in Core Graphics CGDataProviderCreateWithFilename and CGImageCreateWithPNGDataProvider APIs to open the image, then create each of the tiles by doing something like:
const CGSize tileSize = CGSizeMake(256, 256);
const CGPoint tileOrigin = ...; // Calculate using current column, row, and tile size.
const CGRect tileFrame = CGRectMake(-tileOrigin.x, -tileOrigin.y, imageSize.width, imageSize.height);
UIGraphicsBeginImageContextWithOptions(tileSize, YES, 1);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextDrawImage(context, tileFrame, image.CGImage);
UIImage *tileImage = UIGraphicsGetImageFromCurrentImageContext();
[UIImagePNGRepresentation(tileImage) writeToFile:tilePath atomically:YES];
UIGraphicsEndImageContext();
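For context, here is a hedged sketch of how that snippet might be driven at full scale (the file path, loop bounds, and tile naming are my assumptions, not from the answer): open the PNG once, then loop over rows and columns, rendering one tile per iteration with the code above.
// Open the source PNG with the Core Graphics APIs mentioned above.
CGDataProviderRef pngProvider = CGDataProviderCreateWithFilename("/path/to/huge.png"); // hypothetical path
CGImageRef sourceImage = CGImageCreateWithPNGDataProvider(pngProvider, NULL, NO, kCGRenderingIntentDefault);
CGDataProviderRelease(pngProvider);

CGSize imageSize = CGSizeMake(CGImageGetWidth(sourceImage), CGImageGetHeight(sourceImage));
const CGSize tileSize = CGSizeMake(256, 256);
NSUInteger cols = (NSUInteger)ceil(imageSize.width  / tileSize.width);
NSUInteger rows = (NSUInteger)ceil(imageSize.height / tileSize.height);

for (NSUInteger row = 0; row < rows; row++) {
    for (NSUInteger col = 0; col < cols; col++) {
        CGPoint tileOrigin = CGPointMake(col * tileSize.width, row * tileSize.height);
        // ... render this tile with the UIGraphics code above (drawing sourceImage
        //     instead of image.CGImage) and write it out, e.g. as tile_<row>_<col>.png ...
    }
}
CGImageRelease(sourceImage);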
You may also want to look at the related sample projects (Large Image Downsizing, and PhotoScroller) referenced under the UIScrollView Class Reference.
I have a simple drawing issue. I have prepared a two-dimensional array that holds an animated wave motion. The array is updated every 1/10th of a second (this can be changed by the user). After the array is updated, I want to display it as a two-dimensional image, with each array value drawn as a pixel with an intensity in the range 0 to 255.
Any pointers on how to do this most efficiently...
Appreciate any help on this...
KAS
If it's just greyscale, then the following (coded as I type, so probably worth checking for errors) should work:
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, pointerToYourData, width * height, NULL);
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceGray();
CGImageRef inputImage = CGImageCreate(width, height,
                                      8, 8, width,
                                      colourSpace,
                                      kCGBitmapByteOrderDefault,
                                      dataProvider,
                                      NULL, NO,
                                      kCGRenderingIntentDefault);
CGDataProviderRelease(dataProvider);
CGColorSpaceRelease(colourSpace);

UIImage *image = [UIImage imageWithCGImage:inputImage];
CGImageRelease(inputImage);
someImageView.image = image;
That'd be for a one-shot display, assuming you didn't want to write a custom UIView subclass (which is worth the effort only if performance is a problem, probably).
My understanding from the docs is that the data provider can be created just once for the lifetime of your C buffer. I don't think that's true of the image, but if you created a CGBitmapContext to wrap your buffer rather than a provider and an image, that would safely persist and you could use CGBitmapContextCreateImage to get a CGImageRef to be moving on with. It's probably worth benchmarking both ways around if it's an issue.
EDIT: so the alternative way around would be:
// Get a context from your C buffer; this is now something
// Core Graphics can draw to...
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceGray();
CGContextRef context = CGBitmapContextCreate(pointerToYourData,
                                             width, height,
                                             8, width,
                                             colourSpace,
                                             kCGBitmapByteOrderDefault);
CGColorSpaceRelease(colourSpace);

// Get an image of the context, which is something
// Core Graphics can draw from...
CGImageRef image = CGBitmapContextCreateImage(context);

/* Wrap it in a UIImage and push it to a UIImageView as before; remember
   to clean up 'image'. */
Core Graphics copies things around very lazily, so neither of these solutions should be as costly as the multiple steps imply.
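If it helps, here is a rough sketch (method and variable names are mine, not from the answer) of driving the second approach on the 1/10th-second update from the question: the bitmap context wraps the C buffer once, and each tick just snapshots it into the image view.
- (void)startWaveUpdates {
    // Fire roughly every 1/10th of a second; adjust to taste.
    [NSTimer scheduledTimerWithTimeInterval:0.1
                                     target:self
                                   selector:@selector(refreshWaveView:)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)refreshWaveView:(NSTimer *)timer {
    [self updateWaveBuffer]; // hypothetical method that writes new values into pointerToYourData
    CGImageRef image = CGBitmapContextCreateImage(context); // snapshot of the buffer's current contents
    someImageView.image = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
}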