Create a sub-image from a context - iOS

Is there a way to create a CGImage corresponding to a rectangle inside a bitmap context?
What I am doing right now:
I am using CGBitmapContextCreateImage to create a CGImage from a context. Then, I use CGImageCreateWithImageInRect to extract that sub-image.
Anil

Try this:
static CGImageRef createImageWithSectionOfBitmapContext(CGContextRef bigContext,
    size_t x, size_t y, size_t width, size_t height)
{
    uint8_t *data = CGBitmapContextGetData(bigContext);
    size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bigContext);
    size_t bytesPerPixel = CGBitmapContextGetBitsPerPixel(bigContext) / 8;
    // Advance to the first pixel of the requested sub-rectangle.
    data += x * bytesPerPixel + y * bytesPerRow;
    // Reuse the big context's bytes-per-row as the stride so each row of the
    // small context lines up with the corresponding row of the big one.
    CGContextRef smallContext = CGBitmapContextCreate(data,
        width, height,
        CGBitmapContextGetBitsPerComponent(bigContext), bytesPerRow,
        CGBitmapContextGetColorSpace(bigContext),
        CGBitmapContextGetBitmapInfo(bigContext));
    CGImageRef image = CGBitmapContextCreateImage(smallContext);
    CGContextRelease(smallContext);
    return image;
}
or this:
static CGImageRef createImageWithSectionOfBitmapContext(CGContextRef bigContext,
    size_t x, size_t y, size_t width, size_t height)
{
    uint8_t *data = CGBitmapContextGetData(bigContext);
    size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bigContext);
    size_t bytesPerPixel = CGBitmapContextGetBitsPerPixel(bigContext) / 8;
    data += x * bytesPerPixel + y * bytesPerRow;
    // The provider wraps the context's memory directly (no copy is made),
    // so the context's buffer must outlive the returned image.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data,
        height * bytesPerRow, NULL);
    CGImageRef image = CGImageCreate(width, height,
        CGBitmapContextGetBitsPerComponent(bigContext),
        CGBitmapContextGetBitsPerPixel(bigContext),
        CGBitmapContextGetBytesPerRow(bigContext),
        CGBitmapContextGetColorSpace(bigContext),
        CGBitmapContextGetBitmapInfo(bigContext),
        provider, NULL, NO, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    return image;
}
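Either way, the function name starts with "create", so the caller owns the returned image and must release it. A quick usage sketch (assuming bigContext is an existing bitmap context):
CGImageRef sub = createImageWithSectionOfBitmapContext(bigContext, 10, 10, 100, 100);
// ... draw or save the sub-image ...
CGImageRelease(sub);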

You can create a cropped image as follows, as mentioned here. For example:
UIImage *image = ...; // original image
CGRect rect = ...;    // rect to crop to
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);
UIImage *img = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
You need to get the CGImage from the context to use the above code to crop it. You can use CGBitmapContextCreateImage, as mentioned in the question. Here is the documentation.
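Putting the two steps together, a sketch (context being your existing bitmap context):
CGImageRef fullImage = CGBitmapContextCreateImage(context);
CGImageRef croppedRef = CGImageCreateWithImageInRect(fullImage, rect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);
CGImageRelease(fullImage);
Since CGBitmapContextCreateImage copies the bitmap (possibly lazily), later drawing into the context won't affect the cropped image.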

You could create your CGBitmapContext with a buffer that you allocated, and create a CGImage from scratch using the same buffer. With the context and the image sharing a buffer, you can draw into the context and then create a CGImage with that section of the master image.
Note that if you draw into the same context afterward, the cropped image may actually pick up the changes (depending on just how much shared-referencing-instead-of-copying is going on internally). Depending on what you're doing, you may or may not find this desirable.
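For concreteness, a minimal sketch of that shared-buffer setup, assuming a 32-bit RGBA format (variable names here are illustrative, not from the original answer):
size_t bytesPerRow = width * 4;
void *buffer = calloc(height, bytesPerRow);
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
// Context and image both point at `buffer`; nothing is copied here.
CGContextRef ctx = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
                                         cs, kCGImageAlphaPremultipliedLast);
CGDataProviderRef dp = CGDataProviderCreateWithData(NULL, buffer,
                                                    height * bytesPerRow, NULL);
CGImageRef master = CGImageCreate(width, height, 8, 32, bytesPerRow, cs,
                                  kCGImageAlphaPremultipliedLast, dp, NULL,
                                  false, kCGRenderingIntentDefault);
CGDataProviderRelease(dp);
CGColorSpaceRelease(cs);
// Draw into ctx, then take sections of `master` with CGImageCreateWithImageInRect.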

Related

Create CVPixelBuffer with pixels data, but the final image is distorted

I get the pixels with an OpenGL ES call (glReadPixels) or another way, then create a CVPixelBuffer (with or without a CGImage) for video recording, but the final picture is distorted. When I test on an iPhone 5c, 5s and 6, it happens only on the iPhone 6.
Here is the code:
CGSize viewSize = self.glView.bounds.size;
NSInteger myDataLength = viewSize.width * viewSize.height * 4;

// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *)malloc(myDataLength);
glReadPixels(0, 0, viewSize.width, viewSize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
for (int y = 0; y < viewSize.height; y++)
{
    for (int x = 0; x < viewSize.width * 4; x++)
    {
        buffer2[(int)((viewSize.height - 1 - y) * viewSize.width * 4 + x)] = buffer[(int)(y * 4 * viewSize.width + x)];
    }
}
free(buffer);

// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * viewSize.width;
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

// make the cgimage
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGImageRef imageRef = CGImageCreate(viewSize.width, viewSize.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
//UIImage *photo = [UIImage imageWithCGImage:imageRef];

int width = CGImageGetWidth(imageRef);
int height = CGImageGetHeight(imageRef);

CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, _recorder.pixelBufferAdaptor.pixelBufferPool, &pixelBuffer);
NSAssert((status == kCVReturnSuccess && pixelBuffer != NULL), @"create pixel buffer failed.");

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pixelBuffer);
NSParameterAssert(pxdata != NULL);

CGContextRef context = CGBitmapContextCreate(pxdata,
                                             width,
                                             height,
                                             CGImageGetBitsPerComponent(imageRef),
                                             CGImageGetBytesPerRow(imageRef),
                                             colorSpaceRef,
                                             kCGImageAlphaPremultipliedLast);
NSParameterAssert(context);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

CGColorSpaceRelease(colorSpaceRef);
CGContextRelease(context);
CGImageRelease(imageRef);
free(buffer2);

//CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
// ...
CVPixelBufferRelease(pixelBuffer);
NOTE - this answer relates to the overall problem with the image and not to the specific code.
This sort of problem is usually caused by the 'stride', which relates to the memory layout used to hold the image when each row of pixels is not packed tightly together.
As an example, the source image may be 240 pixels wide, but the CVPixelBuffer may allocate 320 pixels for each row, where the first 240 pixels hold the image and the extra 80 pixels are padding.
In that case the width is 240 pixels and the stride is 320 pixels.
Strides usually mean you have to copy each row of pixels one at a time in a loop, as in the sketch below.
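A stride-aware row copy would look roughly like this (a sketch assuming a tightly packed 4-bytes-per-pixel source and a locked pixel buffer; the variable names are illustrative):
uint8_t *src = sourcePixels;                                      // tightly packed, width * 4 bytes per row
uint8_t *dst = CVPixelBufferGetBaseAddress(pixelBuffer);          // base address must be locked first
size_t srcBytesPerRow = width * 4;
size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer); // may include padding
for (size_t row = 0; row < height; row++) {
    // Copy only the meaningful bytes of each row; skip the destination padding.
    memcpy(dst + row * dstBytesPerRow, src + row * srcBytesPerRow, srcBytesPerRow);
}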
Use this size everywhere in the code; it rounds the width down to a multiple of 16 (so each row's byte count matches the pixel buffer's alignment) and scales the height to preserve the aspect ratio:
int width_16 = (int)yourImage.size.width - (int)yourImage.size.width % 16;
int height_ = (int)(yourImage.size.height / yourImage.size.width * width_16);
CGSize video_size_ = CGSizeMake(width_16, height_);
I had the same problem and I think the solution is the following:
Try changing CGImageGetBytesPerRow(imageRef) to CVPixelBufferGetBytesPerRow(pixelBuffer) in the CGBitmapContextCreate call. The reason is that your context is backed by the raw data of the pixel buffer you created, not by the CGImage you are drawing, and a CVPixelBuffer's bytes-per-row count may be greater than bytes per pixel * pixel buffer width.
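Applied to the question's code, the corrected call would look roughly like:
CGContextRef context = CGBitmapContextCreate(pxdata,
                                             width,
                                             height,
                                             CGImageGetBitsPerComponent(imageRef),
                                             CVPixelBufferGetBytesPerRow(pixelBuffer), // was CGImageGetBytesPerRow(imageRef)
                                             colorSpaceRef,
                                             kCGImageAlphaPremultipliedLast);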

Why is UIImage from imageWithCGImage breaking in arm64?

Below is some code that converts a UIImage to a CGImage, makes some changes to the CGImage, then converts it back to a UIImage.
This works if my architectures include armv6 and armv7 only. If I add arm64, the UIImage returned at the end is null.
There are some hard-coded numbers in the code, which makes me think they are the problem. But I am not sure how to programmatically determine these values.
Here's the code, minus some details in the middle:
CGImageRef imageRef = [anImage CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Raw data malloc'd
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context =
    CGBitmapContextCreate(rawData, width, height,
                          bitsPerComponent, bytesPerRow, colorSpace,
                          kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
...
// Change the alpha or color of pixels based on certain criteria
...
CGImageRef ref = CGBitmapContextCreateImage(context);
free(rawData);
CGContextRelease(context);
image = [UIImage imageWithCGImage:ref];
CFRelease(ref);
Any thoughts on what's happening here?
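This thread doesn't include an answer, but one plausible culprit (an assumption on my part, not a confirmed fix) is the free(rawData) call: CGBitmapContextCreateImage is documented to copy the bitmap lazily (copy-on-write) in some cases, so the CGImage may still reference rawData when it is freed. Passing NULL as the first argument lets Core Graphics manage the backing store itself, which sidesteps the issue:
CGContextRef context =
    CGBitmapContextCreate(NULL, width, height,
                          bitsPerComponent, bytesPerRow, colorSpace,
                          kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
// ... draw, then read/modify pixels through CGBitmapContextGetData(context) ...
CGImageRef ref = CGBitmapContextCreateImage(context);
image = [UIImage imageWithCGImage:ref];
CFRelease(ref);
CGContextRelease(context); // the context releases its own buffer; no free() needed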

Freeing raw image data after creating a UIImage from it, corrupts image

I am taking a UIImage and breaking it down into raw pixel data like so:
CGImageRef imageRef = self.image.CGImage;
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
_rawData = (UInt8 *)calloc(height * width * 4, sizeof(UInt8));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(_rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
I then edit a couple of pixels in the _rawData array with different colors and re-create the UIImage from the edited _rawData like so (here, I am just changing the second pixel in the image to red):
size_t width = CGImageGetWidth(_image.CGImage);
NSUInteger pixel = 1; // second pixel
NSUInteger position = pixel*4;
NSUInteger redIndex = position;
NSUInteger greenIndex = position+1;
NSUInteger blueIndex = position+2;
NSUInteger alphaIndex = position+3;
_rawData[redIndex] = 255;
_rawData[greenIndex] = 0;
_rawData[blueIndex] = 0;
_rawData[alphaIndex] = 255;
size_t height = CGImageGetHeight(_image.CGImage);
size_t bitsPerComponent = 8;
size_t bitsPerPixel = 32;
size_t bytesPerRow = 4*width;
size_t length = height*width*4;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, _rawData, length, NULL);
CGImageRef newImageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpace, bitmapInfo, provider, NULL, NO, kCGRenderingIntentDefault);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
CGImageRelease(newImageRef);
My problem begins here: I now have a new UIImage with the second pixel changed to red, but I also have a memory leak. I need to free the _rawData that was calloc'd, but whenever I call
free(_rawData);
the image I just created is corrupted when I show it on screen, even though I call it after I've already created newImage. I thought CGImageCreate() would create a new object in memory, so that I could then free the old memory. Is that not true?
What in the world am I doing wrong?
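One pattern that avoids both the leak and the corruption (a sketch, grounded in the documented behavior that CGDataProviderCreateWithData does not copy the buffer, so the CGImage keeps reading from _rawData): hand ownership of the buffer to the provider via a release callback instead of calling free() yourself.
static void releaseRawData(void *info, const void *data, size_t size)
{
    free((void *)data); // called when the provider (and thus the image) is done with the bytes
}
// ...
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, _rawData, length, releaseRawData);
// From here on the provider owns _rawData; do not free() it manually.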

Why doesn't CGImage data survive a round trip?

I have a freehand drawing view (users can draw lines with their finger). I only use a few colors, so I wrote a compression algorithm (I want to send the image over a local network to another iPad), but I can't seem to get the data out of the graphics context accurately, even with this simple test:
//Get the data
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0f);
CGContextRef c = UIGraphicsGetCurrentContext();
[self.layer renderInContext:c];
baseImageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRef imageRef = baseImageView.image.CGImage;
NSData *dataToUse = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
//Reuse the data
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)dataToUse);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef test = CGImageCreate(width, height, 8, 32, 4 * width, colorSpace,
                                kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                provider, NULL, false,
                                kCGRenderingIntentDefault); // I get width and height from another part of the program
imageView.image = [UIImage imageWithCGImage:test];
I simply copied out the data from one CGImage and tried to insert it into another. However, the result is garbage and not only that, for some reason it comes out as BGRA when I copy the data, but CGImageCreate wants RGBA. Where am I going wrong with this round-trip test?
Looks like the answer is that it's not enough to just get the data provider. You need to actually render the image into a bitmap context and take the data from there. Revised way:
//Get the data
CGImageRef imageRef = baseImageView.image.CGImage;
size_t height = CGImageGetHeight(imageRef);
size_t width = CGImageGetWidth(imageRef);
size_t bufferLength = width * height * 4;
unsigned char *rawData = (unsigned char *)malloc(bufferLength);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawData, width, height, 8,
                                             4 * width, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
NSData *dataToUse = [NSData dataWithBytes:rawData length:bufferLength];
//Later free(rawData);
Using the data is still the same.
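That is, the re-creation side can stay essentially as before; the difference is that the format of dataToUse is now known, because you chose it when creating the bitmap context. A sketch mirroring the format used above:
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)dataToUse);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef test = CGImageCreate(width, height, 8, 32, 4 * width, colorSpace,
                                kCGImageAlphaPremultipliedLast, provider, NULL,
                                false, kCGRenderingIntentDefault);
imageView.image = [UIImage imageWithCGImage:test];
CGImageRelease(test);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);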

how to perform calculations on image raw data with CoreGraphics?

I'm trying to create specific custom filter effects on images for iOS. So far, I've been getting the raw data using CGBitmapContextCreate, but I don't really know how to modify my rawData: I want to perform calculations on it and affect the image pixel by pixel, but I have no idea how to manipulate it.
I also don't know how to draw my bitmap context back into a UIImage, so I can render the finished product in a UIImageView.
Could somebody give me some pointers to how I might be able to achieve that?
Here's my code so far:
// First get the image into your data buffer
CGImageRef imageRef = imageView.image.CGImage;
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = calloc(height * width * 4, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
//perform calculations on rawData? or context? not sure!! i hope to effect pixel by pixel.
//am i doing this correctly?
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//set the imageview with the new image
[imageView setImage:newImage];
CGContextRelease(context);
You are almost there.
You can perform your operations on the RGB and alpha channels via (the buffer holds width * height * 4 bytes, one 4-byte RGBA pixel at a time):
for (NSUInteger i = 0; i < width * height * 4; i += 4) {
    rawData[i + 0] = ... // red channel
    rawData[i + 1] = ... // green channel
    rawData[i + 2] = ... // blue channel
    rawData[i + 3] = ... // alpha channel
}
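To get the result back into a UIImage, you don't need UIGraphicsGetImageFromCurrentImageContext (there is no UIKit image context in the question's code); instead, create the image from the bitmap context, which is backed by rawData and therefore sees your edits. A sketch using the question's variables:
CGImageRef newImageRef = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
CGImageRelease(newImageRef);
CGContextRelease(context);
[imageView setImage:newImage];
// free(rawData) only once the image is no longer in use: the copy made by
// CGBitmapContextCreateImage can be lazy (see the threads above).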
