I'm trying to make an Adobe Native Extension (ANE) H.264 file encoder for iOS. I have the encoder part working: it runs fine from an Xcode test project. The problem is that when I try to run it from the ANE file, it doesn't work.
My code to add frames converts a BitmapData into a CGImage:
// Convert the first argument into a FREBitmapData
FREObject objectBitmapData = argv[0];
FREBitmapData bitmapData;
FREAcquireBitmapData( objectBitmapData, &bitmapData );
CGImageRef theImage = getCGImageRefFromBitmapData(bitmapData);
[videoRecorder addFrame:theImage];
In this case the CGImageRef has data, but when I try to open the video, it only shows a black screen.
When I test it from an Xcode project it also saves a black-screen video, but if I create the CGImage from a UIImage file, then modify this CGImage and pass it to addFrame, it works fine.
My guess is that the CGImageRef theImage is not being created correctly.
The code I'm using to create the CGImageRef is this: https://stackoverflow.com/a/8528969/800836
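For reference, that conversion follows the usual CGImageCreate pattern, roughly like this (a sketch of the general approach, not the exact linked code; it assumes the FREBitmapData fields from the AIR C API):
CGImageRef getCGImageRefFromBitmapData(FREBitmapData bitmapData) {
    // lineStride32 is counted in 32-bit values, so multiply by 4 to get bytes per row.
    size_t bytesPerRow = bitmapData.lineStride32 * 4;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bitmapData.bits32,
                                                              bytesPerRow * bitmapData.height, NULL);
    CGImageRef image = CGImageCreate(bitmapData.width, bitmapData.height, 8, 32, bytesPerRow,
                                     colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                     provider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return image;
}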
Why does the CGImage not work when it is created using CGImageCreate?
Thanks!
In case someone has the same problem, my solution was to create a bitmap context with 0 bytes per row, letting Core Graphics pick the row stride:
CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, 1024, 768, 8, /*bytes per row*/0, colorSpace, bitmapInfo);
// create image from context
CGImageRef tmpImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
Then I copy the pixel data into this tmpImage and create a new image based on it:
CGImageRef getCGImageRefFromRawData(FREBitmapData bitmapData) {
    CGImageRef abgrImageRef = tmpImage;
    // Copy out the template image's pixels so we can rewrite them.
    CFDataRef abgrData = CGDataProviderCopyData(CGImageGetDataProvider(abgrImageRef));
    // Cast away const: the data was just copied, so mutating it in place works here.
    UInt8 *pixelData = (UInt8 *)CFDataGetBytePtr(abgrData);
    CFIndex length = CFDataGetLength(abgrData);

    // Copy the FREBitmapData pixels into the buffer, one byte at a time.
    uint32_t *input = bitmapData.bits32;
    int index2 = 0;
    for (CFIndex index = 0; index < length; index += 4) {
        pixelData[index]     = (input[index2] >> 0)  & 0xFF;
        pixelData[index + 1] = (input[index2] >> 8)  & 0xFF;
        pixelData[index + 2] = (input[index2] >> 16) & 0xFF;
        pixelData[index + 3] = (input[index2] >> 24) & 0xFF;
        index2++;
    }

    // Grab the bitmap parameters from the template image.
    size_t width = CGImageGetWidth(abgrImageRef);
    size_t height = CGImageGetHeight(abgrImageRef);
    size_t bitsPerComponent = CGImageGetBitsPerComponent(abgrImageRef);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(abgrImageRef);
    size_t bytesPerRow = CGImageGetBytesPerRow(abgrImageRef);
    CGColorSpaceRef colorspace = CGImageGetColorSpace(abgrImageRef);
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(abgrImageRef);

    // Create the ARGB image from the rewritten pixels.
    CFDataRef argbData = CFDataCreate(NULL, pixelData, length);
    CGDataProviderRef provider = CGDataProviderCreateWithCFData(argbData);
    CGImageRef argbImageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel,
                                            bytesPerRow, colorspace, bitmapInfo, provider,
                                            NULL, true, kCGRenderingIntentDefault);

    // Release what we can; the new image retains what it needs.
    CFRelease(abgrData);
    CFRelease(argbData);
    CGDataProviderRelease(provider);
    return argbImageRef;
}
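A usage sketch (assuming the videoRecorder from the question):
CGImageRef frame = getCGImageRefFromRawData(bitmapData);
[videoRecorder addFrame:frame];
CGImageRelease(frame); // the function returns a +1 reference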
Related
I get pixels via OpenGL ES (glReadPixels) or another way, then create a CVPixelBuffer (with or without a CGImage) for video recording, but the final picture is distorted. This happens on the iPhone 6 when I test across the iPhone 5c, 5s, and 6.
Here is the code:
CGSize viewSize=self.glView.bounds.size;
NSInteger myDataLength = viewSize.width * viewSize.height * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, viewSize.width, viewSize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for (int y = 0; y < viewSize.height; y++)
{
    for (int x = 0; x < viewSize.width * 4; x++)
    {
        buffer2[(int)((viewSize.height - 1 - y) * viewSize.width * 4 + x)] = buffer[(int)(y * 4 * viewSize.width + x)];
    }
}
free(buffer);
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * viewSize.width;
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGImageRef imageRef = CGImageCreate(viewSize.width , viewSize.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
//UIImage *photo = [UIImage imageWithCGImage:imageRef];
int width = CGImageGetWidth(imageRef);
int height = CGImageGetHeight(imageRef);
CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, _recorder.pixelBufferAdaptor.pixelBufferPool, &pixelBuffer);
NSAssert((status == kCVReturnSuccess && pixelBuffer != NULL), @"create pixel buffer failed.");
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pixelBuffer);
NSParameterAssert(pxdata != NULL);
CGContextRef context = CGBitmapContextCreate(pxdata,
width,
height,
CGImageGetBitsPerComponent(imageRef),
CGImageGetBytesPerRow(imageRef),
colorSpaceRef,
kCGImageAlphaPremultipliedLast);
NSParameterAssert(context);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CGColorSpaceRelease(colorSpaceRef);
CGContextRelease(context);
CGImageRelease(imageRef);
free(buffer2);
//CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
// ...
CVPixelBufferRelease(pixelBuffer);
NOTE: this answer relates to the overall problem with the image, not to the specific code.
This sort of problem is usually one of 'stride', and relates to the memory layout used to hold the image, where each row of pixels is not packed tightly together.
As an example, the source image may be 240 pixels wide, while the CVPixelBuffer may allocate 320 pixels for each row, where the first 240 pixels hold the image and the extra 80 pixels are padding. In this case the width is 240 pixels and the stride is 320 pixels.
Strides usually mean you have to copy each row of pixels one at a time, in a loop.
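For illustration, a row-by-row copy that honors the destination stride might look like this (a sketch; srcPixels, width, and height are assumed inputs, and pixelBuffer is a 32-bit RGBA/BGRA CVPixelBuffer):
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *dst = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t dstStride = CVPixelBufferGetBytesPerRow(pixelBuffer); // may include padding
size_t srcStride = width * 4;                                // tightly packed source
for (size_t row = 0; row < height; row++) {
    // Copy only the meaningful bytes of each row, skipping the destination padding.
    memcpy(dst + row * dstStride, (uint8_t *)srcPixels + row * srcStride, srcStride);
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);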
Use this size everywhere in the code:
int width_16 = (int)yourImage.size.width - (int)yourImage.size.width%16;
int height_ = (int)(yourImage.size.height/yourImage.size.width * width_16) ;
CGSize video_size_ = CGSizeMake(width_16, height_);
I had the same problem and I think the solution is following:
Try changing CGImageGetBytesPerRow(imageRef) to CVPixelBufferGetBytesPerRow(pixelBuffer) in the CGBitmapContextCreate call. The reason is that your context is backed by the raw data of the pixel buffer you created, not the CGImage you are drawing. A CVPixelBuffer's bytes-per-row count may be greater than bytes per pixel * pixel buffer width.
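Applied to the code in the question, the call would look something like this (sketch):
CGContextRef context = CGBitmapContextCreate(pxdata,
                                             width,
                                             height,
                                             CGImageGetBitsPerComponent(imageRef),
                                             CVPixelBufferGetBytesPerRow(pixelBuffer), // was CGImageGetBytesPerRow(imageRef)
                                             colorSpaceRef,
                                             kCGImageAlphaPremultipliedLast);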
ios-converting-uiimage-to-rgba8-bitmaps-and-back is a very good article I found that describes how to deal with bitmap buffers and UIImages. The article deals with RGBA32/RGBA8 bitmap images: the bitmap is created with a char * buffer of size width * height * 4, i.e., each pixel has 4 bytes of data, 1 byte each for red, green, blue, and alpha. When creating the bitmap image, the bitmap info kCGImageAlphaPremultipliedLast is given, and CGColorSpaceCreateDeviceRGB() is used for converting the bitmap buffer back into a UIImage. By changing the bitmap info we can also deal with RGBA 24 images.
I need to deal with RGBA 5551 bitmap images: red, green, and blue each get 5 bits to represent the respective color, and 1 bit stores the alpha value. If we are creating such a bitmap, how can we allocate the buffer for a char * bitmap? Is it possible to convert it into a UIImage data type? Any help will be appreciated.
Here BITS_PER_COMPONENT is 5 and BITS_PER_PIXEL is 16, and kCGBitmapByteOrder16Little indicates the byte order of the char buffer. This code has worked successfully for me:
size_t bufferLength = width * height * 2;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
size_t bitsPerComponent = BITS_PER_COMPONENT;
size_t bitsPerPixel = BITS_PER_PIXEL;
size_t bytesPerRow = 2 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
if (colorSpaceRef == NULL) {
    NSLog(@"Error allocating color space");
    CGDataProviderRelease(provider);
    return nil;
}
CGBitmapInfo bitmapInfo = kCGBitmapByteOrder16Little | kCGImageAlphaNoneSkipFirst;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef iref = CGImageCreate(width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorSpaceRef,
bitmapInfo,
provider, // data provider
NULL, // decode
YES, // should interpolate
renderingIntent);
uint32_t *pixels = (uint32_t *)malloc(bufferLength);
if (pixels == NULL) {
    NSLog(@"Error: Memory not allocated for bitmap");
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(iref);
    return nil;
}
CGContextRef context = CGBitmapContextCreate(pixels,
width,
height,
bitsPerComponent,
bytesPerRow,
colorSpaceRef,
bitmapInfo);
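To answer the buffer-allocation part of the question: with this layout each pixel is a single 16-bit value, so the buffer needs width * height * 2 bytes. As an illustration only, packing 8-bit r, g, b values into that layout might look like this (a sketch assuming the kCGImageAlphaNoneSkipFirst info above, where the top bit is skipped):
uint16_t *pixels5551 = (uint16_t *)malloc(width * height * sizeof(uint16_t));
// Keep only the top 5 bits of each 8-bit channel; the skipped bit stays 0.
pixels5551[y * width + x] = (uint16_t)(((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3));
The resulting context can then be turned back into a UIImage with CGBitmapContextCreateImage() and +[UIImage imageWithCGImage:].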
You need to create a 16-bit image, so provide bpp as 16 and bpc as 5; the code is below:
size_t width = CGImageGetWidth(screenShotImageRef);
size_t height = CGImageGetHeight(screenShotImageRef);
size_t bytesPerRow = width * (bpc == 5 ? 2 : 4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext = CGBitmapContextCreate(buffer /* the buffer you want the image written into */, width, height, bpc, bytesPerRow, colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrderDefault);
CGContextDrawImage(bitmapContext, CGRectMake(0, 0, width, height), screenShotImageRef);
CGContextRelease(bitmapContext);
CGColorSpaceRelease(colorSpace);
I am taking a UIImage and breaking it down into its raw pixel data like so:
CGImageRef imageRef = self.image.CGImage;
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
_rawData = (UInt8 *)calloc(height * width * 4, sizeof(UInt8));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(_rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
I then edit a couple of pixels in the _rawData array with different colors, and re-create the UIImage from the edited _rawData like so. (In this example, I am just changing the second pixel in the image to red.)
size_t width = CGImageGetWidth(_image.CGImage);
NSUInteger pixel = 1; // second pixel
NSUInteger position = pixel*4;
NSUInteger redIndex = position;
NSUInteger greenIndex = position+1;
NSUInteger blueIndex = position+2;
NSUInteger alphaIndex = position+3;
_rawData[redIndex] = 255;
_rawData[greenIndex] = 0;
_rawData[blueIndex] = 0;
_rawData[alphaIndex] = 255;
size_t height = CGImageGetHeight(_image.CGImage);
size_t bitsPerComponent = 8;
size_t bitsPerPixel = 32;
size_t bytesPerRow = 4*width;
size_t length = height*width*4;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, _rawData, length, NULL);
CGImageRef newImageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpace, bitmapInfo, provider, NULL, NO, kCGRenderingIntentDefault);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
CGImageRelease(newImageRef);
My problem begins here: I now have a new UIImage with the second pixel changed to red, but I have a memory leak. I need to free the _rawData that was calloc'd, but whenever I call
free(_rawData);
even after I've already created my "newImage", the image I just created is corrupted when I show it on screen. I thought CGImageCreate() would create a new object in memory, so that I could free the old memory. Is that not true?
What in the world am I doing wrong?
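For what it's worth, the usual cause here is that CGDataProviderCreateWithData() does not copy the bytes: the provider, and therefore the CGImage and UIImage built on it, keeps referencing _rawData. A minimal sketch of one fix, handing ownership of the buffer to the provider via a release callback:
// Called by Core Graphics once the provider no longer needs the bytes.
static void releaseRawData(void *info, const void *data, size_t size) {
    free((void *)data);
}

CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, _rawData, length, releaseRawData);
// Do not call free(_rawData) yourself afterwards; the provider now owns the buffer.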
I have a PNG (complete with alpha channel) that I'm looking to composite onto a CGContextRef using CGContextDrawImage. I'd like the RGB channels to be composited, but I'd also like the source image's alpha channel to be copied over as well.
Ultimately I'll be passing the final CGContextRef (in the form of a CGImageRef) to GLKit where I'm hoping to manipulate the alpha channel for colour tinting purposes using a fragment shader.
Unfortunately, I'm running into issues when it comes to creating my texture atlas using Core Graphics. It appears that the final CGImageRef fails to copy over the alpha channel from my source image and is non-transparent. My current compositing code is below:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
UInt8 * m_PixelBuf = malloc(sizeof(UInt8) * atlasSize.height * atlasSize.width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * atlasSize.width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(m_PixelBuf,
                                             atlasSize.width,
                                             atlasSize.height,
                                             bitsPerComponent,
                                             bytesPerRow,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(context, CGRectMake(x, y, image.size.width, image.size.height), image.CGImage);
CGImageRef imgRef = CGBitmapContextCreateImage(context);
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
This is one of the most common issues with CGBitmapContextCreate: kCGImageAlphaPremultipliedFirst will set the alpha to 1 and premultiply the RGB values with the alpha value.
If you are using Xcode, please Command-click kCGImageAlphaPremultipliedFirst and find an appropriate replacement, such as kCGImageAlphaLast.
Here is an example of using alpha as the last channel:
+ (UIImage *)generateRadialGradient {
int size = 256;
uint8_t *buffer = malloc(size*size*4);
memset(buffer, 255, size*size*4);
for (int i = 0; i < size; i++) {
    for (int j = 0; j < size; j++) {
        float x = ((float)i/(float)size)*2.0f - 1.0f;
        float y = ((float)j/(float)size)*2.0f - 1.0f;
        float relativeRadius = x*x + y*y;
        if (relativeRadius >= 0.0 && relativeRadius < 1.0) {
            buffer[(i*size + j)*4 + 3] = (uint8_t)((1.0 - sqrt(relativeRadius))*255.0);
        } else {
            buffer[(i*size + j)*4 + 3] = 0;
        }
    }
}
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
buffer,
size*size*4,
NULL);
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4*size;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Big|kCGImageAlphaLast;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(size,
size,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorSpaceRef,
bitmapInfo,
provider,
NULL,
NO,
renderingIntent);
UIImage *result = [UIImage imageWithCGImage:imageRef];
// Release what we created; the UIImage retains what it needs.
// Note: `buffer` must outlive the image, because the provider references it
// without copying (no release callback was given).
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
return result;
}
So this code creates a radial gradient programmatically. The inner part is fully opaque, and it becomes transparent further from the center.
We could also use kCGImageAlphaFirst, which results in a yellowish gradient: alpha stays at 1 and only the first (red) channel is decreased. The result is white in the middle, and as the red channel decreases, the yellow color starts showing.
I was wondering: is there a way to create a CGImage corresponding to a rectangle inside the context?
What I am doing right now:
I am using CGBitmapContextCreateImage to create a CGImage from a context. Then, I use CGImageCreateWithImageInRect to extract that sub-image.
Anil
Try this:
static CGImageRef createImageWithSectionOfBitmapContext(CGContextRef bigContext,
size_t x, size_t y, size_t width, size_t height)
{
uint8_t *data = CGBitmapContextGetData(bigContext);
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bigContext);
size_t bytesPerPixel = CGBitmapContextGetBitsPerPixel(bigContext) / 8;
data += x * bytesPerPixel + y * bytesPerRow; // advance to the subrectangle's top-left pixel
CGContextRef smallContext = CGBitmapContextCreate(data,
width, height,
CGBitmapContextGetBitsPerComponent(bigContext), bytesPerRow,
CGBitmapContextGetColorSpace(bigContext),
CGBitmapContextGetBitmapInfo(bigContext));
CGImageRef image = CGBitmapContextCreateImage(smallContext);
CGContextRelease(smallContext);
return image;
}
or this:
static CGImageRef createImageWithSectionOfBitmapContext(CGContextRef bigContext,
size_t x, size_t y, size_t width, size_t height)
{
uint8_t *data = CGBitmapContextGetData(bigContext);
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bigContext);
size_t bytesPerPixel = CGBitmapContextGetBitsPerPixel(bigContext) / 8;
data += x * bytesPerPixel + y * bytesPerRow; // advance to the subrectangle's top-left pixel
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data,
height * bytesPerRow, NULL);
CGImageRef image = CGImageCreate(width, height,
CGBitmapContextGetBitsPerComponent(bigContext),
CGBitmapContextGetBitsPerPixel(bigContext),
CGBitmapContextGetBytesPerRow(bigContext),
CGBitmapContextGetColorSpace(bigContext),
CGBitmapContextGetBitmapInfo(bigContext),
provider, NULL, NO, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
return image;
}
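A usage sketch for either variant (bigContext is assumed to be an existing bitmap context):
// Grab a 100x100 section whose top-left corner is at (10, 10).
CGImageRef section = createImageWithSectionOfBitmapContext(bigContext, 10, 10, 100, 100);
// ... use the image ...
CGImageRelease(section);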
You can create a cropped image as follows, as mentioned here. For example:
UIImage *image = //original image
CGRect rect = //cropped rect
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);
UIImage *img = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
You need to get the CGImage from the context in order to use the above code to crop it. You can use CGBitmapContextCreateImage, as mentioned in the question. Here is the documentation.
You could create your CGBitmapContext with a buffer that you allocated, and create a CGImage from scratch using the same buffer. With the context and the image sharing a buffer, you can draw into the context and then create a CGImage with that section of the master image.
Note that if you draw into the same context afterward, the cropped image may actually pick up the changes (depending on just how much shared-referencing-instead-of-copying is going on internally). Depending on what you're doing, you may or may not find this desirable.
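A minimal sketch of that shared-buffer idea (all names and sizes here are assumptions):
size_t width = 512, height = 512, bytesPerRow = width * 4;
void *shared = calloc(height, bytesPerRow);
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
// The context and the master image are both backed by the same memory.
CGContextRef ctx = CGBitmapContextCreate(shared, width, height, 8, bytesPerRow,
                                         cs, kCGImageAlphaPremultipliedLast);
CGDataProviderRef dp = CGDataProviderCreateWithData(NULL, shared, height * bytesPerRow, NULL);
CGImageRef master = CGImageCreate(width, height, 8, 32, bytesPerRow, cs,
                                  (CGBitmapInfo)kCGImageAlphaPremultipliedLast, dp, NULL, NO,
                                  kCGRenderingIntentDefault);
// Draw into ctx, then crop; the sub-image sees the shared pixels.
CGImageRef cropped = CGImageCreateWithImageInRect(master, CGRectMake(10, 10, 100, 100));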