Rotate CMSampleBuffer/CVPixelBuffer - iOS

I am currently attempting to change the orientation of a CMSampleBuffer by first getting its CVPixelBuffer and then using vImageRotate90_ARGB8888 to rotate the buffer. The problem with my code is that vImageRotate90_ARGB8888 crashes immediately when it executes. I know there are existing answers (like this one or this one), but none of those solutions work in my case, and I really cannot find any error, or think of anything that would cause this behavior. My current code is below:
- (CVPixelBufferRef)rotateBuffer:(CMSampleBufferRef)sampleBuffer {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t currSize = bytesPerRow * height * sizeof(unsigned char);
    size_t bytesPerRowOut = 4 * height * sizeof(unsigned char); // rotated rows are 'height' pixels wide
    OSType pixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
    void *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);

    unsigned char *outPixelData = (unsigned char *)malloc(currSize);

    // vImage_Buffer fields are {data, height, width, rowBytes}; width and height swap for the destination.
    vImage_Buffer sourceBuffer = {baseAddress, height, width, bytesPerRow};
    vImage_Buffer destinationBuffer = {outPixelData, width, height, bytesPerRowOut};

    uint8_t rotation = kRotate90DegreesClockwise;
    Pixel_8888 bgColor = {0, 0, 0, 0};
    vImageRotate90_ARGB8888(&sourceBuffer, &destinationBuffer, rotation, bgColor, kvImageNoFlags); // Crash!

    CVPixelBufferRef rotatedBuffer = NULL;
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                 destinationBuffer.width,
                                 destinationBuffer.height,
                                 pixelFormat,
                                 destinationBuffer.data,
                                 destinationBuffer.rowBytes,
                                 freePixelBufferData,
                                 NULL,
                                 NULL,
                                 &rotatedBuffer);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return rotatedBuffer;
}

void freePixelBufferData(void *releaseRefCon, const void *baseAddress) {
    free((void *)baseAddress);
}
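One thing worth checking before the rotate call (an assumption on my part, not a confirmed cause of this particular crash): vImageRotate90_ARGB8888 expects interleaved 8-bit, 4-channel pixels, while AVCapture delivers bi-planar YCbCr by default, so the capture output usually has to be told to produce BGRA. A minimal sketch, assuming videoOutput is the AVCaptureVideoDataOutput feeding this delegate:

// Ask the session for 32-bit BGRA frames instead of the default bi-planar YCbCr.
// (Assumption: videoOutput is your AVCaptureVideoDataOutput instance.)
videoOutput.videoSettings = @{ (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

// Defensive check inside rotateBuffer: (pixelFormat is already fetched there):
if (pixelFormat != kCVPixelFormatType_32BGRA && pixelFormat != kCVPixelFormatType_32ARGB) {
    NSLog(@"Unexpected pixel format %u - vImageRotate90_ARGB8888 needs interleaved 8-bit, 4-channel data", (unsigned)pixelFormat);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return NULL;
}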

Related

Why do I see a slightly zoomed version of the image on the iPhone camera compared to what is received at the backend server?

I am capturing an image from the iOS camera on an iPhone 7 and sending the captured camera image to the backend for processing.
When I save the image at the backend, I see that the backend image has a lot more content than what was visible on the iOS screen when focusing on the object.
The server image is a zoomed-out version with a little extra content along both the horizontal and vertical axes on both sides. I verified that I am not doing any explicit zooming or anything like that in the Objective-C code.
The question is: what is causing this difference between what I see on the screen and what gets received at the backend?
The code that I use to capture the image is
- (UIImage *)imageFromSamplePlanerPixelBuffer:(CMSampleBufferRef)sampleBuffer {
    @autoreleasepool {
        // Get a CMSampleBuffer's Core Video image buffer for the media data
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        // Lock the base address of the pixel buffer
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        uint8_t *yBuffer = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        uint8_t *cbCrBuffer = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
        size_t yPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
        size_t cbCrPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);
        int bytesPerPixel = 4;
        uint8_t *rgbBuffer = (uint8_t *)malloc(width * height * bytesPerPixel);

        // Convert the bi-planar YCbCr data to interleaved BGRA, one row at a time
        for (int y = 0; y < height; y++) {
            uint8_t *rgbBufferLine = &rgbBuffer[y * width * bytesPerPixel];
            uint8_t *yBufferLine = &yBuffer[y * yPitch];
            uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];
            for (int x = 0; x < width; x++) {
                int16_t y = yBufferLine[x];
                int16_t cb = cbCrBufferLine[x & ~1] - 128;
                int16_t cr = cbCrBufferLine[x | 1] - 128;
                uint8_t *rgbOutput = &rgbBufferLine[x * bytesPerPixel];
                int16_t r = (int16_t)roundf(y + cr * 1.4);
                int16_t g = (int16_t)roundf(y + cb * -0.343 + cr * -0.711);
                int16_t b = (int16_t)roundf(y + cb * 1.765);
                rgbOutput[0] = 0xFF;
                rgbOutput[1] = clamp(b);
                rgbOutput[2] = clamp(g);
                rgbOutput[3] = clamp(r);
            }
        }

        // Create a device-dependent RGB color space
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        // Create a bitmap graphics context with the sample buffer data
        CGContextRef context = CGBitmapContextCreate(rgbBuffer, width, height, 8, width * bytesPerPixel, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
        // Create a Quartz image from the pixel data in the bitmap graphics context
        CGImageRef quartzImage = CGBitmapContextCreateImage(context);
        // Unlock the pixel buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        // Free up the context and color space
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        // Create an image object from the Quartz image
        UIImage *image = [UIImage imageWithCGImage:quartzImage scale:0.5f orientation:UIImageOrientationUp];
        NSData *imgData = UIImageJPEGRepresentation(image, 0.8);
        NSLog(@"blabla %lu", (unsigned long)[imgData length]);
        // Release the Quartz image and the RGB buffer
        free(rgbBuffer);
        CGImageRelease(quartzImage);
        return image;
    }
}
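One common explanation for the preview/backend mismatch (an assumption here, not verified against this exact setup) is that an AVCaptureVideoPreviewLayer with resize-aspect-fill gravity crops the frame on screen, while the sample buffer sent to the server contains the full sensor output. A hedged sketch of how to see which part of the buffer is actually visible in the preview, assuming previewLayer is the AVCaptureVideoPreviewLayer used on screen:

// Map the visible layer rect back to normalized buffer coordinates, then to pixels
// (orientation handling omitted for brevity).
CGRect visibleMetadataRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
size_t bufferWidth = CVPixelBufferGetWidth(imageBuffer);
size_t bufferHeight = CVPixelBufferGetHeight(imageBuffer);
CGRect visiblePixels = CGRectMake(visibleMetadataRect.origin.x * bufferWidth,
                                  visibleMetadataRect.origin.y * bufferHeight,
                                  visibleMetadataRect.size.width * bufferWidth,
                                  visibleMetadataRect.size.height * bufferHeight);
NSLog(@"Visible portion of the %zux%zu buffer: %@", bufferWidth, bufferHeight,
      NSStringFromCGRect(visiblePixels));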

glReadPixels returns incorrect image for iPhone 6, but works ok for iPad and iPhone 5

I'm using the following code to read an image from an OpenGL ES scene:
- (UIImage *)drawableToCGImage
{
    CGRect myRect = self.bounds;
    NSInteger myDataLength = myRect.size.width * myRect.size.height * 4;
    glFinish();
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    int width = myRect.size.width;
    int height = myRect.size.height;
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer2);

    // Flip the image vertically, since glReadPixels returns rows bottom-up
    for (int y1 = 0; y1 < height; y1++) {
        for (int x1 = 0; x1 < width * 4; x1++) {
            buffer[(height - 1 - y1) * width * 4 + x1] = buffer2[y1 * 4 * width + x1];
        }
    }
    free(buffer2);

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, NULL);
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * myRect.size.width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(myRect.size.width, myRect.size.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return image;
}
It works perfectly on iPad and older iPhone models, but I noticed that on iPhone 6 (both device and simulator) the result looks like monochrome glitches.
What could it be?
Also, here is my code for CAEAGLLayer properties:
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                                @YES, kEAGLDrawablePropertyRetainedBacking,
                                kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
Could somebody shed some light on this crazy magic, please?
Thanks to @MaticOblak, I've figured out the problem.
The buffer was filled incorrectly because the float values of the rect size were not rounded correctly (and, yes, only the iPhone 6 dimensions trigger this). Integer values should be used instead.
UPD: my issue was fixed with the following code:
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
int width = viewport[2];
int height = viewport[3];

How to calculate memory consumption for my application in iOS

My application displays high-resolution images and has some memory leaks, so I would like to calculate the memory consumption of each statement to find the leaks.
Is there any method to calculate the memory used (in MB or KB)?
I need something like this:
// this is my method
+ (unsigned char *)convertUIImageToBitmapRGBA8:(UIImage *)image {
    // Run a method to calculate the memory (MB or KB) --- Before
    CGImageRef imageRef = image.CGImage;

    // Create a bitmap context to draw the UIImage into
    CGContextRef context = [self newBitmapRGBA8ContextFromImage:imageRef];
    if (!context) {
        return NULL;
    }

    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);
    CGRect rect = CGRectMake(0, 0, width, height);

    // Draw image into the context to get the raw image data
    CGContextDrawImage(context, rect, imageRef);

    // Get a pointer to the data
    unsigned char *bitmapData = (unsigned char *)CGBitmapContextGetData(context);

    // Copy the data and release the context's memory (return memory allocated with malloc)
    size_t bytesPerRow = CGBitmapContextGetBytesPerRow(context);
    size_t bufferLength = bytesPerRow * height;
    unsigned char *newBitmap = NULL;

    if (bitmapData) {
        newBitmap = (unsigned char *)malloc(sizeof(unsigned char) * bytesPerRow * height);
        if (newBitmap) { // Copy the data
            for (int i = 0; i < bufferLength; ++i) {
                newBitmap[i] = bitmapData[i];
            }
        }
        free(bitmapData);
    } else {
        NSLog(@"Error getting bitmap pixel data\n");
    }

    CGContextRelease(context);
    // Run a method to calculate the memory (MB or KB) --- After
    return newBitmap;
}
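For the "run a method to calculate the memory" placeholders above, a minimal sketch of such a helper using task_info to read the resident size of the current process; the ImageHelper class name in the usage example is hypothetical, standing in for wherever convertUIImageToBitmapRGBA8: lives:

#import <mach/mach.h>

// Returns the resident memory of the current process in bytes, or 0 on failure.
static uint64_t residentMemoryBytes(void) {
    struct mach_task_basic_info info;
    mach_msg_type_number_t count = MACH_TASK_BASIC_INFO_COUNT;
    kern_return_t kr = task_info(mach_task_self(), MACH_TASK_BASIC_INFO,
                                 (task_info_t)&info, &count);
    return (kr == KERN_SUCCESS) ? info.resident_size : 0;
}

// Usage around the conversion (before/after the call):
uint64_t before = residentMemoryBytes();
unsigned char *bitmap = [ImageHelper convertUIImageToBitmapRGBA8:image]; // ImageHelper is hypothetical
uint64_t after = residentMemoryBytes();
NSLog(@"convertUIImageToBitmapRGBA8 changed resident memory by %.2f MB",
      ((double)after - (double)before) / (1024.0 * 1024.0));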

How to go from vImage_Buffer to CVPixelBufferRef

I'm recording live video in my iOS app. On another Stack Overflow page, I found that you can use vImage_Buffer to work on my frames.
The problem is that I have no idea how to get back to a CVPixelBufferRef from the output vImage_Buffer.
Here is the code that is given in the other post:
NSInteger cropX0 = 100,
          cropY0 = 100,
          cropHeight = 100,
          cropWidth = 100,
          outWidth = 480,
          outHeight = 480;

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

vImage_Buffer inBuff;
inBuff.height = cropHeight;
inBuff.width = cropWidth;
inBuff.rowBytes = bytesPerRow;
int startpos = cropY0 * bytesPerRow + 4 * cropX0;
inBuff.data = baseAddress + startpos;

unsigned char *outImg = (unsigned char *)malloc(4 * outWidth * outHeight);
vImage_Buffer outBuff = {outImg, outHeight, outWidth, 4 * outWidth};

vImage_Error err = vImageScale_ARGB8888(&inBuff, &outBuff, NULL, 0);
if (err != kvImageNoError) NSLog(@" error %ld", err);
And now I need to convert outBuff to a CVPixelBufferRef.
I assume I need to use vImageBuffer_CopyToCVPixelBuffer, but I'm not sure how.
My first attempts failed with an EXC_BAD_ACCESS:
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorSystemDefault, 480, 480, kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);

vImage_CGImageFormat format = {
    .bitsPerComponent = 8,
    .bitsPerPixel = 32,
    .bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst, // BGRX8888
    .colorSpace = NULL, // sRGB
};

vImageBuffer_CopyToCVPixelBuffer(&outBuff,
                                 &format,
                                 pixelBuffer,
                                 NULL,
                                 NULL,
                                 kvImageNoFlags); // Here is the crash!

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
Any idea?
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                         [NSNumber numberWithInt:480], kCVPixelBufferWidthKey,
                         [NSNumber numberWithInt:480], kCVPixelBufferHeightKey,
                         nil];
CVPixelBufferRef pixbuffer = NULL;
// Note: the row bytes passed here must match the scaled output buffer (4 * outWidth),
// not the bytesPerRow of the original capture buffer.
CVReturn status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                               480,
                                               480,
                                               kCVPixelFormatType_32BGRA,
                                               outImg,
                                               bytesPerRow,
                                               NULL,
                                               NULL,
                                               (__bridge CFDictionaryRef)options,
                                               &pixbuffer);
You should generate a new pixel buffer like the one above.
Just in case you want a cropped live video feed in your interface: use an AVPlayerLayer, AVCaptureVideoPreviewLayer and/or other CALayer subclasses, and use the layer bounds, frame and position for your 100x100 pixel area within the 480x480 area.
Notes on vImage for your question (your circumstances may differ):
- CVPixelBufferCreateWithBytes will not work with vImageBuffer_CopyToCVPixelBuffer(), because you need to copy the vImage_Buffer data into a "clean" or "empty" CVPixelBuffer.
- No need for locking/unlocking - make sure you know when to lock and when not to lock pixel buffers.
- Your inBuff vImage_Buffer just needs to be initialized from the pixel buffer data, not filled in manually (unless you know how to use CGContexts etc. to init the pixel grid); use vImageBuffer_InitWithCVPixelBuffer().
- vImageScale_ARGB8888 will scale the entire CVPixelBuffer data to a smaller/larger rectangle; it will not scale a portion/crop area of the buffer into another buffer.
- When you use vImageBuffer_CopyToCVPixelBuffer(), the vImageCVImageFormatRef and vImage_CGImageFormat need to be filled out correctly.
CGColorSpaceRef dstColorSpace = CGColorSpaceCreateWithName(kCGColorSpaceITUR_709);

vImage_CGImageFormat format = {
    .bitsPerComponent = 16,
    .bitsPerPixel = 64,
    .bitmapInfo = (CGBitmapInfo)kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder16Big,
    .colorSpace = dstColorSpace
};
vImageCVImageFormatRef vformat = vImageCVImageFormat_Create(kCVPixelFormatType_4444AYpCbCr16,
                                                            kvImage_ARGBToYpCbCrMatrix_ITU_R_709_2,
                                                            kCVImageBufferChromaLocation_Center,
                                                            format.colorSpace,
                                                            0);
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                      480,
                                      480,
                                      kCVPixelFormatType_4444AYpCbCr16,
                                      NULL,
                                      &destBuffer);
NSParameterAssert(status == kCVReturnSuccess && destBuffer != NULL);

err = vImageBuffer_CopyToCVPixelBuffer(&sourceBuffer, &format, destBuffer, vformat, 0, kvImagePrintDiagnosticsToConsole);
NOTE: these are settings for 64-bit ProRes with alpha; adjust for 32-bit (a possible 32-bit BGRA variant is sketched below).
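A hedged sketch of what the same pattern might look like for a 32-bit BGRA destination (an assumption, not the answer author's code); here sourceBuffer is assumed to hold interleaved 8-bit BGRA pixels, and the CV-side format is derived from the destination buffer itself rather than built by hand:

vImage_CGImageFormat bgraFormat = {
    .bitsPerComponent = 8,
    .bitsPerPixel = 32,
    .bitmapInfo = (CGBitmapInfo)kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little, // BGRX
    .colorSpace = NULL // NULL defaults to sRGB
};
CVPixelBufferRef bgraDest = NULL;
CVReturn cvStatus = CVPixelBufferCreate(kCFAllocatorDefault, 480, 480,
                                        kCVPixelFormatType_32BGRA, NULL, &bgraDest);
if (cvStatus == kCVReturnSuccess && bgraDest != NULL) {
    // Derive the CV format from the freshly created buffer and give it a color space,
    // since a brand-new buffer has no color space attachment yet.
    vImageCVImageFormatRef cvFormat = vImageCVImageFormat_CreateWithCVPixelBuffer(bgraDest);
    CGColorSpaceRef srgb = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
    vImageCVImageFormat_SetColorSpace(cvFormat, srgb);
    vImage_Error copyErr = vImageBuffer_CopyToCVPixelBuffer(&sourceBuffer, &bgraFormat, bgraDest,
                                                            cvFormat, NULL, kvImagePrintDiagnosticsToConsole);
    if (copyErr != kvImageNoError) NSLog(@"copy error %ld", (long)copyErr);
    vImageCVImageFormat_Release(cvFormat);
    CGColorSpaceRelease(srgb);
}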

Can I get any useful information from the camera?

I am using the AVCaptureVideoDataOutputSampleBufferDelegate to display the video from an iPhone's camera in a custom UIView, with the following delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
I would like to be able to pull out some useful information from the image, such as exposure, color, and threshold.
What is the best way to access this sort of information?
Extract the metadata attachments from the sample buffer. You can find exposure, color, etc. in its metadata, with something like this:
NSDictionary *exifDictionary = (NSDictionary*)CMGetAttachment(sampleBuffer, kCGImagePropertyExifDictionary, NULL);
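For instance (a hedged example; key availability depends on the device and capture settings, and it requires importing ImageIO), the exposure-related values can be read from that dictionary with the standard ImageIO keys:

// #import <ImageIO/ImageIO.h>
// Read a few exposure-related EXIF values from the attachment (any of these may be absent).
NSNumber *exposureTime = exifDictionary[(NSString *)kCGImagePropertyExifExposureTime];
NSNumber *brightness   = exifDictionary[(NSString *)kCGImagePropertyExifBrightnessValue];
NSArray  *isoRatings   = exifDictionary[(NSString *)kCGImagePropertyExifISOSpeedRatings];
NSLog(@"exposure: %@ s, brightness: %@, ISO: %@", exposureTime, brightness, isoRatings.firstObject);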
You can access the underlying pixel data with this code:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVReturn lock = CVPixelBufferLockBaseAddress(pixelBuffer, 0);
if (lock == kCVReturnSuccess) {
    int w = 0;
    int h = 0;
    int r = 0;
    int bytesPerPixel = 0;
    unsigned char *buffer;
    if (CVPixelBufferIsPlanar(pixelBuffer)) {
        w = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
        h = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
        r = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
        bytesPerPixel = r / w;
        buffer = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    } else {
        w = CVPixelBufferGetWidth(pixelBuffer);
        h = CVPixelBufferGetHeight(pixelBuffer);
        r = CVPixelBufferGetBytesPerRow(pixelBuffer);
        bytesPerPixel = r / w;
        buffer = CVPixelBufferGetBaseAddress(pixelBuffer);
    }
    // ... read from 'buffer' here ...
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0); // don't forget to unlock when done
}
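If "threshold" here means something like overall brightness, one simple option (a sketch, assuming the default bi-planar YCbCr format where plane 0 is luma) is to average the Y plane, using the w, h, r and buffer values computed in the planar branch above, in place of the read comment and before unlocking:

// Average luma over plane 0 of a bi-planar YCbCr buffer (e.g. 420f/420v).
unsigned long long lumaSum = 0;
for (int row = 0; row < h; row++) {
    unsigned char *line = buffer + row * r;   // r is bytes per row of the luma plane
    for (int col = 0; col < w; col++) {
        lumaSum += line[col];
    }
}
double meanLuma = (double)lumaSum / ((double)w * (double)h); // 0 (black) .. 255 (white)
NSLog(@"mean luma: %.1f", meanLuma);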
