I'm using captureOutput:didOutputSampleBuffer:fromConnection: in order to keep track of the frames. For my use-case, I only need to store the last frame and use it in case the app goes to background.
Here's a sample from my code:
@property (nonatomic, strong) AVCaptureVideoDataOutput *videoDataOutput;
@property (atomic) CMSampleBufferRef currentBuffer;
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef con = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(con);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(con);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
// UIImage *image = [UIImage imageWithCGImage:quartzImage];
UIImage *image = [UIImage imageWithCGImage:quartzImage scale:[[UIScreen mainScreen] scale] orientation:UIImageOrientationRight];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
//[self.videoDataOutput setSampleBufferDelegate:self queue:self.sessionQueue];
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CFRetain(sampleBuffer);
@synchronized (self) {
if (_currentBuffer) {
CFRelease(_currentBuffer);
}
self.currentBuffer = sampleBuffer;
}
}
- (void) goingToBackground:(NSNotification *) notification
{
UIImage *snapshot = [self imageFromSampleBuffer:_currentBuffer];
//Doing something with snapshot...
}
The problem is that in some cases I get this crash from within imageFromSampleBuffer:
<Error>: copy_read_only: vm_copy failed: status 1.
The crash happens on CGImageRef quartzImage = CGBitmapContextCreateImage(con);
What am I doing wrong?
I believe you need to copy the buffer, not just retain it. From the description for captureOutput:didOutputSampleBuffer:fromConnection:
Note that to maintain optimal performance, some sample buffers directly reference pools of memory that may need to be reused by the device system and other capture inputs. [...] If multiple sample buffers reference such pools of memory for too long, inputs will no longer be able to copy new samples into memory and those samples will be dropped.
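Building on that, one way to satisfy the requirement is to deep-copy the pixel data instead of retaining the sample buffer. A minimal sketch (untested; the method name is mine, and it assumes a single-plane format such as 32BGRA, so a planar format would need each plane copied separately):
- (CVPixelBufferRef)createPixelBufferCopy:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef source = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(source, kCVPixelBufferLock_ReadOnly);

    CVPixelBufferRef copy = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(source),
                        CVPixelBufferGetHeight(source),
                        CVPixelBufferGetPixelFormatType(source),
                        NULL,
                        &copy);
    if (copy != NULL) {
        CVPixelBufferLockBaseAddress(copy, 0);
        uint8_t *src = (uint8_t *)CVPixelBufferGetBaseAddress(source);
        uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddress(copy);
        size_t srcBytesPerRow = CVPixelBufferGetBytesPerRow(source);
        size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(copy);
        size_t height = CVPixelBufferGetHeight(source);
        // Row padding can differ between the two buffers, so copy row by row.
        for (size_t row = 0; row < height; row++) {
            memcpy(dst + row * dstBytesPerRow,
                   src + row * srcBytesPerRow,
                   MIN(srcBytesPerRow, dstBytesPerRow));
        }
        CVPixelBufferUnlockBaseAddress(copy, 0);
    }
    CVPixelBufferUnlockBaseAddress(source, kCVPixelBufferLock_ReadOnly);
    return copy; // caller releases with CVPixelBufferRelease()
}
The delegate would then store the returned buffer (releasing the previous one with CVPixelBufferRelease) instead of the CMSampleBufferRef itself.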
Related
I am working with AVFoundation, attempting to save a particular output CMSampleBufferRef as a UIImage in some variable. I am using the Manatee Works sample code, which uses kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange for kCVPixelBufferPixelFormatTypeKey:
NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
But when I save the image, the output is just nil, or whatever the background of the image view is. I also tried not setting the output settings and just using the default, but to no avail; the image is still not rendered. I also tried setting kCVPixelFormatType_32BGRA, but then Manatee Works stops detecting the barcode.
I am using the context settings from the sample code provided by Apple on the developer website:
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(NULL,
CVPixelBufferGetWidth(imageBuffer),
CVPixelBufferGetHeight(imageBuffer),
8,
0,
CGColorSpaceCreateDeviceRGB(),
kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
Can anybody help me with what is going wrong here? It should be simple, but I don't have much experience with the AVFoundation framework. Is this a color space problem, since the context uses CGColorSpaceCreateDeviceRGB()?
I can provide more info if needed. I searched Stack Overflow and there were many entries regarding this, but none solved my problem.
Is there a reason you are passing 0 for bytesPerRow to CGBitmapContextCreate?
Also, you are passing NULL as the buffer instead of the pixel buffer's base address.
Creating the bitmap context should look approximately like this when sampleBuffer is your CMSampleBufferRef:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress,
CVPixelBufferGetWidth(imageBuffer),
CVPixelBufferGetHeight(imageBuffer),
8,
CVPixelBufferGetBytesPerRow(imageBuffer),
colorSpace,
kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
CGContextRelease(context);
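A caveat, separate from the original answer: a bitmap context can only wrap interleaved RGB data, so the code above presumes the output is configured for kCVPixelFormatType_32BGRA. Since switching to 32BGRA reportedly breaks the Manatee Works detection, one format-agnostic alternative is to let Core Image do the conversion, roughly like this (untested sketch):
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
CIContext *ciContext = [CIContext contextWithOptions:nil];
// Core Image handles the YCbCr-to-RGB conversion internally
CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:[ciImage extent]];
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);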
Here is how I used to do it. The code is written in Swift, but it works.
Note the orientation parameter on the last line; it depends on your video settings.
extension UIImage {
/**
Creates a new UIImage from the video frame sample buffer passed.
@param sampleBuffer the sample buffer to be converted into a UIImage.
*/
convenience init?(sampleBuffer: CMSampleBufferRef) {
// Get a CMSampleBuffer's Core Video image buffer for the media data
let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0)
// Get the base address of the pixel buffer
let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
// Get the number of bytes per row for the pixel buffer
let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
// Get the pixel buffer width and height
let width = CVPixelBufferGetWidth(imageBuffer)
let height = CVPixelBufferGetHeight(imageBuffer)
// Create a device-dependent RGB color space
let colorSpace = CGColorSpaceCreateDeviceRGB()
// Create a bitmap graphics context with the sample buffer data
let bitmap = CGBitmapInfo(CGBitmapInfo.ByteOrder32Little.rawValue|CGImageAlphaInfo.PremultipliedFirst.rawValue)
let context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, bitmap)
// Create a Quartz image from the pixel data in the bitmap graphics context
let quartzImage = CGBitmapContextCreateImage(context)
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0)
// Create an image object from the Quartz image
self.init(CGImage: quartzImage, scale: 1, orientation: UIImageOrientation.LeftMirrored)
}
}
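With this extension in place, the delegate callback can create the image with something like UIImage(sampleBuffer: sampleBuffer); note that the initializer is failable, so the result is an optional.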
I use this regularly:
UIImage *image = [UIImage imageWithData:[self imageToBuffer:sampleBuffer]];
- (NSData *) imageToBuffer:(CMSampleBufferRef)source {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
CVPixelBufferLockBaseAddress(imageBuffer,0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);
NSData *data = [NSData dataWithBytes:src_buff length:bytesPerRow * height];
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
return data;
}
I want to take a snapshot of the content of the CCGLView in my view controller and display the resulting image in the same view controller.
Right now I'm using the following method to do so:
-(UIImage *) drawableToCGImage{
GLint backingWidth2, backingHeight2;
//backingHeight2=self.glView.frame.size.height;
//backingWidth2=self.glView.frame.size.width;
//Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, self.glView.colorRenderBuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth2);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight2);
NSInteger x = self.glView.frame.origin.x, y = self.glView.frame.origin.y, width2 = backingWidth2, height2 = backingHeight2;
NSInteger dataLength = width2 * height2 * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width2, height2, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width2, height2, 8, 32, width2 * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = self.glView.contentScaleFactor;
widthInPoints = width2 / scale;
heightInPoints = height2 / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
}
But it works only in the simulator; when I test it on a device, I don't get the content of the CCGLView. Why doesn't this method give the snapshot on the device? Or is there another way to get it done?
I don't know why the previous method didn't work, but I found another way of doing it, and it's less expensive too :). I'm using the following method:
- (UIImage *)snapshot:(UIView *)view{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, YES, 0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
For more info, see: https://developer.apple.com/library/ios/qa/qa1817/_index.html
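Calling it from the view controller then looks roughly like this (self.glView and the image view property are placeholder names):
// take the snapshot and show it in an image view on the same screen
UIImage *snapshot = [self snapshot:self.glView];
self.snapshotImageView.image = snapshot;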
I'm developing an iOS application with the latest SDK.
This app works with OpenCV, and I have to zoom the camera, but since this isn't available in the iOS SDK, I plan to do it programmatically.
I have to 'zoom' every video frame. This is where I have to do it:
#pragma mark - AVCaptureSession delegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
/*Lock the image buffer*/
CVPixelBufferLockBaseAddress(imageBuffer,0);
/*Get information about the image*/
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
//size_t stride = CVPixelBufferGetBytesPerRow(imageBuffer);
//put buffer in open cv, no memory copied
cv::Mat image = cv::Mat(height, width, CV_8UC4, baseAddress);
// copy the image
//cv::Mat copied_image = image.clone();
_lastFrame = [NSData dataWithBytes:image.data
length:image.elemSize() * image.total()];
[DataExchanger postFrame];
/*We unlock the image buffer*/
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
}
Do you know how to zoom on an NSData or a CMSampleBufferRef?
One way would be to put your picture into a CGImageRef, choose a square within that picture, and draw it again at the full size. Something like this (though there may be better ways):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGImageRef smallQuartzImage = CGImageCreateWithImageInRect(quartzImage, CGRectMake(200, 200, 600, 600));
cv::Mat image(height, width, CV_8UC4);
CGContextRef contextRef = CGBitmapContextCreate(image.data, width, height, 8, image.step[0], colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(contextRef, CGRectMake(0, 0, width, height), smallQuartzImage);
CGImageRelease(quartzImage);
CGImageRelease(smallQuartzImage);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
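Since the frame is already wrapped in a cv::Mat in your delegate method, another option (my suggestion, not part of the answer above) is to crop and scale with OpenCV directly; a sketch, reusing image, width and height from the delegate and requiring opencv2/imgproc:
cv::Rect roi(200, 200, 600, 600);      // region to zoom into (illustrative values)
cv::Mat cropped = image(roi);          // a view into the frame, no copy yet
cv::Mat zoomed;
cv::resize(cropped, zoomed, cv::Size((int)width, (int)height)); // scale the crop back up to the full frame size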
I'm capturing video with AVCaptureSession.
But I would like to convert the captured image to a UIImage.
I found some code on the Internet:
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
NSLog(@"imageFromSampleBuffer: called");
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
But I got some errors:
Jan 17 17:39:25 iPhone-4-de-XXX ThinkOutsideTheBox[2363] <Error>: CGBitmapContextCreate: invalid data bytes/row: should be at least 2560 for 8 integer bits/component, 3 components, kCGImageAlphaPremultipliedFirst.
Jan 17 17:39:25 iPhone-4-de-XXX ThinkOutsideTheBox[2363] <Error>: CGBitmapContextCreateImage: invalid context 0x0
2013-01-17 17:39:25.896 ThinkOutsideTheBox[2363:907] image <UIImage: 0x1d553f00>
Jan 17 17:39:25 iPhone-4-de-XXX ThinkOutsideTheBox[2363] <Error>: CGContextDrawImage: invalid context 0x0
Jan 17 17:39:25 iPhone-4-de-XXX ThinkOutsideTheBox[2363] <Error>: CGBitmapContextGetData: invalid context 0x0
EDIT:
I also use the UIImage to get the RGB color:
-(void) captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection
{
UIImage* image = [self imageFromSampleBuffer:sampleBuffer];
unsigned char* pixels = [image rgbaPixels];
double totalLuminance = 0.0;
for(int p=0;p<image.size.width*image.size.height*4;p+=4)
{
totalLuminance += pixels[p]*0.299 + pixels[p+1]*0.587 + pixels[p+2]*0.114;
}
totalLuminance /= (image.size.width*image.size.height);
totalLuminance /= 255.0;
NSLog(@"totalLuminance %f", totalLuminance);
}
Your best bet will be to set the capture video data output's videoSettings to a dictionary that specifies the pixel format you want, which you'll need to set to some variation on RGB that CGBitmapContext can handle.
The documentation has a list of all of the pixel formats that Core Video can process. Only a tiny subset of those are supported by CGBitmapContext. The format that the code you found on the internet is expecting is kCVPixelFormatType_32BGRA, but that might have been written for Macs—on iOS devices, kCVPixelFormatType_32ARGB (big-endian) might be faster. Try them both, on the device, and compare frame rates.
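For reference, configuring the output for BGRA looks roughly like this (the output variable name is assumed):
AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
// ask for a pixel format that CGBitmapContext can wrap directly
videoDataOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };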
You can try this code.
-(UIImage *) screenshotOfVideoStream:(CMSampleBufferRef)samImageBuff
{
CVImageBufferRef imageBuffer =
CMSampleBufferGetImageBuffer(samImageBuff);
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CGImageRef videoImage = [temporaryContext
createCGImage:ciImage
fromRect:CGRectMake(0, 0,
CVPixelBufferGetWidth(imageBuffer),
CVPixelBufferGetHeight(imageBuffer))];
UIImage *image = [[UIImage alloc] initWithCGImage:videoImage];
CGImageRelease(videoImage);
return image;
}
It works for me.
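One note if this runs for every frame: creating a CIContext per call is relatively expensive, so it is worth caching it, for example (sketch):
static CIContext *sharedContext = nil;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    sharedContext = [CIContext contextWithOptions:nil];
});
// then use sharedContext in place of temporaryContext above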
In case anyone is expecting a JPEG image like me, there are simple APIs provided by Apple:
[AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:photoSampleBuffer];
and:
[AVCapturePhotoOutput JPEGPhotoDataRepresentationForJPEGSampleBuffer:photoSampleBuffer previewPhotoSampleBuffer:previewPhotoSampleBuffer]
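Assuming the still image output is configured for JPEG, the returned NSData is already encoded image data, so getting a UIImage from it is just:
NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:photoSampleBuffer];
UIImage *image = [UIImage imageWithData:jpegData];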
I am using the following to turn image data from the camera into a UIImage. In order to save some time and memory, I'd like to crop the image data before I turn it into a UIImage.
Ideally I pass in a cropRect and get back a cropped UIImage. However, since the camera output could be sized differently depending on whether I am using a photo or video preset, I may not know what dimensions to use for the cropRect. I could use a cropRect that, like the focus or exposure points, uses a CGPoint between (0,0) and (1,1), and do similarly for the CGSize of the cropRect. Or I could get the dimensions of the sampleBuffer before calling the following and pass in an appropriate cropRect. I'd like some advice as to which I should use.
I also would like to know how best to crop so that I don't have to create an entire UIImage and then crop it back down. Typically, I am only interested in keeping about 10-20% of the pixels. I assume I have to iterate through the pixels and copy the cropRect into a different pixel buffer until I have all the pixels I want.
And keep in mind that rotation may be happening, depending on the orientation.
+ (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer orientation:(UIImageOrientation) orientation
{
// Create a UIImage from sample buffer data
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage scale:(CGFloat)1.0 orientation:orientation];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
In summary:
Should I pass in a cropRect that specifies a normalized rect between (0,0,0,0) and (1,1,1,1), or do I pass in a cropRect that specifies exact pixel locations like (50,50,100,100)?
How best do I crop the pixel buffer?
I think you should use pixels for the cropRect, since you have to convert the float values to pixel values at some point anyway.
The following code is not tested, but should give you the idea.
CGRect cropRect = CGRectMake(50, 50, 100, 100); // cropRect
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVReturn lock = CVPixelBufferLockBaseAddress(pixelBuffer, 0);
if (lock == kCVReturnSuccess) {
int w = 0;
int h = 0;
int r = 0;
int bytesPerPixel = 0;
unsigned char *buffer;
w = CVPixelBufferGetWidth(pixelBuffer);
h = CVPixelBufferGetHeight(pixelBuffer);
r = CVPixelBufferGetBytesPerRow(pixelBuffer);
bytesPerPixel = r/w;
buffer = CVPixelBufferGetBaseAddress(pixelBuffer);
UIGraphicsBeginImageContext(cropRect.size); // create context for image storage, use cropRect as size
CGContextRef c = UIGraphicsGetCurrentContext();
unsigned char* data = CGBitmapContextGetData(c);
size_t destBytesPerRow = CGBitmapContextGetBytesPerRow(c); // destination rows may be padded as well
if (data != NULL) {
// iterate over the pixels in cropRect
// note: this assumes the source pixel layout matches the context's (e.g. 32-bit BGRA)
for(int y = cropRect.origin.y, yDest = 0; y<CGRectGetMaxY(cropRect); y++, yDest++) {
for(int x = cropRect.origin.x, xDest = 0; x<CGRectGetMaxX(cropRect); x++, xDest++) {
int offset = (r*y) + (bytesPerPixel*x); // source offset, using the pixel buffer's bytes per row (rows can be padded)
int offsetDest = (int)(destBytesPerRow*yDest) + (bytesPerPixel*xDest); // destination offset, using the context's bytes per row
for (int i = 0; i<bytesPerPixel; i++) {
data[offsetDest+i] = buffer[offset+i];
}
}
}
}
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
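If you do decide to expose a normalized cropRect (values between 0 and 1), converting it to pixel coordinates for the current buffer is straightforward; a hypothetical helper:
static CGRect pixelCropRect(CGRect normalizedRect, CVPixelBufferRef pixelBuffer)
{
    // scale the normalized rect by the buffer's pixel dimensions
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    return CGRectMake(normalizedRect.origin.x * width,
                      normalizedRect.origin.y * height,
                      normalizedRect.size.width * width,
                      normalizedRect.size.height * height);
}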