convert CMSampleBufferRef to UIImage - ios

I'm capturing video with AVCaptureSession,
but I would like to convert the captured frames to a UIImage.
I found some code on the Internet:
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
    NSLog(@"imageFromSampleBuffer: called");
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    // Release the Quartz image
    CGImageRelease(quartzImage);
    return (image);
}
But I got some errors:
Jan 17 17:39:25 iPhone-4-de-XXX ThinkOutsideTheBox[2363] <Error>: CGBitmapContextCreate: invalid data bytes/row: should be at least 2560 for 8 integer bits/component, 3 components, kCGImageAlphaPremultipliedFirst.
Jan 17 17:39:25 iPhone-4-de-XXX ThinkOutsideTheBox[2363] <Error>: CGBitmapContextCreateImage: invalid context 0x0
2013-01-17 17:39:25.896 ThinkOutsideTheBox[2363:907] image <UIImage: 0x1d553f00>
Jan 17 17:39:25 iPhone-4-de-XXX ThinkOutsideTheBox[2363] <Error>: CGContextDrawImage: invalid context 0x0
Jan 17 17:39:25 iPhone-4-de-XXX ThinkOutsideTheBox[2363] <Error>: CGBitmapContextGetData: invalid context 0x0
EDIT:
I also use the UIImage to get the rgb color:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    unsigned char *pixels = [image rgbaPixels];
    double totalLuminance = 0.0;
    for (int p = 0; p < image.size.width * image.size.height * 4; p += 4)
    {
        totalLuminance += pixels[p] * 0.299 + pixels[p+1] * 0.587 + pixels[p+2] * 0.114;
    }
    totalLuminance /= (image.size.width * image.size.height);
    totalLuminance /= 255.0;
    NSLog(@"totalLuminance %f", totalLuminance);
}

Your best bet will be to set the capture video data output's videoSettings to a dictionary that specifies the pixel format you want, which you'll need to set to some variation on RGB that CGBitmapContext can handle.
The documentation has a list of all of the pixel formats that Core Video can process. Only a tiny subset of those are supported by CGBitmapContext. The format that the code you found on the internet is expecting is kCVPixelFormatType_32BGRA, but that might have been written for Macs—on iOS devices, kCVPixelFormatType_32ARGB (big-endian) might be faster. Try them both, on the device, and compare frame rates.
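For example, a minimal sketch of requesting BGRA frames (videoDataOutput is assumed to be the AVCaptureVideoDataOutput you add to your session; the question doesn't show that part of the setup):
// Ask the data output for BGRA frames so the CGBitmapContext code above can read them directly.
videoDataOutput.videoSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};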

You can try this code.
- (UIImage *)screenshotOfVideoStream:(CMSampleBufferRef)samImageBuff
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(samImageBuff);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext
                             createCGImage:ciImage
                             fromRect:CGRectMake(0, 0,
                                                 CVPixelBufferGetWidth(imageBuffer),
                                                 CVPixelBufferGetHeight(imageBuffer))];
    UIImage *image = [[UIImage alloc] initWithCGImage:videoImage];
    CGImageRelease(videoImage);
    return image;
}
It works for me.

In case anyone is expecting a JPEG image like me, there are simple APIs provided by Apple:
[AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:photoSampleBuffer];
and:
[AVCapturePhotoOutput JPEGPhotoDataRepresentationForJPEGSampleBuffer:photoSampleBuffer previewPhotoSampleBuffer:previewPhotoSampleBuffer]
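A minimal sketch with the older AVCaptureStillImageOutput path (stillImageOutput and connection are assumed to come from your capture setup):
[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                               completionHandler:^(CMSampleBufferRef photoSampleBuffer, NSError *error) {
    if (photoSampleBuffer != NULL) {
        // The buffer already contains JPEG-encoded data, so no CGBitmapContext is needed.
        NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:photoSampleBuffer];
        UIImage *image = [UIImage imageWithData:jpegData];
        // ... use image ...
    }
}];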

Related

UIImage cv::Mat conversions with alpha channel

I'm using the following code to convert UIImage* and cv::Mat to each other:
- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    cvMat.step[0],  // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    return cvMat;
}
and
- (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                     // width
                                        cvMat.rows,                                     // height
                                        8,                                              // bits per component
                                        8 * cvMat.elemSize(),                           // bits per pixel
                                        cvMat.step[0],                                  // bytesPerRow
                                        colorSpace,                                     // colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault,  // bitmap info
                                        provider,                                       // CGDataProviderRef
                                        NULL,                                           // decode
                                        false,                                          // should interpolate
                                        kCGRenderingIntentDefault);                     // intent
    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
I took these from the OpenCV documentation. I use them as follows:
UIImage *img = [UIImage imageNamed:@"transparent.png"];
UIImage *img2 = [self UIImageFromCVMat:[self cvMatFromUIImage:img]];
However, these functions lose the alpha channel information. I know it is because of the flags kCGImageAlphaNone and kCGImageAlphaNoneSkipLast; unfortunately I couldn't find a way to avoid losing the alpha information by changing these flags.
So, how do I convert these two types between each other without losing alpha information?
Here is the image that I use:
We should use these functions from OpenCV v2.4.6:
UIImage* MatToUIImage(const cv::Mat& image);
void UIImageToMat(const UIImage* image, cv::Mat& m, bool alphaExist = false);
And don't forget to include:
opencv2/imgcodecs/ios.h
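A sketch of round-tripping with the alpha channel preserved, in an Objective-C++ (.mm) file (assuming the OpenCV iOS framework is linked; "transparent.png" is the question's image):
#import <opencv2/imgcodecs/ios.h>

UIImage *img = [UIImage imageNamed:@"transparent.png"];
cv::Mat mat;
UIImageToMat(img, mat, true);            // alphaExist = true keeps the alpha channel
UIImage *img2 = MatToUIImage(mat);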
You need to not pass kCGImageAlphaNoneSkipLast and instead pass (kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst) to get premultiplied alpha in BGRA format. CoreGraphics only supports premultiplied alpha. But, you will need to check on how OpenCV represents alpha in pixels to determine how to tell OpenCV that the pixels are already premultiplied. The code I have used assumes straight alpha with OpenCV, so you will need to be careful of that.
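Applied to the cvMatFromUIImage: method above, that change would look roughly like this (a sketch; whether OpenCV then interprets the premultiplied BGRA channels the way you expect still needs checking):
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // pointer to data
                                                cols, rows,     // width, height
                                                8,              // bits per component
                                                cvMat.step[0],  // bytes per row
                                                colorSpace,
                                                kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst);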

Retaining CMSampleBufferRef causes random crashes

I'm using captureOutput:didOutputSampleBuffer:fromConnection: in order to keep track of the frames. For my use case, I only need to store the last frame and use it in case the app goes to the background.
That's a sample from my code:
@property (nonatomic, strong) AVCaptureVideoDataOutput *videoDataOutput;
@property (atomic) CMSampleBufferRef currentBuffer;
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Create a bitmap graphics context with the sample buffer data
    CGContextRef con = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(con);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // Free up the context and color space
    CGContextRelease(con);
    CGColorSpaceRelease(colorSpace);
    // Create an image object from the Quartz image
    // UIImage *image = [UIImage imageWithCGImage:quartzImage];
    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:[[UIScreen mainScreen] scale] orientation:UIImageOrientationRight];
    // Release the Quartz image
    CGImageRelease(quartzImage);
    return (image);
}
//[self.videoDataOutput setSampleBufferDelegate:self queue:self.sessionQueue];
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CFRetain(sampleBuffer);
    @synchronized (self) {
        if (_currentBuffer) {
            CFRelease(_currentBuffer);
        }
        self.currentBuffer = sampleBuffer;
    }
}
- (void)goingToBackground:(NSNotification *)notification
{
    UIImage *snapshot = [self imageFromSampleBuffer:_currentBuffer];
    //Doing something with snapshot...
}
The problem is that in some cases I get this crash from within imageFromSampleBuffer:
<Error>: copy_read_only: vm_copy failed: status 1.
The crash happens on CGImageRef quartzImage = CGBitmapContextCreateImage(con);
What am I doing wrong?
I believe you need to copy the buffer, not just retain. From the description for captureOutput:didOutputSampleBuffer:fromConnection:
Note that to maintain optimal performance, some sample buffers directly reference pools of memory that may need to be reused by the device system and other capture inputs. [...] If multiple sample buffers reference such pools of memory for too long, inputs will no longer be able to copy new samples into memory and those samples will be dropped.
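If you really need to hold a frame past the delegate callback, one option is to deep-copy the pixel data into your own CVPixelBuffer and let the original return to the capture pool immediately. A sketch of such a helper (hypothetical, single-plane formats like BGRA assumed; planar formats would need per-plane copies):
- (CVPixelBufferRef)copyPixelBufferFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef source = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(source, kCVPixelBufferLock_ReadOnly);

    CVPixelBufferRef copy = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(source),
                        CVPixelBufferGetHeight(source),
                        CVPixelBufferGetPixelFormatType(source),
                        NULL,
                        &copy);
    if (copy) {
        CVPixelBufferLockBaseAddress(copy, 0);
        uint8_t *src = CVPixelBufferGetBaseAddress(source);
        uint8_t *dst = CVPixelBufferGetBaseAddress(copy);
        size_t srcBytesPerRow = CVPixelBufferGetBytesPerRow(source);
        size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(copy);
        size_t height = CVPixelBufferGetHeight(source);
        // Copy row by row because the two buffers may use different row padding.
        for (size_t row = 0; row < height; row++) {
            memcpy(dst + row * dstBytesPerRow,
                   src + row * srcBytesPerRow,
                   MIN(srcBytesPerRow, dstBytesPerRow));
        }
        CVPixelBufferUnlockBaseAddress(copy, 0);
    }
    CVPixelBufferUnlockBaseAddress(source, kCVPixelBufferLock_ReadOnly);
    return copy; // caller releases with CVPixelBufferRelease
}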

UIImage from CMSampleBufferRef conversion, resulting UIImage not rendering properly

I am working with AV Foundation. I am attempting to save a particular output CMSampleBufferRef as a UIImage in some variable. I am using the Manatee Works sample code, and it uses kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange for kCVPixelBufferPixelFormatTypeKey:
NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
But when I save the image, the output is just nil, or whatever the background of the ImageView is. I also tried not setting the output settings and just using the default, but to no avail; the image is still not rendered. I also tried setting kCVPixelFormatType_32BGRA, but then Manatee Works stops detecting the bar code.
I am using the context settings from the sample code provided by Apple on the developer website:
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(NULL,
CVPixelBufferGetWidth(imageBuffer),
CVPixelBufferGetHeight(imageBuffer),
8,
0,
CGColorSpaceCreateDeviceRGB(),
kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
Can anybody help me with what is going wrong here? It should be simple, but I don't have much experience with the AVFoundation framework. Is this some color space problem, since the context is using CGColorSpaceCreateDeviceRGB()?
I can provide more info if needed. I searched Stack Overflow and there were many entries regarding this, but none solved my problem.
Is there a reason you are passing 0 for bytesPerRow to CGBitmapContextCreate?
Also, you are passing NULL as the data pointer instead of the base address of the sample buffer's pixel buffer.
Creating the bitmap context should look approximately like this when sampleBuffer is your CMSampleBufferRef:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress,
                                             CVPixelBufferGetWidth(imageBuffer),
                                             CVPixelBufferGetHeight(imageBuffer),
                                             8,
                                             CVPixelBufferGetBytesPerRow(imageBuffer),
                                             colorSpace,
                                             kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CGContextRelease(context);
Here is how I used to do it. The code is written in Swift, but it works.
You should notice the orientation parameter on the last line; it depends on the video settings.
extension UIImage {
    /**
     Creates a new UIImage from the video frame sample buffer passed.
     - parameter sampleBuffer: the sample buffer to be converted into a UIImage.
     */
    convenience init?(sampleBuffer: CMSampleBufferRef) {
        // Get a CMSampleBuffer's Core Video image buffer for the media data
        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        // Lock the base address of the pixel buffer
        CVPixelBufferLockBaseAddress(imageBuffer, 0)
        // Get the base address of the pixel buffer
        let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
        // Get the number of bytes per row for the pixel buffer
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
        // Get the pixel buffer width and height
        let width = CVPixelBufferGetWidth(imageBuffer)
        let height = CVPixelBufferGetHeight(imageBuffer)
        // Create a device-dependent RGB color space
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        // Create a bitmap graphics context with the sample buffer data
        let bitmap = CGBitmapInfo(CGBitmapInfo.ByteOrder32Little.rawValue | CGImageAlphaInfo.PremultipliedFirst.rawValue)
        let context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                            bytesPerRow, colorSpace, bitmap)
        // Create a Quartz image from the pixel data in the bitmap graphics context
        let quartzImage = CGBitmapContextCreateImage(context)
        // Unlock the pixel buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0)
        // Create an image object from the Quartz image
        self.init(CGImage: quartzImage, scale: 1, orientation: UIImageOrientation.LeftMirrored)
    }
}
I use this regularly:
UIImage *image = [UIImage imageWithData:[self imageToBuffer:sampleBuffer]];
- (NSData *) imageToBuffer:(CMSampleBufferRef)source {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);
    NSData *data = [NSData dataWithBytes:src_buff length:bytesPerRow * height];
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return data;
}

CGContextDrawImage: invalid context 0x0 when loading grayscaled image

I'm using Xcode and would like to load a grayscale image into the program, but I am having a problem with it.
Previously I converted a grayscale IplImage(size, 8, 1) to UIImage and stored it as a JPEG. Now I would like to reverse the process to get back the IplImage.
I obtain the UIImage by doing:
UIImage *uiimage1 = [UIImage imageNamed:@"IMG_1.JPG"];
Then I use the following function:
- (IplImage *)CreateIplImageFromUIImage:(UIImage *)image {
// Getting CGImage from UIImage
CGImageRef imageRef = image.CGImage;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Creating temporal IplImage for drawing
IplImage *iplimage = cvCreateImage(
cvSize(image.size.width,image.size.height), IPL_DEPTH_8U, 4
);
// Creating CGContext for temporal IplImage
CGContextRef contextRef = CGBitmapContextCreate(
iplimage->imageData, iplimage->width, iplimage->height,
iplimage->depth, iplimage->widthStep,
colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault
);
// Drawing CGImage to CGContext
CGContextDrawImage(
contextRef,
CGRectMake(0, 0, image.size.width, image.size.height),
imageRef
);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
// Creating result IplImage
IplImage *ret = cvCreateImage(cvGetSize(iplimage), IPL_DEPTH_8U, 3);
cvCvtColor(iplimage, ret, CV_RGBA2BGR);
cvReleaseImage(&iplimage);
return ret;
}
This works fine for loading a color image with the standard RGBA channels, but I have a problem when I want to load a grayscale image with only 1 channel and no alpha channel.
I have tried changing colorSpace to CGColorSpaceCreateDeviceGray(), changing the number of channels from 4 to 1, and commenting out the cvCvtColor to return IplImage *iplimage directly.
However, I still get the error: CGContextDrawImage: invalid context 0x0
I think there might be a problem with CGBitmapContextCreate and likely something wrong with bitmapInfo.
I tried a few combinations such as kCGBitmapByteOrderDefault|kCGImageAlphaNone, but none of them work.
Any idea what I should do? Thanks in advance!
Look into CGBitmapContextCreate's documentation (in Xcode, Cmd + right-click to see the documentation, I think).
Did you set size_t bytesPerRow for that case? RGBA is 8+8+8+8 bits per pixel, while a single gray channel is only 8.
CGContextRef CGBitmapContextCreate ( void *data, size_t width, size_t height, size_t bitsPerComponent, size_t bytesPerRow, CGColorSpaceRef space, CGBitmapInfo bitmapInfo );
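For the single-channel case, the context has to be created against a gray color space, with bytesPerRow matching the 1-channel data and no alpha. A sketch, assuming an 8-bit, 1-channel IplImage as in the question:
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGContextRef contextRef = CGBitmapContextCreate(iplimage->imageData,
                                                iplimage->width,
                                                iplimage->height,
                                                8,                    // bits per component
                                                iplimage->widthStep,  // bytes per row of the 1-channel image
                                                graySpace,
                                                kCGImageAlphaNone | kCGBitmapByteOrderDefault);
// ... CGContextDrawImage / further processing as before ...
CGContextRelease(contextRef);
CGColorSpaceRelease(graySpace);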

Cropping a Pixel Buffer

I am using the following to turn image data from the camera into a UIImage. In order to save some time and memory, I'd like to crop the image data before I turn it into a UIImage.
Ideally I pass in a cropRect, and get back a cropped UIImage. However, since the camera output could be sized differently based on whether I am using a photo or video preset, I may not know what dimensions to use for the cropRect. I could use a cropRect, similar to the focus or exposure points, that uses a CGPoint between (0,0) and (1,1), and do similarly for the CGSize of the cropRect. Or I can get the dimensions of the sampleBuffer before I call the following, and pass in an appropriate cropRect. I'd like some advice as to which I should use.
I also would like to know how best to crop in order not to have to create an entire UIImage and then crop it back down. Typically, I am only interested in keeping about 10-20% of the pixels. I assume I have to iterate through the pixels, and start copying the cropRect into a different pixel buffer, until I have all the pixels I want.
And keep in mind that there is possible rotation happening according to orientation.
+ (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer orientation:(UIImageOrientation) orientation
{
    // Create a UIImage from sample buffer data
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:(CGFloat)1.0 orientation:orientation];
    // Release the Quartz image
    CGImageRelease(quartzImage);
    return (image);
}
In summary:
Should I pass in a cropRect which specifies a rect between (0,0,0,0) and (1,1,1,1) or do I pass in a cropRect that specifies exact pixel locations like (50,50,100,100)?
How best do I crop the pixel buffer?
I think you should use pixel coordinates for the cropRect, as you have to convert the float values to pixel values at some point anyway.
The following code is not tested, but should give you the idea.
CGRect cropRect = CGRectMake(50, 50, 100, 100); // cropRect
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVReturn lock = CVPixelBufferLockBaseAddress(pixelBuffer, 0);
if (lock == kCVReturnSuccess) {
    int w = 0;
    int h = 0;
    int r = 0;
    int bytesPerPixel = 0;
    unsigned char *buffer;
    w = CVPixelBufferGetWidth(pixelBuffer);
    h = CVPixelBufferGetHeight(pixelBuffer);
    r = CVPixelBufferGetBytesPerRow(pixelBuffer);
    bytesPerPixel = r/w;
    buffer = CVPixelBufferGetBaseAddress(pixelBuffer);
    UIGraphicsBeginImageContext(cropRect.size); // create context for image storage, use cropRect as size
    CGContextRef c = UIGraphicsGetCurrentContext();
    unsigned char *data = CGBitmapContextGetData(c);
    if (data != NULL) {
        // iterate over the pixels in cropRect
        for (int y = cropRect.origin.y, yDest = 0; y < CGRectGetMaxY(cropRect); y++, yDest++) {
            for (int x = cropRect.origin.x, xDest = 0; x < CGRectGetMaxX(cropRect); x++, xDest++) {
                int offset = bytesPerPixel*((w*y)+x); // offset calculation in source buffer
                int offsetDest = bytesPerPixel*((cropRect.size.width*yDest)+xDest); // offset calculation for destination image
                for (int i = 0; i < bytesPerPixel; i++) {
                    data[offsetDest+i] = buffer[offset+i];
                }
            }
        }
    }
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
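If you decide to accept a normalized rect instead, it can be mapped to pixel coordinates once the buffer is locked and its size is known (a sketch; normalizedRect is a hypothetical input between (0,0) and (1,1)):
CGRect normalizedRect = CGRectMake(0.25, 0.25, 0.5, 0.5);
size_t bufferWidth  = CVPixelBufferGetWidth(pixelBuffer);
size_t bufferHeight = CVPixelBufferGetHeight(pixelBuffer);
// Round to whole pixels before using the rect in the copy loop above.
CGRect cropRect = CGRectMake(round(normalizedRect.origin.x * bufferWidth),
                             round(normalizedRect.origin.y * bufferHeight),
                             round(normalizedRect.size.width * bufferWidth),
                             round(normalizedRect.size.height * bufferHeight));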
