UIImage from CMSampleBufferRef conversion, resulting UIImage not rendering properly - ios

I am working with AVFoundation and am attempting to save a particular output CMSampleBufferRef as a UIImage in some variable. I am using the Manatee Works sample code, which uses kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange for kCVPixelBufferPixelFormatTypeKey:
NSString* key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
But when I save the image, the output is just nil, or whatever the background of the image view happens to be. I also tried not setting the output settings at all and using the defaults, but the image still does not render. I also tried setting kCVPixelFormatType_32BGRA, but then Manatee Works stops detecting the barcode.
I am using the context settings from the sample code provided by Apple on the developer website:
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(NULL,
CVPixelBufferGetWidth(imageBuffer),
CVPixelBufferGetHeight(imageBuffer),
8,
0,
CGColorSpaceCreateDeviceRGB(),
kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
Can anybody help me figure out what is going wrong here? It should be simple, but I don't have much experience with the AVFoundation framework. Is this a color space problem, since the context uses CGColorSpaceCreateDeviceRGB()?
I can provide more info if needed. I searched Stack Overflow and there were many entries regarding this, but none solved my problem.
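For diagnosis, it can also help to log which pixel format the callback actually receives; a minimal sketch, assuming sampleBuffer is the delegate callback's parameter:
// Log the four-character code of the delivered pixel format (e.g. '420v' or 'BGRA')
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
OSType format = CVPixelBufferGetPixelFormatType(imageBuffer);
NSLog(@"pixel format: %c%c%c%c",
      (char)(format >> 24), (char)(format >> 16), (char)(format >> 8), (char)format);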

Is there a reason you are passing 0 for bytesPerRow to CGBitmapContextCreate?
Also, you are passing NULL as the data pointer instead of the pixel buffer's base address.
Creating the bitmap context should look approximately like this when sampleBuffer is your CMSampleBufferRef:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress,
CVPixelBufferGetWidth(imageBuffer),
CVPixelBufferGetHeight(imageBuffer),
8,
CVPixelBufferGetBytesPerRow(imageBuffer),
colorSpace,
kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
CGContextRelease(context);
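If the 420YpCbCr biplanar format has to stay (because the barcode SDK requires it), one option is to skip the RGB context entirely and render just the luma plane as a grayscale image. This is a separate, self-contained sketch, not from the original answer, assuming sampleBuffer is the delegate callback's parameter:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
// Plane 0 of the biplanar YpCbCr format is the 8-bit luma (Y) plane
void *lumaBase = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
size_t lumaWidth = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
size_t lumaHeight = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
size_t lumaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
// Draw the luma plane through a grayscale context instead of an RGB one
CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
CGContextRef grayContext = CGBitmapContextCreate(lumaBase, lumaWidth, lumaHeight, 8,
                                                 lumaBytesPerRow, gray, (CGBitmapInfo)kCGImageAlphaNone);
CGImageRef grayImage = CGBitmapContextCreateImage(grayContext);
UIImage *image = [UIImage imageWithCGImage:grayImage];
CGImageRelease(grayImage);
CGContextRelease(grayContext);
CGColorSpaceRelease(gray);
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
This keeps the buffer in the format the barcode SDK expects while still producing a visible (grayscale) image.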

Here is how I used to do it. The code is written in Swift, but it works.
Note the orientation parameter on the last line; it depends on the video settings.
extension UIImage {
/**
Creates a new UIImage from the video frame sample buffer passed.
@param sampleBuffer the sample buffer to be converted into a UIImage.
*/
convenience init?(sampleBuffer: CMSampleBufferRef) {
// Get a CMSampleBuffer's Core Video image buffer for the media data
let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0)
// Get the base address of the pixel buffer
let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
// Get the number of bytes per row for the pixel buffer
let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
// Get the pixel buffer width and height
let width = CVPixelBufferGetWidth(imageBuffer)
let height = CVPixelBufferGetHeight(imageBuffer)
// Create a device-dependent RGB color space
let colorSpace = CGColorSpaceCreateDeviceRGB()
// Create a bitmap graphics context with the sample buffer data
let bitmap = CGBitmapInfo(CGBitmapInfo.ByteOrder32Little.rawValue|CGImageAlphaInfo.PremultipliedFirst.rawValue)
let context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, bitmap)
// Create a Quartz image from the pixel data in the bitmap graphics context
let quartzImage = CGBitmapContextCreateImage(context)
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0)
// Create an image object from the Quartz image
self.init(CGImage: quartzImage, scale: 1, orientation: UIImageOrientation.LeftMirrored)
}
}

I use this regularly:
UIImage *image = [UIImage imageWithData:[self imageToBuffer:sampleBuffer]];
- (NSData *) imageToBuffer:(CMSampleBufferRef)source {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
CVPixelBufferLockBaseAddress(imageBuffer,0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);
NSData *data = [NSData dataWithBytes:src_buff length:bytesPerRow * height];
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
return data;
}
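Note that -imageWithData: only decodes encoded image formats such as PNG or JPEG, so the raw bytes returned by imageToBuffer: generally have to be wrapped back into a CGImage to get a displayable UIImage. A rough sketch of one way to do that, assuming a 32BGRA buffer and reusing the width, height, and bytesPerRow values computed above:
// Wrap the raw pixel NSData in a CGImage via a data provider (assumes 32BGRA)
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef cgImage = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace,
                                   kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst,
                                   provider, NULL, false, kCGRenderingIntentDefault);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);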

Related

Converting Objective-C code to Swift: okay to omit release calls?

We need to convert the code below from Objective-C to Swift.
Question:
There are a few function calls to release objects, e.g., CGImageRelease(newImage). Is it safe to assume no analog is needed for the Swift version since all the memory management is automatic, or do you need to free up memory in Swift as well?
Objective-C code:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
UIImage *image= [UIImage imageWithCGImage:newImage scale:1.0 orientation:orientation];
CGImageRelease(newImage);
Swift version so far:
private func turnBufferToPNGImage(imageSampleBuffer: CMSampleBufferRef, scale: CGFloat) -> UIImage {
let imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer)
// Lock base address
CVPixelBufferLockBaseAddress(imageBuffer, 0)
// Set properties for CGBitmapContext
let pixelData = CVPixelBufferGetBaseAddress(imageBuffer)
let width = CVPixelBufferGetWidth(imageBuffer)
let height = CVPixelBufferGetHeight(imageBuffer)
let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
let colorSpace = CGColorSpaceCreateDeviceRGB()
// Create CGBitmapContext
let newContext = CGBitmapContextCreate(pixelData, width, height, 8, bytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedFirst.rawValue)
// Create image from context
let rawImage = CGBitmapContextCreateImage(newContext)!
let newImage = UIImage(CGImage: rawImage, scale: scale, orientation: .Up)
// Unlock base address
CVPixelBufferUnlockBaseAddress(imageBuffer,0)
// Return image
return newImage
}
Per the docs:
Core Foundation types are automatically imported as full-fledged Swift classes. Wherever memory management annotations have been provided, Swift automatically manages the memory of Core Foundation objects, including Core Foundation objects that you instantiate yourself
So you can omit the calls.
No, you don't need to release Core Foundation objects, because Apple says:
The Core Foundation CFTypeRef type completely remaps to the AnyObject
type.
And
Core Foundation objects returned from annotated APIs are automatically
memory managed in Swift—you do not need to invoke the CFRetain,
CFRelease, or CFAutorelease functions yourself.
document here

Retaining CMSampleBufferRef causes random crashes

I'm using captureOutput:didOutputSampleBuffer:fromConnection: in order to keep track of the frames. For my use case, I only need to store the last frame and use it in case the app goes to the background.
That's a sample from my code:
@property (nonatomic, strong) AVCaptureVideoDataOutput *videoDataOutput;
@property (atomic) CMSampleBufferRef currentBuffer;
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef con = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(con);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(con);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
// UIImage *image = [UIImage imageWithCGImage:quartzImage];
UIImage *image = [UIImage imageWithCGImage:quartzImage scale:[[UIScreen mainScreen] scale] orientation:UIImageOrientationRight];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
//[self.videoDataOutput setSampleBufferDelegate:self queue:self.sessionQueue];
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CFRetain(sampleBuffer);
@synchronized (self) {
if (_currentBuffer) {
CFRelease(_currentBuffer);
}
self.currentBuffer = sampleBuffer;
}
}
- (void) goingToBackground:(NSNotification *) notification
{
UIImage *snapshot = [self imageFromSampleBuffer:_currentBuffer];
//Doing something with snapshot...
}
The problem is that in some cases I get this crash from within imageFromSampleBuffer:
<Error>: copy_read_only: vm_copy failed: status 1.
The crash happens on CGImageRef quartzImage = CGBitmapContextCreateImage(con);
What am I doing wrong?
I believe you need to copy the buffer, not just retain it. From the description of captureOutput:didOutputSampleBuffer:fromConnection:
Note that to maintain optimal performance, some sample buffers directly reference pools of memory that may need to be reused by the device system and other capture inputs. [...] If multiple sample buffers reference such pools of memory for too long, inputs will no longer be able to copy new samples into memory and those samples will be dropped.
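One way to follow that advice is to deep-copy the pixel data into a new CVPixelBuffer you own and keep that instead of the delivered sample buffer. A rough sketch, assuming a non-planar format such as kCVPixelFormatType_32BGRA (the method name is illustrative):
- (CVPixelBufferRef)copyPixelBufferFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef source = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(source, kCVPixelBufferLock_ReadOnly);
    // Allocate a buffer of the same size and format that we own outright
    CVPixelBufferRef copy = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(source),
                        CVPixelBufferGetHeight(source),
                        CVPixelBufferGetPixelFormatType(source),
                        NULL,
                        &copy);
    if (copy) {
        CVPixelBufferLockBaseAddress(copy, 0);
        uint8_t *src = CVPixelBufferGetBaseAddress(source);
        uint8_t *dst = CVPixelBufferGetBaseAddress(copy);
        size_t srcBytesPerRow = CVPixelBufferGetBytesPerRow(source);
        size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(copy);
        size_t height = CVPixelBufferGetHeight(source);
        // Copy row by row because the two buffers may use different row padding
        for (size_t row = 0; row < height; row++) {
            memcpy(dst + row * dstBytesPerRow,
                   src + row * srcBytesPerRow,
                   MIN(srcBytesPerRow, dstBytesPerRow));
        }
        CVPixelBufferUnlockBaseAddress(copy, 0);
    }
    CVPixelBufferUnlockBaseAddress(source, kCVPixelBufferLock_ReadOnly);
    return copy; // caller is responsible for CVPixelBufferRelease
}
The delegate would then store (and later release) this copied buffer instead of retaining sampleBuffer itself, so the capture pipeline's memory pool is returned promptly.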

Make zoom on NSData or on CMSampleBufferRef

I'm developing an iOS application with the latest SDK.
This app will work with OpenCV, and I have to zoom with the camera; since that isn't available in the iOS SDK, I think I have to do it programmatically.
I have to 'zoom' on every video frame. This is where I have to do it:
#pragma mark - AVCaptureSession delegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
/*Lock the image buffer*/
CVPixelBufferLockBaseAddress(imageBuffer,0);
/*Get information about the image*/
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
//size_t stride = CVPixelBufferGetBytesPerRow(imageBuffer);
//put buffer in open cv, no memory copied
cv::Mat image = cv::Mat(height, width, CV_8UC4, baseAddress);
// copy the image
//cv::Mat copied_image = image.clone();
_lastFrame = [NSData dataWithBytes:image.data
length:image.elemSize() * image.total()];
[DataExchanger postFrame];
/*We unlock the image buffer*/
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
}
Do you know how to zoom on an NSData or on a CMSampleBufferRef?
One way would be to put your picture into a CGImageRef, choose a square region of that picture, and draw it again at the normal size. Something like this (though there may be better ways):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
CVPixelBufferGetBytesPerRow(imageBuffer), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGImageRef smallQuartzImage = CGImageCreateWithImageInRect(quartzImage, CGRectMake(200, 200, 600, 600));
cv::Mat image(height, width, CV_8UC4 );
CGContextRef contextRef = CGBitmapContextCreate( image.data, width, height, 8, image.step[0], colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst );
CGContextDrawImage(contextRef, CGRectMake(0, 0, width, height), smallQuartzImage);
CGContextRelease( contextRef );
CGImageRelease( smallQuartzImage );
CGImageRelease( quartzImage );
CGColorSpaceRelease( colorSpace );

convert CMSampleBufferRef to UIImage

I'm capturing video with AVCaptureSession, but I would like to convert the captured image to a UIImage.
I found some code on the Internet:
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
NSLog(#"imageFromSampleBuffer: called");
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
But I got some errors:
Jan 17 17:39:25 iPhone-4-de-XXX ThinkOutsideTheBox[2363] <Error>: CGBitmapContextCreate: invalid data bytes/row: should be at least 2560 for 8 integer bits/component, 3 components, kCGImageAlphaPremultipliedFirst.
Jan 17 17:39:25 iPhone-4-de-XXX ThinkOutsideTheBox[2363] <Error>: CGBitmapContextCreateImage: invalid context 0x0
2013-01-17 17:39:25.896 ThinkOutsideTheBox[2363:907] image <UIImage: 0x1d553f00>
Jan 17 17:39:25 iPhone-4-de-XXX ThinkOutsideTheBox[2363] <Error>: CGContextDrawImage: invalid context 0x0
Jan 17 17:39:25 iPhone-4-de-XXX ThinkOutsideTheBox[2363] <Error>: CGBitmapContextGetData: invalid context 0x0
EDIT:
I also use the UIImage to get the RGB color:
-(void) captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection
{
UIImage* image = [self imageFromSampleBuffer:sampleBuffer];
unsigned char* pixels = [image rgbaPixels];
double totalLuminance = 0.0;
for(int p=0;p<image.size.width*image.size.height*4;p+=4)
{
totalLuminance += pixels[p]*0.299 + pixels[p+1]*0.587 + pixels[p+2]*0.114;
}
totalLuminance /= (image.size.width*image.size.height);
totalLuminance /= 255.0;
NSLog(#"totalLuminance %f",totalLuminance);
}
Your best bet will be to set the capture video data output's videoSettings to a dictionary that specifies the pixel format you want, which you'll need to set to some variation on RGB that CGBitmapContext can handle.
The documentation has a list of all of the pixel formats that Core Video can process. Only a tiny subset of those are supported by CGBitmapContext. The format that the code you found on the internet is expecting is kCVPixelFormatType_32BGRA, but that might have been written for Macs—on iOS devices, kCVPixelFormatType_32ARGB (big-endian) might be faster. Try them both, on the device, and compare frame rates.
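For reference, a minimal sketch of that first suggestion, assuming videoDataOutput is the AVCaptureVideoDataOutput being configured (the variable name is assumed):
// Ask the data output for BGRA frames so they match the RGB bitmap context
videoDataOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };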
You can try this code.
-(UIImage *) screenshotOfVideoStream:(CMSampleBufferRef)samImageBuff
{
CVImageBufferRef imageBuffer =
CMSampleBufferGetImageBuffer(samImageBuff);
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CGImageRef videoImage = [temporaryContext
createCGImage:ciImage
fromRect:CGRectMake(0, 0,
CVPixelBufferGetWidth(imageBuffer),
CVPixelBufferGetHeight(imageBuffer))];
UIImage *image = [[UIImage alloc] initWithCGImage:videoImage];
CGImageRelease(videoImage);
return image;
}
It works for me.
In case anyone is expecting a JPEG image like me, there are simple APIs provided by Apple:
[AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:photoSampleBuffer];
and:
[AVCapturePhotoOutput JPEGPhotoDataRepresentationForJPEGSampleBuffer:photoSampleBuffer previewPhotoSampleBuffer:previewPhotoSampleBuffer]
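For completeness, the NSData returned by these calls is already JPEG-encoded, so (assuming photoSampleBuffer comes from the still image or photo capture callback) it can be used directly:
NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:photoSampleBuffer];
UIImage *image = [UIImage imageWithData:jpegData]; // or write jpegData straight to disk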

Cropping a Pixel Buffer

I am using the following to turn image data from the camera into a UIImage. In order to save some time and memory, I'd like to crop the image data before turning it into a UIImage.
Ideally I pass in a cropRect and get back a cropped UIImage. However, since the camera output could be sized differently depending on whether I am using a photo or video preset, I may not know what dimensions to use for the cropRect. I could use a cropRect, similar to the focus or exposure points, that uses a CGPoint between (0,0) and (1,1), and do the same for the CGSize of the cropRect. Or I can get the dimensions of the sampleBuffer before I call the following and pass in an appropriate cropRect. I'd like some advice as to which I should use.
I also would like to know how best to crop in order not to have to create an entire UIImage and then crop it back down. Typically, I am only interested in keeping about 10-20% of the pixels. I assume I have to iterate through the pixels, and start copying the cropRect into a different pixel buffer, until I have all the pixels I want.
And keep in mind that there is possible rotation happening according to orientation.
+ (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer orientation:(UIImageOrientation) orientation
{
// Create a UIImage from sample buffer data
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage scale:(CGFloat)1.0 orientation:orientation];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
In summary:
Should I pass in a cropRect that specifies a normalized rect between (0,0,0,0) and (1,1,1,1), or do I pass in a cropRect that specifies exact pixel locations like (50,50,100,100)?
How best do I crop the pixel buffer?
I think you should use pixel values for the cropRect, since you have to convert the float values to pixel values at some point anyway.
The following code is not tested, but it should give you the idea.
CGRect cropRect = CGRectMake(50, 50, 100, 100); // cropRect
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVReturn lock = CVPixelBufferLockBaseAddress(pixelBuffer, 0);
if (lock == kCVReturnSuccess) {
int w = 0;
int h = 0;
int r = 0;
int bytesPerPixel = 0;
unsigned char *buffer;
w = CVPixelBufferGetWidth(pixelBuffer);
h = CVPixelBufferGetHeight(pixelBuffer);
r = CVPixelBufferGetBytesPerRow(pixelBuffer);
bytesPerPixel = r/w;
buffer = CVPixelBufferGetBaseAddress(pixelBuffer);
UIGraphicsBeginImageContext(cropRect.size); // create context for image storage, use cropRect as size
CGContextRef c = UIGraphicsGetCurrentContext();
unsigned char* data = CGBitmapContextGetData(c);
size_t destBytesPerRow = CGBitmapContextGetBytesPerRow(c); // the destination row stride may include padding
if (data != NULL) {
// iterate over the pixels in cropRect
for(int y = cropRect.origin.y, yDest = 0; y<CGRectGetMaxY(cropRect); y++, yDest++) {
for(int x = cropRect.origin.x, xDest = 0; x<CGRectGetMaxX(cropRect); x++, xDest++) {
int offset = r*y + bytesPerPixel*x; // offset into the source buffer, using its row stride
int offsetDest = (int)(destBytesPerRow*yDest) + bytesPerPixel*xDest; // offset into the destination context
for (int i = 0; i<bytesPerPixel; i++) {
data[offsetDest+i] = buffer[offset+i];
}
}
}
}
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0); // balance the lock taken above
}
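If the caller only knows a normalized rect (values between 0 and 1), it can be converted to pixel coordinates before running the copy loop above; a small helper sketch (the function name is illustrative):
// Convert a normalized (0..1) crop rect into pixel coordinates for the buffer
static CGRect pixelRectFromNormalizedRect(CGRect normalized, size_t width, size_t height)
{
    return CGRectMake(normalized.origin.x * width,
                      normalized.origin.y * height,
                      normalized.size.width * width,
                      normalized.size.height * height);
}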
