Alpha channel gone when converting UIImage to CVPixelBufferRef - ios

I'm using this code to create a movie from different UIImages with an AVAssetWriter. The code works great, but the problem is that the alpha channel is gone once I add the images to the writer. I can't figure out whether the alpha doesn't exist in the CVPixelBufferRef or whether the AVAssetWriter isn't able to process it.
My desired end result isn't a movie with an alpha channel, but multiple images layered on top of each other and merged into a movie file. I can put images on top of other images in a single frame, but all the images (pixel buffers) have a black background...
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image andSize:(CGSize)size {
    @autoreleasepool {
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                                 nil];
        CVPixelBufferRef pxbuffer = NULL;
        CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
                                              size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options,
                                              &pxbuffer);
        NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
        NSParameterAssert(pxdata != NULL);
        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, 4*size.width, rgbColorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
        NSParameterAssert(context);
        CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
        CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                               CGImageGetHeight(image)), image);
        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);
        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
        return pxbuffer;
    }
}

This is never going to work, because H.264 does not support an alpha channel. You cannot encode a movie with an alpha channel using the built-in iOS logic, end of story. It is possible to composite layers together before encoding, though. It is also possible to encode with a 3rd-party library that does support an alpha channel. See this question for more info.
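If compositing before encoding is enough for your case, here is a minimal sketch (reusing the approach of the pixelBufferFromCGImage:andSize: method above, with hypothetical backgroundImage and overlayImage CGImageRefs) that draws both layers into the same bitmap context, so the frame handed to the writer is already flattened:
// Sketch only: flatten two layers into one opaque frame before appending it to the writer.
// backgroundImage and overlayImage are hypothetical CGImageRefs you already have.
- (CVPixelBufferRef)compositedPixelBufferWithBackground:(CGImageRef)backgroundImage
                                                overlay:(CGImageRef)overlayImage
                                                   size:(CGSize)size {
    NSDictionary *options = @{ (id)kCVPixelBufferCGImageCompatibilityKey : @YES,
                               (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES };
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)options, &pxbuffer);
    if (status != kCVReturnSuccess || pxbuffer == NULL) {
        return NULL;
    }

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pxbuffer),
                                                 size.width, size.height, 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);

    // Draw the background first, then the overlay; the overlay's alpha is used for
    // blending here, so the encoded frame no longer needs an alpha channel of its own.
    CGRect frame = CGRectMake(0, 0, size.width, size.height);
    CGContextDrawImage(context, frame, backgroundImage);
    CGContextDrawImage(context, frame, overlayImage);

    CGContextRelease(context);
    CGColorSpaceRelease(rgbColorSpace);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer; // caller is responsible for CVPixelBufferRelease
}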

Related

CGContextDrawImage huge memory peak

I'm developing a movie maker application which applies some effects to imported videos.
I'm using AVAssetWriter in my application.
Everything works very well, but I have a big memory problem.
My app takes over 500 MB of RAM during the buffering process.
The algorithm for making a filtered video goes like this:
1- Import the video.
2- Extract all the frames of the video as CMSampleBuffer objects.
3- Convert each CMSampleBuffer object to a UIImage.
4- Apply the filter to the UIImage.
5- Convert the UIImage back to a new CMSampleBuffer object.
6- Append the new buffer to a writer output.
7- Finally, save the new movie to the Photo Gallery.
The problem is in step 5: I have a function which converts a UIImage to a CVPixelBuffer object and returns it.
Then I convert the CVPixelBuffer object to a CMSampleBuffer.
The function increases the memory a lot, and the application crashes in the end.
This is my code:
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image andSize:(CGSize)size
{
    double height = CGImageGetHeight(image);
    double width = CGImageGetWidth(image);
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
                                          size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options,
                                          &pxbuffer);
    if (status != kCVReturnSuccess) {
        return NULL;
    }
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
                                                 size.height, 8, 4*size.width, rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    CGFloat Y;
    if (height == size.height)
        Y = 0;
    else
        Y = (size.height / 2) - (height / 2);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGContextDrawImage(context, CGRectMake(0, Y, width, height), image);
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
CGContextDrawImage increases the memory by 2-5 MB per frame conversion.
I tried the following solutions:
1- Releasing pxbuffer using CFRelease.
2- Using CGImageRelease to release the image ref.
3- Surrounding the code with an @autoreleasepool block.
4- Using CGContextRelease.
5- UIGraphicsEndImageContext.
6- Using the Analyze tool in Xcode and fixing all the issues it flagged.
Here is the full code for Video filtering:
- (void)assetFilteringMethod:(FilterType)filterType AndAssetURL:(NSURL *)assetURL {
    CMSampleBufferRef sbuff;
    [areader addOutput:rout];
    [areader startReading];
    UIImage *bufferedImage;
    while ([areader status] != AVAssetReaderStatusCompleted) {
        sbuff = [rout copyNextSampleBuffer];
        if (sbuff == nil)
            [areader cancelReading];
        else {
            if (writerInput.readyForMoreMediaData) {
                @autoreleasepool {
                    bufferedImage = [self imageFromSampleBuffer:sbuff];
                    bufferedImage = [FrameFilterClass convertImageToFilterWithFilterType:filterType andImage:bufferedImage];
                    CVPixelBufferRef buffer = NULL;
                    buffer = [self pixelBufferFromCGImage:[bufferedImage CGImage] andSize:CGSizeMake(320, 240)];
                    [adaptor appendPixelBuffer:buffer withPresentationTime:CMSampleBufferGetPresentationTimeStamp(sbuff)];
                    CFRelease(buffer);
                    CFRelease(sbuff);
                }
            }
        }
    }
    //Finished buffering
    [videoWriter finishWritingWithCompletionHandler:^{
        if (videoWriter.status != AVAssetWriterStatusFailed && videoWriter.status == AVAssetWriterStatusCompleted) {
            dispatch_async(dispatch_get_main_queue(), ^{
                ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
                if ([library
                     videoAtPathIsCompatibleWithSavedPhotosAlbum:[NSURL fileURLWithPath:moviePath]]) {
                    [library writeVideoAtPathToSavedPhotosAlbum:[NSURL fileURLWithPath:moviePath]
                                                completionBlock:^(NSURL *assetURL, NSError *error) {
                                                }];
                }
            });
        }
        else
            NSLog(@"Video writing failed: %@", videoWriter.error);
    }];
}
I spent around 3 to 4 days trying to solve this problem...
Any help would be appreciated.
You have to release the image, for example with this line:
CGImageRelease(image.CGImage);
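A sketch of how the release could fit into the filtering loop above; note that this variant takes an owned copy with CGImageCreateCopy, so the CGImageRelease is balanced against an explicit +1 reference rather than the UIImage's own backing image:
// Sketch of the loop body from the question, with every create/copy balanced by a release.
@autoreleasepool {
    bufferedImage = [self imageFromSampleBuffer:sbuff];
    bufferedImage = [FrameFilterClass convertImageToFilterWithFilterType:filterType
                                                                andImage:bufferedImage];

    CGImageRef cgImage = CGImageCreateCopy(bufferedImage.CGImage); // +1, so we must release it
    CVPixelBufferRef buffer = [self pixelBufferFromCGImage:cgImage andSize:CGSizeMake(320, 240)];
    CGImageRelease(cgImage);                                       // balances the copy

    if (buffer) {
        [adaptor appendPixelBuffer:buffer
              withPresentationTime:CMSampleBufferGetPresentationTimeStamp(sbuff)];
        CVPixelBufferRelease(buffer);                              // balances pixelBufferFromCGImage
    }
    CFRelease(sbuff);                                              // balances copyNextSampleBuffer
}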

Receive memory warning when I create video from images

I was working on creating a video from images, and it works very well, but unfortunately the application crashes on an iPhone 4S. Please see my code and give me your suggestions.
- (void)createMovieFromImages:(NSArray *)images withCompletion:(CEMovieMakerCompletion)completion
{
    self.completionBlock = completion;
    [self.assetWriter startWriting];
    [self.assetWriter startSessionAtSourceTime:kCMTimeZero];
    dispatch_queue_t mediaInputQueue = dispatch_queue_create("mediaInputQueue", NULL);
    __block NSInteger i = 0;
    NSInteger frameNumber = [images count];
    [self.writerInput requestMediaDataWhenReadyOnQueue:mediaInputQueue usingBlock:^{
        while (YES) {
            if (i >= frameNumber) {
                break;
            }
            if ([self.writerInput isReadyForMoreMediaData]) {
                CVPixelBufferRef sampleBuffer = [self newPixelBufferFromCGImage:[[images objectAtIndex:i] CGImage]];
                if (sampleBuffer) {
                    if (i == 0) {
                        [self.bufferAdapter appendPixelBuffer:sampleBuffer withPresentationTime:kCMTimeZero];
                    } else {
                        CMTime lastTime = CMTimeMake(i-1, self.frameTime.timescale);
                        CMTime presentTime = CMTimeAdd(lastTime, self.frameTime);
                        [self.bufferAdapter appendPixelBuffer:sampleBuffer withPresentationTime:presentTime];
                    }
                    CFRelease(sampleBuffer);
                    i++;
                }
            }
        }
        [self.writerInput markAsFinished];
        [self.assetWriter finishWritingWithCompletionHandler:^{
            dispatch_async(dispatch_get_main_queue(), ^{
                self.completionBlock(self.fileURL);
            });
        }];
        CVPixelBufferPoolRelease(self.bufferAdapter.pixelBufferPool);
    }];
}
- (CVPixelBufferRef)newPixelBufferFromCGImage:(CGImageRef)image
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CGFloat frameWidth = [[self.videoSettings objectForKey:AVVideoWidthKey] floatValue];
    CGFloat frameHeight = [[self.videoSettings objectForKey:AVVideoHeightKey] floatValue];
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          frameWidth,
                                          frameHeight,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)options,
                                          &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata,
                                                 frameWidth,
                                                 frameHeight,
                                                 8,
                                                 4 * frameWidth,
                                                 rgbColorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextConcatCTM(context, CGAffineTransformIdentity);
    CGContextDrawImage(context, CGRectMake(0,
                                           0,
                                           CGImageGetWidth(image),
                                           CGImageGetHeight(image)),
                       image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
I pass in an image array and get a video, but it crashes on an iPhone 4S. Please help me.
This issue is becoming more common for a few reasons:
"iPhone 4S has 512 MB of DDR2 RAM" (wiki). With the numerous OS processes making their own demands on the hardware, plus the age of the device (wear and tear), an iPhone 4S is unlikely to be capable of something as demanding as this.
This question (ios app maximum memory budget) suggests that an app should consume no more than about 200 MB of memory at any given time (to be safe, keep it around 120 MB).
To try to make this work, move as much work as possible that is not relevant to the user interface onto background threads. Your entire - (CVPixelBufferRef)newPixelBufferFromCGImage:(CGImageRef)image is handled on the main thread, as is your method - (void)createMovieFromImages:(NSArray *)images...
There is no guarantee that placing these methods on a background thread will work, but it is worth trying (a rough memory-oriented sketch follows the links below). The following questions/answers/links have some relevant points about threading; if you are not aware of them, and even if you are, some of the points are interesting reading for a developer:
GCD - main vs background thread for updating a UIImageView
NSOperation and NSOperationQueue working thread vs main thread
http://pinkstone.co.uk/how-to-execute-a-method-on-a-background-thread-in-ios/
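Beyond threading, keeping each frame's temporary allocations short-lived also helps on low-memory devices. Here is a rough sketch of the write loop from the question, wrapping each frame in @autoreleasepool and reusing buffers from the adaptor's pixelBufferPool instead of allocating a new one per image (self.bufferAdapter, self.writerInput, newPixelBufferFromCGImage:, i, frameNumber and the other names are taken from the question's code):
// Sketch only: return from the block when the input is not ready (AVFoundation
// re-invokes it when it is), drain temporaries per frame, and recycle pooled buffers.
[self.writerInput requestMediaDataWhenReadyOnQueue:mediaInputQueue usingBlock:^{
    while ([self.writerInput isReadyForMoreMediaData] && i < frameNumber) {
        @autoreleasepool {
            CVPixelBufferRef pixelBuffer = NULL;
            if (self.bufferAdapter.pixelBufferPool != NULL) {
                // Reuse a buffer from the adaptor's pool instead of allocating a new one;
                // drawing [[images objectAtIndex:i] CGImage] into it is elided here and
                // would look like the bitmap-context code in the question.
                CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                   self.bufferAdapter.pixelBufferPool,
                                                   &pixelBuffer);
            } else {
                pixelBuffer = [self newPixelBufferFromCGImage:[[images objectAtIndex:i] CGImage]];
            }
            if (pixelBuffer) {
                CMTime presentTime = CMTimeMultiply(self.frameTime, (int32_t)i);
                [self.bufferAdapter appendPixelBuffer:pixelBuffer withPresentationTime:presentTime];
                CVPixelBufferRelease(pixelBuffer);
            }
            i++;
        }
    }
    if (i >= frameNumber) {
        [self.writerInput markAsFinished];
        // ...then finishWritingWithCompletionHandler: as in the question...
    }
}];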

Recording and merging video for screen capture

I am currently grabbing the frames of a video using AVPlayerItemVideoOutput. I use CADisplayLink to grab frames from the output. Then I pass the pixel buffer off to the asset writer. I do it like this:
- (void)displayLinkCallback:(CADisplayLink *)sender
{
    CMTime outputItemTime = kCMTimeInvalid;
    // Calculate the nextVsync time, which is when the screen will be refreshed next.
    CFTimeInterval nextVSync = (sender.timestamp + sender.duration);
    outputItemTime = [self.videoOutput itemTimeForHostTime:nextVSync];
    if (self.playerOne.playerAsset.playable) {
        if ([[self videoOutput] hasNewPixelBufferForItemTime:outputItemTime] && self.newSampleReady) {
            dispatch_async(self.captureSessionQueue, ^{
                CVPixelBufferRelease(self.lastPixelBuffer);
                self.lastPixelBuffer = [self.videoOutput copyPixelBufferForItemTime:outputItemTime itemTimeForDisplay:NULL];
                CMTime fpsTime = CMTimeMake(1, 24);
                self.currentVideoTime = CMTimeAdd(self.currentVideoTime, fpsTime);
                [_assetWriterInputPixelBufferAdaptor appendPixelBuffer:self.lastPixelBuffer withPresentationTime:self.currentVideoTime];
                self.newSampleReady = NO;
            });
        }
    }
}
This allows me to switch videos in real time and keep making a screen recording. But I also want to switch to a split view with two players, grab the frames from each player, and merge them into a single video. AVComposition would work, except that you have to know in advance which tracks and times you want to merge. My screen capture program lets the user switch freely between single view and split view and back. Is there a way to get the pixel buffers and use those to merge the recordings into a single video?
I have tried doing the following by just taking the first pixel buffer, creating two images, combining them, and then creating a new pixel buffer that I pass back to the asset writer, but I just get a black-screen video. Here's my code for this:
- (CVPixelBufferRef)pixelBufferToCGImageRef:(CVPixelBufferRef)pixelBuffer withSecond:(CVPixelBufferRef)pixelBuffer2
{
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    UIImage *im1 = [UIImage imageWithCIImage:ciImage];
    UIImage *im2 = [UIImage imageWithCIImage:ciImage];
    CGSize newSize = CGSizeMake(640, 480);
    UIGraphicsBeginImageContext(newSize);
    [im1 drawInRect:CGRectMake(0, 0, newSize.width/2, newSize.height)];
    [im2 drawInRect:CGRectMake(newSize.width/2, 0, newSize.width/2, newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    CIImage *newCIImage = [newImage CIImage];
    UIGraphicsEndImageContext();
    CVPixelBufferRef pbuff = NULL;
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          640,
                                          480,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)(options),
                                          &pbuff);
    if (status == kCVReturnSuccess) {
        [temporaryContext render:newCIImage
                 toCVPixelBuffer:pbuff
                          bounds:CGRectMake(0, 0, 640, 480)
                      colorSpace:nil];
    } else {
        NSLog(@"Failed create pbuff");
    }
    return pbuff;
}
Any suggestions?
The black screen I was getting was because ciImage becomes nil right after I get it when running on the simulator. If I run the code on a device, it works.
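If you hit the same nil ciImage, keep in mind that UIImage's CIImage property is only populated when the image was created from a CIImage, so one thing to try (just a small sketch based on the method above) is building the CIImage from the CGImage backing of the composited image instead:
// Sketch: construct the CIImage from the CGImage backing rather than relying on
// newImage.CIImage, which is nil for images that were not created from a CIImage.
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CIImage *newCIImage = [CIImage imageWithCGImage:newImage.CGImage];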

Memory warning OpenGL iOS Application

I am working on a graphics-rich iOS application. At one point, the memory taken by our application is 250 MB. I take each frame from the camera, process it with OpenGL shaders, and extract some data. Each time I use the camera to get frames for processing, I see the memory increase up to 280 MB. When I stop capturing frames, memory comes back to normal at 250 MB. If I repeat the process of starting the camera and exiting 10 times (say), I receive a memory warning (though no memory leak is observed). I am not using ARC here. I am maintaining an autorelease pool that includes the entire processing of a frame. I don't see any leaks while profiling. After 10 times, the memory seems to stay at 250 MB. I am not sure of the reason for the memory warning. Any insights? I am happy to provide further information. OpenGL version: ES 2.0, iOS version: 7.0.
You should use ARC; it will automatically release memory for you and help keep your application optimized.
According to some other questions like this one (Crash running OpenGL on iOS after memory warning) and this one (instruments with iOS: Why does Memory Monitor disagree with Allocations?) the problem may be that you aren't deleting OpenGL resources (VBOs, textures, renderbuffers, whatever) when you're done with them.
Without seeing code, who knows? Are you simply rendering the frame buffer using the presentRenderbuffer method of EAGLContext? Then, what are you doing with the pixelBuffer you passed to CVOpenGLESTextureCacheCreateTextureFromImage? The pixel buffer is the only source of substantial memory in a typical use scenario.
However, if you're swapping the data in the render buffer to another buffer with, say, glReadPixels, then you've introduced one of several memory hogs. If the buffer you swapped to was a CoreGraphics buffer via, say, a CGDataProvider, did you include a data release callback, or did you pass nil as the parameter when you created the provider? Did you glFlush after you swapped buffers?
These are questions for which I could ascertain answers if you provided code. If you think you can tackle this without doing so, but would like to see working code that successfully manages memory in the most arduous use-case scenario there could possibly be, see:
https://demonicactivity.blogspot.com/2016/11/tech-serious-ios-developers-use-every.html
For your convenience, I've provided some code below. Place it after any call to the presentRenderbuffer method, commenting out the call if you do not want to render the buffer to the display in the CAEAGLLayer (as I did in the sample below):
// [_context presentRenderbuffer:GL_RENDERBUFFER];

dispatch_async(dispatch_get_main_queue(), ^{
    @autoreleasepool {
        // To capture the output to an OpenGL render buffer...
        NSInteger myDataLength = _backingWidth * _backingHeight * 4;
        GLubyte *buffer = (GLubyte *)malloc(myDataLength);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 8);
        glReadPixels(0, 0, _backingWidth, _backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

        // To swap the pixel buffer to a CoreGraphics context (as a CGImage)
        CGDataProviderRef provider;
        CGColorSpaceRef colorSpaceRef;
        CGImageRef imageRef;
        CVPixelBufferRef pixelBuffer;
        @try {
            provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, &releaseDataCallback);
            int bitsPerComponent = 8;
            int bitsPerPixel = 32;
            int bytesPerRow = 4 * _backingWidth;
            colorSpaceRef = CGColorSpaceCreateDeviceRGB();
            CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
            CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
            imageRef = CGImageCreate(_backingWidth, _backingHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
        } @catch (NSException *exception) {
            NSLog(@"Exception: %@", [exception reason]);
        } @finally {
            if (imageRef) {
                // To convert the CGImage to a pixel buffer (for writing to a file using AVAssetWriter)
                pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:imageRef];
                // To verify the integrity of the pixel buffer (by converting it back to a CGImage, and then displaying it in a layer)
                imageLayer.contents = (__bridge id)[CVCGImageUtil cgImageFromPixelBuffer:pixelBuffer context:_ciContext];
            }
            CGDataProviderRelease(provider);
            CGColorSpaceRelease(colorSpaceRef);
            CGImageRelease(imageRef);
        }
    }
});
.
.
.
The callback to free the data in the instance of the CGDataProvider class:
static void releaseDataCallback (void *info, const void *data, size_t size) {
    free((void *)data);
}
The CVCGImageUtil class interface and implementation files, respectively:
@import Foundation;
@import CoreMedia;
@import CoreGraphics;
@import QuartzCore;
@import CoreImage;
@import UIKit;

@interface CVCGImageUtil : NSObject

+ (CGImageRef)cgImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer context:(CIContext *)context;
+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image;
+ (CMSampleBufferRef)sampleBufferFromCGImage:(CGImageRef)image;

@end

#import "CVCGImageUtil.h"

@implementation CVCGImageUtil
+ (CGImageRef)cgImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer context:(CIContext *)context
{
    // CVPixelBuffer to CoreImage
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    image = [image imageByApplyingTransform:CGAffineTransformMakeRotation(M_PI)];
    CGPoint origin = [image extent].origin;
    image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-origin.x, -origin.y)];

    // CoreImage to CGImage via CoreImage context
    CGImageRef cgImage = [context createCGImage:image fromRect:[image extent]];

    // CGImage to UIImage (OPTIONAL)
    //UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
    //return (CGImageRef)uiImage.CGImage;

    return cgImage;
}

+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    CGSize frameSize = CGSizeMake(CGImageGetWidth(image),
                                  CGImageGetHeight(image));
    NSDictionary *options =
        [NSDictionary dictionaryWithObjectsAndKeys:
         [NSNumber numberWithBool:YES],
         kCVPixelBufferCGImageCompatibilityKey,
         [NSNumber numberWithBool:YES],
         kCVPixelBufferCGBitmapContextCompatibilityKey,
         nil];
    CVPixelBufferRef pxbuffer = NULL;

    CVReturn status =
        CVPixelBufferCreate(
            kCFAllocatorDefault, frameSize.width, frameSize.height,
            kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options,
            &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(
        pxdata, frameSize.width, frameSize.height,
        8, CVPixelBufferGetBytesPerRow(pxbuffer),
        rgbColorSpace,
        (CGBitmapInfo)kCGBitmapByteOrder32Little |
        kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

+ (CMSampleBufferRef)sampleBufferFromCGImage:(CGImageRef)image
{
    CVPixelBufferRef pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:image];
    CMSampleBufferRef newSampleBuffer = NULL;
    CMSampleTimingInfo timimgInfo = kCMTimingInfoInvalid;
    CMVideoFormatDescriptionRef videoInfo = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(
        NULL, pixelBuffer, &videoInfo);
    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
                                       pixelBuffer,
                                       true,
                                       NULL,
                                       NULL,
                                       videoInfo,
                                       &timimgInfo,
                                       &newSampleBuffer);
    return newSampleBuffer;
}

@end

Capture still UIImage without compression (from CMSampleBufferRef)?

I need to obtain the UIImage from uncompressed image data from CMSampleBufferRef. I'm using the code:
[captureStillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                                      completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
    // that famous function from Apple docs found on a lot of websites
    // does NOT work for still images
    UIImage *capturedImage = [self imageFromSampleBuffer:imageSampleBuffer];
}];
http://developer.apple.com/library/ios/#qa/qa1702/_index.html is a link to imageFromSampleBuffer function.
But it does not work properly. :(
There is a jpegStillImageNSDataRepresentation:imageSampleBuffer method, but it gives the compressed data (well, because JPEG).
How can I get UIImage created with the most raw non-compressed data after capturing Still Image?
Maybe, I should specify some settings to video output? I'm currently using those:
captureStillImageOutput = [[AVCaptureStillImageOutput alloc] init];
captureStillImageOutput.outputSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
I've noticed, that output has a default value for AVVideoCodecKey, which is AVVideoCodecJPEG. Can it be avoided in any way, or does it even matter when capturing still image?
I found something there: Raw image data from camera like "645 PRO" , but I need just a UIImage, without using OpenCV or OGLES or other 3rd party.
The method imageFromSampleBuffer does work; in fact, I'm using a modified version of it. But if I remember correctly, you need to set the outputSettings right. I think you need to set the key to kCVPixelBufferPixelFormatTypeKey and the value to kCVPixelFormatType_32BGRA.
So for example:
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* outputSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[newStillImageOutput setOutputSettings:outputSettings];
EDIT
I am using those settings to take still images, not video.
Is your sessionPreset AVCaptureSessionPresetPhoto? There may be problems with that:
AVCaptureSession *newCaptureSession = [[AVCaptureSession alloc] init];
[newCaptureSession setSessionPreset:AVCaptureSessionPresetPhoto];
EDIT 2
The part about saving it to a UIImage is identical to the one from the documentation. That's the reason I was asking for other origins of the problem, but I guess that was just grasping at straws.
There is another way I know of, but that requires OpenCV.
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return (image);
}
I guess that is of no help to you, sorry. I don't know enough to think of other origins for your problem.
Here's a more efficient way:
UIImage *image = [UIImage imageWithData:[self imageToBuffer:sampleBuffer]];

- (NSData *)imageToBuffer:(CMSampleBufferRef)source {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);

    NSData *data = [NSData dataWithBytes:src_buff length:bytesPerRow * height];

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return data;
}
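One caveat: +[UIImage imageWithData:] expects data in a recognized encoded image format, so if it does not decode these raw pixel bytes in your setup, a sketch along the following lines (assuming kCVPixelFormatType_32BGRA output, as in the question's settings, and a hypothetical helper name) wraps the bytes in a CGImage directly. The width, height, and bytesPerRow values come from the same CVPixelBufferGet… calls used in imageToBuffer: above.
// Sketch: wrap raw 32BGRA bytes from the pixel buffer in a CGImage/UIImage
// without any compression step.
- (UIImage *)imageFromBGRAData:(NSData *)data
                         width:(size_t)width
                        height:(size_t)height
                   bytesPerRow:(size_t)bytesPerRow {
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImage = CGImageCreate(width, height,
                                       8,               // bits per component
                                       32,              // bits per pixel
                                       bytesPerRow,
                                       colorSpace,
                                       (CGBitmapInfo)kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst, // BGRA layout
                                       provider, NULL, NO, kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    return image;
}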
