Calling captureOutput in IBAction - ios

I'd like to create an app in which tapping an IBAction button displays the current frame from the live camera in a UIImageView.
Using this method: https://developer.apple.com/library/ios/qa/qa1702/_index.html
I'd like to call the delegate method below from my IBAction method. This one exactly:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
Unfortunately, after dozens of attempts I can't get it to work.
Summarizing: how do I create a (void) method that obtains the current frame as a UIImage and displays it in a UIImageView object placed in the storyboard?
I'd be grateful for any help.
Regards.

Use this method to capture a still image from the AVCaptureSession, where stillImageOutput is an AVCaptureStillImageOutput:
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
                                               completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext
                             createCGImage:ciImage
                             fromRect:CGRectMake(0, 0,
                                                 CVPixelBufferGetWidth(imageBuffer),
                                                 CVPixelBufferGetHeight(imageBuffer))];
    UIImage *image = [[UIImage alloc] initWithCGImage:videoImage];
    self.imageView.image = image;
    CGImageRelease(videoImage); // createCGImage:fromRect: returns a +1 reference, so release it
}];
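For completeness, here is a rough sketch of wiring that call into an IBAction. The stillImageOutput and imageView properties and the already-configured, running session are assumptions, and the JPEG shortcut is used instead of the CIImage route above:

- (IBAction)captureButtonTapped:(id)sender {
    // Ask the still-image output for its video connection.
    AVCaptureConnection *videoConnection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
                                                        completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
    {
        if (error || imageSampleBuffer == NULL) return;
        // Convert the captured sample buffer to a UIImage via its JPEG representation.
        NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        UIImage *image = [UIImage imageWithData:jpegData];
        // UI updates must happen on the main queue; imageView is the UIImageView placed in the storyboard.
        dispatch_async(dispatch_get_main_queue(), ^{
            self.imageView.image = image;
        });
    }];
}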


didDropSampleBuffer is called very often in iOS

I capture video and do some analysis on it in the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, but after a short time this method stops being called and captureOutput:didDropSampleBuffer:fromConnection: is called instead.
When I don't do anything in didOutputSampleBuffer, everything is okay.
I run a TensorFlow model in this delegate method, and that is what causes the problem.
Problem:
The problem is that once didDropSampleBuffer is called, didOutputSampleBuffer will not be called again.
My solution: my workaround was stopping and restarting the AVCaptureSession, but that caused extra memory usage, which eventually made my app crash.
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // ****** do heavy work in this delegate *********
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    graph = [TensorflowGraph new];
    predictions = [graph runModelOnPixelBuffer:pixelBuffer orientation:UIDeviceOrientationPortrait CardRect:_vwRect];
}
- (void)captureOutput:(AVCaptureOutput *)output didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CFTypeRef droppedFrameReason = CMGetAttachment(sampleBuffer, kCMSampleBufferAttachmentKey_DroppedFrameReason, NULL);
    NSLog(@"dropped frame, reason: %@", droppedFrameReason);
}
----> dropped frame, reason: OutOfBuffers
According to https://developer.apple.com/library/archive/technotes/tn2445/_index.html:
This condition is typically caused by the client holding onto buffers
for too long, and can be alleviated by returning buffers to the
provider.
How can I return the buffer to the provider?
Edited
After the line CGImageRef cgImage = [context createCGImage:resized fromRect:resized.extent]; has executed 11 times, didDropSampleBuffer: is called. Commenting out CFRelease(pixelBuffer) makes no difference in the result. Does that mean pixelBuffer is not being released?
CFRetain(pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
CIImage* ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer];
ciImage = [ciImage imageByCroppingToRect:cropRect];
CGAffineTransform transform = CGAffineTransformIdentity;
CGFloat angle = 0.0;
transform = CGAffineTransformRotate(transform, angle);
CIImage* resized = [ciImage imageByApplyingTransform:transform];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:resized fromRect:resized.extent]; // *********************************
UIImage* _res = [[UIImage alloc] initWithCGImage:cgImage];
CFRelease(pixelBuffer);
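This is not a guaranteed fix for the dropped frames, but as a sketch of what the technote means by "returning buffers to the provider": create whatever you need from the pixel buffer inside the callback, release everything you created (in particular the CGImageRef from createCGImage:fromRect:, which comes back with a +1 reference), and don't keep a reference to the buffer once the delegate method returns. cropRect and processFrameImage: below are placeholders:

- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer];
    ciImage = [ciImage imageByCroppingToRect:cropRect];

    static CIContext *context = nil;          // reuse one context instead of creating one per frame
    if (context == nil) context = [CIContext contextWithOptions:nil];

    CGImageRef cgImage = [context createCGImage:ciImage fromRect:ciImage.extent];
    UIImage *frameImage = [[UIImage alloc] initWithCGImage:cgImage];
    CGImageRelease(cgImage);                  // balance the +1 returned by createCGImage:fromRect:

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    // No CFRetain on the pixel buffer and no stored reference to sampleBuffer,
    // so the buffer goes back to the capture session as soon as this method returns.

    // Hand only the decoded UIImage to the heavy (TensorFlow) work.
    [self processFrameImage:frameImage];      // hypothetical helper
}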

Why isn't UIImageWriteToSavedPhotosAlbum() working when using CIImage?

I'm generating a CIImage using a few chained filters and trying to output the generated image to the user's photo album for debugging purposes. The callback I supply to UIImageWriteToSavedPhotosAlbum() always returns a nil error, so I assume nothing is going wrong, but the image never seems to show up.
I've used this function in the past to dump OpenGL buffers to the photo album for debugging, but I realize this isn't the same case. Should I be doing something differently?
-(void)cropAndSaveImage:(CIImage *)inputImage fromFeature:(CIFaceFeature *)feature
{
    // First crop out the face.
    [_cropFilter setValue:inputImage forKey:@"inputImage"];
    [_cropFilter setValue:[CIVector vectorWithCGRect:feature.bounds] forKey:@"inputRectangle"];
    CIImage *croppedImage = _cropFilter.outputImage;

    __block CIImage *outImage = croppedImage;
    dispatch_async(dispatch_get_main_queue(), ^{
        UIImage *outUIImage = [UIImage imageWithCIImage:outImage];
        UIImageWriteToSavedPhotosAlbum(outUIImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
    });
}

-(void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo
{
    NSLog(@"debug LBP output face. error: %@", error);
}
I've verified that the boundaries are never 0.
The callback output is always
debug LBP output face. error: (null)
I figured this out on my own and deleted the question, but then I thought maybe someone will get some use out of it. I say this because I came across an older answer suggesting the original implementation worked, but in actuality I had to do the following to make it work properly.
__block CIImage *outImage = _lbpFilter.outputImage;
dispatch_async(dispatch_get_main_queue(), ^{
    CGImageRef imgRef = [self renderCIImage:outImage];
    UIImage *uiImage = [UIImage imageWithCGImage:imgRef];
    UIImageWriteToSavedPhotosAlbum(uiImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
});

+ (CGImageRef)renderCIImage:(CIImage *)img
{
    if ( !m_ctx ) {
        NSDictionary *options = @{kCIContextOutputColorSpace: [NSNull null], kCIContextWorkingColorSpace: [NSNull null]};
        m_ctx = [CIContext contextWithOptions:options];
    }
    return [m_ctx createCGImage:img fromRect:img.extent];
}
Rendering filter.outputImage to a CGImage with CIContext's createCGImage:fromRect: and wrapping that CGImage in a UIImage saves the image successfully. The original version presumably fails because a UIImage created with imageWithCIImage: is not backed by a CGImage, so UIImageWriteToSavedPhotosAlbum() has nothing it can write.

Memory Leak in CMSampleBufferGetImageBuffer

I'm getting a UIImage from a CMSampleBufferRef video buffer every N video frames, like this:
- (void)imageFromVideoBuffer:(void(^)(UIImage *image))completion {
    CMSampleBufferRef sampleBuffer = _myLastSampleBuffer;
    if (sampleBuffer != nil) {
        CFRetain(sampleBuffer);
        CIImage *ciImage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
        _lastAppendedVideoBuffer.sampleBuffer = nil;
        if (_context == nil) {
            _context = [CIContext contextWithOptions:nil];
        }
        CVPixelBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CGImageRef cgImage = [_context createCGImage:ciImage fromRect:
                              CGRectMake(0, 0, CVPixelBufferGetWidth(buffer), CVPixelBufferGetHeight(buffer))];
        __block UIImage *image = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
        CFRelease(sampleBuffer);
        if (completion) completion(image);
        return;
    }
    if (completion) completion(nil);
}
Xcode and Instruments detect a memory leak, but I'm not able to get rid of it.
I'm releasing the CGImageRef and CMSampleBufferRef as usual:
CGImageRelease(cgImage);
CFRelease(sampleBuffer);
[UPDATE]
This is placed in the AVCapture output callback, where I get the sampleBuffer:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (captureOutput == _videoOutput) {
        _lastVideoBuffer.sampleBuffer = sampleBuffer;
        id<CIImageRenderer> imageRenderer = _CIImageRenderer;

        dispatch_async(dispatch_get_main_queue(), ^{
            @autoreleasepool {
                CIImage *ciImage = nil;
                ciImage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
                if (_context == nil) {
                    _context = [CIContext contextWithOptions:nil];
                }
                CGImageRef processedCGImage = [_context createCGImage:ciImage
                                                              fromRect:[ciImage extent]];
                //UIImage *image = [UIImage imageWithCGImage:processedCGImage];
                CGImageRelease(processedCGImage);
                NSLog(@"Captured image %@", ciImage);
            }
        });
    }
}
The code that leaks is the createCGImage:fromRect: call:
CGImageRef processedCGImage = [_context createCGImage:ciImage
                                              fromRect:[ciImage extent]];
even with an @autoreleasepool, a CGImageRelease of the CGImage reference, and the CIContext held as an instance property.
This seems to be the same issue addressed here: Can't save CIImage to file on iOS without memory leaks
[UPDATE]
The leak seems to be due to a bug. The issue is well described in
Memory leak on CIContext createCGImage at iOS 9?
A sample project shows how to reproduce this leak: http://www.osamu.co.jp/DataArea/VideoCameraTest.zip
The last comments there report that
It looks like they fixed this in 9.1b3. If anyone needs a workaround
that works on iOS 9.0.x, I was able to get it working with this:
In test code:
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
    if (error) return;

    __block NSString *filePath = [NSTemporaryDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"ipdf_pic_%i.jpeg", (int)[NSDate date].timeIntervalSince1970]];

    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];

    dispatch_async(dispatch_get_main_queue(), ^
    {
        @autoreleasepool
        {
            CIImage *enhancedImage = [CIImage imageWithData:imageData];
            if (!enhancedImage) return;
            static CIContext *ctx = nil; if (!ctx) ctx = [CIContext contextWithOptions:nil];
            CGImageRef imageRef = [ctx createCGImage:enhancedImage fromRect:enhancedImage.extent format:kCIFormatBGRA8 colorSpace:nil];
            UIImage *image = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:UIImageOrientationRight];
            [[NSFileManager defaultManager] createFileAtPath:filePath contents:UIImageJPEGRepresentation(image, 0.8) attributes:nil];
            CGImageRelease(imageRef);
        }
    });
}];
and the workaround for iOS 9.0 should be (in Swift):
extension CIContext {
    func createCGImage_(image: CIImage, fromRect: CGRect) -> CGImage {
        let width = Int(fromRect.width)
        let height = Int(fromRect.height)
        let rawData = UnsafeMutablePointer<UInt8>.alloc(width * height * 4)
        render(image, toBitmap: rawData, rowBytes: width * 4, bounds: fromRect, format: kCIFormatRGBA8, colorSpace: CGColorSpaceCreateDeviceRGB())
        let dataProvider = CGDataProviderCreateWithData(nil, rawData, height * width * 4) { info, data, size in
            UnsafeMutablePointer<UInt8>(data).dealloc(size)
        }
        return CGImageCreate(width, height, 8, 32, width * 4, CGColorSpaceCreateDeviceRGB(), CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedLast.rawValue), dataProvider, nil, false, .RenderingIntentDefault)!
    }
}
We were experiencing a similar issue in an app we created, where we process each frame for feature keypoints with OpenCV and send off a frame every couple of seconds. After running for a while we would end up with quite a few memory pressure messages.
We managed to rectify this by running our processing code in its own autorelease pool like so (jpegDataFromSampleBufferAndCrop does something similar to what you are doing, with added cropping):
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    @autoreleasepool {
        if ([self.lastFrameSentAt timeIntervalSinceNow] < -kContinuousRateInSeconds) {
            NSData *imageData = [self jpegDataFromSampleBufferAndCrop:sampleBuffer];
            if (imageData) {
                [self processImageData:imageData];
            }
            self.lastFrameSentAt = [NSDate date];
            imageData = nil;
        }
    }
}
I can confirm that this memory leak still exists on iOS 9.2. (I've also posted on the Apple Developer Forum.)
I get the same memory leak on iOS 9.2. I've tried dropping the EAGLContext and using MetalKit with MTLDevice instead, and I've tried different CIContext methods such as drawImage, createCGImage, and render, but nothing seems to work.
It is very clear that this is a bug as of iOS 9. Try it yourself by downloading the example app from Apple (see below), running the same project on a device with iOS 8.4 and then on a device with iOS 9.2, and watching the memory gauge in Xcode.
Download https://developer.apple.com/library/ios/samplecode/AVBasicVideoOutput/Introduction/Intro.html#//apple_ref/doc/uid/DTS40013109
Add this to APLEAGLView.h:20:
@property (strong, nonatomic) CIContext *ciContext;
Replace APLEAGLView.m:118 with this:
[EAGLContext setCurrentContext:_context];
_ciContext = [CIContext contextWithEAGLContext:_context];
And finally replace APLEAGLView.m:341-343 with this:
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

@autoreleasepool
{
    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur" keysAndValues:kCIInputImageKey, sourceImage, nil];
    CIImage *filteredImage = filter.outputImage;
    [_ciContext render:filteredImage toCVPixelBuffer:pixelBuffer];
}

glBindRenderbuffer(GL_RENDERBUFFER, _colorBufferHandle);

Make video from frames captured in captureOutput:didOutputSampleBuffer:fromConnection

How can I make a movie from the frames captured in
-captureOutput:didOutputSampleBuffer:fromConnection:
and save it to the photo library?
Thank You.
Sorry, I hadn't noticed. See the edited answer; maybe you'll find it useful.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    [_coreImageContext drawImage:image inRect:[image extent] fromRect:[image extent]];
    [_context presentRenderbuffer:GL_RENDERBUFFER];
}
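Note that the snippet above only renders each frame to the screen. To actually build a movie out of the delivered sample buffers, the usual approach is to append them to an AVAssetWriter. The sketch below is not the answerer's code; _assetWriter and _writerInput are assumed instance variables, and the output path and dimensions are placeholders:

// One-time setup before frames arrive (path and dimensions are illustrative).
NSURL *outputURL = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"capture.mov"]];
NSError *error = nil;
_assetWriter = [[AVAssetWriter alloc] initWithURL:outputURL fileType:AVFileTypeQuickTimeMovie error:&error];
NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                 AVVideoWidthKey  : @640,
                                 AVVideoHeightKey : @480 };
_writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
_writerInput.expectsMediaDataInRealTime = YES;
[_assetWriter addInput:_writerInput];

// In captureOutput:didOutputSampleBuffer:fromConnection:, append each frame.
if (_assetWriter.status == AVAssetWriterStatusUnknown) {
    [_assetWriter startWriting];
    [_assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
}
if (_writerInput.isReadyForMoreMediaData) {
    [_writerInput appendSampleBuffer:sampleBuffer];
}

// When recording should stop, finish the file and save it to the photo library.
[_writerInput markAsFinished];
[_assetWriter finishWritingWithCompletionHandler:^{
    UISaveVideoAtPathToSavedPhotosAlbum(outputURL.path, nil, NULL, NULL);
}];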

Why is my image not updating when I call it from the capture output protocol?

I am trying to do something very simple. I want to display the video layer full screen, and once every second update a UIImage with the CMSampleBufferRef I got at that time. However, I am running into two different problems. The first is that changing:
[connection setVideoMaxFrameDuration:CMTimeMake(1, 1)];
[connection setVideoMinFrameDuration:CMTimeMake(1, 1)];
will also modify the video preview layer. I thought it would only change the rate at which AVFoundation sends frames to the delegate, but it seems to affect the entire session (which, in retrospect, looks obvious). So this makes my video preview update only once every second. I guess I could omit those lines and simply add a timer in the delegate so that every second it sends the CMSampleBufferRef to another method to process it, but I don't know if that is the right approach.
My second problem is that the UIImageView is NOT updating, or sometimes it updates only once and doesn't change afterwards. I am using this method to update it:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    //NSData *jpeg = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer];
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    [imageView setImage:image];
    // Add your code here that uses the image.
    NSLog(@"update");
}
I took this from the Apple examples. The method is being called correctly every second, which I checked by watching for the "update" log message, but the image is not changing at all. Also, is the sampleBuffer destroyed automatically, or do I have to release it?
These are the other two important methods:
viewDidLoad:
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    session = [[AVCaptureSession alloc] init];

    // Add inputs and outputs.
    if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
        session.sessionPreset = AVCaptureSessionPreset640x480;
    }
    else {
        // Handle the failure.
        NSLog(@"Cannot set session preset to 640x480");
    }

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
        NSLog(@"Could not create input: %@", error);
    }
    if ([session canAddInput:input]) {
        [session addInput:input];
    }
    else {
        // Handle the failure.
        NSLog(@"Could not add input");
    }

    // DATA OUTPUT
    dataOutput = [[AVCaptureVideoDataOutput alloc] init];
    if ([session canAddOutput:dataOutput]) {
        [session addOutput:dataOutput];

        dataOutput.videoSettings =
            [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                        forKey:(id)kCVPixelBufferPixelFormatTypeKey];

        //dataOutput.minFrameDuration = CMTimeMake(1, 15);
        //dataOutput.minFrameDuration = CMTimeMake(1, 1);
        AVCaptureConnection *connection = [dataOutput connectionWithMediaType:AVMediaTypeVideo];
        [connection setVideoMaxFrameDuration:CMTimeMake(1, 1)];
        [connection setVideoMinFrameDuration:CMTimeMake(1, 1)];
    }
    else {
        // Handle the failure.
        NSLog(@"Could not add output");
    }
    // DATA OUTPUT END

    dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
    [dataOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    [captureVideoPreviewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
    [captureVideoPreviewLayer setBounds:videoLayer.layer.bounds];
    [captureVideoPreviewLayer setPosition:videoLayer.layer.position];
    [videoLayer.layer addSublayer:captureVideoPreviewLayer];

    [session startRunning];
}
Convert the CMSampleBufferRef to a UIImage:
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}
Thanks in advance for any help you can give me.
From the documentation for the captureOutput:didOutputSampleBuffer:fromConnection: method:
This method is called on the dispatch queue specified by the output’s sampleBufferCallbackQueue property.
This means that if you need to update the UI using the buffer in this method, you need to do that on the main queue, like this:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        [imageView setImage:image];
    });
}
EDIT: About your first question:
I'm not sure I fully understand the problem, but if you only want to update the image once every second, you can also keep a "lastImageUpdateTime" value to compare against in the didOutputSampleBuffer: method: if enough time has passed, update the image there; otherwise, ignore the sample buffer. A rough sketch follows.
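For example (lastImageUpdateTime being a hypothetical NSDate property on the view controller, not part of the original answer):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // Only convert and display a frame if at least one second has passed.
    if (self.lastImageUpdateTime != nil &&
        [[NSDate date] timeIntervalSinceDate:self.lastImageUpdateTime] < 1.0) {
        return; // ignore this sample buffer
    }
    self.lastImageUpdateTime = [NSDate date];

    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        [imageView setImage:image];
    });
}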
