AVCaptureSession with multiple previews - iOS

I have an AVCaptureSession running with an AVCaptureVideoPreviewLayer.
I can see the video so I know it's working.
However, I'd like to have a collection view and in each cell add a preview layer so that each cell shows a preview of the video.
If I try to pass the preview layer into the cell and add it as a sublayer, then it removes the layer from the other cells, so it only ever displays in one cell at a time.
Is there another (better) way of doing this?

I ran into the same problem of needing multiple live views displayed at the same time. The approach of converting each frame to a UIImage (suggested in another answer) was too slow for what I needed. Here are the two solutions I found:
1. CAReplicatorLayer
The first option is to use a CAReplicatorLayer to duplicate the layer automatically. As the docs say, it will automatically create "...a specified number of copies of its sublayers (the source layer), each copy potentially having geometric, temporal and color transformations applied to it."
This is super useful if there isn't a lot of interaction with the live previews besides simple geometric or color transformations (Think Photo Booth). I have most often seen the CAReplicatorLayer used as a way to create the 'reflection' effect.
Here is some sample code to replicate an AVCaptureVideoPreviewLayer:
Init AVCaptureVideoPreviewLayer
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[previewLayer setFrame:CGRectMake(0.0, 0.0, self.view.bounds.size.width, self.view.bounds.size.height / 4)];
Init CAReplicatorLayer and set properties
Note: This will replicate the live preview layer four times.
NSUInteger replicatorInstances = 4;
CAReplicatorLayer *replicatorLayer = [CAReplicatorLayer layer];
replicatorLayer.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height / replicatorInstances);
replicatorLayer.instanceCount = replicatorInstances;
replicatorLayer.instanceTransform = CATransform3DMakeTranslation(0.0, self.view.bounds.size.height / replicatorInstances, 0.0);
Add Layers
Note: From my experience you need to add the layer you want to replicate to the CAReplicatorLayer as a sublayer.
[replicatorLayer addSublayer:previewLayer];
[self.view.layer addSublayer:replicatorLayer];
Downsides
A downside to using CAReplicatorLayer is that it handles all placement of the layer replications. It will apply any set transformations to each instance, and everything will be contained within the replicator layer itself. E.g. there is no way to have a replication of an AVCaptureVideoPreviewLayer in two separate cells.
2. Manually Rendering SampleBuffer
This method, albeit a tad more complex, solves the above-mentioned downside of CAReplicatorLayer. By manually rendering the live previews, you are able to render as many views as you want. Granted, performance might be affected.
Note: There might be other ways to render the SampleBuffer but I chose OpenGL because of its performance. Code was inspired and altered from CIFunHouse.
Here is how I implemented it:
2.1 Contexts and Session
Setup OpenGL and CoreImage Context
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
// Note: must be done after all your GLKViews are properly set up
_ciContext = [CIContext contextWithEAGLContext:_eaglContext
options:@{kCIContextWorkingColorSpace : [NSNull null]}];
Dispatch Queue
This queue will be used for the session and delegate.
self.captureSessionQueue = dispatch_queue_create("capture_session_queue", NULL);
Init your AVSession & AVCaptureVideoDataOutput
Note: I have removed all device capability checks to make this more readable.
dispatch_async(self.captureSessionQueue, ^(void) {
NSError *error = nil;
// get the input device and also validate the settings
NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *_videoDevice = nil;
if (!_videoDevice) {
_videoDevice = [videoDevices objectAtIndex:0];
}
// obtain device input
AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:_videoDevice error:&error];
// obtain the preset and validate the preset
NSString *preset = AVCaptureSessionPresetMedium;
// CoreImage wants BGRA pixel format
NSDictionary *outputSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
// create the capture session
self.captureSession = [[AVCaptureSession alloc] init];
self.captureSession.sessionPreset = preset;
:
Note: The following code is the 'magic code'. It is where we create and add an AVCaptureVideoDataOutput to the session so we can intercept the camera frames using the delegate. This is the breakthrough I needed to figure out how to solve the problem.
:
// create and configure video data output
AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
videoDataOutput.videoSettings = outputSettings;
[videoDataOutput setSampleBufferDelegate:self queue:self.captureSessionQueue];
// begin configure capture session
[self.captureSession beginConfiguration];
// connect the video device input and video data and still image outputs
[self.captureSession addInput:videoDeviceInput];
[self.captureSession addOutput:videoDataOutput];
[self.captureSession commitConfiguration];
// then start everything
[self.captureSession startRunning];
});
2.2 OpenGL Views
We are using GLKView to render our live previews, so if you want 4 live previews, you need 4 GLKViews.
self.livePreviewView = [[GLKView alloc] initWithFrame:self.bounds context:self.eaglContext];
self.livePreviewView.enableSetNeedsDisplay = NO; // we call -display ourselves from the capture queue
Because the native video image from the back camera is in UIDeviceOrientationLandscapeLeft (i.e. the home button is on the right), we need to apply a clockwise 90-degree transform so that we can draw the video preview as if we were in a landscape-oriented view. If you're using the front camera and you want a mirrored preview (so that the user sees themselves as in a mirror), you need to apply an additional horizontal flip (by concatenating CGAffineTransformMakeScale(-1.0, 1.0) to the rotation transform).
self.livePreviewView.transform = CGAffineTransformMakeRotation(M_PI_2);
self.livePreviewView.frame = self.bounds;
[self addSubview: self.livePreviewView];
Bind the frame buffer to get the frame buffer width and height. The bounds used by CIContext when drawing to a GLKView are in pixels (not points), hence the need to read from the frame buffer's width and height.
[self.livePreviewView bindDrawable];
In addition, since we will be accessing the bounds in another queue (_captureSessionQueue), we capture them here so that we won't be accessing the view's properties from another thread/queue.
_videoPreviewViewBounds = CGRectZero;
_videoPreviewViewBounds.size.width = self.livePreviewView.drawableWidth;
_videoPreviewViewBounds.size.height = self.livePreviewView.drawableHeight;
dispatch_async(dispatch_get_main_queue(), ^(void) {
CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI_2);
// *Horizontally flip here, if using front camera.*
self.livePreviewView.transform = transform;
self.livePreviewView.frame = self.bounds;
});
Note: If you are using the front camera you can horizontally flip the live preview like this:
transform = CGAffineTransformConcat(transform, CGAffineTransformMakeScale(-1.0, 1.0));
2.3 Delegate Implementation
After we have the Contexts, Sessions, and GLKViews set up we can now render to our views from the AVCaptureVideoDataOutputSampleBufferDelegate method captureOutput:didOutputSampleBuffer:fromConnection:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);
// update the video dimensions information
self.currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDesc);
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:nil];
CGRect sourceExtent = sourceImage.extent;
CGFloat sourceAspect = sourceExtent.size.width / sourceExtent.size.height;
You will need to have a reference to each GLKView and its videoPreviewViewBounds. For simplicity, I will assume they are both contained in a UICollectionViewCell. You will need to alter this for your own use case.
for(CustomLivePreviewCell *cell in self.livePreviewCells) {
CGFloat previewAspect = cell.videoPreviewViewBounds.size.width / cell.videoPreviewViewBounds.size.height;
// To maintain the aspect ratio of the screen size, we clip the video image
CGRect drawRect = sourceExtent;
if (sourceAspect > previewAspect) {
// use full height of the video image, and center crop the width
drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0;
drawRect.size.width = drawRect.size.height * previewAspect;
} else {
// use full width of the video image, and center crop the height
drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0;
drawRect.size.height = drawRect.size.width / previewAspect;
}
[cell.livePreviewView bindDrawable];
if (_eaglContext != [EAGLContext currentContext]) {
[EAGLContext setCurrentContext:_eaglContext];
}
// clear eagl view to grey
glClearColor(0.5, 0.5, 0.5, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
// set the blend mode to "source over" so that CI will use that
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
if (sourceImage) {
[_ciContext drawImage:sourceImage inRect:cell.videoPreviewViewBounds fromRect:drawRect];
}
[cell.livePreviewView display];
}
}
This solution lets you have as many live previews as you want using OpenGL to render the buffer of images received from the AVCaptureVideoDataOutputSampleBufferDelegate.
3. Sample Code
Here is a GitHub project I threw together with both solutions: https://github.com/JohnnySlagle/Multiple-Camera-Feeds

Implement the AVCaptureVideoDataOutputSampleBufferDelegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
Using this, you get the sample buffer for each and every video frame. From that buffer you can create an image with the method below.
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0 orientation:UIImageOrientationRight];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
So you can add several image views to your view and add these lines inside the delegate method that I mentioned before:
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
imageViewOne.image = image;
imageViewTwo.image = image;
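One caveat worth noting: the sample buffer delegate usually fires on a background queue, and UIKit should only be touched on the main thread, so a minimal sketch of the hand-off (using the same image views as above) would be:
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
dispatch_async(dispatch_get_main_queue(), ^{
    // UIKit views must be updated on the main thread
    imageViewOne.image = image;
    imageViewTwo.image = image;
});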

Simply set the contents of the preview layer to another CALayer:
CGImageRef cgImage = (__bridge CGImageRef)self.previewLayer.contents;
self.duplicateLayer.contents = (__bridge id)cgImage;
You can do this with the contents of any Metal or OpenGL layer. There was no increase in memory usage or CPU load on my end, either. You're not duplicating anything but a tiny pointer. That's not so with these other "solutions."
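To keep the duplicate layer updated frame to frame, one hedged approach (not necessarily what the linked project does) is to repeat that one-line copy from a CADisplayLink callback; the method names below are illustrative:
// Minimal sketch, assuming self.previewLayer and self.duplicateLayer already exist.
- (void)startMirroring {
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self selector:@selector(copyPreviewContents:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)copyPreviewContents:(CADisplayLink *)link {
    // contents is just a pointer to the layer's backing surface, so this copy is cheap
    self.duplicateLayer.contents = self.previewLayer.contents;
}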
I have a sample project that you can download that displays 20 preview layers at the same time from a single camera feed. Each layer has a different effect applied to it.
You can watch a video of the app running, as well as download the source code at:
https://demonicactivity.blogspot.com/2017/05/developer-iphone-video-camera-wall.html?m=1

Working in Swift 5 on iOS 13, I implemented a somewhat simpler version of the answer by @Ushan87. For testing purposes, I dragged a new, small UIImageView on top of my existing AVCaptureVideoPreviewLayer. In the ViewController for that window, I added an IBOutlet for the new view and a variable to describe the correct orientation for the camera being used:
@IBOutlet var testView: UIImageView!
private var extOrientation: UIImage.Orientation = .up
I then implemented the AVCaptureVideoDataOutputSampleBufferDelegate as follows:
// MARK: - AVCaptureVideoDataOutputSampleBufferDelegate
extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
let imageBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
let ciimage : CIImage = CIImage(cvPixelBuffer: imageBuffer)
let image : UIImage = self.convert(cmage: ciimage)
DispatchQueue.main.sync(execute: {() -> Void in
testView.image = image
})
}
// Convert CIImage to UIImage
func convert(cmage:CIImage) -> UIImage
{
let context:CIContext = CIContext.init(options: nil)
let cgImage:CGImage = context.createCGImage(cmage, from: cmage.extent)!
let image:UIImage = UIImage.init(cgImage: cgImage, scale: 1.0, orientation: extOrientation)
return image
}
}
For my purposes, the performance was fine. I did not notice any lagginess in the new view.

You can't have multiple previews. There is only one output stream, as Apple's AVFoundation documentation says. I've tried many ways, but you just can't.

Related

AVPlayer plays video composition result incorrectly

I need a simple thing: play a video while rotating and applying CIFilter on it.
First, I create the player item:
AVPlayerItem *item = [AVPlayerItem playerItemWithURL:videoURL];
// DEBUG LOGGING
AVAssetTrack *track = [[item.asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
NSLog(#"Natural size is: %#", NSStringFromCGSize(track.naturalSize));
NSLog(#"Preffered track transform is: %#", NSStringFromCGAffineTransform(track.preferredTransform));
NSLog(#"Preffered asset transform is: %#", NSStringFromCGAffineTransform(item.asset.preferredTransform));
Then I need to apply the video composition. Originally, I was thinking of creating an AVVideoComposition with 2 instructions - one would be the AVVideoCompositionLayerInstruction for rotation and the other one would be the CIFilter application. However, I got an exception saying "Expecting video composition to contain only AVCoreImageFilterVideoCompositionInstruction", which means Apple doesn't allow combining those 2 instructions. As a result, I combined both under the filtering; here is the code:
AVAsset *asset = playerItem.asset;
CGAffineTransform rotation = [self transformForItem:playerItem];
AVVideoComposition *composition = [AVVideoComposition videoCompositionWithAsset:asset applyingCIFiltersWithHandler:^(AVAsynchronousCIImageFilteringRequest * _Nonnull request) {
// Step 1: get the input frame image (screenshot 1)
CIImage *sourceImage = request.sourceImage;
// Step 2: rotate the frame
CIFilter *transformFilter = [CIFilter filterWithName:@"CIAffineTransform"];
[transformFilter setValue:sourceImage forKey: kCIInputImageKey];
[transformFilter setValue: [NSValue valueWithCGAffineTransform: rotation] forKey: kCIInputTransformKey];
sourceImage = transformFilter.outputImage;
CGRect extent = sourceImage.extent;
CGAffineTransform translation = CGAffineTransformMakeTranslation(-extent.origin.x, -extent.origin.y);
[transformFilter setValue:sourceImage forKey: kCIInputImageKey];
[transformFilter setValue: [NSValue valueWithCGAffineTransform: translation] forKey: kCIInputTransformKey];
sourceImage = transformFilter.outputImage;
// Step 3: apply the custom filter chosen by the user
extent = sourceImage.extent;
sourceImage = [sourceImage imageByClampingToExtent];
[filter setValue:sourceImage forKey:kCIInputImageKey];
sourceImage = filter.outputImage;
sourceImage = [sourceImage imageByCroppingToRect:extent];
// Step 4: finish processing the frame (screenshot 2)
[request finishWithImage:sourceImage context:nil];
}];
playerItem.videoComposition = composition;
The screenshots I made during debugging show that the image is successfully rotated and the filter is applied (in this example it was an identity filter which doesn't change the image). Here are the screenshot 1 and screenshot 2 which were taken at the points marked in the comments above:
As you can see, the rotation is successful, the extent of the resulting frame was also correct.
The problem starts when I try to play this video in a player. Here is what I get:
So it seems like all the frames are scaled and shifted down. The green area is the empty frame info; when I clamp to the extent to make the frame infinite in size, it shows border pixels instead of green. I have a feeling that the player still takes some old size info from the AVPlayerItem from before the rotation, which is why in the first code snippet above I was logging the sizes and transforms; here are the logs:
Natural size is: {1920, 1080}
Preferred track transform is: [0, 1, -1, 0, 1080, 0]
Preferred asset transform is: [1, 0, 0, 1, 0, 0]
The player is set up like this:
layer.videoGravity = AVLayerVideoGravityResizeAspectFill;
layer.needsDisplayOnBoundsChange = YES;
PLEASE NOTE the most important thing: this only happens to videos which were recorded by the app itself using the camera in landscape iPhone[6s] orientation and saved on the device storage previously. The videos that the app records in portrait mode are totally fine (by the way, the portrait videos get exactly the same size and transform log as landscape videos! strange... maybe the iPhone puts the rotation info in the video and fixes it). So the zooming and shifting of the video seems like a combination of "aspect fill" and the old, pre-rotation resolution info. By the way, the portrait video frames are shown partially because of scaling to fill the player area which has a different aspect ratio, but this is expected behavior.
Let me know your thoughts on this and, if you know a better way how to accomplish what I need, then it would be great to know.
UPDATE: It turns out there is an easier way to "change" the AVPlayerItem video dimensions during playback - set the renderSize property of the video composition (this can be done using the AVMutableVideoComposition class).
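A hedged sketch of that update, assuming AVMutableVideoComposition offers the same applyingCIFiltersWithHandler: constructor; the render size below is an illustrative value for a rotated 1920x1080 source, not taken from the original code:
AVMutableVideoComposition *composition = [AVMutableVideoComposition videoCompositionWithAsset:asset
applyingCIFiltersWithHandler:^(AVAsynchronousCIImageFilteringRequest * _Nonnull request) {
    // ... rotate and filter the frame exactly as in the question ...
    [request finishWithImage:request.sourceImage context:nil];
}];
// tell the player the post-rotation dimensions
composition.renderSize = CGSizeMake(1080, 1920);
playerItem.videoComposition = composition;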
MY OLD ANSWER BELOW:
After a lot of debugging I understood the problem and found a solution. My initial guess that AVPlayer still considers the video to be of its original size was correct. The image below explains what was happening:
As for the solution, I couldn't find a way to change the video size inside AVAsset or AVPlayerItem. So I just manipulated the video to fit the size and scale that AVPlayer was expecting, and then when playing in a player with correct aspect ratio and flag to scale and fill the player area - everything looks good. Here is the graphical explanation:
And here goes the additional code that needs to be inserted in the applyingCIFiltersWithHandler block mentioned in the question:
... after Step 3 in the question code above
// make the frame the same aspect ratio as the original input frame
// by adding empty spaces at the top and the bottom of the extent rectangle
CGFloat newHeight = originalExtent.size.height * originalExtent.size.height / extent.size.height;
CGFloat inset = (extent.size.height - newHeight) / 2;
extent = CGRectInset(extent, 0, inset);
sourceImage = [sourceImage imageByCroppingToRect:extent];
// scale down to the original frame size
CGFloat scale = originalExtent.size.height / newHeight;
CGAffineTransform scaleTransform = CGAffineTransformMakeScale(scale, scale);
[transformFilter setValue:sourceImage forKey: kCIInputImageKey];
[transformFilter setValue: [NSValue valueWithCGAffineTransform: scaleTransform] forKey: kCIInputTransformKey];
sourceImage = transformFilter.outputImage;
// translate the frame to make its origin start at (0, 0)
CGAffineTransform translation = CGAffineTransformMakeTranslation(0, -inset * scale);
[transformFilter setValue:sourceImage forKey: kCIInputImageKey];
[transformFilter setValue: [NSValue valueWithCGAffineTransform: translation] forKey: kCIInputTransformKey];
sourceImage = transformFilter.outputImage;

How to draw cropped bitmap using the Metal API or Accelerate Framework?

I'm implementing a custom video compositor that crops video frames. Currently I use Core Graphics to do this:
-(void)renderImage:(CGImageRef)image inBuffer:(CVPixelBufferRef)destination {
CGRect cropRect = // some rect ...
CGImageRef currentRectImage = CGImageCreateWithImageInRect(image, cropRect);
size_t width = CVPixelBufferGetWidth(destination);
size_t height = CVPixelBufferGetHeight(destination);
CVPixelBufferLockBaseAddress(destination, 0);
CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(destination), // data
width,
height,
8, // bits per component
CVPixelBufferGetBytesPerRow(destination),
CGImageGetColorSpace(image),
CGImageGetBitmapInfo(image));
CGRect frame = CGRectMake(0, 0, width, height);
CGContextDrawImage(context, frame, currentRectImage);
CGContextRelease(context);
CGImageRelease(currentRectImage);
CVPixelBufferUnlockBaseAddress(destination, 0);
}
How can I use the Metal API to do this? It should be much faster, right?
What about using the Accelerate Framework (vImage specifically)? Would that be simpler?
OK, I don't know whether this will be useful for you, but still.
Check out the following code:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
id<MTLTexture> textureY = nil;
{
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
MTLPixelFormat pixelFormat = MTLPixelFormatBGRA8Unorm;
CVMetalTextureRef texture = NULL;
CVReturn status = CVMetalTextureCacheCreateTextureFromImage(NULL, _textureCache, pixelBuffer, NULL, pixelFormat, width, height, 0, &texture);
if(status == kCVReturnSuccess)
{
textureY = CVMetalTextureGetTexture(texture);
if (self.delegate){
[self.delegate textureUpdated:textureY];
}
CFRelease(texture);
}
}
}
I use this code to convert a CVPixelBufferRef into an MTLTexture. After that you should probably create an MTLBlitCommandEncoder and use its
func copyFromTexture(sourceTexture: MTLTexture, sourceSlice: Int, sourceLevel: Int, sourceOrigin: MTLOrigin, sourceSize: MTLSize, toTexture destinationTexture: MTLTexture, destinationSlice: Int, destinationLevel: Int, destinationOrigin: MTLOrigin)
With it, you can select the cropped rectangle and copy it to some other texture.
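A rough sketch of that blit copy, assuming you already have a command queue, a cropRect in pixels, and a destination texture sized to the crop (commandQueue, cropRect and croppedTexture are illustrative names; textureY is the camera texture from the code above):
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
// copy only the crop rectangle out of the camera texture
[blit copyFromTexture:textureY
          sourceSlice:0
          sourceLevel:0
         sourceOrigin:MTLOriginMake((NSUInteger)cropRect.origin.x, (NSUInteger)cropRect.origin.y, 0)
           sourceSize:MTLSizeMake((NSUInteger)cropRect.size.width, (NSUInteger)cropRect.size.height, 1)
            toTexture:croppedTexture
     destinationSlice:0
     destinationLevel:0
    destinationOrigin:MTLOriginMake(0, 0, 0)];
[blit endEncoding];
[commandBuffer commit];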
The next step is to convert generated MTLTextures back into CVPixelBufferRefs and then make a video out of that, unfortunately I don't know how to do that.
Would really like to hear what you came up with. Cheers.
Because it uses bare pointers and unencapsulated data, vImage would "crop" things by simply moving the data pointer to the new top-left corner and reducing the height and width accordingly. You now have a vImage_Buffer that refers to a region in the middle of your image. Of course, you still need to export the content again as a file or copy it to something destined to draw to the screen. See for example vImageCreateCGImageFromBuffer().
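A minimal sketch of that pointer arithmetic, assuming a 4-bytes-per-pixel (BGRA) vImage_Buffer and a crop rectangle given in pixels (the helper name is hypothetical):
#import <Accelerate/Accelerate.h>

// Returns a vImage_Buffer that aliases the crop region of src; no pixels are copied,
// so the original buffer's memory must stay valid while the cropped buffer is in use.
static vImage_Buffer CropBGRABuffer(vImage_Buffer src, size_t x, size_t y, size_t width, size_t height) {
    vImage_Buffer cropped;
    cropped.data = (uint8_t *)src.data + y * src.rowBytes + x * 4; // move the origin pointer
    cropped.width = width;                                         // reduce the dimensions
    cropped.height = height;
    cropped.rowBytes = src.rowBytes;                               // stride is unchanged
    return cropped;
}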
CG can do this itself with CGImageCreateWithImageInRect()
Metal would do this either as a simple compute copy kernel, a MTLBlitCommandEncoder blit, or a 3D render application of a texture to a collection of triangles with appropriate coordinate offset.

Capture 120/240 fps using AVCaptureVideoDataOutput into frame buffer using low resolution

Currently, using the iPhone 5s/6 I am able to capture 120(iPhone 5s) or 240(iPhone 6) frames/second into a CMSampleBufferRef. However, the AVCaptureDeviceFormat that is returned to me only provides these high speed frame rates with a resolution of 1280x720.
I would like to capture this at a lower resolution (640x480 or lower) since I will be putting it into a circular buffer for storage purposes. While I am able to reduce the resolution in the didOutputSampleBuffer delegate method, I would like to know if there is any way for the CMSampleBufferRef to provide me a lower resolution directly, by configuring the device or a setting, instead of taking the 720p image and lowering the resolution manually using CVPixelBuffer.
I need to store the images in a buffer for later processing and want to apply minimum processing necessary or else I will begin to drop frames. If I can avoid resizing and obtain a lower resolution CMSampleBuffer from the didOutputSampleBuffer delegate method directly, that would be ideal.
At 240fps, I would need to process each image within 5ms and the resizing routine cannot keep up with downscaling the image at this rate. However, I would like to store it into a circular buffer for later processing (e.g. writing out to a movie using AVAssetWriter) but require a lower resolution.
It seems that the only image size supported in high frame rate recording is 1280x720. Putting multiple images of this resolution into the frame buffer will generate memory pressure so I'm looking to capture a lower resolution image directly from didOutputSampleBuffer if it is at all possible to save on memory and to keep up with the frame rate.
Thank you for your assistance.
// Core Image uses the GPU for all image ops: crop / transform / ...
// --- create once ---
EAGLContext *glCtx = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *ciContext = [CIContext contextWithEAGLContext:glCtx options:@{kCIContextWorkingColorSpace:[NSNull null]}];
// using RGB is about 3x faster
CGColorSpaceRef ciContextColorSpace = CGColorSpaceCreateDeviceRGB();
OSType cvPixelFormat = kCVPixelFormatType_32BGRA;
// create compression session
VTCompressionSessionRef compressionSession;
NSDictionary* pixelBufferOptions = @{(__bridge NSString*) kCVPixelBufferPixelFormatTypeKey : @(cvPixelFormat),
(__bridge NSString*) kCVPixelBufferWidthKey : @(outputResolution.width),
(__bridge NSString*) kCVPixelBufferHeightKey : @(outputResolution.height),
(__bridge NSString*) kCVPixelBufferOpenGLESCompatibilityKey : @YES,
(__bridge NSString*) kCVPixelBufferIOSurfacePropertiesKey : @{}};
OSStatus ret = VTCompressionSessionCreate(kCFAllocatorDefault,
outputResolution.width,
outputResolution.height,
kCMVideoCodecType_H264,
NULL,
(__bridge CFDictionaryRef)pixelBufferOptions,
NULL,
VTEncoderOutputCallback,
(__bridge void*)self,
&compressionSession);
CVPixelBufferRef finishPixelBuffer;
// I'm using the VTCompressionSession pixel buffer pool; you could use AVAssetWriterInputPixelBufferAdaptor instead
CVReturn res = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, VTCompressionSessionGetPixelBufferPool(compressionSession), &finishPixelBuffer);
// -------------------
// ------ scale ------
// new buffer coming in...
// - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
CIImage *baseImg = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CGFloat outHeight = 240;
CGFloat scale = 1 / (CVPixelBufferGetHeight(pixelBuffer) / outHeight);
CGAffineTransform transform = CGAffineTransformMakeScale(scale, scale);
// result image not changed after
CIImage *resultImg = [baseImg imageByApplyingTransform:transform];
// resultImg = [resultImg imageByCroppingToRect:...];
// CIContext applies transform to CIImage and draws to finish buffer
[ciContext render:resultImg toCVPixelBuffer:finishPixelBuffer bounds:resultImg.extent colorSpace:ciContextColorSpace];
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
// [videoInput appendSampleBuffer:CMSampleBufferCreateForImageBuffer(... finishPixelBuffer...)]
VTCompressionSessionEncodeFrame(compressionSession, finishPixelBuffer, CMSampleBufferGetPresentationTimeStamp(sampleBuffer), CMSampleBufferGetDuration(sampleBuffer), NULL, sampleBuffer, NULL);
// -------------------

Cropping with Core Image and AV Foundation

I am taking a photo using AV Foundation, and then I want to crop that image into a square that fits my UI. In the UI, there are two semi-transparent views that show what's being captured, and I want to crop the image to include just what's in between the bottom of the top view and the top of the bottom view:
topView
|
area I want to capture and crop
|
bottom View
The actual capturing of the full image works fine. The problem is using Core Image to crop the image successfully.
// Custom function that takes a photo asynchronously from the capture session and gives
// the photo and error back in a block. Works fine.
[self.captureSession takePhotoWithCompletionBlock:^(UIImage *photo, NSError *error) {
if (photo) {
CIImage *imageToCrop = [CIImage imageWithCGImage:photo.CGImage];
// Find, proportionately, the y-value at which I should start the
// cropping, based on my UI
CGFloat beginningYOfCrop = topView.frame.size.height * photo.size.height / self.view.frame.size.height;
CGFloat endYOfCrop = CGRectGetMinY(bottomView.frame) * photo.size.height / self.view.frame.size.height;
CGRect croppedFrame = CGRectMake(0,
beginningYOfCrop,
photo.size.width,
endYOfCrop - beginningYOfCrop);
// Attempt to transform the croppedFrame to fit Core Image's
// different coordinate system
CGAffineTransform coordinateTransform = CGAffineTransformMakeScale(1.0, -1.0);
coordinateTransform = CGAffineTransformTranslate(coordinateTransform,
0,
-photo.size.height);
CGRectApplyAffineTransform(croppedFrame, coordinateTransform);
imageToCrop = [imageToCrop imageByCroppingToRect:croppedFrame];
// Orient the image correctly
UIImage *filteredImage = [UIImage imageWithCIImage:imageToCrop
scale:1.0
orientation:UIImageOrientationRight];
}
}];
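For what it's worth, a hedged note on the snippet above: CGRectApplyAffineTransform returns the transformed rect rather than modifying its argument, so the result needs to be assigned back before cropping, roughly like this:
// assign the transformed rect back before cropping (the snippet above discards the return value)
croppedFrame = CGRectApplyAffineTransform(croppedFrame, coordinateTransform);
imageToCrop = [imageToCrop imageByCroppingToRect:croppedFrame];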

Zxing zoom functionality in ios AVFoundation

I have been scouring the internet and have been looking high and low for any type of code to help me zoom in on barcodes using ZXing.
I started with the code from their git site here
https://github.com/zxing/zxing
Since then I have been able to increase the default resolution to 1920x1080.
self.captureSession.sessionPreset = AVCaptureSessionPreset1920x1080;
This would be fine, but the issue is that I am scanning very small barcodes, and even though 1920x1080 would work, it doesn't give me any kind of zoom to capture closer to a smaller barcode without losing focus. The resolution did help me quite a bit, but it's simply not close enough.
I'm thinking what I need to do is to set the capture session to a scroll view that is 1920x1080 and then set the actual image capture to take from the bounds of my screen, so I can zoom in and out of the scroll view itself to achieve a "zoom" kind of effect.
The problem with that is I'm really not sure where to start... any ideas?
OK, since I have seen this multiple times on here and no one seems to have an answer, I thought I would share my own.
There are 2 properties NO ONE seems to know about. I'll cover both.
The first one is good for iOS 6+: Apple added a property called videoScaleAndCropFactor.
This property is a CGFloat. The only downfall is that you have to set the value on your AVCaptureConnection, and the connection has to belong to a still image output; it will not work with anything else in iOS 6. In order to do this you have to set it up to work asynchronously, and you have to loop your code so the decoder keeps working and taking pictures at that scale.
The last thing is that you have to scale your preview layer yourself.
This all sounds like a lot of work, and it really, really is. However, this way your original scan picture is taken at 1920x1080, or whatever you have it set to, rather than scaling an existing image, which stretches pixels and causes the decoder to miss the barcode.
So this will look something like this:
stillImageConnection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[stillImageConnection setVideoOrientation:AVCaptureVideoOrientationPortrait];
[stillImageConnection setVideoScaleAndCropFactor:effectiveScale];
[stillImageOutput setOutputSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCMPixelFormat_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[stillImageOutput captureStillImageAsynchronouslyFromConnection:stillImageConnection
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
{
if(error)
return;
NSString *path = [NSString stringWithFormat:@"%@%@",
[[NSBundle mainBundle] resourcePath],
@"/blank.wav"];
SystemSoundID soundID;
NSURL *filePath = [NSURL fileURLWithPath:path isDirectory:NO];
AudioServicesCreateSystemSoundID((__bridge CFURLRef)filePath, &soundID);
AudioServicesPlaySystemSound(soundID);
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
/*Lock the image buffer*/
CVPixelBufferLockBaseAddress(imageBuffer,0);
/*Get information about the image*/
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
uint8_t* baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
void* free_me = 0;
if (true) { // iOS bug?
uint8_t* tmp = baseAddress;
int bytes = bytesPerRow*height;
free_me = baseAddress = (uint8_t*)malloc(bytes);
baseAddress[0] = 0xdb;
memcpy(baseAddress,tmp,bytes);
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext =
CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst);
CGImageRef capture = CGBitmapContextCreateImage(newContext);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
free(free_me);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
Decoder* d = [[Decoder alloc] init];
[self decoding:d withimage:&capture];
}];
Now the second one, coming in iOS 7, changes EVERYTHING I just said. You have a new property called videoZoomFactor. This is a CGFloat. However, it changes everything at the TOP of the stack rather than only affecting the still image capture.
In other words, you won't have to manually zoom your preview layer. You won't have to go through some still-image-capture loop, and you won't have to set it on an AVCaptureConnection. You simply set the CGFloat and it scales everything for you.
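For illustration, a minimal sketch of setting it, assuming a configured AVCaptureDevice named device (the zoom value here is arbitrary):
NSError *error = nil;
if ([device lockForConfiguration:&error]) {
    // clamp to what the current format supports
    CGFloat zoom = MIN(4.0, device.activeFormat.videoMaxZoomFactor);
    device.videoZoomFactor = zoom; // scales preview, stills and video output together
    [device unlockForConfiguration];
}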
Now I know it's going to be a while before you can publish iOS 7 applications, so I would seriously consider figuring out how to do this the hard way. Quick tips: I would use a pinch gesture to set your CGFloat for videoScaleAndCropFactor. Don't forget to set the value to 1 in viewDidLoad and you can scale from there. At the same time, in your gesture handler you can use a CATransaction to scale the preview layer.
Here's a sample of how to do the gesture capture and preview layer scaling:
- (IBAction)handlePinchGesture:(UIPinchGestureRecognizer *)recognizer
{
effectiveScale = recognizer.scale;
if (effectiveScale < 1.0)
effectiveScale = 1.0;
if (effectiveScale > 25)
effectiveScale = 25;
stillImageConnection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[stillImageConnection setVideoScaleAndCropFactor:effectiveScale];
[CATransaction begin];
[CATransaction setAnimationDuration:0];
[prevLayer setAffineTransform:CGAffineTransformMakeScale(effectiveScale, effectiveScale)];
[CATransaction commit];
}
Hope this helps someone out! I may go ahead and do a video tutorial on this, depending on what kind of demand there is for it.
