AVPlayer plays video composition result incorrectly - iOS

I need a simple thing: play a video while rotating and applying CIFilter on it.
First, I create the player item:
AVPlayerItem *item = [AVPlayerItem playerItemWithURL:videoURL];
// DEBUG LOGGING
AVAssetTrack *track = [[item.asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
NSLog(#"Natural size is: %#", NSStringFromCGSize(track.naturalSize));
NSLog(#"Preffered track transform is: %#", NSStringFromCGAffineTransform(track.preferredTransform));
NSLog(#"Preffered asset transform is: %#", NSStringFromCGAffineTransform(item.asset.preferredTransform));
Then I need to apply the video composition. Originally, I was going to create an AVVideoComposition with 2 instructions: an AVVideoCompositionLayerInstruction for the rotation, and a CIFilter application for the filtering. However, I got an exception saying "Expecting video composition to contain only AVCoreImageFilterVideoCompositionInstruction", which means Apple doesn't allow combining those 2 instruction types. As a result, I did both inside the filtering handler; here is the code:
AVAsset *asset = playerItem.asset;
CGAffineTransform rotation = [self transformForItem:playerItem];

AVVideoComposition *composition = [AVVideoComposition videoCompositionWithAsset:asset applyingCIFiltersWithHandler:^(AVAsynchronousCIImageFilteringRequest * _Nonnull request) {
    // Step 1: get the input frame image (screenshot 1)
    CIImage *sourceImage = request.sourceImage;

    // Step 2: rotate the frame
    CIFilter *transformFilter = [CIFilter filterWithName:@"CIAffineTransform"];
    [transformFilter setValue:sourceImage forKey:kCIInputImageKey];
    [transformFilter setValue:[NSValue valueWithCGAffineTransform:rotation] forKey:kCIInputTransformKey];
    sourceImage = transformFilter.outputImage;

    CGRect extent = sourceImage.extent;
    CGAffineTransform translation = CGAffineTransformMakeTranslation(-extent.origin.x, -extent.origin.y);
    [transformFilter setValue:sourceImage forKey:kCIInputImageKey];
    [transformFilter setValue:[NSValue valueWithCGAffineTransform:translation] forKey:kCIInputTransformKey];
    sourceImage = transformFilter.outputImage;

    // Step 3: apply the custom filter chosen by the user
    extent = sourceImage.extent;
    sourceImage = [sourceImage imageByClampingToExtent];
    [filter setValue:sourceImage forKey:kCIInputImageKey];
    sourceImage = filter.outputImage;
    sourceImage = [sourceImage imageByCroppingToRect:extent];

    // Step 4: finish processing the frame (screenshot 2)
    [request finishWithImage:sourceImage context:nil];
}];
playerItem.videoComposition = composition;
The screenshots I made during debugging show that the image is successfully rotated and the filter is applied (in this example it was an identity filter, which doesn't change the image). Here are screenshot 1 and screenshot 2, taken at the points marked in the comments above:
As you can see, the rotation is successful, and the extent of the resulting frame is also correct.
The problem starts when I try to play this video in a player. Here is what I get:
So it seems like all the frames are scaled and shifted down. The green area is empty frame space; when I clamp to extent to make the frame infinite in size, it shows border pixels instead of green. I have a feeling that the player still uses some pre-rotation size info from the AVPlayerItem, which is why I was logging the sizes and transforms in the first code snippet above. Here are the logs:
Natural size is: {1920, 1080}
Preferred track transform is: [0, 1, -1, 0, 1080, 0]
Preferred asset transform is: [1, 0, 0, 1, 0, 0]
The player is set up like this:
layer.videoGravity = AVLayerVideoGravityResizeAspectFill;
layer.needsDisplayOnBoundsChange = YES;
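(For reference, layer here is a standard AVPlayerLayer attached to the item's player, roughly as sketched below; playerView is an assumed container view.)
AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
AVPlayerLayer *layer = [AVPlayerLayer playerLayerWithPlayer:player];
layer.frame = self.playerView.bounds; // playerView is assumed
[self.playerView.layer addSublayer:layer];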
PLEASE NOTE the most important thing: this only happens to videos which were recorded by the app itself, using the camera in landscape iPhone [6s] orientation, and previously saved to device storage. Videos that the app records in portrait mode are totally fine (by the way, the portrait videos produce exactly the same size and transform logs as the landscape videos! Strange... maybe the iPhone puts the rotation info in the video and fixes it there). So the zooming and shifting of the video looks like a combination of "aspect fill" and the old, pre-rotation resolution info. By the way, the portrait video frames are shown partially because of scaling to fill the player area, which has a different aspect ratio, but this is expected behavior.
Let me know your thoughts on this, and if you know a better way to accomplish what I need, it would be great to hear it.

UPDATE: It turns out there is an easier way to "change" the AVPlayerItem video dimensions during playback: set the renderSize property of the video composition (this can be done using the AVMutableVideoComposition class).
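A minimal sketch of that (the exact renderSize is up to you; swapping the dimensions here assumes the 90-degree rotation of the 1920x1080 source, and composition is the one built with applyingCIFiltersWithHandler above):
AVMutableVideoComposition *mutableComposition = [composition mutableCopy];
mutableComposition.renderSize = CGSizeMake(1080, 1920); // rotated dimensions (assumed)
playerItem.videoComposition = mutableComposition;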
MY OLD ANSWER BELOW:
After a lot of debugging I understood the problem and found a solution. My initial guess that AVPlayer still considers the video to be of the original size was correct. The image below explains what was happening:
As for the solution, I couldn't find a way to change the video size inside AVAsset or AVPlayerItem. So I just manipulated the video to fit the size and scale that AVPlayer was expecting, and then, when it's played in a player with the correct aspect ratio and the flag to scale and fill the player area, everything looks good. Here is the graphical explanation:
And here is the additional code that needs to be inserted into the applyingCIFiltersWithHandler block mentioned in the question:
// ... after Step 3 in the code from the question above
// (originalExtent is the extent of the original input frame, i.e. request.sourceImage.extent saved at Step 1)

// make the frame the same aspect ratio as the original input frame
// by adding empty spaces at the top and the bottom of the extent rectangle
CGFloat newHeight = originalExtent.size.height * originalExtent.size.height / extent.size.height;
CGFloat inset = (extent.size.height - newHeight) / 2;
extent = CGRectInset(extent, 0, inset);
sourceImage = [sourceImage imageByCroppingToRect:extent];

// scale down to the original frame size
CGFloat scale = originalExtent.size.height / newHeight;
CGAffineTransform scaleTransform = CGAffineTransformMakeScale(scale, scale);
[transformFilter setValue:sourceImage forKey:kCIInputImageKey];
[transformFilter setValue:[NSValue valueWithCGAffineTransform:scaleTransform] forKey:kCIInputTransformKey];
sourceImage = transformFilter.outputImage;

// translate the frame so that its origin starts at (0, 0)
// (reuses the translation variable declared in Step 2 to avoid a redeclaration)
translation = CGAffineTransformMakeTranslation(0, -inset * scale);
[transformFilter setValue:sourceImage forKey:kCIInputImageKey];
[transformFilter setValue:[NSValue valueWithCGAffineTransform:translation] forKey:kCIInputTransformKey];
sourceImage = transformFilter.outputImage;

Related

Real Time Performance With Core Image Blend Modes

I'm trying to do a really basic demo app that allows the user to pick an image and a blend mode, and then drag and manipulate the blended image on top of the background image. While the user is dragging the image over the background I want real-time performance (20+ fps on iPhone 4). The images are the same resolution as the screen.
Is this possible to do with Core Image? I have tried a couple of different approaches but I can't seem to get the performance I want.
Right now I am doing something like this:
CIFilter *overlayBlendMode = [CIFilter filterWithName:@"CIOverlayBlendMode"];
[overlayBlendMode setValue:self.foregroundImage forKey:@"inputImage"];
[overlayBlendMode setValue:self.backgroundImage forKey:@"inputBackgroundImage"];
CIImage *test = [overlayBlendMode outputImage];

// render the blended image
[self.ciContext drawImage:test inRect:test.extent fromRect:test.extent];
This code is being executed each time display gets called from my GLKViewController.
And my setup code is:
self.glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
self.ciContext = [CIContext contextWithEAGLContext:self.glContext];
...
UIImage *foregroundImage = [ViewController imageScaledFromImage:[UIImage imageNamed:@"Smiley"] inRect:CGRectMake(0, 0, 100, 100)];
GLKTextureInfo *foregroundTexture = [GLKTextureLoader textureWithCGImage:foregroundImage.CGImage options:@{GLKTextureLoaderOriginBottomLeft: @(YES)} error:nil];
self.foregroundImage = [CIImage imageWithTexture:foregroundTexture.name size:foregroundImage.size flipped:NO colorSpace:nil];

UIImage *backgroundImage = [ViewController imageCenterScaledFromImage:[UIImage imageNamed:@"Kate.jpg"] inRect:(CGRect){0, 0, self.renderBufferSize}];
GLKTextureInfo *backgroundTexture = [GLKTextureLoader textureWithCGImage:backgroundImage.CGImage options:@{GLKTextureLoaderOriginBottomLeft: @(YES)} error:nil];
self.backgroundImage = [CIImage imageWithTexture:backgroundTexture.name size:backgroundImage.size flipped:NO colorSpace:nil];
The performance I am getting is not what I expected: I was expecting 60 fps since it is such a simple scene, but on my iPad 4 I'm getting ~35 or so, and I'm sure it would be worse on the iPhone 4, which is my lowest common denominator.
Did you set GLKViewController -> preferredFramesPerSecond to something other than its default 30?
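If not, raising the frame-rate cap is a one-line change in the view controller (assuming a GLKViewController subclass):
// In your GLKViewController subclass, e.g. in viewDidLoad; the default cap is 30 fps
self.preferredFramesPerSecond = 60;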

Cropping with Core Image and AV Foundation

I am taking a photo using AV Foundation, and then I want to crop that image into a square that fits my UI. In the UI, there are two semi-transparent views that show what's being captured, and I want to crop the image to include just what's in between the bottom of the top view and the top of the bottom view:
topView
|
area I want to capture and crop
|
bottom View
The actual capturing of the full image works fine. The problem is using Core Image to crop the image successfully.
// Custom function that takes a photo asynchronously from the capture session and gives
// the photo and error back in a block. Works fine.
[self.captureSession takePhotoWithCompletionBlock:^(UIImage *photo, NSError *error) {
    if (photo) {
        CIImage *imageToCrop = [CIImage imageWithCGImage:photo.CGImage];

        // Find, proportionately, the y-value at which I should start the
        // cropping, based on my UI
        CGFloat beginningYOfCrop = topView.frame.size.height * photo.size.height / self.view.frame.size.height;
        CGFloat endYOfCrop = CGRectGetMinY(bottomView.frame) * photo.size.height / self.view.frame.size.height;
        CGRect croppedFrame = CGRectMake(0,
                                         beginningYOfCrop,
                                         photo.size.width,
                                         endYOfCrop - beginningYOfCrop);

        // Attempt to transform the croppedFrame to fit Core Image's
        // different coordinate system
        CGAffineTransform coordinateTransform = CGAffineTransformMakeScale(1.0, -1.0);
        coordinateTransform = CGAffineTransformTranslate(coordinateTransform,
                                                         0,
                                                         -photo.size.height);
        CGRectApplyAffineTransform(croppedFrame, coordinateTransform);
        imageToCrop = [imageToCrop imageByCroppingToRect:croppedFrame];

        // Orient the image correctly
        UIImage *filteredImage = [UIImage imageWithCIImage:imageToCrop
                                                     scale:1.0
                                               orientation:UIImageOrientationRight];
    }
}];
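One thing worth checking, independent of any orientation issues: CGRectApplyAffineTransform returns the transformed rectangle rather than modifying the rect in place, so as written the crop rect is never actually converted to Core Image's coordinate system. Assigning the result should fix that part:
croppedFrame = CGRectApplyAffineTransform(croppedFrame, coordinateTransform);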

Face Detection ios7 Coordinates Scaling Issue

I am using the Face Detection API and would like to know how to convert coordinates from large high-resolution images to the smaller image displayed in a UIImageView. So far, I have inverted the coordinate system of my image and container view so that it matches the Core Image coordinate system, and I have also calculated the ratio of heights between my high-resolution image and the dimensions of my image view, but the coordinates I am getting are not accurate at all. I am assuming I cannot convert the points from the large image to the small image as easily as I thought. Can anyone please point out my mistake(s)?
[self.shownImageViewer setTransform:CGAffineTransformMakeScale(1, -1)];
[self.view setTransform:CGAffineTransformMakeScale(1, -1)];

// 240 x 320
self.shownImageViewer.image = self.imageToShow;
yscale = 320 / self.imageToShow.size.height;
xscale = 240 / self.imageToShow.size.width;
height = 320;

CIImage *image = [[CIImage alloc] initWithCGImage:[self.imageToShow CGImage]];
CIContext *faceDetectionContext = [CIContext contextWithOptions:nil];
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:faceDetectionContext options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];
NSArray *features = [faceDetector featuresInImage:image options:@{CIDetectorImageOrientation: [NSNumber numberWithInt:6]}];
for (CIFaceFeature *feature in features)
{
    if (feature.hasLeftEyePosition)
        self.leftEye = feature.leftEyePosition;
    if (feature.hasRightEyePosition)
        self.rightEye = feature.rightEyePosition;
    if (feature.hasMouthPosition)
        self.mouth = feature.mouthPosition;
}
NSLog(@"%g and %g", xscale * self.rightEye.x, yscale * self.rightEye.y);
NSLog(@"%g and %g", yscale * self.leftEye.x, yscale * self.leftEye.y);
NSLog(@"%g", height);
self.rightEyeMarker.center = CGPointMake(xscale * self.rightEye.x, yscale * self.rightEye.y);
self.leftEyeMarker.center = CGPointMake(xscale * self.leftEye.x, yscale * self.leftEye.y);
I would start by removing the transform from your image view. Just have the image view display the image in the orientation it's in already; this will make the calculations a lot easier.
Now, CIFaceFeature outputs its features in image coordinates, but your imageView might be smaller or bigger than the image. So first, keep it simple by setting the imageView's content mode to top left.
imageView.contentMode = UIViewContentModeTopLeft;
Now you don't have to scale the coordinates at all.
When you are happy with that, set the contentMode to something more sensible, like aspect fit.
imageView.contentMode = UIViewContentModeScaleAspectFit;
Now you need to scale the x and y coordinates by multiplying each coordinate by the aspect-fit ratio:
CGFloat xRatio = imageView.frame.size.width / image.size.width;
CGFloat yRatio = imageView.frame.size.height / image.size.height;
CGFloat aspectFitRatio = MIN(xRatio, yRatio);
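Putting it together, mapping a detected point (the right eye, say) into view coordinates would look roughly like this. This is a sketch that assumes the image is displayed upright with aspect fit and that the Core Image y-axis (bottom-left origin) still needs flipping against the image height:
// Centering offsets introduced by aspect-fit
CGFloat xOffset = (imageView.frame.size.width  - image.size.width  * aspectFitRatio) / 2.0;
CGFloat yOffset = (imageView.frame.size.height - image.size.height * aspectFitRatio) / 2.0;

// Flip y from Core Image (bottom-left origin) to UIKit (top-left origin), then scale and offset
CGPoint eye = self.rightEye; // from the CIFaceFeature loop in the question
self.rightEyeMarker.center = CGPointMake(eye.x * aspectFitRatio + xOffset,
                                         (image.size.height - eye.y) * aspectFitRatio + yOffset);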
Lastly, you want to add the rotation back in. Try to avoid this if possible, e.g. fix your images so they are upright to begin with.

Using a GPUImageTransformFilter on a GPUImageVideoCamera yields flickering

I want to use a GPUImageTransformFilter to crop a live video stream from GPUImageVideoCamera.
GPUImageTransformFilter *transformFilter = [[GPUImageTransformFilter alloc] init];
// Zoom is 4x
[transformFilter setAffineTransform:CGAffineTransformMakeScale(4, 4)];
// Is this needed? Say the zoom is 4x and the video stream size is 320 x 426. Is that right?
[transformFilter forceProcessingAtSize:CGSizeMake(320 * 4, 426 * 4)];
[videoCameraDevice addTarget:transformFilter];
[transformFilter addTarget:liveTextureFilter];
[transformFilter prepareForImageCapture];
[videoCameraDevice resumeCameraCapture];
I have two problems:
1) The orientation of the video stream is rotated by 90 degrees.
2) The output in my GPUImageView flickers between the untransformed image (as if no transform were applied) and the 90-degree-rotated, transformed image, at a rate of roughly every 1/10th of a second on an iPhone 5s.
Any ideas of what I am doing wrong and/or how would you approach cropping live video with a texture on top of the video stream that you do not want cropped?
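I haven't tested this against your exact pipeline, but one approach worth trying is to crop with GPUImageCropFilter (which takes a normalized 0-1 crop region) instead of scaling up with a transform, and to let GPUImageVideoCamera handle the 90-degree rotation itself via outputImageOrientation. A rough sketch, with the preset and crop numbers as placeholders:
GPUImageVideoCamera *videoCameraDevice = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                                                              cameraPosition:AVCaptureDevicePositionBack];
videoCameraDevice.outputImageOrientation = UIInterfaceOrientationPortrait; // fixes the 90-degree rotation

// Crop to the centre quarter of the frame (a 4x "zoom"); the crop region is normalized to 0..1
GPUImageCropFilter *cropFilter = [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0.375, 0.375, 0.25, 0.25)];
[videoCameraDevice addTarget:cropFilter];
[cropFilter addTarget:liveTextureFilter]; // liveTextureFilter as in the question
[videoCameraDevice startCameraCapture];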

AVCaptureSession with multiple previews

I have an AVCaptureSession running with an AVCaptureVideoPreviewLayer.
I can see the video so I know it's working.
However, I'd like to have a collection view and in each cell add a preview layer so that each cell shows a preview of the video.
If I try to pass the preview layer into the cell and add it as a subLayer then it removes the layer from the other cells so it only ever displays in one cell at a time.
Is there another (better) way of doing this?
I ran into the same problem of needing multiple live views displayed at the same time. The approach of converting each frame to a UIImage (covered in another answer here) was too slow for what I needed. Here are the two solutions I found:
1. CAReplicatorLayer
The first option is to use a CAReplicatorLayer to duplicate the layer automatically. As the docs say, it will automatically create "...a specified number of copies of its sublayers (the source layer), each copy potentially having geometric, temporal and color transformations applied to it."
This is super useful if there isn't a lot of interaction with the live previews besides simple geometric or color transformations (Think Photo Booth). I have most often seen the CAReplicatorLayer used as a way to create the 'reflection' effect.
Here is some sample code to replicate an AVCaptureVideoPreviewLayer:
Init AVCaptureVideoPreviewLayer
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[previewLayer setFrame:CGRectMake(0.0, 0.0, self.view.bounds.size.width, self.view.bounds.size.height / 4)];
Init CAReplicatorLayer and set properties
Note: This will replicate the live preview layer four times.
NSUInteger replicatorInstances = 4;
CAReplicatorLayer *replicatorLayer = [CAReplicatorLayer layer];
replicatorLayer.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height / replicatorInstances);
replicatorLayer.instanceCount = replicatorInstances;
replicatorLayer.instanceTransform = CATransform3DMakeTranslation(0.0, self.view.bounds.size.height / replicatorInstances, 0.0);
Add Layers
Note: From my experience you need to add the layer you want to replicate to the CAReplicatorLayer as a sublayer.
[replicatorLayer addSublayer:previewLayer];
[self.view.layer addSublayer:replicatorLayer];
Downsides
A downside to using CAReplicatorLayer is that it handles all placement of the layer replications. So it will apply any set transformations to each instance, and everything will be contained within itself. E.g. there would be no way to have a replication of an AVCaptureVideoPreviewLayer in two separate cells.
2. Manually Rendering SampleBuffer
This method, albeit a tad more complex, solves the above-mentioned downside of CAReplicatorLayer. By manually rendering the live previews, you are able to render as many views as you want. Granted, performance might be affected.
Note: There might be other ways to render the SampleBuffer, but I chose OpenGL because of its performance. The code was inspired by and adapted from CIFunHouse.
Here is how I implemented it:
2.1 Contexts and Session
Setup OpenGL and CoreImage Context
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

// Note: must be done after all your GLKViews are properly set up
_ciContext = [CIContext contextWithEAGLContext:_eaglContext
                                       options:@{kCIContextWorkingColorSpace : [NSNull null]}];
Dispatch Queue
This queue will be used for the session and delegate.
self.captureSessionQueue = dispatch_queue_create("capture_session_queue", NULL);
Init your AVSession & AVCaptureVideoDataOutput
Note: I have removed all device capability checks to make this more readable.
dispatch_async(self.captureSessionQueue, ^(void) {
    NSError *error = nil;

    // get the input device and also validate the settings
    NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];

    AVCaptureDevice *_videoDevice = nil;
    if (!_videoDevice) {
        _videoDevice = [videoDevices objectAtIndex:0];
    }

    // obtain device input
    AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:_videoDevice error:&error];

    // obtain the preset and validate the preset
    NSString *preset = AVCaptureSessionPresetMedium;

    // CoreImage wants BGRA pixel format
    NSDictionary *outputSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};

    // create the capture session
    self.captureSession = [[AVCaptureSession alloc] init];
    self.captureSession.sessionPreset = preset;
:
Note: The following code is the 'magic code'. It is where we create and add a DataOutput to the AVSession so we can intercept the camera frames using the delegate. This is the breakthrough I needed to figure out how to solve the problem.
:
    // create and configure video data output
    AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    videoDataOutput.videoSettings = outputSettings;
    [videoDataOutput setSampleBufferDelegate:self queue:self.captureSessionQueue];

    // begin configuring the capture session
    [self.captureSession beginConfiguration];

    // connect the video device input and the video data output
    [self.captureSession addInput:videoDeviceInput];
    [self.captureSession addOutput:videoDataOutput];

    [self.captureSession commitConfiguration];

    // then start everything
    [self.captureSession startRunning];
});
2.2 OpenGL Views
We are using GLKView to render our live previews. So if you want 4 live previews, then you need 4 GLKViews.
self.livePreviewView = [[GLKView alloc] initWithFrame:self.bounds context:self.eaglContext];
self.livePreviewView.enableSetNeedsDisplay = NO; // we call -display explicitly from the capture delegate
Because the native video image from the back camera is in UIDeviceOrientationLandscapeLeft (i.e. the home button is on the right), we need to apply a clockwise 90-degree transform so that we can draw the video preview as if we were in a landscape-oriented view. If you're using the front camera and you want a mirrored preview (so that the user sees themselves as in a mirror), you need to apply an additional horizontal flip (by concatenating CGAffineTransformMakeScale(-1.0, 1.0) to the rotation transform).
self.livePreviewView.transform = CGAffineTransformMakeRotation(M_PI_2);
self.livePreviewView.frame = self.bounds;
[self addSubview: self.livePreviewView];
Bind the frame buffer to get the frame buffer width and height. The bounds used by CIContext when drawing to a GLKView are in pixels (not points), hence the need to read from the frame buffer's width and height.
[self.livePreviewView bindDrawable];
In addition, since we will be accessing the bounds in another queue (_captureSessionQueue), we want to capture this piece of information now so that we won't be accessing the live preview view's properties from another thread/queue.
_videoPreviewViewBounds = CGRectZero;
_videoPreviewViewBounds.size.width = self.livePreviewView.drawableWidth;
_videoPreviewViewBounds.size.height = self.livePreviewView.drawableHeight;
dispatch_async(dispatch_get_main_queue(), ^(void) {
    CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI_2);
    // *Horizontally flip here, if using front camera.*
    self.livePreviewView.transform = transform;
    self.livePreviewView.frame = self.bounds;
});
Note: If you are using the front camera you can horizontally flip the live preview like this:
transform = CGAffineTransformConcat(transform, CGAffineTransformMakeScale(-1.0, 1.0));
2.3 Delegate Implementation
After we have the Contexts, Sessions, and GLKViews set up we can now render to our views from the AVCaptureVideoDataOutputSampleBufferDelegate method captureOutput:didOutputSampleBuffer:fromConnection:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);

    // update the video dimensions information
    self.currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDesc);

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:nil];

    CGRect sourceExtent = sourceImage.extent;
    CGFloat sourceAspect = sourceExtent.size.width / sourceExtent.size.height;
You will need a reference to each GLKView and its videoPreviewViewBounds. For simplicity, I will assume they are both contained in a UICollectionViewCell. You will need to alter this for your own use case.
    for (CustomLivePreviewCell *cell in self.livePreviewCells) {
        CGFloat previewAspect = cell.videoPreviewViewBounds.size.width / cell.videoPreviewViewBounds.size.height;

        // To maintain the aspect ratio of the screen size, we clip the video image
        CGRect drawRect = sourceExtent;
        if (sourceAspect > previewAspect) {
            // use full height of the video image, and center crop the width
            drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0;
            drawRect.size.width = drawRect.size.height * previewAspect;
        } else {
            // use full width of the video image, and center crop the height
            drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0;
            drawRect.size.height = drawRect.size.width / previewAspect;
        }

        [cell.livePreviewView bindDrawable];

        if (_eaglContext != [EAGLContext currentContext]) {
            [EAGLContext setCurrentContext:_eaglContext];
        }

        // clear eagl view to grey
        glClearColor(0.5, 0.5, 0.5, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);

        // set the blend mode to "source over" so that CI will use that
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

        if (sourceImage) {
            [_ciContext drawImage:sourceImage inRect:cell.videoPreviewViewBounds fromRect:drawRect];
        }

        [cell.livePreviewView display];
    }
}
This solution lets you have as many live previews as you want using OpenGL to render the buffer of images received from the AVCaptureVideoDataOutputSampleBufferDelegate.
3. Sample Code
Here is a GitHub project I threw together with both solutions: https://github.com/JohnnySlagle/Multiple-Camera-Feeds
Implement the AVCaptureVideoDataOutputSampleBufferDelegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
Using this, you get the sample buffer of each and every video frame. From the buffer output you can create an image using the method below.
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0 orientation:UIImageOrientationRight];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}
So you can add several image views to your view and set their images inside the delegate method I mentioned before:
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
imageViewOne.image = image;
imageViewTwo.image = image;
Simply set the contents of the preview layer to another CALayer:
CGImageRef cgImage = (__bridge CGImageRef)self.previewLayer.contents;
self.duplicateLayer.contents = (__bridge id)cgImage;
You can do this with the contents of any Metal or OpenGL layer. There was no increase in memory usage or CPU load on my end, either. You're not duplicating anything but a tiny pointer. That's not so with these other "solutions."
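If the duplicate doesn't stay in sync with a one-time assignment, re-copying the contents pointer on a display link is still cheap. A sketch (duplicateLayer and previewLayer are assumed to be set up elsewhere):
// Somewhere in setup:
CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self selector:@selector(mirrorPreviewContents)];
[link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];

- (void)mirrorPreviewContents {
    // Copy the (tiny) contents pointer every frame; no pixel data is duplicated
    self.duplicateLayer.contents = self.previewLayer.contents;
}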
I have a sample project that you can download that displays 20 preview layers at the same time from a single camera feed. Each layer has a different effect applied to it.
You can watch a video of the app running, as well as download the source code at:
https://demonicactivity.blogspot.com/2017/05/developer-iphone-video-camera-wall.html?m=1
Working in Swift 5 on iOS 13, I implemented a somewhat simpler version of the answer by @Ushan87. For testing purposes, I dragged a new, small UIImageView on top of my existing AVCaptureVideoPreviewLayer. In the ViewController for that window, I added an IBOutlet for the new view and a variable to describe the correct orientation for the camera being used:
@IBOutlet var testView: UIImageView!
private var extOrientation: UIImage.Orientation = .up
I then implemented the AVCaptureVideoDataOutputSampleBufferDelegate as follows:
// MARK: - AVCaptureVideoDataOutputSampleBufferDelegate
extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        let imageBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
        let ciimage: CIImage = CIImage(cvPixelBuffer: imageBuffer)
        let image: UIImage = self.convert(cmage: ciimage)

        DispatchQueue.main.sync(execute: { () -> Void in
            testView.image = image
        })
    }

    // Convert CIImage to UIImage
    func convert(cmage: CIImage) -> UIImage {
        let context: CIContext = CIContext.init(options: nil)
        let cgImage: CGImage = context.createCGImage(cmage, from: cmage.extent)!
        let image: UIImage = UIImage.init(cgImage: cgImage, scale: 1.0, orientation: extOrientation)
        return image
    }
}
For my purposes, the performance was fine. I did not notice any lagginess in the new view.
You can't have multiple previews. There is only one output stream, as Apple's AVFoundation documentation says. I've tried many ways, but you just can't.
