OpenGL texture format - iOS

I am currently trying to display a video on screen using OpenGL ES 2 on iOS.
Here is a summary of what I am doing to play back and display the video on screen:
First I have a .mov file recorded using a GPUImageMovieWriter object. When the recording is completed, I play the video back using AVPlayer. I set an AVPlayerItemVideoOutput to be able to retrieve frames from the video:
NSDictionary *test = [NSDictionary dictionaryWithObject: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey: (id)kCVPixelBufferPixelFormatTypeKey];
self.videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:test];
I then use the copyPixelBufferForItemTime: method of the AVPlayerItemVideoOutput to receive the CVPixelBufferRef corresponding to the frame of the original video at a specific time.
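For reference, the frame-grabbing step looks roughly like this (just a sketch; driving it from a CADisplayLink and the self.videoOutput property name are my assumptions):
CMTime itemTime = [self.videoOutput itemTimeForHostTime:CACurrentMediaTime()];
if ([self.videoOutput hasNewPixelBufferForItemTime:itemTime]) {
    CVPixelBufferRef pixelBuffer = [self.videoOutput copyPixelBufferForItemTime:itemTime itemTimeForDisplay:NULL];
    if (pixelBuffer) {
        // upload to the GL texture (see setupTextureFromBuffer: below)
        [self setupTextureFromBuffer:pixelBuffer];
        // copyPixelBufferForItemTime: follows the Create rule, so release the buffer
        CVPixelBufferRelease(pixelBuffer);
    }
}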
Finally, here is the function I created to create an OpenGL texture from the buffer :
- (void)setupTextureFromBuffer:(CVImageBufferRef)imageBuffer {
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    int bufferHeight = CVPixelBufferGetHeight(imageBuffer);
    int bufferWidth = CVPixelBufferGetWidth(imageBuffer);
    CVPixelBufferGetPixelFormatType(imageBuffer);

    glBindTexture(GL_TEXTURE_2D, m_videoTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(imageBuffer));

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
Doing this (together with some unrelated augmented-reality work) gives a very strange result, as if the video had been cut into slices (I can't show a screenshot because I don't have enough reputation to do so).
It looks like the data are not being interpreted correctly by OpenGL (wrong format? wrong type?).
I checked whether it could be a corrupted buffer error by using this function :
- (void)saveImage:(CVPixelBufferRef)pixBuffer
{
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixBuffer];
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext
                             createCGImage:ciImage
                             fromRect:CGRectMake(0, 0,
                                                 CVPixelBufferGetWidth(pixBuffer),
                                                 CVPixelBufferGetHeight(pixBuffer))];
    UIImage *uiImage = [UIImage imageWithCGImage:videoImage];
    UIImageWriteToSavedPhotosAlbum(uiImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
-> The saved image appeared properly in the photo album.
The problem might come from the .mov file itself, but what can I do to check whether there's something wrong with this file?
Thanks a lot for your help, I've been stuck on this problem for hours/days!

You need to use kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange.
Then upload the frames into separate chroma and luma OpenGL ES textures. There is an example at https://developer.apple.com/library/ios/samplecode/AVBasicVideoOutput/Listings/AVBasicVideoOutput_APLEAGLView_m.html
I tried several RGB-based options but could not make them work.
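For reference, here is a minimal sketch of the luma/chroma upload using a CVOpenGLESTextureCache, in the spirit of that sample (the _videoTextureCache ivar is assumed to have been created once with CVOpenGLESTextureCacheCreate(), and the fragment shader is assumed to convert YUV to RGB):
- (void)uploadYUVTexturesFromBuffer:(CVPixelBufferRef)pixelBuffer {
    GLsizei width = (GLsizei)CVPixelBufferGetWidth(pixelBuffer);
    GLsizei height = (GLsizei)CVPixelBufferGetHeight(pixelBuffer);
    CVOpenGLESTextureRef lumaTexture = NULL;
    CVOpenGLESTextureRef chromaTexture = NULL;

    // plane 0: luminance (Y), full resolution
    glActiveTexture(GL_TEXTURE0);
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _videoTextureCache, pixelBuffer, NULL,
                                                 GL_TEXTURE_2D, GL_LUMINANCE, width, height,
                                                 GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &lumaTexture);
    glBindTexture(CVOpenGLESTextureGetTarget(lumaTexture), CVOpenGLESTextureGetName(lumaTexture));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // plane 1: chrominance (CbCr), half resolution
    glActiveTexture(GL_TEXTURE1);
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _videoTextureCache, pixelBuffer, NULL,
                                                 GL_TEXTURE_2D, GL_LUMINANCE_ALPHA, width / 2, height / 2,
                                                 GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 1, &chromaTexture);
    glBindTexture(CVOpenGLESTextureGetTarget(chromaTexture), CVOpenGLESTextureGetName(chromaTexture));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // after drawing: CFRelease(lumaTexture); CFRelease(chromaTexture);
    // and flush the cache with CVOpenGLESTextureCacheFlush(_videoTextureCache, 0);
}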

Related

Get upright frames from AVPlayerItemVideoOutput when video has rotated exif orientation

I'm using an AVPlayerItemVideoOutput to get video frames from an AVPlayer and upload them to a GL texture for display. The issue is that AVPlayerItemVideoOutput seems to ignore the video's rotation exif data, so the CVPixelBufferRef it returns isn't upright.
Options:
I could edit my GL code to counter-rotate the texture when displaying it, but I'd prefer to get the frames upright in the first place so I don't have to transform the texture coordinates.
Some magic to get AVPlayerItemVideoOutput to give me the frames upright in the first place. The solution must be hardware accelerated.
Code:
//
// Setup
//
AVPlayer *player = [AVPlayer playerWithURL:fileURL];
AVPlayerItemVideoOutput *output = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:@{
    (id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32ARGB),
    (id)kCVPixelBufferOpenGLCompatibilityKey: @YES,
}];
[player.currentItem addOutput:output];
//
// getting the frame data into a GL texture
//
CMTime currentTime = player.currentTime;
if ([output hasNewPixelBufferForItemTime:currentTime]) {
    CVPixelBufferRef frame = [output copyPixelBufferForItemTime:currentTime itemTimeForDisplay:NULL];
    CVPixelBufferLockBaseAddress(frame, kCVPixelBufferLock_ReadOnly);

    GLsizei height = (GLsizei)CVPixelBufferGetHeight(frame);
    GLsizei bpr = (GLsizei)CVPixelBufferGetBytesPerRow(frame);
    void *data = CVPixelBufferGetBaseAddress(frame);

    glBindTexture(GL_TEXTURE_2D, gltexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bpr/4, height, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8, data);

    CVPixelBufferUnlockBaseAddress(frame, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferRelease(frame);
}
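If you end up going with the first option, one way to find the rotation is to read the video track's preferredTransform once and counter-rotate in your GL drawing; a sketch (variable names are illustrative, and the asset is assumed to be the one backing the player item):
AVAsset *asset = player.currentItem.asset;
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
CGAffineTransform t = videoTrack.preferredTransform;
// typically 0, +/-M_PI_2, or M_PI for captured video
CGFloat angle = atan2(t.b, t.a);
// rotate the quad's texture coordinates (or the model-view matrix) by -angle
// when drawing, so the frame appears upright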

Fast way to create openGL texture from JPEG-2000?

I need to load large-ish (5 megapixel) JPEG images and create an OpenGL texture from each of them. They are non-power-of-two and cannot be pre-processed for this application. Loading is extremely slow, about one second per image on an iPad Air 2. I need to load a dozen or two such images and create a GL texture for each, as quickly as I can.
Profiling shows the bottleneck to be CGContextDrawImage. Previous answers suggest this is a common problem.
This previous answer seems most relevant and (unfortunately) does not leave me hopeful. I haven't tried libjpeg (suggested in another answer) yet; I'm trying to keep third-party code out for several reasons.
But that answer was from 2014 and things change. Does anybody know of a faster way to create textures from JPEGs? Either by changing the arguments to CGContextDrawImage (as in this answer; I've tried the suggested changes with no noticeable speed change) or by using a different approach entirely?
The current texture creation block (called asynchronously):
UIImage *image = [UIImage imageWithData:jpegImageData];
if (image) {
    GLuint textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    GLsizei width = (GLsizei)CGImageGetWidth(image.CGImage);
    GLsizei height = (GLsizei)CGImageGetHeight(image.CGImage);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    void *imageData = malloc(height * width * 4);
    CGContextRef imgcontext = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(imgcontext, CGRectMake(0, 0, width, height), image.CGImage);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);

    CGContextRelease(imgcontext);
    free(imageData);

    // ... store the textureID for use by the caller
    // ...
}
(edited to add)
I tried GLKTextureLoader. I kept getting a nil return value, with an NSError of domain "GLKTextureLoaderErrorDomain" and code 12.
I've realized that the JPEGs I need to load are JPEG 2000, and that may be the problem. I've played with the GLKTextureLoader approach; I can get it to work with non-J2K JPEGs, but not the J2K ones I need to load. FWIW, the files I need to load are packed inside larger files, so I extract a data subrange from within the file, as such:
NSData *jpegImageData = [data subdataWithRange:NSMakeRange(offset, dataLength)];
GLKTextureInfo *jpegTexture;
NSError *theError;
jpegTexture = [GLKTextureLoader textureWithContentsOfData:jpegImageData options:nil error:&theError];
but, as mentioned, jpegTexture comes back as nil with the aforementioned error. This works on small JPEGs, even using the subdataWithRange approach.
Likewise,
UIImage *image = [UIImage imageWithData:jpegImageData];
jpegTexture = [GLKTextureLoader textureWithCGImage:image.CGImage options:nil error:&theError];
returns nil with the same "code 12" error.
This iOS Developer page (Table 1-1) suggests that JPEG-2000 is supported on OS X only, but when I try the
CFArrayRef mySourceTypes = CGImageSourceCopyTypeIdentifiers();
CFShow(mySourceTypes);
approach for showing supported formats, JPEG-2000 is among them (running on my iOS device):
33 : <CFString 0x19d721bf8 [0x1a1da0150]>{contents = "public.jpeg-
Any suggestions for using the faster GLKTextureLoader methods on JPEG-2000?
Did you try the GLKit Framework method?
GLKTextureInfo *spriteTexture;
NSError *theError;
NSString *filePath = [[NSBundle mainBundle] pathForResource:@"Sprite" ofType:@"jpg"]; // 1
spriteTexture = [GLKTextureLoader textureWithContentsOfFile:filePath options:nil error:&theError]; // 2
glBindTexture(spriteTexture.target, spriteTexture.name); // 3
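If decode time on the main thread is part of the problem, GLKTextureLoader also has an asynchronous API; a sketch, assuming an existing EAGLContext named glContext (note this does not change the underlying JPEG-2000 decoding, so the code-12 error would still need to be resolved first):
GLKTextureLoader *loader = [[GLKTextureLoader alloc] initWithSharegroup:glContext.sharegroup];
[loader textureWithContentsOfData:jpegImageData
                          options:nil
                            queue:NULL // NULL means the completion handler runs on the main queue
                completionHandler:^(GLKTextureInfo *textureInfo, NSError *outError) {
                    if (textureInfo) {
                        glBindTexture(textureInfo.target, textureInfo.name);
                    } else {
                        NSLog(@"Texture load failed: %@", outError);
                    }
                }];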

Capture 120/240 fps using AVCaptureVideoDataOutput into frame buffer using low resolution

Currently, using the iPhone 5s/6, I am able to capture 120 (iPhone 5s) or 240 (iPhone 6) frames per second into a CMSampleBufferRef. However, the AVCaptureDeviceFormat that is returned to me only provides these high-speed frame rates at a resolution of 1280x720.
I would like to capture this in lower resolution (640x480 or lower) since I will be putting this into a circular buffer for storage purpose. While I am able to reduce the resolution in the didOutputSampleBuffer delegate method, I would like to know if there is any way for the CMSampleBufferRef to provide me a lower resolution directly by configuring the device or setting, instead of taking the 720p image and lowering the resolution manually using CVPixelBuffer.
I need to store the images in a buffer for later processing and want to apply minimum processing necessary or else I will begin to drop frames. If I can avoid resizing and obtain a lower resolution CMSampleBuffer from the didOutputSampleBuffer delegate method directly, that would be ideal.
At 240fps, I would need to process each image within 5ms and the resizing routine cannot keep up with downscaling the image at this rate. However, I would like to store it into a circular buffer for later processing (e.g. writing out to a movie using AVAssetWriter) but require a lower resolution.
It seems that the only image size supported in high frame rate recording is 1280x720. Putting multiple images of this resolution into the frame buffer will generate memory pressure so I'm looking to capture a lower resolution image directly from didOutputSampleBuffer if it is at all possible to save on memory and to keep up with the frame rate.
Thank you for your assistance.
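One way to confirm which dimensions actually support the high frame rates is to enumerate the device's formats; a short sketch (the device variable is assumed to be your AVCaptureDevice):
for (AVCaptureDeviceFormat *format in device.formats) {
    CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription);
    for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges) {
        if (range.maxFrameRate >= 120) {
            NSLog(@"%dx%d supports up to %.0f fps", dims.width, dims.height, range.maxFrameRate);
        }
    }
}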
// Core Image uses the GPU for all image ops (crop / transform / ...)
// --- create once ---
EAGLContext *glCtx = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *ciContext = [CIContext contextWithEAGLContext:glCtx options:@{kCIContextWorkingColorSpace : [NSNull null]}];
// using an RGB color space is about 3x faster
CGColorSpaceRef ciContextColorSpace = CGColorSpaceCreateDeviceRGB();
OSType cvPixelFormat = kCVPixelFormatType_32BGRA;

// create compression session
VTCompressionSessionRef compressionSession;
NSDictionary *pixelBufferOptions = @{(__bridge NSString *)kCVPixelBufferPixelFormatTypeKey : @(cvPixelFormat),
                                     (__bridge NSString *)kCVPixelBufferWidthKey : @(outputResolution.width),
                                     (__bridge NSString *)kCVPixelBufferHeightKey : @(outputResolution.height),
                                     (__bridge NSString *)kCVPixelBufferOpenGLESCompatibilityKey : @YES,
                                     (__bridge NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{}};
OSStatus ret = VTCompressionSessionCreate(kCFAllocatorDefault,
                                          outputResolution.width,
                                          outputResolution.height,
                                          kCMVideoCodecType_H264,
                                          NULL,
                                          (__bridge CFDictionaryRef)pixelBufferOptions,
                                          NULL,
                                          VTEncoderOutputCallback,
                                          (__bridge void *)self,
                                          &compressionSession);

CVPixelBufferRef finishPixelBuffer;
// I use the VTCompressionSession pixel buffer pool; you could use AVAssetWriterInputPixelBufferAdaptor instead
CVReturn res = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, VTCompressionSessionGetPixelBufferPool(compressionSession), &finishPixelBuffer);
// -------------------

// ------ scale ------
// a new buffer is coming in...
// - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

CIImage *baseImg = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CGFloat outHeight = 240;
CGFloat scale = 1 / (CVPixelBufferGetHeight(pixelBuffer) / outHeight);
CGAffineTransform transform = CGAffineTransformMakeScale(scale, scale);

// the result image is not rendered yet (CIImage transforms are lazy)
CIImage *resultImg = [baseImg imageByApplyingTransform:transform];
// resultImg = [resultImg imageByCroppingToRect:...];

// CIContext applies the transform to the CIImage and draws into the destination buffer
[ciContext render:resultImg toCVPixelBuffer:finishPixelBuffer bounds:resultImg.extent colorSpace:ciContextColorSpace];
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

// [videoInput appendSampleBuffer:CMSampleBufferCreateForImageBuffer(... finishPixelBuffer ...)]
VTCompressionSessionEncodeFrame(compressionSession, finishPixelBuffer, CMSampleBufferGetPresentationTimeStamp(sampleBuffer), CMSampleBufferGetDuration(sampleBuffer), NULL, sampleBuffer, NULL);
// -------------------

AVCaptureSession with multiple previews

I have an AVCaptureSession running with an AVCaptureVideoPreviewLayer.
I can see the video so I know it's working.
However, I'd like to have a collection view and in each cell add a preview layer so that each cell shows a preview of the video.
If I try to pass the preview layer into the cell and add it as a subLayer then it removes the layer from the other cells so it only ever displays in one cell at a time.
Is there another (better) way of doing this?
I ran into the same problem of needing multiple live views displayed at the same time. The answer of using UIImage above was too slow for what I needed. Here are the two solutions I found:
1. CAReplicatorLayer
The first option is to use a CAReplicatorLayer to duplicate the layer automatically. As the docs say, it will automatically create "...a specified number of copies of its sublayers (the source layer), each copy potentially having geometric, temporal and color transformations applied to it."
This is super useful if there isn't a lot of interaction with the live previews besides simple geometric or color transformations (Think Photo Booth). I have most often seen the CAReplicatorLayer used as a way to create the 'reflection' effect.
Here is some sample code to replicate an AVCaptureVideoPreviewLayer:
Init AVCaptureVideoPreviewLayer
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[previewLayer setFrame:CGRectMake(0.0, 0.0, self.view.bounds.size.width, self.view.bounds.size.height / 4)];
Init CAReplicatorLayer and set properties
Note: This will replicate the live preview layer four times.
NSUInteger replicatorInstances = 4;
CAReplicatorLayer *replicatorLayer = [CAReplicatorLayer layer];
replicatorLayer.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height / replicatorInstances);
replicatorLayer.instanceCount = replicatorInstances;
replicatorLayer.instanceTransform = CATransform3DMakeTranslation(0.0, self.view.bounds.size.height / replicatorInstances, 0.0);
Add Layers
Note: From my experience you need to add the layer you want to replicate to the CAReplicatorLayer as a sublayer.
[replicatorLayer addSublayer:previewLayer];
[self.view.layer addSublayer:replicatorLayer];
Downsides
A downside to using CAReplicatorLayer is that it handles all placement of the layer replications. So it will apply any set transformations to each instance, and everything will be contained within itself. E.g. there would be no way to have a replication of an AVCaptureVideoPreviewLayer in two separate cells.
2. Manually Rendering SampleBuffer
This method, albeit a tad more complex, solves the above mentioned downside of CAReplicatorLayer. By manually rendering the live previews, you are able to render as many views as you want. Granted, performance might be affected.
Note: There might be other ways to render the SampleBuffer but I chose OpenGL because of its performance. Code was inspired and altered from CIFunHouse.
Here is how I implemented it:
2.1 Contexts and Session
Setup OpenGL and CoreImage Context
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
// Note: must be done after all your GLKViews are properly set up
_ciContext = [CIContext contextWithEAGLContext:_eaglContext
                                       options:@{kCIContextWorkingColorSpace : [NSNull null]}];
Dispatch Queue
This queue will be used for the session and delegate.
self.captureSessionQueue = dispatch_queue_create("capture_session_queue", NULL);
Init your AVSession & AVCaptureVideoDataOutput
Note: I have removed all device capability checks to make this more readable.
dispatch_async(self.captureSessionQueue, ^(void) {
    NSError *error = nil;

    // get the input device and also validate the settings
    NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    AVCaptureDevice *_videoDevice = nil;
    if (!_videoDevice) {
        _videoDevice = [videoDevices objectAtIndex:0];
    }

    // obtain device input
    AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:_videoDevice error:&error];

    // obtain the preset and validate the preset
    NSString *preset = AVCaptureSessionPresetMedium;

    // Core Image wants BGRA pixel format
    NSDictionary *outputSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};

    // create the capture session
    self.captureSession = [[AVCaptureSession alloc] init];
    self.captureSession.sessionPreset = preset;
:
Note: The following code is the 'magic code'. It is where we create and add a DataOutput to the AVSession so we can intercept the camera frames using the delegate. This is the breakthrough I needed to figure out how to solve the problem.
:
    // create and configure video data output
    AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    videoDataOutput.videoSettings = outputSettings;
    [videoDataOutput setSampleBufferDelegate:self queue:self.captureSessionQueue];

    // begin configure capture session
    [self.captureSession beginConfiguration];

    // connect the video device input and video data and still image outputs
    [self.captureSession addInput:videoDeviceInput];
    [self.captureSession addOutput:videoDataOutput];

    [self.captureSession commitConfiguration];

    // then start everything
    [self.captureSession startRunning];
});
2.2 OpenGL Views
We are using GLKView to render our live previews. So if you want 4 live previews, then you need 4 GLKViews.
self.livePreviewView = [[GLKView alloc] initWithFrame:self.bounds context:self.eaglContext];
self.livePreviewView.enableSetNeedsDisplay = NO; // we call -display ourselves from the capture delegate
Because the native video image from the back camera is in UIDeviceOrientationLandscapeLeft (i.e. the home button is on the right), we need to apply a clockwise 90-degree transform so that we can draw the video preview as if we were in a landscape-oriented view. If you're using the front camera and want a mirrored preview (so that the user sees themselves as in a mirror), you need to apply an additional horizontal flip (by concatenating CGAffineTransformMakeScale(-1.0, 1.0) with the rotation transform).
self.livePreviewView.transform = CGAffineTransformMakeRotation(M_PI_2);
self.livePreviewView.frame = self.bounds;
[self addSubview: self.livePreviewView];
Bind the frame buffer to get the frame buffer width and height. The bounds used by CIContext when drawing to a GLKView are in pixels (not points), hence the need to read from the frame buffer's width and height.
[self.livePreviewView bindDrawable];
In addition, since we will be accessing the bounds in another queue (_captureSessionQueue), we want to obtain this piece of information so that we won't be accessing _videoPreviewView's properties from another thread/queue.
_videoPreviewViewBounds = CGRectZero;
_videoPreviewViewBounds.size.width = _videoPreviewView.drawableWidth;
_videoPreviewViewBounds.size.height = _videoPreviewView.drawableHeight;
dispatch_async(dispatch_get_main_queue(), ^(void) {
    CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI_2);
    // *Horizontally flip here, if using front camera.*
    self.livePreviewView.transform = transform;
    self.livePreviewView.frame = self.bounds;
});
Note: If you are using the front camera you can horizontally flip the live preview like this:
transform = CGAffineTransformConcat(transform, CGAffineTransformMakeScale(-1.0, 1.0));
2.3 Delegate Implementation
After we have the Contexts, Sessions, and GLKViews set up we can now render to our views from the AVCaptureVideoDataOutputSampleBufferDelegate method captureOutput:didOutputSampleBuffer:fromConnection:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);

    // update the video dimensions information
    self.currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDesc);

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:nil];
    CGRect sourceExtent = sourceImage.extent;
    CGFloat sourceAspect = sourceExtent.size.width / sourceExtent.size.height;
You will need to have a reference to each GLKView and its videoPreviewViewBounds. For simplicity, I will assume they are both contained in a UICollectionViewCell. You will need to alter this for your own use case.
    for (CustomLivePreviewCell *cell in self.livePreviewCells) {
        CGFloat previewAspect = cell.videoPreviewViewBounds.size.width / cell.videoPreviewViewBounds.size.height;

        // to maintain the aspect ratio of the screen size, we clip the video image
        CGRect drawRect = sourceExtent;
        if (sourceAspect > previewAspect) {
            // use full height of the video image, and center crop the width
            drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0;
            drawRect.size.width = drawRect.size.height * previewAspect;
        } else {
            // use full width of the video image, and center crop the height
            drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0;
            drawRect.size.height = drawRect.size.width / previewAspect;
        }

        [cell.livePreviewView bindDrawable];

        if (_eaglContext != [EAGLContext currentContext]) {
            [EAGLContext setCurrentContext:_eaglContext];
        }

        // clear eagl view to grey
        glClearColor(0.5, 0.5, 0.5, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);

        // set the blend mode to "source over" so that CI will use that
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

        if (sourceImage) {
            [_ciContext drawImage:sourceImage inRect:cell.videoPreviewViewBounds fromRect:drawRect];
        }

        [cell.livePreviewView display];
    }
}
This solution lets you have as many live previews as you want using OpenGL to render the buffer of images received from the AVCaptureVideoDataOutputSampleBufferDelegate.
3. Sample Code
Here is a GitHub project I threw together with both solutions: https://github.com/JohnnySlagle/Multiple-Camera-Feeds
Implement the AVCaptureVideoDataOutputSampleBufferDelegate method, which is
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
Using this you can get the sample buffer output of every video frame. From the buffer output you can create an image using the method below.
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0 orientation:UIImageOrientationRight];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return (image);
}
So you can add several image views to your view and add these lines inside the delegate method that I mentioned before:
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
imageViewOne.image = image;
imageViewTwo.image = image;
Simply set the contents of the preview layer to another CALayer:
CGImageRef cgImage = (__bridge CGImageRef)self.previewLayer.contents;
self.duplicateLayer.contents = (__bridge id)cgImage;
You can do this with the contents of any Metal or OpenGL layer. There was no increase in memory usage or CPU load on my end, either. You're not duplicating anything but a tiny pointer. That's not so with these other "solutions."
I have a sample project that you can download that displays 20 preview layers at the same time from a single camera feed. Each layer has a different effect applied to it.
You can watch a video of the app running, as well as download the source code at:
https://demonicactivity.blogspot.com/2017/05/developer-iphone-video-camera-wall.html?m=1
Working in Swift 5 on iOS 13, I implemented a somewhat simpler version of the answer by @Ushan87. For testing purposes, I dragged a new, small UIImageView on top of my existing AVCaptureVideoPreviewLayer. In the ViewController for that window, I added an IBOutlet for the new view and a variable to describe the correct orientation for the camera being used:
@IBOutlet var testView: UIImageView!
private var extOrientation: UIImage.Orientation = .up
I then implemented the AVCaptureVideoDataOutputSampleBufferDelegate as follows:
// MARK: - AVCaptureVideoDataOutputSampleBufferDelegate
extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        let imageBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
        let ciimage: CIImage = CIImage(cvPixelBuffer: imageBuffer)
        let image: UIImage = self.convert(cmage: ciimage)
        DispatchQueue.main.sync(execute: { () -> Void in
            testView.image = image
        })
    }

    // Convert CIImage to UIImage
    func convert(cmage: CIImage) -> UIImage {
        let context: CIContext = CIContext.init(options: nil)
        let cgImage: CGImage = context.createCGImage(cmage, from: cmage.extent)!
        let image: UIImage = UIImage.init(cgImage: cgImage, scale: 1.0, orientation: extOrientation)
        return image
    }
}
For my purposes, the performance was fine. I did not notice any lagginess in the new view.
You can't have multiple previews. There is only one output stream, as Apple's AVFoundation documentation says. I've tried many ways, but you just can't.

iOS drawing screen video capture not smooth

I am creating an application in which you can draw with your finger in an image view while the screen is being recorded at the same time.
I have implemented these features, but the problem is that once the video recording is completed, the finger drawing is not smooth when the recorded video is played back.
I am not using OpenGL; the drawing is done on a UIImageView, and every 0.01 s we capture the image from the UIImageView and append the pixel buffer to the AVAssetWriterInputPixelBufferAdaptor object.
Here is the code I used for converting the UIImage into buffer
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image {
    CGSize frameSize = CGSizeMake(976, 667);
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
                        frameSize.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options,
                        &pxbuffer);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGImageGetColorSpace(image);
    CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
                                                 frameSize.height, 8, 4 * frameSize.width, rgbColorSpace,
                                                 kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
The method below is called on a 0.01-second time interval:
CVPixelBufferRef pixelBufferX = (CVPixelBufferRef)[self pixelBufferFromCGImage:theIM];
bValue = [self.avAdaptor appendPixelBuffer:pixelBufferX withPresentationTime:presentTime];
Can anyone offer guidance on improving the video capture?
Thanks in advance.
You shouldn't display things by calling them every 0.01 seconds. If you want to stay in sync with video, see AVSynchronizedLayer, which is explicitly for this. Alternately, see CADisplayLink, which is for staying in sync with screen refreshes. 0.01 seconds doesn't line up with anything in particular, and you're probably getting beats where you're out of sync with the video and with the display. In any case, you should be doing your drawing in some callback from your player, not with a timer.
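A sketch of the display-link approach (method names here are illustrative):
- (void)startCapturing {
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self selector:@selector(captureFrame:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)captureFrame:(CADisplayLink *)link {
    // snapshot the drawing view and append the pixel buffer here, deriving the
    // presentation time from link.timestamp instead of a fixed 0.01 s step
}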
You are also leaking your pixel buffer in every loop. Since you called CVPixelBufferCreate(), you're responsible for eventually calling CFRelease() (or CVPixelBufferRelease()) on the resulting pixel buffer. I would expect your program to eventually crash by running out of memory if this ran for a while.
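A sketch of the corresponding fix, following the variable names in the question's code:
CVPixelBufferRef pixelBufferX = [self pixelBufferFromCGImage:theIM];
BOOL bValue = [self.avAdaptor appendPixelBuffer:pixelBufferX withPresentationTime:presentTime];
if (!bValue) {
    NSLog(@"appendPixelBuffer failed at %.3f s", CMTimeGetSeconds(presentTime));
}
CVPixelBufferRelease(pixelBufferX); // balances the CVPixelBufferCreate() inside pixelBufferFromCGImage: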
Make sure you've studied the AV Foundation Programming Guide so you know how all the pieces fit together in media playback.
