GLKView with stencil buffer's glDrawElements results in black screen - iOS

I have an iOS app that uses GLKViewController and I set up the render buffer as follows:
inside
@interface RootViewController : GLKViewController<UIKeyInput>
- (void)viewDidLoad {
[super viewDidLoad];
_context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
GLKView* view = (GLKView*)self.view;
view.context = _context;
view.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
view.drawableStencilFormat = GLKViewDrawableStencilFormat8;
view.drawableMultisample = GLKViewDrawableMultisampleNone;
self.preferredFramesPerSecond = 60;
[EAGLContext setCurrentContext:_context];
}
However, when I call draw later:
glDrawElements(getGlPrimitiveType(ePrimType), numIndis, GL_UNSIGNED_SHORT, startIndis);
It results in a black screen, and upon Capture GPU Frame, this error shows up:
Your app rendered with STENCIL_TEST enabled into a framebuffer without an attached stencil buffer.
Is there anything that I missed?
I remember having the same problem before with depth testing, and I fixed it by setting view.drawableDepthFormat = GLKViewDrawableDepthFormat24; in viewDidLoad. I am not sure about stencil testing; Apple's documentation is either very minimal or very general with theories all around (i.e. pretty much useless).

I found the culprit:
I lost the original FBO ID already set up by GLKView when I did render-to-texture:
uint m_nFboId;
glGenFramebuffers(1, &m_nFboId);
glBindFramebuffer(GL_FRAMEBUFFER, m_nFboId);
then, when I try to reset back to the original FBO-ID:
GLint defaultFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &defaultFBO);
Here, defaultFBO is the value of m_nFboId generated before. So the solution is either to back up the original framebuffer binding before the operation, or to call [view bindDrawable] on the GLKView afterwards.
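For example, a minimal sketch of the back-it-up-and-restore approach (previousFBO is an illustrative name, not from the original code):
// Back up whatever framebuffer GLKView currently has bound
GLint previousFBO = 0;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &previousFBO);
// Render to texture using our own FBO
glBindFramebuffer(GL_FRAMEBUFFER, m_nFboId);
// ... draw into the texture ...
// Restore the original binding (or call [view bindDrawable] instead)
glBindFramebuffer(GL_FRAMEBUFFER, (GLuint)previousFBO);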

Related

Anyone know how to convert a little OpenGL code to metal?

I have been using some OpenGL code for my Objective-C camera capture session, which I barely understand. Now OpenGL is deprecated, and I have no idea how to convert this little bit of OpenGL code to Metal. If anyone knows both well enough to convert the code below, please help.
if (self.eaglContext != [EAGLContext currentContext]) {
[EAGLContext setCurrentContext:self.eaglContext];
}
glClearColor(0.5, 0.5, 0.5, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
there is a little more OpenGL I didn't see:
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
_videoPreviewView = [[GLKView alloc] initWithFrame:self.view.bounds context:_eaglContext];
_videoPreviewView.enableSetNeedsDisplay = NO;
_videoPreviewView.transform = CGAffineTransformMakeRotation(M_PI_2);
_videoPreviewView.frame = self.view.bounds;
[self.view addSubview:_videoPreviewView];
[self.view sendSubviewToBack:_videoPreviewView];
[_videoPreviewView bindDrawable];
_videoPreviewViewBounds = CGRectZero;
_videoPreviewViewBounds.size.width = _videoPreviewView.drawableWidth;
_videoPreviewViewBounds.size.height = _videoPreviewView.drawableHeight;
_ciContext = [CIContext contextWithEAGLContext:_eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
It's not a trivial conversion from OpenGL to Metal. The first step would be to replace the GLKView with an MTKView. You'll want to then create an id<MTKViewDelegate> to handle the actual drawing, resizing, etc. I recommend watching the Adopting Metal video from WWDC 2016. It shows how to get up and running with an MTKView.
To do the blending, you'll need to set the blending options on the MTLRenderPipelineColorAttachmentDescriptor that you use to create the render pipeline state. You'll need to set the blendingEnabled property to YES and set the rgbBlendOperation property to MTLBlendOperationAdd. Then you'll set the sourceRGBBlendFactor to MTLBlendFactorOne and the destinationRGBBlendFactor to MTLBlendFactorOneMinusSourceAlpha.
You can create a CIContext from an id<MTLDevice> via +[CIContext contextWithMTLDevice:options:].
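Putting the blending and CIContext pieces together, a rough Objective-C sketch might look like this. It is only a sketch under assumptions: it presumes an MTKView is already set up, and the "vertexShader"/"fragmentShader" names are placeholders for whatever functions are in your shader library.
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
id<MTLLibrary> library = [device newDefaultLibrary];
MTLRenderPipelineDescriptor *desc = [[MTLRenderPipelineDescriptor alloc] init];
desc.vertexFunction = [library newFunctionWithName:@"vertexShader"];     // placeholder name
desc.fragmentFunction = [library newFunctionWithName:@"fragmentShader"]; // placeholder name
desc.colorAttachments[0].pixelFormat = MTLPixelFormatBGRA8Unorm;
// Equivalent of glEnable(GL_BLEND) + glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)
desc.colorAttachments[0].blendingEnabled = YES;
desc.colorAttachments[0].rgbBlendOperation = MTLBlendOperationAdd;
desc.colorAttachments[0].alphaBlendOperation = MTLBlendOperationAdd;
desc.colorAttachments[0].sourceRGBBlendFactor = MTLBlendFactorOne;
desc.colorAttachments[0].sourceAlphaBlendFactor = MTLBlendFactorOne;
desc.colorAttachments[0].destinationRGBBlendFactor = MTLBlendFactorOneMinusSourceAlpha;
desc.colorAttachments[0].destinationAlphaBlendFactor = MTLBlendFactorOneMinusSourceAlpha;
NSError *error = nil;
id<MTLRenderPipelineState> pipelineState = [device newRenderPipelineStateWithDescriptor:desc error:&error];
// Core Image can render straight to Metal
CIContext *ciContext = [CIContext contextWithMTLDevice:device
                                               options:@{kCIContextWorkingColorSpace : [NSNull null]}];
The glClearColor/glClear part maps to the clear color and load action of the render pass; MTKView exposes a clearColor property and a currentRenderPassDescriptor with the clear already configured.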

Is it safe to call [EAGLContext presentRenderBuffer:] on a secondary thread?

I have (multiple) UIViews with layers of type CAEAGLLayer, and am able to call [EAGLContext presentRenderBuffer:] on renderbuffers attached to these layers, on a secondary thread, without any kind of graphical glitches.
I would have expected to see at least some tearing, since other UI with which these UIViews are composited is updated on the main thread.
Does CAEAGLLayer (I have kEAGLDrawablePropertyRetainedBacking set to NO) do some double-buffering behind the scenes?
I just want to understand why it is that this works...
Example:
BView is a UIView subclass that owns a framebuffer with renderbuffer storage assigned to its OpenGLES layer, in a shared EAGLContext:
@implementation BView
-(id) initWithFrame:(CGRect)frame context:(EAGLContext*)context
{
self = [super initWithFrame:frame];
// Configure layer
CAEAGLLayer* eaglLayer = (CAEAGLLayer*)self.layer;
eaglLayer.opaque = YES;
eaglLayer.drawableProperties = @{ kEAGLDrawablePropertyRetainedBacking : [NSNumber numberWithBool:NO], kEAGLDrawablePropertyColorFormat : kEAGLColorFormatSRGBA8 };
// Create framebuffer with renderbuffer attached to layer
[EAGLContext setCurrentContext:context];
glGenFramebuffers( 1, &FrameBuffer );
glBindFramebuffer( GL_FRAMEBUFFER, FrameBuffer );
glGenRenderbuffers( 1, &RenderBuffer );
glBindRenderbuffer( GL_RENDERBUFFER, RenderBuffer );
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(id<EAGLDrawable>)self.layer];
glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, RenderBuffer );
return self;
}
+(Class) layerClass
{
return [CAEAGLLayer class];
}
A UIViewController adds a BView instance on the main thread at init time:
BView* view = [[BView alloc] initWithFrame:(CGRect){ 0.0, 0.0, 75.0, 75.0 } context:Context];
[self.view addSubview:view];
On a secondary thread, render to the framebuffer in the BView and present it; in this case it's in a callback from a video AVCaptureDevice, called regularly:
-(void) captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection
{
[EAGLContext setCurrentContext:bPipe->Context.GlContext];
// Render into framebuffer ...
// Present renderbuffer
glBindRenderbuffer( GL_RENDERBUFFER, BViewsRenderBuffer );
[Context presentRenderbuffer:GL_RENDERBUFFER];
}
This used not to work: there were several issues with updating the view if the buffer was presented on anything but the main thread. It seems to have been working for some time now, but you implement it this way at your own risk. Later OS versions may break it again, and some older ones probably still have problems (not that you need to support very old OS versions anyway).
Apple has always been a bit closed about how things work internally, but we can guess quite a few things. Since iOS seems to be the only platform that gives you an FBO (framebuffer object) as your main render target, I would expect the actual main framebuffer to be inaccessible to developers, with your main FBO being copied to it when you present the renderbuffer. The last time I checked, presenting the renderbuffer blocks the calling thread and appears to be limited to the screen refresh rate (60 FPS in most cases), which implies there is still some locking mechanism. More testing would be needed, but I would expect there is a pool of buffers waiting to be copied to the main buffer, in which only one buffer with a given ID can be present at a time, otherwise the calling thread is blocked. This would mean the first call to present the renderbuffer is never blocked, but each subsequent call blocks if the previous buffer has not yet been copied.
If this is true then yes, some form of double buffering is required at some point, since you can immediately continue drawing into your buffer. Because the renderbuffer keeps the same ID across frames, it is probably not swapped (as far as I know), but it could be copied to another buffer (most likely a texture), which can be done on the fly at any time. In that scheme, when you first present the buffer it is copied to a texture, which is then locked; when the screen refreshes, the texture is consumed and unlocked. So if that texture is still locked, your present call blocks the thread; otherwise it continues smoothly. It is hard to call this double buffering, then: there are two buffers, but it still relies on a locking mechanism.
I hope this helps explain why it works. It is essentially the same procedure you would use when loading large data structures on a shared context running on a separate thread.
Still, most of this is just guessing, unfortunately.
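For reference, the shared-context pattern that last point alludes to looks roughly like this (a sketch only; mainContext and workerQueue are illustrative names):
// A worker context that shares resources with the main context
EAGLContext *workerContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2 sharegroup:mainContext.sharegroup];
dispatch_async(workerQueue, ^{
    [EAGLContext setCurrentContext:workerContext];
    // Upload textures / buffer data here; the resulting GL object names are
    // visible to mainContext because both contexts share the same sharegroup.
});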

overlaySKScene not rendered when using SCNRenderer

I'm using a SCNRenderer to render a scene into an existing OpenGL context (basically a modified version of the Vuforia ImageTarget sample) and the overlaySKScene to show annotations for the objects in the scene.
Since the update to iOS 9 the overlaySKScene is no longer rendered. None of the per frame actions (update:, didEvaluateActions, ...) are being called.
It worked with iOS 8 and the same SKScene still works with a SCNView in a different view controller in the same app.
Context setup:
self.context = [[EAGLContext alloc] initWithAPI:context.API sharegroup:scnViewContext.sharegroup];
OpenGL initialization (mostly copied from the Vuforia sample):
- (void)createFramebuffer
{
if (self.context) {
// Create default framebuffer object
glGenFramebuffers(1, &_defaultFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, self.defaultFramebuffer);
// Create colour renderbuffer and allocate backing store
glGenRenderbuffers(1, &_colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, self.colorRenderbuffer);
// Allocate the renderbuffer's storage (shared with the drawable object)
[self.context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer*)self.layer];
GLint framebufferWidth;
GLint framebufferHeight;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &framebufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &framebufferHeight);
// Create the depth render buffer and allocate storage
glGenRenderbuffers(1, &_depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, self.depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, framebufferWidth, framebufferHeight);
// Attach colour and depth render buffers to the frame buffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, self.colorRenderbuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, self.depthRenderbuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, self.stencilRenderbuffer);
// Leave the colour render buffer bound so future rendering operations will act on it
glBindRenderbuffer(GL_RENDERBUFFER, self.colorRenderbuffer);
}
}
- (void)setFramebuffer{
// The EAGLContext must be set for each thread that wishes to use it. Set
// it the first time this method is called (on the render thread)
if (self.context != [EAGLContext currentContext]) {
[EAGLContext setCurrentContext:self.context];
}
if (!self.defaultFramebuffer) {
// Perform on the main thread to ensure safe memory allocation for the
// shared buffer. Block until the operation is complete to prevent
// simultaneous access to the OpenGL context
[self performSelectorOnMainThread:@selector(createFramebuffer) withObject:self waitUntilDone:YES];
}
glBindFramebuffer(GL_FRAMEBUFFER, self.defaultFramebuffer);
}
- (BOOL)presentFramebuffer
{
// setFramebuffer must have been called before presentFramebuffer, therefore
// we know the context is valid and has been set for this (render) thread
// Bind the colour render buffer and present it
glBindRenderbuffer(GL_RENDERBUFFER, self.colorRenderbuffer);
return [self.context presentRenderbuffer:GL_RENDERBUFFER];
}
SCNRenderer setup:
self.skScene = [[MarkerOverlayScene alloc] initWithSize:CGSizeMake(2048, 2048)];
self.renderer = [SCNRenderer rendererWithContext:self.context options:nil];
self.renderer.autoenablesDefaultLighting = NO;
self.renderer.delegate = self;
self.renderer.overlaySKScene = self.skScene;
self.renderer.playing = YES;
if (self.sceneURL) {
self.renderer.scene = [SCNScene sceneWithURL:self.sceneURL options:nil error:nil];
[self.renderer prepareObjects:@[self.renderer.scene] withCompletionHandler:^(BOOL success) {
}];
}
Vuforia render callback:
- (void)renderFrameQCAR
{
[self setFramebuffer];
// Clear colour and depth buffers
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// Render video background and retrieve tracking state
QCAR::State state = QCAR::Renderer::getInstance().begin();
QCAR::Renderer::getInstance().drawVideoBackground();
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
if(QCAR::Renderer::getInstance().getVideoBackgroundConfig().mReflection == QCAR::VIDEO_BACKGROUND_REFLECTION_ON) {
glFrontFace(GL_CW); //Front camera
} else {
glFrontFace(GL_CCW); //Back camera
}
for (int i = 0; i < state.getNumTrackableResults(); ++i) {
// Get the trackable
const QCAR::TrackableResult* result = state.getTrackableResult(i);
QCAR::Matrix44F modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(result->getPose());
SCNMatrix4 matrix = [self convertARMatrix:modelViewMatrix];
matrix = SCNMatrix4Mult(SCNMatrix4MakeRotation(M_PI_2, 1, 0, 0), matrix);
matrix = SCNMatrix4Mult(SCNMatrix4MakeScale(10, 10, 10), matrix);
self.arTransform = matrix;
}
if (state.getNumTrackableResults() == 0) {
self.arTransform = SCNMatrix4Identity;
}
[self.renderer renderAtTime:CFAbsoluteTimeGetCurrent() - self.startTime];
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
QCAR::Renderer::getInstance().end();
[self presentFramebuffer];
}
Mess with the skScene settings. When I switched to iOS 9, no touches were getting through; I had to add skScene.userInteractionEnabled = NO, which was not needed in iOS 8. I guess the defaults have changed, or setting the scene as an overlay no longer changes the defaults.
Regardless, the issue is likely with the skScene settings.
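Applied to the setup above, that is a one-line addition right after the overlay scene is created (a guess at the relevant spot, since only part of the configuration is shown):
self.skScene = [[MarkerOverlayScene alloc] initWithSize:CGSizeMake(2048, 2048)];
self.skScene.userInteractionEnabled = NO; // not needed on iOS 8, required on iOS 9
self.renderer.overlaySKScene = self.skScene;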

AVCaptureSession with multiple previews

I have an AVCaptureSession running with an AVCaptureVideoPreviewLayer.
I can see the video so I know it's working.
However, I'd like to have a collection view and in each cell add a preview layer so that each cell shows a preview of the video.
If I try to pass the preview layer into the cell and add it as a subLayer then it removes the layer from the other cells so it only ever displays in one cell at a time.
Is there another (better) way of doing this?
I ran into the same problem of needing multiple live views displayed at the same time. The answer of using UIImage above was too slow for what I needed. Here are the two solutions I found:
1. CAReplicatorLayer
The first option is to use a CAReplicatorLayer to duplicate the layer automatically. As the docs say, it will automatically create "...a specified number of copies of its sublayers (the source layer), each copy potentially having geometric, temporal and color transformations applied to it."
This is super useful if there isn't a lot of interaction with the live previews besides simple geometric or color transformations (Think Photo Booth). I have most often seen the CAReplicatorLayer used as a way to create the 'reflection' effect.
Here is some sample code to replicate an AVCaptureVideoPreviewLayer:
Init AVCaptureVideoPreviewLayer
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[previewLayer setFrame:CGRectMake(0.0, 0.0, self.view.bounds.size.width, self.view.bounds.size.height / 4)];
Init CAReplicatorLayer and set properties
Note: This will replicate the live preview layer four times.
NSUInteger replicatorInstances = 4;
CAReplicatorLayer *replicatorLayer = [CAReplicatorLayer layer];
replicatorLayer.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height / replicatorInstances);
replicatorLayer.instanceCount = replicatorInstances;
replicatorLayer.instanceTransform = CATransform3DMakeTranslation(0.0, self.view.bounds.size.height / replicatorInstances, 0.0);
Add Layers
Note: From my experience you need to add the layer you want to replicate to the CAReplicatorLayer as a sublayer.
[replicatorLayer addSublayer:previewLayer];
[self.view.layer addSublayer:replicatorLayer];
Downsides
A downside to using CAReplicatorLayer is that it handles all placement of the layer replications. So it will apply any set transformations to each instance, and everything will be contained within itself. E.g. there would be no way to have a replication of an AVCaptureVideoPreviewLayer in two separate cells.
2. Manually Rendering SampleBuffer
This method, albeit a tad more complex, solves the above mentioned downside of CAReplicatorLayer. By manually rendering the live previews, you are able to render as many views as you want. Granted, performance might be affected.
Note: There might be other ways to render the SampleBuffer but I chose OpenGL because of its performance. Code was inspired and altered from CIFunHouse.
Here is how I implemented it:
2.1 Contexts and Session
Setup OpenGL and CoreImage Context
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
// Note: must be done after the all your GLKViews are properly set up
_ciContext = [CIContext contextWithEAGLContext:_eaglContext
options:@{kCIContextWorkingColorSpace : [NSNull null]}];
Dispatch Queue
This queue will be used for the session and delegate.
self.captureSessionQueue = dispatch_queue_create("capture_session_queue", NULL);
Init your AVSession & AVCaptureVideoDataOutput
Note: I have removed all device capability checks to make this more readable.
dispatch_async(self.captureSessionQueue, ^(void) {
NSError *error = nil;
// get the input device and also validate the settings
NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *_videoDevice = nil;
if (!_videoDevice) {
_videoDevice = [videoDevices objectAtIndex:0];
}
// obtain device input
AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:_videoDevice error:&error];
// obtain the preset and validate the preset
NSString *preset = AVCaptureSessionPresetMedium;
// CoreImage wants BGRA pixel format
NSDictionary *outputSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
// create the capture session
self.captureSession = [[AVCaptureSession alloc] init];
self.captureSession.sessionPreset = preset;
:
Note: The following code is the 'magic code'. It is where we create and add a DataOutput to the AVSession so we can intercept the camera frames using the delegate. This is the breakthrough I needed to figure out how to solve the problem.
:
// create and configure video data output
AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
videoDataOutput.videoSettings = outputSettings;
[videoDataOutput setSampleBufferDelegate:self queue:self.captureSessionQueue];
// begin configure capture session
[self.captureSession beginConfiguration];
// connect the video device input and video data and still image outputs
[self.captureSession addInput:videoDeviceInput];
[self.captureSession addOutput:videoDataOutput];
[self.captureSession commitConfiguration];
// then start everything
[self.captureSession startRunning];
});
2.2 OpenGL Views
We are using GLKView to render our live previews. So if you want 4 live previews, you need 4 GLKViews.
self.livePreviewView = [[GLKView alloc] initWithFrame:self.bounds context:self.eaglContext];
self.livePreviewView.enableSetNeedsDisplay = NO;
Because the native video image from the back camera is in UIDeviceOrientationLandscapeLeft (i.e. the home button is on the right), we need to apply a clockwise 90-degree transform so that we can draw the video preview as if we were in a landscape-oriented view. If you're using the front camera and want a mirrored preview (so that the user sees themselves as in a mirror), you need to apply an additional horizontal flip (by concatenating CGAffineTransformMakeScale(-1.0, 1.0) with the rotation transform).
self.livePreviewView.transform = CGAffineTransformMakeRotation(M_PI_2);
self.livePreviewView.frame = self.bounds;
[self addSubview: self.livePreviewView];
Bind the frame buffer to get the frame buffer width and height. The bounds used by CIContext when drawing to a GLKView are in pixels (not points), hence the need to read from the frame buffer's width and height.
[self.livePreviewView bindDrawable];
In addition, since we will be accessing the bounds in another queue (_captureSessionQueue), we want to obtain this piece of information so that we won't be accessing _videoPreviewView's properties from another thread/queue.
_videoPreviewViewBounds = CGRectZero;
_videoPreviewViewBounds.size.width = _videoPreviewView.drawableWidth;
_videoPreviewViewBounds.size.height = _videoPreviewView.drawableHeight;
dispatch_async(dispatch_get_main_queue(), ^(void) {
CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI_2);
// *Horizontally flip here, if using front camera.*
self.livePreviewView.transform = transform;
self.livePreviewView.frame = self.bounds;
});
Note: If you are using the front camera you can horizontally flip the live preview like this:
transform = CGAffineTransformConcat(transform, CGAffineTransformMakeScale(-1.0, 1.0));
2.3 Delegate Implementation
After we have the Contexts, Sessions, and GLKViews set up we can now render to our views from the AVCaptureVideoDataOutputSampleBufferDelegate method captureOutput:didOutputSampleBuffer:fromConnection:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);
// update the video dimensions information
self.currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDesc);
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:nil];
CGRect sourceExtent = sourceImage.extent;
CGFloat sourceAspect = sourceExtent.size.width / sourceExtent.size.height;
You will need a reference to each GLKView and its videoPreviewViewBounds. For simplicity, I will assume they are both contained in a UICollectionViewCell. You will need to alter this for your own use case.
for(CustomLivePreviewCell *cell in self.livePreviewCells) {
CGFloat previewAspect = cell.videoPreviewViewBounds.size.width / cell.videoPreviewViewBounds.size.height;
// To maintain the aspect ratio of the screen size, we clip the video image
CGRect drawRect = sourceExtent;
if (sourceAspect > previewAspect) {
// use full height of the video image, and center crop the width
drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0;
drawRect.size.width = drawRect.size.height * previewAspect;
} else {
// use full width of the video image, and center crop the height
drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0;
drawRect.size.height = drawRect.size.width / previewAspect;
}
[cell.livePreviewView bindDrawable];
if (_eaglContext != [EAGLContext currentContext]) {
[EAGLContext setCurrentContext:_eaglContext];
}
// clear eagl view to grey
glClearColor(0.5, 0.5, 0.5, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
// set the blend mode to "source over" so that CI will use that
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
if (sourceImage) {
[_ciContext drawImage:sourceImage inRect:cell.videoPreviewViewBounds fromRect:drawRect];
}
[cell.livePreviewView display];
}
}
This solution lets you have as many live previews as you want using OpenGL to render the buffer of images received from the AVCaptureVideoDataOutputSampleBufferDelegate.
3. Sample Code
Here is a github project I threw together with both solutions: https://github.com/JohnnySlagle/Multiple-Camera-Feeds
Implement the AVCaptureSession delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
Using this you can get the sample buffer output of every video frame. From that buffer output you can create an image using the method below.
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0 orientation:UIImageOrientationRight];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
So you can add several image views to your view and set their images inside the delegate method that I mentioned before:
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
imageViewOne.image = image;
imageViewTwo.image = image;
Simply set the contents of the preview layer to another CALayer:
CGImageRef cgImage = (__bridge CGImageRef)self.previewLayer.contents;
self.duplicateLayer.contents = (__bridge id)cgImage;
You can do this with the contents of any Metal or OpenGL layer. There was no increase in memory usage or CPU load on my end, either. You're not duplicating anything but a tiny pointer. That's not so with these other "solutions."
I have a sample project that you can download that displays 20 preview layers at the same time from a single camera feed. Each layer has a different effect applied to it.
You can watch a video of the app running, as well as download the source code at:
https://demonicactivity.blogspot.com/2017/05/developer-iphone-video-camera-wall.html?m=1
Working in Swift 5 on iOS 13, I implemented a somewhat simpler version of the answer by @Ushan87. For testing purposes, I dragged a new, small UIImageView on top of my existing AVCaptureVideoPreviewLayer. In the ViewController for that window, I added an IBOutlet for the new view and a variable to describe the correct orientation for the camera being used:
@IBOutlet var testView: UIImageView!
private var extOrientation: UIImage.Orientation = .up
I then implemented the AVCaptureVideoDataOutputSampleBufferDelegate as follows:
// MARK: - AVCaptureVideoDataOutputSampleBufferDelegate
extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
let imageBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
let ciimage : CIImage = CIImage(cvPixelBuffer: imageBuffer)
let image : UIImage = self.convert(cmage: ciimage)
DispatchQueue.main.sync(execute: {() -> Void in
testView.image = image
})
}
// Convert CIImage to CGImage
func convert(cmage:CIImage) -> UIImage
{
let context:CIContext = CIContext.init(options: nil)
let cgImage:CGImage = context.createCGImage(cmage, from: cmage.extent)!
let image:UIImage = UIImage.init(cgImage: cgImage, scale: 1.0, orientation: extOrientation)
return image
}
}
For my purposes, the performance was fine. I did not notice any lagginess in the new view.
You can't have multiple previews. There is only one output stream, as Apple's AVFoundation documentation says. I've tried many ways, but you just can't.

Displaying CIImage after using CIFilter in GLKView

I always get an error message when trying to present a CIImage filtered by a CIFilter inside of a GLKView. The error is "CoreImage: EAGLContext framebuffer or renderbuffer incorrectly configured!
Invalid shader program, probably due to exceeding hardware resourcesCould not load the kernel!"
The following code I use to display the Image :
- (void)viewDidLoad
{
[super viewDidLoad];
EAcontext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
if (!EAcontext) {
NSLog(#"Failed to create ES context");
}
GLKView *view = (GLKView *)self.view;
view.context = self.EAcontext;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
glGenRenderbuffers(1, &_renderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _renderBuffer);
glGenRenderbuffers(1, &_colorBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _colorBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8_OES, 768, 1024);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _colorBuffer);
coreImageContext = [CIContext contextWithEAGLContext:self.EAcontext];
[self updateView];
}
- (void)updateView
{
UIImage *myimage = [UIImage imageNamed:@"Moskau1.jpg"];
CIImage *outputImage = [[CIImage alloc] initWithImage:myimage];
[coreImageContext drawImage:outputImage inRect:self.view.bounds fromRect:[outputImage extent]];
[EAcontext presentRenderbuffer:GL_RENDERBUFFER_OES];
}
The view controller is a GLKViewController; EAcontext is an EAGLContext and coreImageContext is a CIContext.
What could be causing this?
The "Invalid shader program, probably due to exceeding hardware resources" and "Could not load the kernel!" are actually distinct error, but the former seems to lack a linebreak. I got this problem yesterday, and it seems there are a few sources of this problem:
Check the frame buffer status to ensure it is complete - glCheckFramebufferStatus(GL_FRAMEBUFFER) should return GL_FRAMEBUFFER_COMPLETE - see the OpenGL ES programming guide for an example.
In my case, I had added a depth buffer to the framebuffer used by Core Image. Core Image evidently didn't like this - once I removed the depth renderbuffer, both error messages went away and Core Image did its thing.
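For reference, the completeness check mentioned above is only a couple of lines, placed right after the framebuffer attachments are set up (a sketch):
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"Framebuffer incomplete: 0x%04x", status);
}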
I experienced the same problem and deleting of the depth buffer removed the error.
