Rendering Videos in iOS using Metal / OpenGL ES

How can I render videos using Metal or OpenGL ES?
I mean decoding and displaying the frames myself.
I am very new to Metal and OpenGL ES and don't know where to begin.

What you're asking isn't trivial for someone just getting started with this, so you might want to break this down into smaller parts. That said, I have done this and can describe the general process.
First, you'll start with an AVAssetReader and set up AVAssetReaderOutputs for the audio and video tracks. From those, you iterate through the CMSampleBufferRefs. For each video frame, you'll extract a CVImageBufferRef.
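As a rough sketch of that reading loop (assuming a local movieURL, a biplanar YUV output format, and omitting error handling and the audio track), it might look like this:
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:movieURL options:nil];
NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
NSDictionary *readerSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)};
AVAssetReaderTrackOutput *videoTrackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                                                                        outputSettings:readerSettings];
[reader addOutput:videoTrackOutput];
[reader startReading];

CMSampleBufferRef sampleBuffer = NULL;
while ((sampleBuffer = [videoTrackOutput copyNextSampleBuffer])) {
    // Each video sample wraps a CVImageBufferRef (a CVPixelBufferRef) holding the decoded frame.
    CVImageBufferRef frame = CMSampleBufferGetImageBuffer(sampleBuffer);
    // ... upload `frame` to a texture here ...
    CFRelease(sampleBuffer);
}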
Uploading the video frames to OpenGL textures can go a couple of different ways. The most performant path is to work with YUV data and upload the Y and UV planes via iOS's texture caches. You'll then use a YUV -> RGB shader to combine these planes and convert to the RGB colorspace for processing or display.
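On the OpenGL ES side, the plane uploads are roughly the following sketch (assuming a 420YpCbCr biplanar pixel buffer and an already-created CVOpenGLESTextureCacheRef named textureCache; the two resulting textures are what the YUV -> RGB shader samples). Remember to CFRelease both texture refs and periodically flush the cache once the frame has been rendered.
CVOpenGLESTextureRef lumaTexture = NULL;
CVOpenGLESTextureRef chromaTexture = NULL;
size_t width  = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);

// Y plane: full resolution, one channel.
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL,
                                             GL_TEXTURE_2D, GL_LUMINANCE,
                                             (GLsizei)width, (GLsizei)height,
                                             GL_LUMINANCE, GL_UNSIGNED_BYTE,
                                             0, // plane index
                                             &lumaTexture);

// CbCr plane: half resolution, two interleaved channels.
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL,
                                             GL_TEXTURE_2D, GL_LUMINANCE_ALPHA,
                                             (GLsizei)width / 2, (GLsizei)height / 2,
                                             GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE,
                                             1, // plane index
                                             &chromaTexture);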
You could also work with BGRA data from the movie file and upload that directly into a texture, but there's more overhead to that process. It is simpler, though, and avoids the need for a manual color conversion shader.
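If you take that simpler BGRA route without the texture cache, the upload is roughly this sketch (assuming a GL_TEXTURE_2D is already bound and the buffer has no row padding; check CVPixelBufferGetBytesPerRow if it does):
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
// GL_BGRA as the source format relies on iOS's BGRA texture extension.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
             (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
             0, GL_BGRA, GL_UNSIGNED_BYTE,
             CVPixelBufferGetBaseAddress(pixelBuffer));
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);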
After that, you'll take your texture and render it to a quad using a passthrough vertex and fragment shader, or you could do shader-based processing on the video.
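The passthrough draw itself is just a textured full-screen quad, roughly like this sketch (positionAttrib and texCoordAttrib stand in for whatever attribute locations your passthrough shader uses, and you may need to flip the V coordinates depending on orientation):
static const GLfloat quadVertices[] = {
    //  x      y     u     v
    -1.0f, -1.0f, 0.0f, 0.0f,
     1.0f, -1.0f, 1.0f, 0.0f,
    -1.0f,  1.0f, 0.0f, 1.0f,
     1.0f,  1.0f, 1.0f, 1.0f,
};

glEnableVertexAttribArray(positionAttrib);
glEnableVertexAttribArray(texCoordAttrib);
// Interleaved layout: 2 position floats followed by 2 texture-coordinate floats per vertex.
glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quadVertices);
glVertexAttribPointer(texCoordAttrib, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quadVertices + 2);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);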
Similar processes and pathways exist for uploading to Metal, and the starting point is the same.
Now, if you don't want to implement all that by hand, I've written an open source framework called GPUImage that encapsulates this. It comes in Objective-C and Swift varieties. If you don't want to pull the entire framework, focus on the GPUImageMovie (for the former) or the MovieInput (for the latter) classes. They contain all the code needed to do this, so you can extract the implementations there and use them directly.

Here's the simplest, fastest way to get started:
@import UIKit;
@import AVFoundation;
@import CoreMedia;
#import <MetalKit/MetalKit.h>
#import <Metal/Metal.h>
#import <MetalPerformanceShaders/MetalPerformanceShaders.h>

@interface ViewController : UIViewController <MTKViewDelegate, AVCaptureVideoDataOutputSampleBufferDelegate>

@property (strong, nonatomic) AVCaptureSession *avSession;

@end
#import "ViewController.h"

@interface ViewController () {
    MTKView *_metalView;

    id<MTLDevice> _device;
    id<MTLCommandQueue> _commandQueue;
    id<MTLTexture> _texture;

    CVMetalTextureCacheRef _textureCache;
}

@property (strong, nonatomic) AVCaptureDevice *videoDevice;
@property (nonatomic) dispatch_queue_t sessionQueue;

@end

@implementation ViewController

- (void)viewDidLoad {
    NSLog(@"%s", __PRETTY_FUNCTION__);
    [super viewDidLoad];

    _device = MTLCreateSystemDefaultDevice();
    // Create the command queue once here, not once per frame.
    _commandQueue = [_device newCommandQueue];

    _metalView = [[MTKView alloc] initWithFrame:self.view.bounds];
    [_metalView setContentMode:UIViewContentModeScaleAspectFit];
    _metalView.device = _device;
    _metalView.delegate = self;
    _metalView.clearColor = MTLClearColorMake(1, 1, 1, 1);
    _metalView.colorPixelFormat = MTLPixelFormatBGRA8Unorm;
    _metalView.framebufferOnly = NO;
    _metalView.autoResizeDrawable = NO;

    CVMetalTextureCacheCreate(NULL, NULL, _device, NULL, &_textureCache);

    [self.view addSubview:_metalView];

    self.sessionQueue = dispatch_queue_create("session queue", DISPATCH_QUEUE_SERIAL);

    if ([self setupCamera]) {
        [_avSession startRunning];
    }
}
- (BOOL)setupCamera {
    NSLog(@"%s", __PRETTY_FUNCTION__);
    @try {
        NSError *error;
        _avSession = [[AVCaptureSession alloc] init];
        [_avSession beginConfiguration];
        [_avSession setSessionPreset:AVCaptureSessionPreset640x480];

        // Get the default video device.
        self.videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if (self.videoDevice == nil) return FALSE;

        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:self.videoDevice error:&error];
        [_avSession addInput:input];

        dispatch_queue_t sampleBufferQueue = dispatch_queue_create("CameraMulticaster", DISPATCH_QUEUE_SERIAL);

        AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
        [dataOutput setAlwaysDiscardsLateVideoFrames:YES];
        [dataOutput setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)}];
        [dataOutput setSampleBufferDelegate:self queue:sampleBufferQueue];

        [_avSession addOutput:dataOutput];
        [_avSession commitConfiguration];
        return TRUE;
    } @catch (NSException *exception) {
        NSLog(@"%s - %@", __PRETTY_FUNCTION__, exception.description);
        return FALSE;
    }
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    CVMetalTextureRef texture = NULL;
    CVReturn status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache, pixelBuffer, NULL, MTLPixelFormatBGRA8Unorm, width, height, 0, &texture);
    if (status == kCVReturnSuccess)
    {
        _metalView.drawableSize = CGSizeMake(width, height);
        _texture = CVMetalTextureGetTexture(texture);
        CFRelease(texture);
    }
}
- (void)drawInMTKView:(MTKView *)view {
    if (_texture) {
        // Create a command buffer for this frame.
        id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
        id<MTLTexture> drawingTexture = view.currentDrawable.texture;

        // Set up and encode the filter.
        MPSImageGaussianBlur *filter = [[MPSImageGaussianBlur alloc] initWithDevice:_device sigma:5];
        [filter encodeToCommandBuffer:commandBuffer sourceTexture:_texture destinationTexture:drawingTexture];

        // Present and commit the drawing.
        [commandBuffer presentDrawable:view.currentDrawable];
        [commandBuffer commit];

        _texture = nil;
    }
}

- (void)mtkView:(MTKView *)view drawableSizeWillChange:(CGSize)size {
}

@end

Related

AVCaptureSession captureOutput callback drops frames and throws an OutOfBuffers Error

I am attempting to take an AVCaptureSession from the back camera and transfer it to a texture, mapped onto a quad.
View the complete source here.
Regardless of which preset I use, the didDropSampleBuffer callback reports 'OutOfBuffers'. I have attempted to copy the sampleBuffer passed to didOutputSampleBuffer, but perhaps my implementation has an issue.
I have also tried using a serial queue, since I know the capture session's startRunning is a blocking call and shouldn't be on the main queue. However, using the main queue is the only way I've been able to see any frames.
Here is my AV setup:
- (void)setupAV
{
    _sessionQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL);

    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, self.context, NULL, &_videoTextureCache);
    if (err) {
        NSLog(@"Couldn't create video cache.");
        return;
    }

    self.captureSession = [[AVCaptureSession alloc] init];
    if (!self.captureSession) {
        return;
    }

    [self.captureSession beginConfiguration];
    self.captureSession.sessionPreset = AVCaptureSessionPresetHigh;

    AVCaptureDevicePosition devicePosition = AVCaptureDevicePositionBack;
    AVCaptureDeviceDiscoverySession *deviceDiscoverySession = [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera] mediaType:AVMediaTypeVideo position:devicePosition];
    for (AVCaptureDevice *device in deviceDiscoverySession.devices) {
        if (device.position == devicePosition) {
            self.captureDevice = device;
            if (self.captureDevice != nil) {
                break;
            }
        }
    }

    NSError *captureDeviceError = nil;
    AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:self.captureDevice error:&captureDeviceError];
    if (captureDeviceError) {
        NSLog(@"Couldn't configure device input.");
        return;
    }

    if (![self.captureSession canAddInput:input]) {
        NSLog(@"Couldn't add video input.");
        [self.captureSession commitConfiguration];
        return;
    }

    [self.captureSession addInput:input];

    self.videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    if (!self.videoOutput) {
        NSLog(@"Error creating video output.");
        [self.captureSession commitConfiguration];
        return;
    }

    self.videoOutput.alwaysDiscardsLateVideoFrames = YES;

    NSDictionary *settings = [[NSDictionary alloc] initWithObjectsAndKeys:
                              [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey, nil];
    self.videoOutput.videoSettings = settings;

    [self.videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    if ([self.captureSession canAddOutput:self.videoOutput]) {
        [self.captureSession addOutput:self.videoOutput];
    } else {
        NSLog(@"Couldn't add video output.");
        [self.captureSession commitConfiguration];
        return;
    }

    if (self.captureSession.isRunning) {
        NSLog(@"Session is already running.");
        [self.captureSession commitConfiguration];
        return;
    }

//    NSError *configLockError;
//    int frameRate = 24;
//    [self.captureDevice lockForConfiguration:&configLockError];
//    self.captureDevice.activeVideoMinFrameDuration = CMTimeMake(1, frameRate);
//    self.captureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, frameRate);
//    [self.captureDevice unlockForConfiguration];
//
//    if (configLockError) {
//        NSLog(@"Error locking for configuration. %@", configLockError);
//    }

    [self.captureSession commitConfiguration];
}
And here is my captureOutput callback:
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
//    if (_sampleBuffer) {
//        CFRelease(_sampleBuffer);
//        _sampleBuffer = nil;
//    }
//
//    OSStatus status = CMSampleBufferCreateCopy(kCFAllocatorDefault, sampleBuffer, &_sampleBuffer);
//    if (noErr != status) {
//        _sampleBuffer = nil;
//    }
//
//    if (!_sampleBuffer) {
//        return;
//    }

    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    if (!_videoTextureCache) {
        NSLog(@"No video texture cache");
        return;
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    _rgbaTexture = nil;

    // Periodic texture cache flush every frame
    CVOpenGLESTextureCacheFlush(_videoTextureCache, 0);

    // CVOpenGLESTextureCacheCreateTextureFromImage will create a GLES texture
    // optimally from a CVImageBufferRef.
    glActiveTexture(GL_TEXTURE0);
    CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                                _videoTextureCache,
                                                                pixelBuffer,
                                                                NULL,
                                                                GL_TEXTURE_2D,
                                                                GL_RGBA,
                                                                (GLsizei)width,
                                                                (GLsizei)height,
                                                                GL_BGRA,
                                                                GL_UNSIGNED_BYTE,
                                                                0,
                                                                &_rgbaTexture);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    if (err) {
        NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
    }

    if (_rgbaTexture) {
        glBindTexture(CVOpenGLESTextureGetTarget(_rgbaTexture), CVOpenGLESTextureGetName(_rgbaTexture));
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    }
}
For completeness, here are the ivar and property declarations:
@interface AVViewController () <AVCaptureVideoDataOutputSampleBufferDelegate> {
    CVOpenGLESTextureRef _rgbaTexture;
    CVOpenGLESTextureCacheRef _videoTextureCache;

    dispatch_queue_t _sessionQueue;

    GLuint _program;
    GLuint _vertexArray;
    GLuint _vertexBuffer;

    CMSampleBufferRef _sampleBuffer;
}

@property (nonatomic, strong) EAGLContext *context;
@property (nonatomic, strong) AVCaptureSession *captureSession;
@property (nonatomic, strong) AVCaptureDevice *captureDevice;
@property (nonatomic, strong) AVCaptureVideoDataOutput *videoOutput;

@property (readwrite) GLint vertexAttrib;
@property (readwrite) GLint textureAttrib;
@property (readwrite) GLint videoFrameUniform;

@end
I have searched and searched and cannot find a solution to this. Any help would be greatly appreciated.

Camera view does not appear in a Swift/Objective-C++ (OpenCV) project - iOS 10.3, Xcode 8

I would like to link OpenCV with Swift/Objective-C++ so I can develop applications for iOS. I found that CocoaPods works reasonably well with the OpenCV pods, so I used them as a starting point and successfully tried some image-stitching examples. However, when I try to capture images from the camera, I cannot see any output on the display. The code runs and loops through the captureOutput function, but the camera image is never displayed; it seems the capture only runs in the background.
Objective-C++ code:
@interface VideoSource () <AVCaptureVideoDataOutputSampleBufferDelegate>

@property (strong, nonatomic) AVCaptureVideoPreviewLayer *previewLayer;
@property (strong, nonatomic) AVCaptureSession *captureSession;

@end

@implementation VideoSource

- (void)setTargetView:(UIView *)targetView {
    if (self.previewLayer == nil) {
        return;
    }

    self.previewLayer.contentsGravity = kCAGravityResizeAspectFill;
    self.previewLayer.frame = targetView.bounds;
    self.previewLayer.affineTransform = CGAffineTransformMakeRotation(M_PI / 2);
    [targetView.layer addSublayer:self.previewLayer];

    std::cout << "VideoSource setTargetView ... done " << std::endl;
}

- (instancetype)init
{
    self = [super init];
    if (self) {
        _captureSession = [[AVCaptureSession alloc] init];
        _captureSession.sessionPreset = AVCaptureSessionPreset640x480;

        AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

        NSError *error = nil;
        AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:device error:&error];
        [_captureSession addInput:input];

        AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
        output.videoSettings = @{(NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
        output.alwaysDiscardsLateVideoFrames = YES;
        [_captureSession addOutput:output];

        dispatch_queue_t queue = dispatch_queue_create("VideoQueue", DISPATCH_QUEUE_SERIAL);
        [output setSampleBufferDelegate:self queue:queue];

        _previewLayer = [AVCaptureVideoPreviewLayer layer];

        std::cout << "VideoSource init ... done " << std::endl;
    }
    return self;
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    int width = (int)CVPixelBufferGetWidth(imageBuffer);
    int height = (int)CVPixelBufferGetHeight(imageBuffer);
    int bytesPerRow = (int)CVPixelBufferGetBytesPerRow(imageBuffer);

    Mat mat = Mat(height, width, CV_8UC4, base);

    // Processing here
    [self.delegate processFrame:mat];

    CGImageRef imageRef = [self CGImageFromCVMat:mat];
    dispatch_sync(dispatch_get_main_queue(), ^{
        self.previewLayer.contents = (__bridge id)imageRef;
    });
    CGImageRelease(imageRef);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    std::cout << "VideoSource captureOutput ... done " << std::endl;
}

- (void)start {
    [self.captureSession startRunning];
    std::cout << "VideoSource start ... done " << std::endl;
}
- (CGImageRef)CGImageFromCVMat:(Mat)cvMat {
    if (cvMat.elemSize() == 4) {
        cv::cvtColor(cvMat, cvMat, COLOR_BGRA2RGBA);
    }
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                     // width
                                        cvMat.rows,                                     // height
                                        8,                                              // bits per component
                                        8 * cvMat.elemSize(),                           // bits per pixel
                                        cvMat.step[0],                                  // bytes per row
                                        colorSpace,                                     // color space
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault,  // bitmap info
                                        provider,                                       // CGDataProviderRef
                                        NULL,                                           // decode
                                        false,                                          // should interpolate
                                        kCGRenderingIntentDefault);                     // intent

    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    //std::cout << "VideoSource CGImageFromCVMat ... done " << std::endl;

    return imageRef;
}

@end
The Swift side:
@IBOutlet var spinner: UIActivityIndicatorView!
@IBOutlet weak var previewView: UIView!
let wrapper = Wrapper()
and then, in the calling function:
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    self.view.backgroundColor = UIColor.darkGray
    self.wrapper.setTargetView(self.previewView)
    self.wrapper.start()
}
I resolved this problem. The solution was simply to connect the UI (main.storyboard) to ViewController.swift by dragging the specific UI components.
Both approaches work:
The source code posted above, adapted from https://github.com/akira108/MinimumOpenCVLiveCamera. This requires connecting the UIView in main.storyboard to the previewView (UIView) in ViewController.swift (just drag and drop to create the connection).
Using the CvVideoCameraDelegate class in the Swift view controller (Video processing with OpenCV in IOS Swift project). Here, I inserted a UIImage object in main.storyboard and connected it to previewImage in the view controller. Because this example requires using an OpenCV-specific header within Swift (cap_ios.h), I only tested it with OpenCV 2.4.

How to output a CIFilter to a Camera view?

I'm just starting out in Objective-C and I'm trying to create a simple app that shows the camera view with a blur effect applied. I got the camera output working with the AVFoundation framework. Now I'm trying to hook up the Core Image framework, but I don't know how: Apple's documentation is confusing to me, and searching for guides and tutorials online turns up nothing. Thanks in advance for the help.
#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>

@interface ViewController ()

@property (strong, nonatomic) CIContext *context;

@end

@implementation ViewController

AVCaptureSession *session;
AVCaptureStillImageOutput *stillImageOutput;

- (CIContext *)context
{
    if (!_context)
    {
        _context = [CIContext contextWithOptions:nil];
    }
    return _context;
}

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
}

- (void)viewWillAppear:(BOOL)animated {
    session = [[AVCaptureSession alloc] init];
    [session setSessionPreset:AVCaptureSessionPresetPhoto];

    AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error;
    AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];

    if ([session canAddInput:deviceInput]) {
        [session addInput:deviceInput];
    }

    AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    [previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];

    CALayer *rootLayer = [[self view] layer];
    [rootLayer setMasksToBounds:YES];

    CGRect frame = self.imageView.frame;
    [previewLayer setFrame:frame];
    [previewLayer.connection setVideoOrientation:AVCaptureVideoOrientationLandscapeRight];
    [rootLayer insertSublayer:previewLayer atIndex:0];

    stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [stillImageOutput setOutputSettings:outputSettings];
    [session addOutput:stillImageOutput];

    [session startRunning];
}

@end
Here's something to get you started. This is an updated version of the code from the following link.
https://gist.github.com/eladb/9662102
The trick is to use the AVCaptureVideoDataOutputSampleBufferDelegate.
With this delegate, you can use imageWithCVPixelBuffer to construct a CIImage from your camera buffer.
Right now though I'm trying to figure out how to reduce lag. I'll update asap.
Update: Latency is now minimal, and on some effects unnoticeable. Unfortunately, it seems that blur is one of the slowest. You may want to look into vImage.
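If you do go the vImage route, a rough sketch of a box blur on a BGRA frame might look like the following (the helper name and kernel size here are mine, the kernel dimensions must be odd, and the caller frees the returned buffer's data):
#import <Accelerate/Accelerate.h>

static vImage_Buffer BlurredCopyOfPixelBuffer(CVPixelBufferRef pixelBuffer, uint32_t kernelSize)
{
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    vImage_Buffer src = {
        .data     = CVPixelBufferGetBaseAddress(pixelBuffer),
        .height   = CVPixelBufferGetHeight(pixelBuffer),
        .width    = CVPixelBufferGetWidth(pixelBuffer),
        .rowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer)
    };

    // Allocate a destination buffer of the same size (32 bits per pixel).
    vImage_Buffer dest;
    vImageBuffer_Init(&dest, src.height, src.width, 32, kvImageNoFlags);

    // A single box convolve is a rough blur; repeating it a few times gets
    // closer to a Gaussian.
    vImageBoxConvolve_ARGB8888(&src, &dest, NULL, 0, 0, kernelSize, kernelSize, NULL, kvImageEdgeExtend);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    return dest; // caller is responsible for free(dest.data)
}
The full capture-and-filter example follows: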
#import "ViewController.h"
#import <CoreImage/CoreImage.h>
#import <AVFoundation/AVFoundation.h>

@interface ViewController () <AVCaptureVideoDataOutputSampleBufferDelegate>

@property (strong, nonatomic) CIContext *coreImageContext;
@property (strong, nonatomic) AVCaptureSession *cameraSession;
@property (strong, nonatomic) AVCaptureVideoDataOutput *videoOutput;
@property (strong, nonatomic) UIView *blurCameraView;
@property (strong, nonatomic) CIFilter *filter;
@property BOOL cameraOpen;

@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];

    self.blurCameraView = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
    [self.view addSubview:self.blurCameraView];

    // Set up the filter
    self.filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [self.filter setDefaults];
    [self.filter setValue:@(3.0f) forKey:@"inputRadius"];

    [self setupCamera];
    [self openCamera];
    // Do any additional setup after loading the view, typically from a nib.
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

- (void)setupCamera
{
    self.coreImageContext = [CIContext contextWithOptions:@{kCIContextUseSoftwareRenderer : @(YES)}];

    // session
    self.cameraSession = [[AVCaptureSession alloc] init];
    [self.cameraSession setSessionPreset:AVCaptureSessionPresetLow];
    [self.cameraSession commitConfiguration];

    // input
    AVCaptureDevice *shootingCamera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *shootingDevice = [AVCaptureDeviceInput deviceInputWithDevice:shootingCamera error:NULL];
    if ([self.cameraSession canAddInput:shootingDevice]) {
        [self.cameraSession addInput:shootingDevice];
    }

    // video output
    self.videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    self.videoOutput.alwaysDiscardsLateVideoFrames = YES;
    [self.videoOutput setSampleBufferDelegate:self queue:dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)];
    if ([self.cameraSession canAddOutput:self.videoOutput]) {
        [self.cameraSession addOutput:self.videoOutput];
    }

    if (self.videoOutput.connections.count > 0) {
        AVCaptureConnection *connection = self.videoOutput.connections[0];
        connection.videoOrientation = AVCaptureVideoOrientationPortrait;
    }

    self.cameraOpen = NO;
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Turn the buffer into an image we can manipulate
    CIImage *result = [CIImage imageWithCVPixelBuffer:imageBuffer];

    // Filter
    [self.filter setValue:result forKey:@"inputImage"];

    // Render the image
    CGImageRef blurredImage = [self.coreImageContext createCGImage:self.filter.outputImage fromRect:result.extent];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.blurCameraView.layer.contents = (__bridge id)blurredImage;
        CGImageRelease(blurredImage);
    });
}

- (void)openCamera {
    if (self.cameraOpen) {
        return;
    }

    self.blurCameraView.alpha = 0.0f;
    [self.cameraSession startRunning];
    [self.view layoutIfNeeded];

    [UIView animateWithDuration:3.0f animations:^{
        self.blurCameraView.alpha = 1.0f;
    }];

    self.cameraOpen = YES;
}

@end

How to capture frame-by-frame images from iPhone video recording in real time

I am trying to measure the saturation of a selected color in real time.
I am following this guide from Apple. I updated the code to work with ARC, and of course made my view controller an AVCaptureVideoDataOutputSampleBufferDelegate, but I don't know how to actually start capturing the data, as in starting the camera up to get some actual input.
Here is my code:
#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>

@interface ViewController ()

@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;

@end

@implementation ViewController

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib
    [self setupCaptureSession];
}

- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];

    // Specify the pixel format
    output.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                       forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // Start the session running to start the flow of data
    [self startCapturingWithSession:session];

    // Assign session to an ivar.
    [self setSession:session];
}

- (void)startCapturingWithSession:(AVCaptureSession *)captureSession
{
    //----- DISPLAY THE PREVIEW LAYER -----
    // Display it full screen under our view controller's existing controls
    NSLog(@"Display the preview layer");
    CGRect layerRect = [[[self view] layer] bounds];
    [self.previewLayer setBounds:layerRect];
    [self.previewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect), CGRectGetMidY(layerRect))];

    //[[[self view] layer] addSublayer:[[self CaptureManager] self.previewLayer]];
    // We use this instead so it goes on a layer behind our UI controls (avoids us
    // having to manually bring each control to the front):
    UIView *CameraView = [[UIView alloc] init];
    [[self view] addSubview:CameraView];
    [self.view sendSubviewToBack:CameraView];
    [[CameraView layer] addSublayer:self.previewLayer];

    //----- START THE CAPTURE SESSION RUNNING -----
    [captureSession startRunning];
}

// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
}

// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address and the number of bytes per row for the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}

@end
This did it for me; it was all about setting up the video preview:
#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>

@interface ViewController ()

@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;

@end

@implementation ViewController

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib
    [self setupCaptureSession];
}

- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];

    // Specify the pixel format
    output.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                       forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // Start the session running to start the flow of data
    [self startCapturingWithSession:session];

    // Assign session to an ivar.
    [self setSession:session];
}

- (void)startCapturingWithSession:(AVCaptureSession *)captureSession
{
    NSLog(@"Adding video preview layer");
    [self setPreviewLayer:[[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession]];
    [self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];

    //----- DISPLAY THE PREVIEW LAYER -----
    // Display it full screen under our view controller's existing controls
    NSLog(@"Display the preview layer");
    CGRect layerRect = [[[self view] layer] bounds];
    [self.previewLayer setBounds:layerRect];
    [self.previewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect), CGRectGetMidY(layerRect))];

    //[[[self view] layer] addSublayer:[[self CaptureManager] self.previewLayer]];
    // We use this instead so it goes on a layer behind our UI controls (avoids us
    // having to manually bring each control to the front):
    UIView *CameraView = [[UIView alloc] init];
    [[self view] addSubview:CameraView];
    [self.view sendSubviewToBack:CameraView];
    [[CameraView layer] addSublayer:self.previewLayer];

    //----- START THE CAPTURE SESSION RUNNING -----
    [captureSession startRunning];
}

// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage from the sample buffer data
    [connection setVideoOrientation:AVCaptureVideoOrientationLandscapeLeft];
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
}

// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address and the number of bytes per row for the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}

@end

Why can't I write out my opengl FBO to an AVAssetWriter?

I've been fighting off and on for a week to save my OpenGL renderings (which I'm using for green screening) out to video via an AVAssetWriter.
I have created a simple rig below to show what I'm doing.
I've asked on the Apple forums and received advice on the process, which is also described here:
allmybrain.com/2011/12/08/rendering-to-a-texture-with-ios-5-texture-cache-api/ and is used in the GPUImage library.
To my knowledge I am doing the exact same thing - I even use the method from GPUImage to create the FBO.
I have verified that the drawing is okay (I have drawing methods in this code too, which are disabled). The FBO is created okay and glCheckFramebufferStatus returns success. There are no crashes, no exceptions, no warnings; the writer is in a fine status, and all texture caches, buffers, etc. are created without error.
However, I still get black for my video output.
If I set my glClear to white, then I get a white rectangle which is not at the video size I requested.
I never get my triangle rendered into my video.
#import <AVFoundation/AVFoundation.h>
#import <AssetsLibrary/AssetsLibrary.h>
#import "TestViewController.h"

/////////////////////////////////////////////////////////////////
// This data type is used to store information for each vertex
typedef struct
{
    GLKVector3 positionCoords;
}
SceneVertex;

/////////////////////////////////////////////////////////////////
// Define vertex data for a triangle to use in example
static const SceneVertex vertices[] =
{
    {{-1.0f, -1.0f, 1.0}}, // lower left corner
    {{ 1.0f, -1.0f, 0.5}}, // lower right corner
    {{ 1.0f,  1.0f, 0.0}}  // upper right corner
};

@interface TestViewController ()

@property(nonatomic, readwrite, assign) CVOpenGLESTextureCacheRef videoTextureCache;
@property(strong, nonatomic) GLKTextureInfo *background;
@property(nonatomic, strong) AVAssetWriter *assetWriter;
@property(nonatomic) BOOL isRecording;
@property(nonatomic, strong) AVAssetWriterInput *assetWriterVideoInput;
@property(nonatomic, strong) AVAssetWriterInputPixelBufferAdaptor *assetWriterPixelBufferInput;
@property(nonatomic, assign) CFAbsoluteTime startTime;
@property(nonatomic, strong) GLKView *glkView;
@property(nonatomic, strong) GLKBaseEffect *screenGLEffect;
@property(nonatomic, strong) GLKBaseEffect *FBOGLEffect;
@property(nonatomic, strong) NSTimer *recordingTimer;

- (BOOL)isRetina;

@end

@implementation TestViewController
{
    CVOpenGLESTextureCacheRef _writerTextureCache;
    GLuint _writerRenderFrameBuffer;
    GLuint vertexBufferID;
    EAGLContext *_writerContext;
    CVOpenGLESTextureRef _writerTexture;
    CVPixelBufferRef _writerPixelBuffer; // used below but missing from the original listing
}
- (GLKBaseEffect *)createBasicDrawingEffectInCurrentContext
{
    GLKBaseEffect *basicGLEffect = [[GLKBaseEffect alloc] init];
    basicGLEffect.useConstantColor = GL_TRUE;
    basicGLEffect.constantColor = GLKVector4Make(
            .5f,   // Red
            1.0f,  // Green
            .5f,   // Blue
            1.0f); // Alpha

    // Set the background color stored in the current context
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // background color

    // Generate, bind, and initialize contents of a buffer to be
    // stored in GPU memory
    glGenBuffers(1,               // STEP 1
                 &vertexBufferID);
    glBindBuffer(GL_ARRAY_BUFFER, // STEP 2
                 vertexBufferID);
    glBufferData(                 // STEP 3
                 GL_ARRAY_BUFFER, // Initialize buffer contents
                 sizeof(vertices),// Number of bytes to copy
                 vertices,        // Address of bytes to copy
                 GL_STATIC_DRAW); // Hint: cache in GPU memory

    return basicGLEffect;
}
/////////////////////////////////////////////////////////////////
//
- (void)viewDidUnload
{
    [super viewDidUnload];

    // Make the view's context current
    GLKView *view = (GLKView *)self.view;
    [EAGLContext setCurrentContext:view.context];

    // Stop using the context created in -viewDidLoad
    ((GLKView *)self.view).context = nil;
    [EAGLContext setCurrentContext:nil];
}

//////////////////////////////////////////////////////////////
#pragma mark AVWriter setup
//////////////////////////////////////////////////////////////

- (NSString *)tempFilePath
{
    return [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/output2.m4v"];
}
- (void)removeTempFile
{
    NSString *path = [self tempFilePath];
    NSFileManager *fileManager = [NSFileManager defaultManager];
    BOOL exists = [fileManager fileExistsAtPath:path];
    NSLog(@">>>remove %@ Exists %d", path, exists);

    NSError *error;
    unlink([path UTF8String]);
    NSLog(@">>>AFTER REMOVE %@ Exists %d %@", path, exists, error);
}
- (void)createWriter
{
    // My setup code is based heavily on the GPUImage project, https://github.com/BradLarson/GPUImage
    // so some of these dictionary names and structure are similar to the code from that project -
    // I recommend you check it out if you are interested in video filtering/recording.
    [self removeTempFile];

    NSError *error;
    self.assetWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:[self tempFilePath]]
                                                 fileType:AVFileTypeQuickTimeMovie
                                                    error:&error];
    if (error)
    {
        NSLog(@"Couldn't create writer, %@", error.localizedDescription);
        return;
    }

    NSDictionary *outputSettings = @{
            AVVideoCodecKey  : AVVideoCodecH264,
            AVVideoWidthKey  : @640,
            AVVideoHeightKey : @480
    };

    self.assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                    outputSettings:outputSettings];
    self.assetWriterVideoInput.expectsMediaDataInRealTime = YES;

    NSDictionary *sourcePixelBufferAttributesDictionary = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
                                                            (id)kCVPixelBufferWidthKey           : @640,
                                                            (id)kCVPixelBufferHeightKey          : @480};

    self.assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.assetWriterVideoInput
                                                                                                        sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];

    self.assetWriterVideoInput.transform = CGAffineTransformMakeScale(1, -1);

    if ([_assetWriter canAddInput:self.assetWriterVideoInput])
    {
        [_assetWriter addInput:self.assetWriterVideoInput];
    } else
    {
        NSLog(@"can't add video writer input %@", self.assetWriterVideoInput);
    }

    /*
    _assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:nil];
    if ([_assetWriter canAddInput:_assetWriterAudioInput]) {
        [_assetWriter addInput:_assetWriterAudioInput];
        _assetWriterAudioInput.expectsMediaDataInRealTime = YES;
    }
    */
}
- (void)writeMovieToLibraryWithPath:(NSURL *)path
{
    NSLog(@"writing %@ to library", path);
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    [library writeVideoAtPathToSavedPhotosAlbum:path
                                completionBlock:^(NSURL *assetURL, NSError *error) {
                                    if (error)
                                    {
                                        NSLog(@"Error saving to library %@", [error localizedDescription]);
                                    } else
                                    {
                                        NSLog(@"SAVED %@ to photo lib", path);
                                    }
                                }];
}
//////////////////////////////////////////////////////////////
#pragma mark touch handling
//////////////////////////////////////////////////////////////

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    [super touchesEnded:touches withEvent:event];
    if (self.isRecording)
    {
        [self finishRecording];
    } else
    {
        [self startRecording];
    }
}
//////////////////////////////////////////////////////////////
#pragma mark recording
//////////////////////////////////////////////////////////////

- (void)startRecording;
{
    NSLog(@"started recording");
#warning debugging startrecording
//    NSLog(@"bypassing usual write method");
//    if (![assetWriter startWriting]){
//        NSLog(@"writer not started %@, %d", assetWriter.error, assetWriter.status);
//    }

    self.startTime = CFAbsoluteTimeGetCurrent();

    [self createWriter];
    [self.assetWriter startWriting];
    [self.assetWriter startSessionAtSourceTime:kCMTimeZero];
    NSAssert([self.assetWriterPixelBufferInput pixelBufferPool], @"writer pixel buffer input has no pool");

    if (!_writerContext)
    {
        _writerContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        if (!_writerContext || ![EAGLContext setCurrentContext:_writerContext])
        {
            NSLog(@"Problem with OpenGL context.");
            return;
        }
    }

    [EAGLContext setCurrentContext:_writerContext];

    NSLog(@"Creating FBO");
    [self createDataFBOUsingGPUImagesMethod];
//    [self createDataFBO];

    self.isRecording = YES;
    NSLog(@"Recording is started");

    // Note: 1 / 30 is integer division (zero); 1.0 / 30.0 gives a ~30 fps timer.
    self.recordingTimer = [NSTimer scheduledTimerWithTimeInterval:1.0 / 30.0
                                                           target:self
                                                         selector:@selector(tick:)
                                                         userInfo:nil
                                                          repeats:YES];
}
- (void)tick:(id)tick
{
    [self drawBasicGLTOFBOForWriting];
}

- (void)finishRecording;
{
    [self.recordingTimer invalidate];
    self.recordingTimer = nil;

    NSLog(@"finished recording");
    if (self.assetWriter.status == AVAssetWriterStatusCompleted || !self.isRecording)
    {
        NSLog(@"already completed, ignoring");
        return;
    }

    NSLog(@"Asset writer writing");
    self.isRecording = NO;
//    runOnMainQueueWithoutDeadlocking(^{
    NSLog(@"marking inputs as finished");
    //TODO - these cause an error
    [self.assetWriterVideoInput markAsFinished];

    __weak TestViewController *blockSelf = self;
    [self.assetWriter finishWritingWithCompletionHandler:^{
        if (self.assetWriter.error == nil)
        {
            NSLog(@"saved ok - writing to lib");
            [self writeMovieToLibraryWithPath:[NSURL fileURLWithPath:[self tempFilePath]]];
        } else
        {
            NSLog(@"did not save due to error %@", self.assetWriter.error);
        }
    }];
//    });
}
- (void)drawBasicGLTOFBOForWriting
{
    if (!self.isRecording)
    {
        return;
    }

    [EAGLContext setCurrentContext:_writerContext];
    if (!self.FBOGLEffect)
    {
        self.FBOGLEffect = [self createBasicDrawingEffectInCurrentContext];
    }

    glDisable(GL_DEPTH_TEST);
    glBindFramebuffer(GL_FRAMEBUFFER, _writerRenderFrameBuffer);

    // Clear the framebuffer (erase previous drawing)
    glClearColor(1, 1, 1, 1);
    glClear(GL_COLOR_BUFFER_BIT);

    [self.FBOGLEffect prepareToDraw];

    // Enable use of positions from bound vertex buffer
    glEnableVertexAttribArray(       // STEP 4
            GLKVertexAttribPosition);
    glVertexAttribPointer(           // STEP 5
            GLKVertexAttribPosition,
            3,                       // three components per vertex
            GL_FLOAT,                // data is floating point
            GL_FALSE,                // no fixed point scaling
            sizeof(SceneVertex),     // no gaps in data
            NULL);                   // NULL tells GPU to start at
                                     // beginning of bound buffer

    // Draw triangles using the first three vertices in the
    // currently bound vertex buffer
    glDrawArrays(GL_TRIANGLES,       // STEP 6
                 0,                  // Start with first vertex in currently bound buffer
                 3);                 // Use three vertices from currently bound buffer

    glFlush();

    CFAbsoluteTime interval = (CFAbsoluteTimeGetCurrent() - self.startTime) * 1000;
    CMTime currentTime = CMTimeMake((int)interval, 1000);

    [self writeToFileWithTime:currentTime];
}
- (void)writeToFileWithTime:(CMTime)time
{
    if (!self.assetWriterVideoInput.readyForMoreMediaData)
    {
        NSLog(@"Had to drop a video frame");
        return;
    }

    if (kCVReturnSuccess == CVPixelBufferLockBaseAddress(_writerPixelBuffer, kCVPixelBufferLock_ReadOnly))
    {
        uint8_t *pixels = (uint8_t *)CVPixelBufferGetBaseAddress(_writerPixelBuffer);
        // process pixels how you like!
        BOOL success = [self.assetWriterPixelBufferInput appendPixelBuffer:_writerPixelBuffer
                                                      withPresentationTime:time];
        NSLog(@"wrote at %@ : %@", CMTimeCopyDescription(NULL, time), success ? @"YES" : @"NO");

        CVPixelBufferUnlockBaseAddress(_writerPixelBuffer, kCVPixelBufferLock_ReadOnly);
    }
}
//////////////////////////////////////////////////////////////
#pragma mark FBO setup
//////////////////////////////////////////////////////////////

- (void)createDataFBOUsingGPUImagesMethod;
{
    glActiveTexture(GL_TEXTURE1);
    glGenFramebuffers(1, &_writerRenderFrameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, _writerRenderFrameBuffer);

    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, _writerContext, NULL, &_writerTextureCache);
    if (err)
    {
        NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
    }

    // Code originally sourced from http://allmybrain.com/2011/12/08/rendering-to-a-texture-with-ios-5-texture-cache-api/
    CVPixelBufferPoolCreatePixelBuffer(NULL, [self.assetWriterPixelBufferInput pixelBufferPool], &_writerPixelBuffer);

    err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _writerTextureCache, _writerPixelBuffer,
                                                       NULL,            // texture attributes
                                                       GL_TEXTURE_2D,
                                                       GL_RGBA,         // opengl format
                                                       480,
                                                       320,
                                                       GL_BGRA,         // native iOS format
                                                       GL_UNSIGNED_BYTE,
                                                       0,
                                                       &_writerTexture);
    if (err)
    {
        NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
    }

    glBindTexture(CVOpenGLESTextureGetTarget(_writerTexture), CVOpenGLESTextureGetName(_writerTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(_writerTexture), 0);

    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    NSAssert(status == GL_FRAMEBUFFER_COMPLETE, @"Incomplete filter FBO: %d", status);
}

@end
Four possibilities jump to mind:
Your viewport isn't the right size or shape, or isn't in the right place. Try calling glViewport somewhere before drawing anything (see the sketch below).
Your shader is broken. I see you don't have any kind of shader setup, so you might need to add a basic passthrough vertex and fragment shader pair that just multiplies position by the projection and modelview matrices, and draws using vertex color, or a fixed color.
Your projection matrix isn't good. Try a basic orthographic matrix at first.
Your modelview matrix isn't good. If you can animate something, try starting with the identity matrix and then slowly rotating it, first around the X axis, then around the Y axis.
Also make sure _writerPixelBuffer is not NULL.
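For the first and third points, a minimal sanity check might look like this sketch against the code above (the 480x320 numbers match the texture that backs the FBO, and the placement is an assumption: inside -drawBasicGLTOFBOForWriting, after binding the framebuffer and before prepareToDraw):
// Make the viewport match the FBO's pixel size.
glViewport(0, 0, 480, 320);

// Start with a simple orthographic projection and an identity modelview.
self.FBOGLEffect.transform.projectionMatrix = GLKMatrix4MakeOrtho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
self.FBOGLEffect.transform.modelviewMatrix = GLKMatrix4Identity;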
