I'm trying to set up OpenGL ES for an iOS app I'm building, and I'm getting the following error:
-[EAGLContext renderbufferStorage:fromDrawable:]: invalid drawable
Here is a part of my code, located within a UIViewController class:
@implementation MyViewController
{
    CAEAGLLayer *_eaglLayer;
    EAGLContext *_context;
    GLuint _colorRenderBuffer;
}
+ (Class)layerClass
{
    return [CAEAGLLayer class];
}
- (void)setup
{
    _eaglLayer = (CAEAGLLayer *)self.view.layer;
    _eaglLayer.opaque = YES;

    EAGLRenderingAPI api = kEAGLRenderingAPIOpenGLES2;
    _context = [[EAGLContext alloc] initWithAPI:api];

    glGenRenderbuffers(1, &_colorRenderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);
    [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_eaglLayer];
}
How do I fix this error?
Try making the context current after creating it. I believe that does not happen automatically despite all the "magic" that some of these helper classes do for you.
_context = [[EAGLContext alloc] initWithAPI:api];
[EAGLContext setCurrentContext: _context];
The first line is what you already have, the second line is the one you add.
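Putting both pieces together, a minimal sketch of the corrected -setup could look like this (the names are the ones from the question; the error handling is illustrative):

- (void)setup
{
    _eaglLayer = (CAEAGLLayer *)self.view.layer;
    _eaglLayer.opaque = YES;

    _context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    // Make the context current before issuing any gl* calls or
    // renderbufferStorage:fromDrawable:; without a current context the GL calls below do nothing.
    if (![EAGLContext setCurrentContext:_context]) {
        NSLog(@"Failed to set the current EAGLContext");
        return;
    }

    glGenRenderbuffers(1, &_colorRenderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);
    [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_eaglLayer];
}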
I'm using OpenTok and replaced their Publisher with my own subclassed version which incorporates GPUImage. My goal is to add filters.
The application builds and runs, but crashes here:
func willOutputSampleBuffer(sampleBuffer: CMSampleBuffer!) {
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    CVPixelBufferLockBaseAddress(imageBuffer!, 0)
    videoFrame?.clearPlanes()
    for var i = 0; i < CVPixelBufferGetPlaneCount(imageBuffer!); i++ {
        print(i)
        videoFrame?.planes.addPointer(CVPixelBufferGetBaseAddressOfPlane(imageBuffer!, i))
    }
    videoFrame?.orientation = OTVideoOrientation.Left
    videoCaptureConsumer.consumeFrame(videoFrame) // comment this out to stop the app from crashing; otherwise, it crashes here
    CVPixelBufferUnlockBaseAddress(imageBuffer!, 0)
}
If I comment that line out, I'm able to run the app without crashing. In fact, I see the filter being applied correctly, but it's flickering, and nothing gets published to OpenTok.
My entire codebase can be downloaded, and the linked file is the specific class in question. It's actually pretty easy to run: just do pod install before running it.
Upon inspection, it could be that videoCaptureConsumer is not initialized (see the protocol reference).
I have no idea what my code means; I translated it directly from the Objective-C file in TokBox's sample project.
I analyzed both your Swift project and the Objective-C project.
I figured out that neither is working.
With this post, I want to give a first update and show a really working demo of how to use GPUImage filters with OpenTok.
Reasons why your GPUImage filter implementation is not working with OpenTok
#1 Multiple target specifications
let sepia = GPUImageSepiaFilter()
videoCamera?.addTarget(sepia)
sepia.addTarget(self.view)
videoCamera?.addTarget(self.view) // <-- This is wrong and produces the flickering
videoCamera?.startCameraCapture()
Two sources try to render into the same view, which makes things flicker.
Part one is solved. Next up: why is nothing published to OpenTok? To find the reason, I decided to start with the "working" Objective-C version.
#2 - The original Objective-C codebase
The original Objective-C version doesn't have the expected functionality. Publishing the GPUImageVideoCamera to an OpenTok subscriber works, but there is no filtering involved, and that's your core requirement.
The point is that adding filters is not as trivial as one would expect, because of differing image formats and differing mechanisms for asynchronous programming.
So reason #2 why your code isn't working as expected:
Your reference codebase for your porting work is not correct. It doesn't allow you to put GPU filters in between the publisher-subscriber pipeline.
A working Objective-C implementation
I modified the Objective-C version; the current result (shown as a screenshot in the original post) runs smoothly.
Final steps
This is the full code for the custom TokBox publisher. It's basically the original code (TokBoxGPUImagePublisher) from https://github.com/JayTokBox/TokBoxGPUImage/blob/master/TokBoxGPUImage/ViewController.m with the following notable modifications:
OTVideoFrame gets instantiated with a new format
...
format = [[OTVideoFormat alloc] init];
format.pixelFormat = OTPixelFormatARGB;
format.bytesPerRow = [@[@(imageWidth * 4)] mutableCopy];
format.imageWidth = imageWidth;
format.imageHeight = imageHeight;
videoFrame = [[OTVideoFrame alloc] initWithFormat: format];
...
Replace the willOutputSampleBuffer callback mechanism
This callback only triggers when sample buffers coming directly from the GPUImageVideoCamera are ready, NOT for frames coming from your custom filters. GPUImageFilters don't provide such a callback/delegate mechanism. That's why we put a GPUImageRawDataOutput in between and ask it for ready images. This pipeline is implemented in the initCapture method and looks like this:
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                                  cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
sepiaImageFilter = [[GPUImageSepiaFilter alloc] init];
[videoCamera addTarget:sepiaImageFilter];

// Create rawOut
CGSize size = CGSizeMake(imageWidth, imageHeight);
rawOut = [[GPUImageRawDataOutput alloc] initWithImageSize:size resultsInBGRAFormat:YES];

// Filter into rawOut
[sepiaImageFilter addTarget:rawOut];

// Handle filtered images
// We need weak references here to avoid a strong reference cycle.
__weak GPUImageRawDataOutput* weakRawOut = self->rawOut;
__weak OTVideoFrame* weakVideoFrame = self->videoFrame;
__weak id<OTVideoCaptureConsumer> weakVideoCaptureConsumer = self.videoCaptureConsumer;

[rawOut setNewFrameAvailableBlock:^{
    [weakRawOut lockFramebufferForReading];
    // GLubyte is a uint8_t
    GLubyte* outputBytes = [weakRawOut rawBytesForImage];

    // About the video formats used by OTVideoFrame
    // --------------------------------------------
    // Both YUV video formats (I420, NV12) have the following properties that matter here:
    //
    // - Two planes
    // - 8 bit Y plane
    // - 8 bit 2x2 subsampled U and V planes (1/4 the pixels of the Y plane)
    // --> 12 bits per pixel
    //
    // Further reading: www.fourcc.org/yuv.php
    //
    [weakVideoFrame clearPlanes];
    [weakVideoFrame.planes addPointer:outputBytes];
    [weakVideoCaptureConsumer consumeFrame:weakVideoFrame];
    [weakRawOut unlockFramebufferAfterReading];
}];

[videoCamera addTarget:self.view];
[videoCamera startCameraCapture];
Whole code (The really important thing is initCapture)
//
// TokBoxGPUImagePublisher.m
// TokBoxGPUImage
//
// Created by Jaideep Shah on 9/5/14.
// Copyright (c) 2014 Jaideep Shah. All rights reserved.
//
#import "TokBoxGPUImagePublisher.h"
#import "GPUImage.h"
static size_t imageHeight = 480;
static size_t imageWidth = 640;
#interface TokBoxGPUImagePublisher() <GPUImageVideoCameraDelegate, OTVideoCapture> {
GPUImageVideoCamera *videoCamera;
GPUImageSepiaFilter *sepiaImageFilter;
OTVideoFrame* videoFrame;
GPUImageRawDataOutput* rawOut;
OTVideoFormat* format;
}
#end
#implementation TokBoxGPUImagePublisher
#synthesize videoCaptureConsumer ; // In OTVideoCapture protocol
- (id)initWithDelegate:(id<OTPublisherDelegate>)delegate name:(NSString*)name
{
    self = [super initWithDelegate:delegate name:name];
    if (self)
    {
        self.view = [[GPUImageView alloc] initWithFrame:CGRectMake(0, 0, 1, 1)];
        [self setVideoCapture:self];

        format = [[OTVideoFormat alloc] init];
        format.pixelFormat = OTPixelFormatARGB;
        format.bytesPerRow = [@[@(imageWidth * 4)] mutableCopy];
        format.imageWidth = imageWidth;
        format.imageHeight = imageHeight;
        videoFrame = [[OTVideoFrame alloc] initWithFormat:format];
    }
    return self;
}
#pragma mark GPUImageVideoCameraDelegate
- (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    [videoFrame clearPlanes];
    for (int i = 0; i < CVPixelBufferGetPlaneCount(imageBuffer); i++) {
        [videoFrame.planes addPointer:CVPixelBufferGetBaseAddressOfPlane(imageBuffer, i)];
    }
    videoFrame.orientation = OTVideoOrientationLeft;
    [self.videoCaptureConsumer consumeFrame:videoFrame];
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
#pragma mark OTVideoCapture
- (void)initCapture {
    videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                                      cameraPosition:AVCaptureDevicePositionBack];
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
    sepiaImageFilter = [[GPUImageSepiaFilter alloc] init];
    [videoCamera addTarget:sepiaImageFilter];

    // Create rawOut
    CGSize size = CGSizeMake(imageWidth, imageHeight);
    rawOut = [[GPUImageRawDataOutput alloc] initWithImageSize:size resultsInBGRAFormat:YES];

    // Filter into rawOut
    [sepiaImageFilter addTarget:rawOut];

    // Handle filtered images
    // We need weak references here to avoid a strong reference cycle.
    __weak GPUImageRawDataOutput* weakRawOut = self->rawOut;
    __weak OTVideoFrame* weakVideoFrame = self->videoFrame;
    __weak id<OTVideoCaptureConsumer> weakVideoCaptureConsumer = self.videoCaptureConsumer;

    [rawOut setNewFrameAvailableBlock:^{
        [weakRawOut lockFramebufferForReading];
        // GLubyte is a uint8_t
        GLubyte* outputBytes = [weakRawOut rawBytesForImage];

        // About the video formats used by OTVideoFrame
        // --------------------------------------------
        // Both YUV video formats (I420, NV12) have the following properties that matter here:
        //
        // - Two planes
        // - 8 bit Y plane
        // - 8 bit 2x2 subsampled U and V planes (1/4 the pixels of the Y plane)
        // --> 12 bits per pixel
        //
        // Further reading: www.fourcc.org/yuv.php
        //
        [weakVideoFrame clearPlanes];
        [weakVideoFrame.planes addPointer:outputBytes];
        [weakVideoCaptureConsumer consumeFrame:weakVideoFrame];
        [weakRawOut unlockFramebufferAfterReading];
    }];

    [videoCamera addTarget:self.view];
    [videoCamera startCameraCapture];
}
- (void)releaseCapture
{
    videoCamera.delegate = nil;
    videoCamera = nil;
}

- (int32_t)startCapture {
    return 0;
}

- (int32_t)stopCapture {
    return 0;
}

- (BOOL)isCaptureStarted {
    return YES;
}

- (int32_t)captureSettings:(OTVideoFormat*)videoFormat {
    videoFormat.pixelFormat = OTPixelFormatNV12;
    videoFormat.imageWidth = imageWidth;
    videoFormat.imageHeight = imageHeight;
    return 0;
}
@end
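For completeness, here is a hedged sketch of how such a custom publisher could be hooked up to an OpenTok session; the session property, the delegate, and the publisher name are illustrative placeholders and not part of the original code:

// Hypothetical usage; self.session is assumed to be a connected OTSession.
TokBoxGPUImagePublisher *publisher =
    [[TokBoxGPUImagePublisher alloc] initWithDelegate:self name:@"GPUImage publisher"];

OTError *error = nil;
[self.session publish:publisher error:&error];
if (error != nil) {
    NSLog(@"Failed to publish: %@", error);
}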
In this super simple app, which uses a GLKViewController to display a red screen, the memory keeps growing.
ViewController.h:
#import <UIKit/UIKit.h>
#import <GLKit/GLKit.h>
@interface ViewController : GLKViewController
@end
ViewController.m:
#import "ViewController.h"
@interface ViewController ()
@end

@implementation ViewController {
    EAGLContext* context;
}
- (void)viewDidLoad {
    [super viewDidLoad];
    context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    GLKView* view = (GLKView*)self.view;
    view.context = context;
    view.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
    [EAGLContext setCurrentContext:context];
    self.preferredFramesPerSecond = 60;
}
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
}
- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
}

@end
For each frame, 9*64 bytes are allocated and never freed, as seen in the Instruments screenshot in the original post (note that the transient count is 0 for IOAccelResource); the allocation list and stack trace are shown there as well.
The memory "leak" is small, but it still managed to use up 6.5 MB despite running for less than 3 minutes.
Is there a bug in EAGLContext, or is there something I can do about this? I'm new to iOS development, and I've noticed that other parts of Apple's APIs use zone allocators whose memory usage keeps growing when it really should have reached some kind of steady state. That makes me think I have missed something (I have tried sending a simulated low-memory warning, but nothing happens).
From your comment, you say you are using other code written in C++ and basically all you need is the connection to the actual buffer to be presented. I will assume that none of that C++ code is inflating your memory and that you have actually tried creating a new application containing only the code you posted, just to be 100% sure.
Migrating off GLKit is then very easy for you. Simply subclass the UIView that will be used for drawing and add a few methods:
This needs to be overridden so you can get the render buffer from the view
+ (Class)layerClass {
    return [CAEAGLLayer class];
}
The context you already create is correct; keep it and use it here.
You need to set up the buffers manually. I use a custom class, but I believe you will be able to see what is going on here and can remove the code you don't need.
- (void)loadBuffersWithView:(UIView *)view
{
    self.view = view;

    CAEAGLLayer *layer = (CAEAGLLayer *)view.layer;
    layer.opaque = YES;
    if ([[UIScreen mainScreen] respondsToSelector:@selector(displayLinkWithTarget:selector:)]) {
        layer.contentsScale = [UIScreen mainScreen].scale;
    }

    GLuint frameBuffer; // will hold the generated ID
    glGenFramebuffers(1, &frameBuffer); // generate only 1
    self.frameBuffer = frameBuffer; // assign to store as the local variable
    [self bindFrameBuffer];

    GLuint renderBuffer; // will hold the generated ID
    glGenRenderbuffers(1, &renderBuffer); // generate only 1
    self.renderBuffer = renderBuffer; // assign to store as the local variable
    [self bindRenderBuffer];

    [self.context.glContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer];
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, self.renderBuffer);

    GLint width, height;
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
    self.bufferSize = CGSizeMake(width, height);
    glViewport(0, 0, width, height);

    [GlobalTools framebufferStatusValid];
    [GlobalTools checkError];
}
On present you need to call presentRenderbuffer on your context with the appropriate render buffer id.
The rest of your code should be the same. But remember to also tear down the OpenGL state when you no longer need it, and explicitly delete all the buffers that were generated on the GPU.
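As a hedged sketch of those last two points (the method names are illustrative, and self.context.glContext mirrors the custom context wrapper used in loadBuffersWithView: above):

- (void)present
{
    // Bind the color renderbuffer and hand it to the EAGLContext for display.
    glBindRenderbuffer(GL_RENDERBUFFER, self.renderBuffer);
    [self.context.glContext presentRenderbuffer:GL_RENDERBUFFER];
}

- (void)tearDownBuffers
{
    // Explicitly delete the GPU objects created in loadBuffersWithView:.
    GLuint frameBuffer = self.frameBuffer;
    GLuint renderBuffer = self.renderBuffer;
    glDeleteFramebuffers(1, &frameBuffer);
    glDeleteRenderbuffers(1, &renderBuffer);
    self.frameBuffer = 0;
    self.renderBuffer = 0;
    [EAGLContext setCurrentContext:nil];
}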
I printed out the amount of resident memory to the screen and noticed that the memory didn't increase when the device was disconnected from the debugger.
It turns out that using the Zombies instrument was causing this... I swear I saw the memory increase in the debugger's memory view too, but now I can't reproduce it (and I haven't changed anything in the scheme).
I have a MainMenuViewController and a GameViewController, which is a GLKViewController.
The first time I go from the main menu to the GameViewController everything is rendered fine. If I go back to the main menu, the GameViewController and its view get dealloced (I logged it).
When I then go back to the game, I see a blank screen; nothing gets rendered OpenGL-wise. The UIKit overlay test menu is still there.
This is how I tear down OpenGL in the GameViewController's dealloc method; the last five lines were added as attempts to make it work, so it doesn't work with or without them.
- (void)tearDownGL {
    [EAGLContext setCurrentContext:self.context];

    glDeleteBuffers(1, &_vertexBuffer);
    glDeleteVertexArraysOES(1, &_vertexArray);
    self.effect = nil;

    _program = nil;
    glBindVertexArrayOES(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, 0);
    [EAGLContext setCurrentContext:nil];
}
I think the problem is that you are not using a sharegroup: a place where OpenGL can share textures and shaders between contexts.
Here is code that creates a sharegroup used by all instances of your GLKViewController subclass. If you have multiple subclasses, you will have to do something to make the shareGroup global, if that's appropriate (one option is sketched after the code below).
- (void)viewDidLoad
{
    [super viewDidLoad];

    // Create an OpenGL ES context and assign it to the view loaded from the storyboard.
    GLKView *view = (GLKView *)self.view;

    // GLES 3 is not supported on the iPhone 4s or 5. It may 'just work' to try 3,
    // but stick with 2, since we don't use the new features anyway.
    //view.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
    //if (view.context == nil)
    static EAGLSharegroup* shareGroup = nil;
    view.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2 sharegroup:shareGroup];
    if (shareGroup == nil)
        shareGroup = view.context.sharegroup;
    ...
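If you do need the sharegroup across several view controller classes, one possible way to make it global is a small helper like the sketch below; the class name is made up for illustration, and any lazily created context works as the seed:

// Hypothetical helper that lazily creates one sharegroup for the whole app.
@interface GLSharegroupProvider : NSObject
+ (EAGLSharegroup *)sharedGroup;
@end

@implementation GLSharegroupProvider

+ (EAGLSharegroup *)sharedGroup
{
    static EAGLSharegroup *sharedGroup = nil;
    if (sharedGroup == nil) {
        // Create a throwaway context just to obtain a sharegroup
        // that every later context can join.
        EAGLContext *seed = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        sharedGroup = seed.sharegroup;
    }
    return sharedGroup;
}

@end

Each view controller would then pass [GLSharegroupProvider sharedGroup] to initWithAPI:sharegroup: instead of relying on the static local variable.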
I am currently working on iOS with OpenCV, and I am trying to convert a cv::Mat to a UIImage.
But the app is crashing after a few seconds ("Terminated due to memory error").
This is my code that I am currently using:
using namespace cv;

#import "ViewController.h"

@interface ViewController ()
@end

@implementation ViewController
{
    CvVideoCamera *videoCamera;
    CADisplayLink *run_loop;
    UIImage *image2;
}
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    videoCamera = [[CvVideoCamera alloc] initWithParentView:_liveview];
    videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack;
    videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPresetiFrame1280x720;
    videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationLandscapeLeft;
    videoCamera.defaultFPS = 30;
    videoCamera.delegate = self;
    [videoCamera start];

    run_loop = [CADisplayLink displayLinkWithTarget:self selector:@selector(update)];
    [run_loop setFrameInterval:2];
    [run_loop addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSRunLoopCommonModes];
}
- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

- (void)update {
    _smallliveview.image = image2;
}
- (UIImage *)UIImageFromMat:(cv::Mat)image
{
    cvtColor(image, image, CV_BGR2RGB);
    NSData *data = [NSData dataWithBytes:image.data length:image.elemSize() * image.total()];

    CGColorSpaceRef colorSpace;
    if (image.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data); // CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(image.cols,                                    // width
                                        image.rows,                                    // height
                                        8,                                             // bits per component
                                        8 * image.elemSize(),                          // bits per pixel
                                        image.step.p[0],                               // bytesPerRow
                                        colorSpace,                                    // colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // bitmap info
                                        provider,                                      // CGDataProviderRef
                                        NULL,                                          // decode
                                        false,                                         // should interpolate
                                        kCGRenderingIntentDefault                      // intent
                                        );

    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    //[self.imgView setImage:finalImage];

    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return finalImage;
}
#pragma mark - Protocol CvVideoCameraDelegate
#ifdef __cplusplus
- (void)processImage:(Mat&)image
{
    image2 = [self UIImageFromMat:image];
}
#endif

@end
What should I do?
It would be very nice if somebody could help me! (;
Greetings, David
Throw away every bit of what you wrote to create a UIImage and use the MatToUIImage() function instead. Simply pass the mat to the function, and you have your image.
Although you didn't ask: you shouldn't use a run loop or display link here. Instead, drive the timing and the related UI updates from the processImage method that OpenCV already calls for you.
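As a rough sketch of both suggestions combined, the delegate callback could do the conversion with MatToUIImage() and push the result to the image view itself. The _smallliveview outlet name is taken from the question, and the color conversion is kept only because the original code performs it, so adjust it to your mat's channel layout:

// MatToUIImage() is declared in <opencv2/imgcodecs/ios.h> in OpenCV 3.
- (void)processImage:(cv::Mat &)image
{
    cvtColor(image, image, CV_BGR2RGB); // as in the original code; adjust if your mat is not 3-channel BGR
    UIImage *converted = MatToUIImage(image);

    // UIKit must be touched on the main thread; _smallliveview is the image view outlet from the question.
    dispatch_async(dispatch_get_main_queue(), ^{
        self->_smallliveview.image = converted;
    });
}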
Also, make sure you're using the latest version. This has nothing to do with your problem, but it's good practice.
To import OpenCV 3 into your Xcode 8 project:
Install 'OpenCV2' with CocoaPods (it says '2', but it's still version 3). Don't install the 'devel' build.
Open your project via the workspace CocoaPods created for you (not the project file you created), and rename every implementation file that uses OpenCV to end in .mm (instead of .m). You'll get strange error messages if you don't.
I'm trying to set up a CAEAGLLayer subclass with a GL context. That is, instead of creating a UIView subclass that returns a CAEAGLLayer and binding a GL context to this layer from within the UIView subclass, I'm directly subclassing the layer and trying to set up the context in the layer's init, like so:
- (id)init
{
    self = [super init];
    if (self) {
        self.opaque = YES;

        _glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        NSAssert([EAGLContext setCurrentContext:_glContext], @"");

        glGenRenderbuffers(1, &_colorRenderBuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);
        [_glContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:self];

        glGenFramebuffers(1, &_framebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _colorRenderBuffer);
        /// . . .
Up to that point everything seems fine. However, I then try to create a shader program with a "pass-thru" vertex/fragment shader pair, and while linking the program returns no errors, validation fails with: "Current draw framebuffer is invalid."
The code that links and validates the shader program (after attaching the shaders) looks like so, just in case:
- (BOOL)linkAndValidateProgram
{
    GLint status;
    glLinkProgram(_shaderProgram);

#ifdef DEBUG
    GLint infoLogLength;
    GLchar *infoLog = NULL;
    glGetProgramiv(_shaderProgram, GL_INFO_LOG_LENGTH, &infoLogLength);
    if (infoLogLength > 0) {
        infoLog = (GLchar *)malloc(infoLogLength);
        glGetProgramInfoLog(_shaderProgram, infoLogLength, &infoLogLength, infoLog);
        NSLog(@"Program link log:\n%s", infoLog);
        free(infoLog);
    }
#endif

    glGetProgramiv(_shaderProgram, GL_LINK_STATUS, &status);
    if (!status) {
        return NO;
    }

    glValidateProgram(_shaderProgram);

#ifdef DEBUG
    glGetProgramiv(_shaderProgram, GL_INFO_LOG_LENGTH, &infoLogLength);
    if (infoLogLength > 0) {
        infoLog = (GLchar *)malloc(infoLogLength);
        glGetProgramInfoLog(_shaderProgram, infoLogLength, &infoLogLength, infoLog);
        NSLog(@"Program validation log:\n%s", infoLog);
        free(infoLog);
    }
#endif

    glGetProgramiv(_shaderProgram, GL_VALIDATE_STATUS, &status);
    if (!status) {
        return NO;
    }

    glUseProgram(_shaderProgram);
    return YES;
}
I'm wondering if there is some extra setup at some point in the lifecycle of CAEAGLLayer that I'm unaware of and am skipping by trying to set up GL in init?
The problem was that the layer has no dimensions at that point in init, which in turn means that setting the renderbuffer storage from the layer implies a buffer of size 0.
UPDATE: My current best thinking is that, instead of imposing a size in init (which worked fine for the purposes of testing but is kind of hacky), I should just reset the buffer storage whenever the layer changes size. So I'm overriding -setBounds: like so:
- (void)setBounds:(CGRect)bounds
{
    [super setBounds:bounds];
    [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:self];
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &someVariableToHoldWidthIfYouNeedIt);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &someVariableToHoldHeightIfYouNeedIt);
}
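A slightly fuller sketch of the same idea, assuming _glContext and _colorRenderBuffer are the ivars created in the init shown earlier: bind the renderbuffer before requesting new storage (renderbufferStorage:fromDrawable: acts on the currently bound renderbuffer) and update the viewport afterwards.

- (void)setBounds:(CGRect)bounds
{
    [super setBounds:bounds];

    // Re-create the backing store at the new size.
    [EAGLContext setCurrentContext:_glContext];
    glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);
    [_glContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:self];

    // Query the actual buffer size and match the viewport to it.
    GLint width = 0, height = 0;
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
    glViewport(0, 0, width, height);
}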
As far as I know, you have to override the layerClass method in the view, like this:
+ (Class)layerClass
{
    return [MYCEAGLLayer class];
}
Also, you have to set the drawableProperties on the MYCEAGLLayer.
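For example, something along these lines in the layer's init, before the renderbuffer storage is created (the values shown are the common defaults; adjust them to your needs):

// Configure the layer's backing store before renderbufferStorage:fromDrawable: is called.
self.drawableProperties = @{
    kEAGLDrawablePropertyRetainedBacking : @NO,
    kEAGLDrawablePropertyColorFormat     : kEAGLColorFormatRGBA8
};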