Reusing same buffer with glDrawElements (iOS) - ios

Is it possible to create a VBO and reuse it between calls to glDrawElements in the same rendering cycle? (I tried and obtained weird results). The example below is missing bindings, etc.
Init code (executed only once):
glGenBuffers(...)
glBufferData(...)
Render frame code (executed for each frame):
glMapBufferOES(...)
//... Update buffer from index 0 to X
glDrawElements(...)
//... Update buffer from index 0 to Y
glDrawElements(...)
[context presentRenderbuffer:GL_RENDERBUFFER_OES];

You need to unmap your buffer before drawing with it. If you don't unmap, that's probably why you're seeing weird results with glDrawElements.
http://www.opengl.org/sdk/docs/man/xhtml/glMapBuffer.xml
After glDrawElements is called, you can remap your buffer and fill it in again.
You will probably get better performance by not reusing the buffer right away. Remapping right after the draw will probably block until the draw is completed.

Related

opencv 3.4.2 some questions regarding cv::mat

UMat frame, gray;
VideoCapture cap(0);
if (!cap.isOpened())
    return -1;
for (int i = 0; i < 10; i++)
{
    cap >> frame;
    Canny(frame, frame, 0, 50);
    imshow("canny", frame);
    waitKey(30); // give HighGUI time to actually redraw the window
}
return 0;
My doubt is this: the loop runs 10 times, and in the Canny call the src and dst are the same (frame), so it is an in-place operation. What happens with memory allocation and deallocation at each iteration?
Will there be 9 orphaned memory blocks with no header pointing at them?
Or will the memory occupied by the frame matrix data be deallocated on every iteration?
Or do I have to call release() on every iteration to deallocate the matrix manually?
And when the Canny filter is applied, does the result replace the old matrix data, or is a new block of memory allocated for the result and pointed to? If so, what happens to the old matrix data?
The following line:
UMat frame
does not allocate any significant image memory. It just creates a header on the stack with space for:
the number of rows,
and columns in the image,
the image type,
a reference count, and
a pointer that will eventually point to the image's pixels, but for the moment points to nothing.
On entry to the loop, the following line:
cap >> frame;
will allocate sufficient memory on the heap for the image's pixels, and initialise the dimensions, the reference count and make the data pointer point to the allocated chunk of image memory - obviously it will also fill the pixel-data from the video source.
When you call Canny with:
Canny(frame, frame, 0, 50);
it will see that the operation is in-place and re-use the same Mat that contains frame and over-write it. No allocation, nor releasing is necessary.
The second, and subsequent, times you go around the loop, the line:
cap >> frame;
will see that there is already sufficient space allocated and load the data from the video stream into the same Mat, thereby over-writing the results of the previous Canny().
When you return from the function at the end, the heap memory for the pixel data is released and the stack memory for the header is given up.
TLDR; There is nothing to worry about - memory allocation and releasing are taken care of for you!

How to draw to GLKit's OpenGL ES context asynchronously from a Grand Central Dispatch Queue on iOS

I'm trying to move lengthy OpenGL draw operations into a GCD queue so I can get other stuff done while the GPU grinds along. I would much rather do this with GCD than add real threading to my app. Literally all I want to do is:
Not block on a glDrawArrays() call so the rest of the UI can stay responsive when GL rendering gets very slow.
Drop glDrawArrays() calls when we aren't finishing them anyway (don't build up a queue of frames that just grows and grows)
On Apple's website, the docs say:
GCD and NSOperationQueue objects can execute your tasks on a thread of their choosing. They may create a thread specifically for that task, or they may reuse an existing thread. But in either case, you cannot guarantee which thread executes the task. For an OpenGL ES app, that means:
Each task must set the context before executing any OpenGL ES commands.
Two tasks that access the same context may never execute simultaneously.
Each task should clear the thread’s context before exiting.
Sounds pretty straightforward.
For the sake of simplicity in this question, I'm starting with a new, bone-stock version of the Apple template that comes up in the "New Project" dialog for "OpenGL ES" game. When you instantiate it, compile, and run, you should see two cubes rotating on a grey field.
To that code, I have added a GCD queue. Starting with the interface section of ViewController.m:
dispatch_queue_t openGLESDrawQueue;
Then setting those up in ViewController viewDidLoad:
openGLESDrawQueue = dispatch_queue_create("GLDRAWINGQUEUE", NULL);
Finally, I make these very small changes to the drawInRect method that CADisplayLink ends up triggering:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
void (^glDrawBlock)(void) = ^{
[EAGLContext setCurrentContext:self.context];
glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindVertexArrayOES(_vertexArray);
// Render the object with GLKit
[self.effect prepareToDraw];
glDrawArrays(GL_TRIANGLES, 0, 36);
// Render the object again with ES2
glUseProgram(_program);
glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, _modelViewProjectionMatrix.m);
glUniformMatrix3fv(uniforms[UNIFORM_NORMAL_MATRIX], 1, 0, _normalMatrix.m);
glDrawArrays(GL_TRIANGLES, 0, 36);
};
dispatch_async(openGLESDrawQueue, glDrawBlock);
}
This does not work. The drawing goes crazy. Drawing with the same block with dispatch_sync() works fine, though.
Let's double check Apple's list:
Each task must set the context before executing any OpenGL ES commands.
Ok. I'm setting the context. It's Objective-C object pointers that have a lifetime longer than the block anyway, so they should get closed over fine. Also, I can check them in the debugger and they are fine. Also, when I draw from dispatch_sync, it works. So this does not appear to be the problem.
Two tasks that access the same context may never execute simultaneously.
The only code accessing the GL context once it's set up is the code in this method, which is, in turn, in this block. Since this is a serial queue, only one instance of this should ever be drawing at a time anyway. Further, if I add a @synchronized(self.context){} block, it doesn't fix anything. Also, in other code with very slow drawing I added a semaphore to skip adding blocks to the queue when the previous one hadn't finished, and it dropped frames fine (according to the NSLog() messages it was spitting out), but it didn't fix the drawing. HOWEVER, there is the possibility that some of the GLKit code that I can't see manipulates the context from the main thread in ways I don't understand. This is my second-highest-rated theory right now, despite the fact that @synchronized() doesn't change the problem and OpenGL Profiler doesn't show any thread conflicts.
Each task should clear the thread’s context before exiting.
I'm not totally clear on what this means. The GCD thread's context? That's fine. We're not adding anything to the queue's context so there's nothing to clean up. The EAGLContext that we're drawing to? I don't know what else we could do. Certainly not actually glClear it; that would just erase everything. Also, there's some rendering code in Sunset Lake's Molecules that looks like this:
dispatch_async(openGLESContextQueue, ^{
[EAGLContext setCurrentContext:context];
GLfloat currentModelViewMatrix[9];
[self convert3DTransform:&currentCalculatedMatrix to3x3Matrix:currentModelViewMatrix];
CATransform3D inverseMatrix = CATransform3DInvert(currentCalculatedMatrix);
GLfloat inverseModelViewMatrix[9];
[self convert3DTransform:&inverseMatrix to3x3Matrix:inverseModelViewMatrix];
GLfloat currentTranslation[3];
currentTranslation[0] = accumulatedModelTranslation[0];
currentTranslation[1] = accumulatedModelTranslation[1];
currentTranslation[2] = accumulatedModelTranslation[2];
GLfloat currentScaleFactor = currentModelScaleFactor;
[self precalculateAOLookupTextureForInverseMatrix:inverseModelViewMatrix];
[self renderDepthTextureForModelViewMatrix:currentModelViewMatrix translation:currentTranslation scale:currentScaleFactor];
[self renderRaytracedSceneForModelViewMatrix:currentModelViewMatrix inverseMatrix:inverseModelViewMatrix translation:currentTranslation scale:currentScaleFactor];
const GLenum discards[] = {GL_DEPTH_ATTACHMENT};
glDiscardFramebufferEXT(GL_FRAMEBUFFER, 1, discards);
[self presentRenderBuffer];
dispatch_semaphore_signal(frameRenderingSemaphore);
});
This code works, and I don't see any additional cleanup. I can't figure out what this code is doing differently than mine. One thing that's different is it looks like literally everything that touches the GL context is being done from the same GCD dispatch queue. However, when I make my code like this, it doesn't fix anything.
The last thing that's different is that this code does not appear to use GLKit. The code above (along with the code I'm actually interested in) does use GLKit.
At this point, I have three theories about this problem:
1. I am making a conceptual error about the interaction between blocks, GCD, and OpenGL ES.
2. GLKit's GLKViewController or GLKView do some drawing to or manipulation of the EAGLContext in between calls to drawInRect. While my drawInRect blocks are being worked on, this happens, messing things up.
3. The fact that I'm relying on the - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect method is ITSELF the problem. I think of this method as: "Hey, you automatically have a CADisplayLink configured, and every time it wants a frame, it'll hit this method. Do whatever the hell you want." I mean, in normal code here you just issue glDrawArrays commands. It's not like I'm passing back a framebuffer object or a CGImageRef containing what I want to end up on the screen; I'm issuing GL commands. HOWEVER, this could be wrong. Maybe you just can't defer drawing in this method in any way without causing problems. To test this theory, I moved all the draw code into a method called drawStuff and then replaced the body of the drawInRect method with:
[NSTimer scheduledTimerWithTimeInterval:10 target:self selector:@selector(drawStuff) userInfo:nil repeats:NO];
The app comes up, displays the color the view is glClear'd to for ten seconds, and then draws like normal. So that theory doesn't look too strong either.
There is a similar question posted here that has one answer, which is upvoted and accepted:
The code in the dispatch block isn't going to work. By the time it gets executed, all of the OpenGL state for that frame will have long since been destroyed. If you were to put a call to glGetError() in that block, I'm sure it would tell you the same. You need to make sure that all your drawing code is done in that glkView method for the OpenGL state to be valid. When you do that dispatch, you're essentially shunting the execution of that drawing code out of the scope of that method.
I don't see why this should be true. But:
I'm only closing over references to things in the block that are going to outlive the block, and they're things like Objective-C pointers from the enclosing object scope.
I can check them in the debugger, they look fine.
I inserted a glGetError() call after every GL operation and it never returns anything but zero.
Drawing from a block with dispatch_sync works.
I tried a thing where, in the drawInRect method, I save the block to an ivar and then set an NSTimer to call drawStuff. In drawStuff I just invoke the block. Draws fine.
The NSTimer case draws asynchronously, but it does not involve drawing from another thread since AFAIK NSTimer invocations just get scheduled on the setting thread's runloop. So it has to do with threads.
Can anyone clue me in on what I'm missing here?
This is not working because as borrrden says, GLKit is calling presentRenderbuffer: immediately after - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect completes.
This is working in your case of using a timer, because the drawStuff method is being called on the main thread at the beginning of a draw cycle. Your - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect effectively does nothing but schedule this to happen on the main thread again in another 10 seconds, and the previously scheduled drawing call is then rendered at the end of the drawInRect: method. This is doing nothing except delaying the drawing by 10 seconds, everything is still occurring on the main thread.
If you want to go the route of rendering off the main thread, GLKit is not going to be a good match. You are going to need to set up your own thread with a runloop, hook up a CADisplayLink to this runloop, and then render from this queue. GLKViewController is configured to use the main runloop for this, and will always present the render buffer at the end of each frame, which will cause havoc with whatever you are doing on a different thread.
Depending on your GL needs you may find it simpler doing all the GL stuff on the main thread, and doing the "other stuff" off the main thread.

Can I flip the SDL buffer at an offset?

I render all my surfaces to a buffer, then at the end of the frame I flip the buffer.
However, when a certain event is happening in the game, I wanted to shake the buffer around to add intensity. Rather than blitting each surface at the offset individually, I thought I would just offset the entire buffer at the end of the frame, since I wanted everything of the buffer to shake.
Is there a way that I can render to the buffer at an offset, or do I need to blit the buffer to a second buffer and flip that?
You can make a function and put it right before your render.
It should get a random direction (up, down, left or right), and add the same small transformation for all textures being rendered at that frame (all textures should be moved a little bit down in that frame, for example).
In the next frame you again get a random direction, avoiding the last one picked.
The function should also have a timer (use SDL_GetTicks()), so you can set how long the shaking is going to last.
I don't know if I was clear but, good luck anyway. :)

OpenGL ES 2.0, drawing using multiple vertex buffers

I can't find much info on whether drawing from multiple vertex buffers is supported on opengl es 2.0 (i.e use one vertex buffer for position data and another for normal, colors etc). This page http://developer.apple.com/library/ios/#documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/TechniquesforWorkingwithVertexData/TechniquesforWorkingwithVertexData.html and listing 9.4 in particular implies you should be able to, but I can't get it to work on my program. Code for the offending draw call:
glBindBuffer(GL_ARRAY_BUFFER, mPositionBuffer->openglID);
glVertexAttribPointer(0, 4, GL_FLOAT, 0, 16, NULL);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, mTexCoordBuffer->openglID);
glVertexAttribPointer(1, 2, GL_FLOAT, 0, 76, NULL);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mIndexBuffer->openglID);
glDrawElements(GL_TRIANGLES, 10788, GL_UNSIGNED_SHORT, NULL);
This draw call will stall or crash with EXC_BAD_ACCESS on the simulator, and gives very weird behavior on the device (opengl draws random triangles or presents previously rendered frames). No opengl call ever returns an error, and I've inspected the vertex buffers extensively and am confident they have the correct sizes and data.
Has anyone successfully rendered using multiple vertex buffers and can share their experience on why this might not be working? Any info on where to start debugging stalled/failed draw calls that don't return any error code would be greatly appreciated.
Access violations generally mean that you are trying to draw more triangles than you have allocated in a buffer. The way you've set up the buffers is perfectly fine and should work; I would check whether your parameters are set properly:
http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttribPointer.xml
http://www.opengl.org/sdk/docs/man/xhtml/glDrawElements.xml
I think your issue is either that you've switched offset and stride in your glVertexAttribPointer calls, or that you've miscounted the number of indices you're drawing.
Yes, you can use multiple vertex buffer objects (VBOs) for a single draw. The OpenGL ES 2.0 spec says so in section 2.9.1.
Do you really have all those hard-coded constants in your code? Where did that 76 come from?
If you want help debugging, you need to post the code that initializes your buffers (the code that calls glGenBuffers and glBufferData). You should also post the stack trace of EXC_BAD_ACCESS.
It might also be easier to debug if you drew something simpler, like one triangle, instead of 3596 triangles.

iOS and multiple OpenGL views

I'm currently developing an iPad app which uses OpenGL to draw some very simple (no more than 1000 or 2000 vertices) rotating models in multiple OpenGL views.
There are currently 6 views in a grid, each one running its own display link to update the drawing. Due to the simplicity of the models, it's by far the simplest method; I don't have the time to code a full OpenGL interface.
Currently, it's doing well performance-wise, but there are some annoying glitches. The first 3 OpenGL views display without problems, and the last 3 only display a few triangles (while still retaining the ability to rotate the model). Also there are some cases where the glDrawArrays call is going straight into EXC_BAD_ACCESS (especially on the simulator), which tell me there is something wrong with the buffers.
What I checked (as well as double- and triple-checked) is :
Buffer allocation seems OK
All resources are freed on dealloc
The instruments show some warnings, but nothing that seems related
I'm thinking it's probably related to my having multiple views drawing at the same time, so is there any known thing I should have done there? Each view has its own context, but perhaps I'm doing something wrong with that...
Also, I just noticed that in the simulator, the afflicted views are flickering between the right drawing with all the vertices and the wrong drawing with only a few.
Anyway, if you have any ideas, thanks for sharing!
Okay, I'm going to answer my own question since I finally found what was going on. It was a small missing line that was causing all those problems.
Basically, to have multiple OpenGL views displayed at the same time, you need :
Either use the same context for every view. Here, you have to take care not to draw from multiple threads at the same time (i.e. lock the context somehow, as explained in this answer), and you have to re-bind the frame- and render-buffers on each frame.
Or, use a different context for each view. Then, you have to re-set the current context on each frame, because other display links could (and would, as in my case) cause your OpenGL calls to use the wrong data. Also, there is no need to re-bind the frame- and render-buffers since your context is preserved.
Also, call glFlush() after each frame, to tell the GPU to finish rendering each frame fully.
In my case (the second one), the code for rendering each frame (in iOS) looks like :
- (void) drawFrame:(CADisplayLink*)displayLink {
// Set current context, assuming _context
// is the class ivar for the OpenGL Context
[EAGLContext setCurrentContext:_context];
// Clear whatever you want
glClear (GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
// Do matrix stuff
...
glUniformMatrix4fv (...);
// Set your viewport
glViewport (0, 0, self.frame.size.width, self.frame.size.height);
// Bind object buffers
glBindBuffer (GL_ARRAY_BUFFER, _vertexBuffer);
glVertexAttribPointer (_glVertexPositionSlot, 3, ...);
// Draw elements
glDrawArrays (GL_TRIANGLES, 0, _currentVertexCount);
// Discard unneeded depth buffer
const GLenum discard[] = {GL_DEPTH_ATTACHMENT};
glDiscardFramebufferEXT (GL_FRAMEBUFFER, 1, discard);
// Present render buffer
[_context presentRenderbuffer:GL_RENDERBUFFER];
// Unbind and flush
glBindBuffer (GL_ARRAY_BUFFER, 0);
glFlush();
}
EDIT
I'm going to edit this answer, since I found out that running multiple CADisplayLinks could cause some issues. You have to make sure to set the frameInterval property of your CADisplayLink instance to something other than 0 or 1. Else, the run loop will only have time to call the first render method, and then it'll call it again, and again. In my case, that was why only one object was moving. Now, it's set to 3 or 4 frames, and the run loop has time to call all the render methods.
This applies only to the application running on the device. The simulator, being very fast, doesn't care about such things.
It gets tricky when you want multiple UIViews that are OpenGL views; on this site you should be able to read all about it: Using multiple openGL Views and uikit
