Understanding Core Video's CVPixelBufferPool and CVOpenGLESTextureCache semantics - ios

I'm refactoring my iOS OpenGL-based rendering pipeline. The pipeline consists of many rendering steps, hence I need a lot of intermediate textures to render to and read from. Those textures are of various types (unsigned byte and half float) and may possess a different number of channels.
To save memory and allocation effort, I recycle textures that were used by previous steps in the pipeline and are no longer needed. In my previous implementation I did that on my own.
In my new implementation I want to use the APIs provided by the Core Video framework for that; especially since they provide much faster access to the texture memory from the CPU. I understand that the CVOpenGLESTextureCache allows me to create OpenGL textures out of CVPixelBuffers that can be created directly or using a CVPixelBufferPool. However, I am unable to find any documentation describing how they really work and how they play together.
Here are the things I want to know:
To get a texture from the CVOpenGLESTextureCache I always need to provide a pixel buffer. Why is it called a "cache" if I need to provide the memory anyway and am not able to retrieve an old, unused texture?
The CVOpenGLESTextureCacheFlush function "flushes currently unused resources". How does the cache know if a resource is "unused"? Are textures returned to the cache when I release the corresponding CVOpenGLESTextureRef? The same question applies to the CVPixelBufferPool.
Am I able to maintain textures with different properties (type, # channels, ...) in one texture cache? Does it know whether a texture can be re-used or needs to be created, depending on my request?
CVPixelBufferPools seem to be able to manage only buffers of the same size and format. This means I need to create one dedicated pool for each texture configuration I'm using, correct? (A sketch of the pool setup I have in mind is at the end of this post.)
I'd be really happy if at least some of those questions could be clarified.
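For reference, here is roughly the per-configuration pool setup I have in mind. This is only a sketch using the plain C Core Video API, with error handling omitted; the IOSurface attribute is there because the buffers are later handed to a CVOpenGLESTextureCache:

    #include <CoreVideo/CoreVideo.h>

    /* One pool per (width, height, pixel format) combination, e.g. 8-bit BGRA. */
    static CVPixelBufferPoolRef createPool(int width, int height, OSType pixelFormat)
    {
        CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(
            kCFAllocatorDefault, 4,
            &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

        CFNumberRef w   = CFNumberCreate(NULL, kCFNumberIntType, &width);
        CFNumberRef h   = CFNumberCreate(NULL, kCFNumberIntType, &height);
        CFNumberRef fmt = CFNumberCreate(NULL, kCFNumberSInt32Type, &pixelFormat);
        /* An empty dictionary is enough to request IOSurface-backed buffers. */
        CFDictionaryRef ioSurfaceProps = CFDictionaryCreate(
            NULL, NULL, NULL, 0,
            &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

        CFDictionarySetValue(attrs, kCVPixelBufferWidthKey, w);
        CFDictionarySetValue(attrs, kCVPixelBufferHeightKey, h);
        CFDictionarySetValue(attrs, kCVPixelBufferPixelFormatTypeKey, fmt);
        CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, ioSurfaceProps);

        CVPixelBufferPoolRef pool = NULL;
        CVPixelBufferPoolCreate(kCFAllocatorDefault,
                                NULL,   /* pool attributes */
                                attrs,  /* pixel buffer attributes */
                                &pool);

        CFRelease(w); CFRelease(h); CFRelease(fmt);
        CFRelease(ioSurfaceProps); CFRelease(attrs);
        return pool;
    }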

Yes, well, you will not actually be able to find anything. I looked and looked, and the short answer is that you just need to test things out to see how the implementation functions. You can find my blog post on the subject along with example code at opengl_write_texture_cache. Basically, the way it seems to work is that the texture cache object "holds" on to the association between the buffer (in the pool) and the OpenGL texture that is bound when a triangle render is executed. The result is that the same buffer should not be returned by the pool until after OpenGL is done with it. In the weird case of some kind of race condition, the pool might grow by one buffer to account for a buffer that is held too long. What is really nice about the texture cache API is that one only needs to write to the data buffer once, as opposed to calling an API like glTexImage2D(), which would "upload" the data to the graphics card.
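To make the flow concrete, here is a minimal sketch of dequeuing a buffer from a pool and wrapping it in a GL texture through the cache. It uses the standard Core Video / OpenGL ES C calls; the cache and pool are assumed to exist already (the cache created with CVOpenGLESTextureCacheCreate against your EAGL context), and error handling is omitted:

    #include <CoreVideo/CoreVideo.h>
    #include <CoreVideo/CVOpenGLESTextureCache.h>
    #include <OpenGLES/ES2/gl.h>
    #include <OpenGLES/ES2/glext.h>

    /* Assumes an OpenGL ES 2.0 context is current on this thread and that the
     * pool produces IOSurface-backed BGRA buffers of the given size. */
    static GLuint textureFromPool(CVOpenGLESTextureCacheRef cache,
                                  CVPixelBufferPoolRef pool,
                                  GLsizei width, GLsizei height,
                                  CVPixelBufferRef *bufferOut,
                                  CVOpenGLESTextureRef *textureOut)
    {
        /* 1. Dequeue (or allocate) a pixel buffer from the pool. */
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, bufferOut);

        /* 2. Wrap it in a GL texture. No glTexImage2D() upload happens here:
         *    the texture aliases the pixel buffer's memory. */
        CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache,
            *bufferOut,
            NULL,              /* texture attributes */
            GL_TEXTURE_2D,
            GL_RGBA,           /* internal format */
            width, height,
            GL_BGRA,           /* matches the BGRA pixel buffer */
            GL_UNSIGNED_BYTE,
            0,                 /* plane index */
            textureOut);

        GLuint name = CVOpenGLESTextureGetName(*textureOut);
        glBindTexture(CVOpenGLESTextureGetTarget(*textureOut), name);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        return name;
    }

After rendering, you release both the CVOpenGLESTextureRef and the CVPixelBufferRef with CFRelease(), which (together with an occasional CVOpenGLESTextureCacheFlush(cache, 0)) appears to be what makes the buffer eligible to be handed out by the pool again, subject to the in-flight behavior described above.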

Related

WebGL Constructive Solid Geometry to Static Vertices

I'm super new to 3D graphics (I've been trying to learn actual WebGL instead of using a framework) and I'm now in the constructive solid geometry phase. I know sites like tinkercad.com use CSG with WebGL, but they have things set up so that your design is calculated every time you load the page, instead of doing the subtraction, addition and intersection of primitive objects once and then storing the resulting vertices for later use. I'm curious if anybody knows why they do things that way (maybe just to conserve resources on the server?) and whether there isn't some straightforward way of extracting those vertices right before the draw call. Maybe a built-in function of WebGL? I haven't found anything so far; when I try logging the object data from gl.bufferData() I get multiple Float32Arrays (one for each object that was unioned together) instead of one complete set of vertices.
By the way, the only GitHub project I've found with CSG for WebGL is https://github.com/evanw/csg.js/ and it's pretty straightforward; however, it uses a framework, and I was curious whether you know of any other CSG WebGL code out there that doesn't rely on one. I'd like to write it myself either way, but just being able to see what others have done would be nice.

OpenGL (ES 2.0) VBO Performances in a Shared Memory Architecture

I am a desktop GL developer, and I am starting to explore the mobile world.
To avoid misunderstandings, or welcome-but-trivial replies, I can humbly say that I am pretty well aware of the GL and GL|ES machinery.
The short question is: if we are using GL|ES 2.0 in a shared memory architecture, what's the point behind using VBOs against client-side arrays?
More in detail:
Vertex buffers are raw chunks of memory; the driver cannot optimize anything in any way, because the access pattern depends on: 1) how the application configures the vertex data layout, 2) how a vertex shader consumes the buffer content, and 3) the fact that lots of vertex shaders may operate in different ways and source the same buffer differently.
Alignment: individual VBO storage could start at addresses that are optimal for the underlying GL system; what if I just force client-side array allocations to these boundaries (e.g., by respecting alignment best practices)?
Tile-Based Rendering vs. Immediate Mode architectures should not come into play: to my understanding, this is not related to my question (i.e., memory access).
I understand that using VBOs can have your code run better/faster in future platforms/hardware without modifying it, but this is not the focus of this question.
Alongside, I also realize that using VBOs in a shared memory architecture doubles memory usage (if you, for some reason, have to keep vertex data at your disposal), and it costs you a memcpy of the data.
As with interleaved vertex arrays, VBO usage has received a great deal of hype in developers' forums/blogs/official tech notes without any data (i.e., benchmarks) supporting those statements.
Is VBO usage worth it on shared memory architectures?
Do client-side arrays work well?
What do you think/know about this?
I can report that using VBOs to store vertex data on Android devices gave me zero performance improvement. I tried it on Adreno, Mali-400 and PowerVR GPUs. However, we use VBOs anyway, since it is considered best practice for OpenGL ES.
You can find notes about this in our article (in the Vertex Buffer Objects paragraph).
According to this report, even on a shared memory architecture it depends on both the OpenGL implementation (some VBO work is secretly done on the CPU) and the size of the VBOs:
http://sarofax.wordpress.com/2011/07/10/vbo-vertex-buffer-object-limitations-on-ios-devices/
I will tell you what I know about the iOS platform.
VBOs really do improve your performance.
A VBO is perfect if you have static geometry: once the data is copied, there is no additional overhead on each draw call. Client-side arrays (CA) copy your data from client memory to "GPU memory" on every draw call, and the driver may also realign the data if you did not do so yourself (see the sketch after this list).
A VBO can be mapped into client memory via glMapBuffer (glMapBufferOES on ES 2.0); the call itself has almost no overhead, but you should remember that after mapping/unmapping a buffer it is better to wait about two frames before using it again, to avoid synchronization stalls.
Apple engineers claim that VBOs will perform better than client-side arrays on SGX hardware even if you re-upload the data every frame; I don't know the details.
VBOs are best practice. Client-side arrays are effectively deprecated, so it's better to keep pace with modern trends and stay as cross-platform as possible.
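A minimal sketch of the two submission paths in OpenGL ES 2.0, for reference; the attribute location and vertex data are hypothetical inputs, and error handling is omitted:

    #include <OpenGLES/ES2/gl.h>

    /* Client-side array path: the driver must copy `vertices` out of client
     * memory on every draw call. */
    static void drawClientSideArrays(GLuint positionAttrib,
                                     const GLfloat *vertices, GLsizei vertexCount)
    {
        glVertexAttribPointer(positionAttrib, 3, GL_FLOAT, GL_FALSE,
                              3 * sizeof(GLfloat), vertices);
        glEnableVertexAttribArray(positionAttrib);
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    }

    /* VBO path: one copy at load time ... */
    static GLuint uploadOnce(const GLfloat *vertices, GLsizei vertexCount)
    {
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER,
                     (GLsizeiptr)(vertexCount * 3 * sizeof(GLfloat)),
                     vertices, GL_STATIC_DRAW);
        return vbo;
    }

    /* ... then each frame sources the attribute from the buffer object,
     * passing a byte offset instead of a client pointer (no per-frame copy). */
    static void drawFromVBO(GLuint positionAttrib, GLuint vbo, GLsizei vertexCount)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(positionAttrib, 3, GL_FLOAT, GL_FALSE,
                              3 * sizeof(GLfloat), (const GLvoid *)0);
        glEnableVertexAttribArray(positionAttrib);
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    }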

iOS CGImageRef Pixel Shader

I am working on an image processing app for the iOS, and one of the various stages of my application is a vector based image posterization/color detection.
Now, I've written code that can determine the posterized color per pixel, but going through each and every pixel of an image would, I imagine, be quite demanding for the processor of the iOS device. As such, I was wondering if it is possible to use the graphics processor instead.
I'd like to create a sort of "pixel shader" which uses OpenGL-ES, or some other rendering technology to process and posterize the image quickly. I have no idea where to start (I've written simple shaders for Unity3D, but never done the underlying programming for them).
Can anyone point me in the correct direction?
I'm going to come at this sideways and suggest you try out Brad Larson's GPUImage framework, which describes itself as "a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies". I haven't used it, and I assume you'll need to do some GL reading to add your own filtering, but it handles so much of the boilerplate and provides so many prepackaged filters that it's definitely worth looking into. It doesn't sound like you're otherwise particularly interested in OpenGL itself, so there's no real reason to learn it from the ground up.
I will add the sole consideration that under iOS 4 I found it often faster to do work on the CPU (using GCD to distribute it amongst cores) than on the GPU in cases where I needed to read the results back at the end for any sort of serial access. That's because OpenGL is generally designed so that you upload an image and it converts it into whatever format it wants; if you want to read it back, it converts it to the one format you expect to receive it in and copies it to where you want it. So what you save on the GPU you pay for again because the GL driver has to shunt and rearrange memory. As of iOS 5, Apple has introduced a special mechanism that effectively gives you direct CPU access to OpenGL's texture store, so that's probably not a concern any more.
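If you do end up going the OpenGL ES route, the posterize step itself is only a few lines of fragment shader. A minimal sketch, written as GLSL ES source inside a C string for glShaderSource(); the varying/uniform names (v_texCoord, u_texture, u_levels) are made up:

    /* Quantizes each RGB channel to `u_levels` bands; compile this on a
     * GL_FRAGMENT_SHADER object as usual and render a full-screen quad
     * textured with the source image. */
    static const char *kPosterizeFragmentShader =
        "precision mediump float;                                         \n"
        "varying vec2 v_texCoord;                                         \n"
        "uniform sampler2D u_texture;                                     \n"
        "uniform float u_levels;       /* e.g. 4.0 */                     \n"
        "void main() {                                                    \n"
        "    vec4 c = texture2D(u_texture, v_texCoord);                   \n"
        "    gl_FragColor = vec4(floor(c.rgb * u_levels) / u_levels, c.a);\n"
        "}                                                                \n";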

OpenCL sub-buffers: why are they important?

I am trying to implement multi-GPU OpenCL code. In my model, the GPUs have to communicate and exchange data.
I found (I don't remember where, it has been some time) that one solution is to deal with sub-buffers. Can anybody explain, as simply as possible, why sub-buffers are important in OpenCL? As far as I can understand, one can do exactly the same using only buffers.
Thanks a lot,
Giorgos
Supplementary Question:
What is the best way to exchange data between GPUs?
I am not sure (or I do not know) how sub-buffers will solve your problem when dealing with multiple GPUs. AFAIK, sub-buffers provide a view into a buffer, i.e. a single buffer can be divided into chunks of smaller buffers (sub-buffers), providing a layer of software abstraction. Sub-buffers are advantageous in cases where you need the first element of a region to sit at offset zero, instead of passing an explicit offset into the kernel.
To address the multi-GPU / multi-device problem, OpenCL 1.2 provides an API with which you can migrate memory objects directly from one GPU to another: the clEnqueueMigrateMemObjects call. http://www.khronos.org/registry/cl/sdk/1.2/docs/man/xhtml/clEnqueueMigrateMemObjects.html
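To illustrate both points, here is a minimal OpenCL 1.2 sketch in C; the parent buffer, its half size and the second GPU's command queue are assumed to exist, and error handling is omitted:

    #include <CL/cl.h>

    /* Split `buffer` (created with CL_MEM_READ_WRITE) into two halves and
     * migrate the second half to the device behind `queue_gpu1`. Region
     * offsets must respect CL_DEVICE_MEM_BASE_ADDR_ALIGN of the devices. */
    static void splitAndMigrate(cl_mem buffer, size_t half_size,
                                cl_command_queue queue_gpu1,
                                cl_mem *sub0_out, cl_mem *sub1_out)
    {
        cl_int err = CL_SUCCESS;
        cl_buffer_region region0 = { 0,         half_size };
        cl_buffer_region region1 = { half_size, half_size };

        *sub0_out = clCreateSubBuffer(buffer, CL_MEM_READ_WRITE,
                                      CL_BUFFER_CREATE_TYPE_REGION, &region0, &err);
        *sub1_out = clCreateSubBuffer(buffer, CL_MEM_READ_WRITE,
                                      CL_BUFFER_CREATE_TYPE_REGION, &region1, &err);

        /* Flags = 0 means "migrate to the device associated with this queue". */
        clEnqueueMigrateMemObjects(queue_gpu1, 1, sub1_out, 0, 0, NULL, NULL);
    }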

Newbie to GPU programming: what to learn?

I am rendering a certain scene to an off-screen frame buffer (FBO) and then I'm reading the rendered image using glReadPixels() for processing on the CPU. The processing involves some very simple scanning routines and extraction of data.
After profiling I realized that most of what my application does is spend time in glReadPixels() - more than 50% of the time. So the natural step is to move the processing to the GPU so that the data would not have to be copied.
So my question is - what would be the best way to program such a thing to the GPU?
GLSL?
CUDA?
Anything else I'm not currently aware of?
The main requirement is that it has access to the rendered off-screen frame buffers (or the texture data, since it is possible to render to a texture) and that it can output some information to the CPU, say on the order of 1-2 KB per frame.
You might find the answers in the "Intro to GPU programming" questions useful.
-Adam
There are a number of pointers to getting started with GPU programming in other questions, but if you have an application that is already built using OpenGL, then probably your question really is "which one will interoperate with OpenGL"?
After all, your whole point is to avoid the overhead of reading your FBO back from the GPU to the CPU with glReadPixels(). If, for example, you had to read it back anyway, then copy the data into a CUDA buffer and transfer it back to the GPU using CUDA APIs, there wouldn't be much point.
So you need a GPGPU package that will take your OpenGL FBO object as an input directly, without any extra copying.
That would probably rule out everything except GLSL.
I'm not 100% sure whether CUDA has any way of operating directly on an OpenGL buffer object, but I don't think it has that feature.
I am sure that ATI's Stream SDK doesn't do that. (Although it will interoperate with DirectX.)
I doubt that the DirectX 11 "technology preview" with compute shaders has that feature, either.
EDIT: Follow-up: it looks like CUDA, at least the most recent version, has some support for OpenGL interoperability. If so, that's probably your best bet.
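For what it's worth, the CUDA runtime exposes this interop through its graphics-resource API. A rough sketch in C (the GL texture ID is assumed to be your FBO's color attachment; error checking omitted):

    #include <cuda_runtime.h>
    #include <cuda_gl_interop.h>

    /* Register the FBO's color texture with CUDA, map it, and obtain a
     * cudaArray that a kernel can read, avoiding the glReadPixels() round trip. */
    static void readFboWithCuda(GLuint colorTextureId)
    {
        cudaGraphicsResource_t resource = NULL;
        cudaGraphicsGLRegisterImage(&resource, colorTextureId, GL_TEXTURE_2D,
                                    cudaGraphicsRegisterFlagsReadOnly);

        cudaGraphicsMapResources(1, &resource, 0);
        cudaArray_t array = NULL;
        cudaGraphicsSubResourceGetMappedArray(&array, resource, 0, 0);

        /* ... bind `array` to a texture/surface object and launch the kernel
         * that does the scanning, writing its 1-2 KB result to a small
         * device buffer ... */

        cudaGraphicsUnmapResources(1, &resource, 0);
        cudaGraphicsUnregisterResource(resource);
    }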
I recently found this: Modern GPU.
You may find OpenAI Triton useful.
