Image processing using OpenCV on OpenGL graphics

As the title says, I want to know whether there is a way to process graphics created by OpenGL using OpenCV.
I am displaying thousands of points in real time using OpenGL. Now I want to cluster those points and later do point tracking.
I have found this but couldn't understand it well.
Apart from that, on this page someone mentioned that "OpenCV generally operates on real image data, and wouldn't operate on graphics generated by OpenGL."
Is it true?
Below is a screenshot of the real-time output.

Use glReadPixels to copy the rendered result to the main memory. Then you can create a cv::Mat out of the buffer.
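A minimal sketch of that, assuming a desktop-GL context and an RGBA framebuffer whose width and height you already know (the function name and headers are illustrative):

    #include <opencv2/opencv.hpp>
    #include <GL/gl.h>   // or the GL header appropriate for your platform
    #include <vector>

    // Copy the currently bound framebuffer into a cv::Mat.
    cv::Mat readFramebufferToMat(int width, int height)
    {
        std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4);

        // Tightly packed rows, so no 4-byte row-alignment surprises.
        glPixelStorei(GL_PACK_ALIGNMENT, 1);

        // Copy the rendered result from the GPU/driver into main memory.
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

        // Wrap the buffer, then clone so the Mat owns its own copy of the data.
        cv::Mat wrapped(height, width, CV_8UC4, pixels.data());
        cv::Mat img = wrapped.clone();

        // OpenGL's origin is bottom-left, OpenCV's is top-left: flip vertically.
        cv::flip(img, img, 0);

        // Most OpenCV routines expect BGR(A) channel order.
        cv::cvtColor(img, img, cv::COLOR_RGBA2BGRA);
        return img;
    }

From there you can run clustering (e.g. cv::kmeans) or any other OpenCV processing on the returned image.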

Related

MTLTexture vs CGImageRef

What is the main difference between MTLTexture vs CGImageRef? When do we need to use MTLTexture instead of CGImageRef (and vice versa)?
I have an app (say a video game) that draws everything by itself onto a dedicated surface, including animation at 60 fps (so I need to redraw the surface every 16 ms). I don't know the most efficient way to build such an app using Metal.
First of all, MTLTexture comes from a low-level graphics API. An MTLTexture refers to an "image" that resides in memory accessible to the GPU (not necessarily on the GPU itself). You can then write a program that uses Metal, specifically render (MTLRenderPipelineState) or compute (MTLComputePipelineState) pipeline states containing shaders (programs that run on the GPU), to read textures, sample them, write to them, and use them as attachments (i.e. output rendering results to them). Textures can also be copied to buffers (MTLBuffer) and to other textures if you want to read texture data back on the CPU. But MTLTexture is mostly intended to be used by the GPU rather than the CPU. Also, an MTLTexture is not limited to being 2D; it can also be a cube texture or even a 3D texture.
CGImage, on the other hand, comes from a higher-level API (Core Graphics, also known as Quartz 2D) that is intended for 2D use. You don't need shaders or GPU pipelines to create or modify CGImages, and there are many functions to work with these images out of the box.
I would say, if you have a 3D video game, you can check out Metal, but it's a low level API, and setting up Metal is a much more involved process than setting up OpenGL, for example. You can't use Core Graphics for 3D games as-is. If Metal seems too hard, you can check out higher-level APIs from Apple, such as SceneKit, which are also intended for game development.
I can't say much about 2D game development, but you can definitely use Metal for it; it might just be a bit of overkill.
In conclusion, you need to find a balance between complexity and control and choose what best suits you.

OpenCV Histogram Backprojection Alternative

I start with creating an initial mask of an object in an image. Using this mask, a histogram is created which is then used to process subsequent images.
I use the calcBackProject function to find pixels in the image that belong to the histogram. The problem I am having is that too much of the image is being accepted because certain objects are similar to the color of the initial object. Is there any alternative to calcBackProject? In my application, I can't afford to get objects that do not belong. All of this assumes that I have a perfect initial mask.
There are many ways to track an object, and it can be very difficult. Within OpenCV you may want to try the meanshift/camshift trackers to see if they do any better (a small CamShift sketch follows the links below). If not, you may have to stray outside the OpenCV world and try tracking-learning-detection frameworks.
Meanshift/Camshift/etc in OpenCV
http://docs.opencv.org/modules/video/doc/video.html
http://docs.opencv.org/trunk/doc/py_tutorials/py_video/py_meanshift/py_meanshift.html
Tracking-Learning-Detection in C++:
STRUCK: http://www.samhare.net/research/struck (uses opencv)
Tracking-Learning-Detection in Matlab:
Predator: http://personal.ee.surrey.ac.uk/Personal/Z.Kalal/tld.html
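For reference, a minimal CamShift sketch in OpenCV (not the asker's code; the camera index and the initial window are placeholder assumptions). It builds a hue histogram from the initial region and still uses back-projection, but CamShift confines the search to a window instead of accepting every similar-coloured pixel in the frame:

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::VideoCapture cap(0);                        // assumed camera index
        cv::Rect track_window(200, 150, 100, 100);      // assumed initial object box

        cv::Mat frame, hsv, mask, hist;
        cap >> frame;
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);

        // Hue histogram of the initial window, ignoring dark/unsaturated pixels
        // that carry little colour information.
        cv::inRange(hsv, cv::Scalar(0, 60, 32), cv::Scalar(180, 255, 255), mask);
        int channels[] = {0};
        int histSize[] = {30};
        float hueRange[] = {0, 180};
        const float* ranges[] = {hueRange};
        cv::Mat roiHsv = hsv(track_window), roiMask = mask(track_window);
        cv::calcHist(&roiHsv, 1, channels, roiMask, hist, 1, histSize, ranges);
        cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);

        cv::TermCriteria term(cv::TermCriteria::EPS | cv::TermCriteria::COUNT, 10, 1.0);
        while (cap.read(frame)) {
            cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
            cv::Mat backproj;
            cv::calcBackProject(&hsv, 1, channels, hist, backproj, ranges);

            // CamShift refines the window position as well as its size/orientation.
            cv::RotatedRect box = cv::CamShift(backproj, track_window, term);
            cv::ellipse(frame, box, cv::Scalar(0, 0, 255), 2);
            cv::imshow("tracking", frame);
            if (cv::waitKey(30) == 27) break;           // Esc to quit
        }
        return 0;
    }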

Rendering "layers" in OpenGL ES

I'm making an iOS app and I want to be able to render with individual "layers" so that I can do blending between them and use shaders on each individually before blending them all together and rendering to the screen.
I understand that I will be rendering to Textures and then rendering these textures on top of each other in the framebuffer, but I am not understanding clearly what code needs to be written to follow this procedure. In another answer I found what I want to do, but I don't know what code accomplishes this task: How to achieve multi-layered drawing with OpenGL ES on iOS? (For example how do I "Bind texture 1, then draw it"? What does it mean to "Attach texture 1"?)
I've also looked at Apple's documentation regarding this technique but it isn't very clear about the steps or code for the actual rendering part of the process.
How would I go about doing this? (Hopefully with code examples for each step, because I haven't been able to follow spotty instructions that expect me to already know what's needed at each step.)
Here is an example of what I want to do with this. The spheres would be rendered into a "layer" or Texture2D, which I would then pass through the shader and render on top of an already partially rendered scene. I don't know exactly what kind of OpenGL code could do that.
You're looking in the wrong place. To use OpenGL, you need to study OpenGL itself, not anything else. Apple doesn't provide its own OpenGL documentation because it's an open standard; the specs are freely published, and Apple assumes you're already familiar with them.
OpenGL ES 2.0 spec
manual pages
I think you are having trouble because you don't yet have an understanding of GL-specific terms. The spec describes them very well and clearly, so please read the spec. That will save you a LOT of time; otherwise the trouble will keep coming back.
Also, I'd like to point out a site with a very nice conceptual description of the OpenGL pipeline.
http://www.songho.ca/opengl/
This site targets desktop GL, so some of the API differs slightly; please focus on the conceptual understanding.
For more tutorials, Google with proper keywords like "OpenGL ES 2.0 tutorial" (or how-to). Here's an example link that should be helpful, and there are many more tutorials out there. If the spec is too dry, it's also good to have some fun with tutorials.
Update
I'd like to say one more thing. IMO, OpenGL is all about drawing triangles. Everything is ultimately converted into triangles in 3D space to represent some shape; almost everything else exists only for optimization. And in most cases GL chooses batch processing as its main optimization strategy, because the overhead of individual draw calls is not affordable for most games.
It's hard to start with OpenGL ES because it's an optimized version of desktop GL, so all the convenient or easy drawing features are stripped out. The same is true of recent versions of desktop GL.
So there's no drawOneTriangle function. Instead, GL gives you something like:
make a buffer,
put a list of many triangles in it,
select the buffer for the next drawing,
draw all the triangles in the current buffer at once,
delete the buffer.
By using buffers, you don't need to send duplicated data from the CPU to the GPU, and GL uses this approach everywhere. For example, there is no drawOneTriangleWithTexture function for using textures either. Instead, you have to (a short code sketch of both workflows follows this list):
make a buffer,
put a list of many pixels (a bitmap) in it,
select the buffer for the next drawing,
draw all the triangles with the texture pixel data in the current buffers,
delete the buffer.
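A small sketch of both lists in OpenGL ES 2.0 (error checking and shader setup omitted; positionAttrib, texCoordAttrib, texWidth, texHeight and pixelData are assumed to exist already):

    GLuint vbo = 0, tex = 0;

    // 1. Make a buffer and put the triangle list in it.
    GLfloat vertices[] = {
        // x,     y,    u,    v
        -0.5f, -0.5f, 0.0f, 0.0f,
         0.5f, -0.5f, 1.0f, 0.0f,
         0.0f,  0.5f, 0.5f, 1.0f,
    };
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);                  // 2. select it for drawing
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // Same idea for the texture: upload the pixel data once, reuse it every frame.
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixelData);  // assumed pixel buffer

    // 3. Draw all triangles in the currently bound buffer at once.
    glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE,
                          4 * sizeof(GLfloat), (void*)0);
    glVertexAttribPointer(texCoordAttrib, 2, GL_FLOAT, GL_FALSE,
                          4 * sizeof(GLfloat), (void*)(2 * sizeof(GLfloat)));
    glEnableVertexAttribArray(positionAttrib);
    glEnableVertexAttribArray(texCoordAttrib);
    glDrawArrays(GL_TRIANGLES, 0, 3);

    // 4. Delete the objects when they are no longer needed.
    glDeleteBuffers(1, &vbo);
    glDeleteTextures(1, &tex);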
Everything that seems overly complex in GL exists for optimization. It may look weird at first, but there are usually very good reasons for the design.
Update 2
Now I think you're looking for the render-to-texture feature. (Well, actually you already mentioned this…)
You can use a rendered image as a texture source. To do this,
you need to attach a texture object to the framebuffer, rather than a renderbuffer object, using a function like glFramebufferTexture (glFramebufferTexture2D in ES 2.0).
Once you have rendered into the texture, switch the framebuffer to the final buffer, bind the texture you just drew (and any others), and perform the final drawing. You need two framebuffers: one for the render-to-texture pass and one for the final output.
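A minimal render-to-texture sketch in OpenGL ES 2.0 terms (width, height, defaultFramebuffer and the actual draw calls are placeholders; on iOS the "final" framebuffer is usually the one your view/EAGL layer created):

    GLuint fbo = 0, layerTex = 0;

    // Create the texture that will receive the first "layer".
    glGenTextures(1, &layerTex);
    glBindTexture(GL_TEXTURE_2D, layerTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);       // no initial data
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // Attach it to an offscreen framebuffer ("attach texture 1").
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, layerTex, 0);

    // Pass 1: render the layer (e.g. the spheres) into the texture.
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the layer's geometry here ...

    // Pass 2: switch to the final framebuffer, bind the texture we just drew
    // ("bind texture 1"), and composite it with a fullscreen quad + shader.
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);   // assumed handle
    glBindTexture(GL_TEXTURE_2D, layerTex);
    // ... draw a fullscreen quad that samples layerTex and blends it ...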

Photo booth in iOS. Using OpenCV or OpenGL ES?

I want to make an application that filters video, like Apple's Photo Booth app.
How can I do that?
Should I use OpenCV, OpenGL ES, or something else?
OpenCV and OpenGL have very different purposes:
OpenCV is a cross-platform computer vision library. It allows you to easily work with image and video files and provides many tools and methods to handle them, apply filters and other image processing techniques, and do some more cool stuff with images.
OpenGL is a cross-platform API to produce 2D/3D computer graphics. It is used to draw complex three-dimensional scenes from simple primitives.
If you want to perform cool effects on images, OpenCV is the way to go, since it provides tools and effects that can easily be combined to achieve the result you are looking for. This approach doesn't stop you from processing the image with OpenCV and then rendering the result in an OpenGL window (if you have to). Remember, they have different purposes, and every now and then somebody uses them together.
The point is that the effects you want to apply to the image should be done with OpenCV or any other image processing library.
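As a tiny illustration of that point (not from the answer above), here is how a photo-booth-style effect could be applied to camera frames with OpenCV; the specific effect is arbitrary:

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::VideoCapture cap(0);                 // assumed camera index
        cv::Mat frame, effect;
        while (cap.read(frame)) {
            // "Thermal camera" style effect: blur slightly, then apply a colour map.
            cv::GaussianBlur(frame, effect, cv::Size(5, 5), 0);
            cv::applyColorMap(effect, effect, cv::COLORMAP_JET);
            cv::imshow("photo booth", effect);
            if (cv::waitKey(30) == 27) break;    // Esc to quit
        }
        return 0;
    }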
Actually karlphillip, although what you have said is correct, OpenGL can also be used to perform hardware-accelerated image processing.
Apple even has an OpenGL sample project called GLImageProcessing that does hardware-accelerated brightness, contrast, saturation, hue, and sharpness adjustments.

Example code for Resizing an image using DirectX

I know it is possible, and a lot faster than using GDI+. However, I haven't found any good example of using DirectX to resize an image and save it to disk. I have implemented this over and over in GDI+; that's not difficult. But GDI+ does not use any hardware acceleration, and I was hoping to get better performance by tapping into the graphics card.
You can load the image as a texture, texture-map it onto a quad, and draw that quad at any size on the screen. That will do the scaling. Afterwards you can grab the pixel data from the screen, store it in a file, or process it further.
It's easy. The basic texturing DirectX examples that come with the SDK can be adjusted to do just this.
However, it is slow. Not the rendering itself, but the transfer of pixel data from the screen to a memory buffer.
Imho it would be much simpler and faster to just write a little code that resizes an image using bilinear scaling from one buffer to another.
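A hedged sketch of that suggestion: a plain C++ bilinear resize from one tightly packed 32-bit (e.g. BGRA) pixel buffer to another, with no DirectX or GDI+ involved:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    std::vector<uint32_t> resizeBilinear(const std::vector<uint32_t>& src,
                                         int srcW, int srcH, int dstW, int dstH)
    {
        std::vector<uint32_t> dst(static_cast<size_t>(dstW) * dstH);
        for (int y = 0; y < dstH; ++y) {
            // Map the destination pixel centre back into source coordinates.
            float fy = std::max(0.0f, (y + 0.5f) * srcH / dstH - 0.5f);
            int   y0 = std::min(srcH - 1, static_cast<int>(fy));
            int   y1 = std::min(srcH - 1, y0 + 1);
            float wy = std::min(1.0f, fy - y0);
            for (int x = 0; x < dstW; ++x) {
                float fx = std::max(0.0f, (x + 0.5f) * srcW / dstW - 0.5f);
                int   x0 = std::min(srcW - 1, static_cast<int>(fx));
                int   x1 = std::min(srcW - 1, x0 + 1);
                float wx = std::min(1.0f, fx - x0);

                uint32_t out = 0;
                for (int c = 0; c < 4; ++c) {            // blend each 8-bit channel
                    int shift = 8 * c;
                    float p00 = (src[y0 * srcW + x0] >> shift) & 0xFF;
                    float p10 = (src[y0 * srcW + x1] >> shift) & 0xFF;
                    float p01 = (src[y1 * srcW + x0] >> shift) & 0xFF;
                    float p11 = (src[y1 * srcW + x1] >> shift) & 0xFF;
                    float top = p00 + (p10 - p00) * wx;
                    float bot = p01 + (p11 - p01) * wx;
                    uint32_t v = static_cast<uint32_t>(top + (bot - top) * wy + 0.5f);
                    out |= (v & 0xFFu) << shift;
                }
                dst[static_cast<size_t>(y) * dstW + x] = out;
            }
        }
        return dst;
    }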
Do you really need to use DirectX? GDI+ does the job well for resizing images. In DirectX you don't really need to resize images, as you will most likely be displaying them as textures. Since textures can only be applied to 3D objects (triangles/polygons/meshes), the size of the 3D object and the viewport determines the actual displayed image size. If you need to scale the texture within the 3D object, just adjust the texture coordinates or the texture matrix.
To manipulate the texture, you can use alpha blending, masking, and all sorts of texture manipulation techniques, if that's what you're looking for. To manipulate individual pixels like GDI+ does, I still think GDI+ is the way to go; DirectX was never meant for image manipulation.
