Is there a way to load a cube map from one giant image strip in OpenGL ES? (Or desktop GL, or an extension, anything.)
For example, the GLKTextureLoader class can load a cube map in one call if the faces are stacked vertically. I want to know whether there is a GL function for this feature, or whether the class just splits the texture up while loading. Of course I can use this class, but I want to know which is more efficient: loading one long image strip, or six separate images, one per face.
My guess is that this function is indeed just splitting the image into 6 faces after loading the image file, and then doing standard cube-map generation via the standard calls:
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X...)
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Y...)
etc
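As a hedged sketch of what that splitting might look like (the face order in the strip is an assumption here; Apple doesn't document GLKTextureLoader's internals):

    // Sketch: upload a vertical 6-face strip (width = faceSize, height = 6*faceSize,
    // tightly packed RGBA8) as a cube map. The +X,-X,+Y,-Y,+Z,-Z top-to-bottom
    // face order is assumed, not confirmed from GLKit.
    #include <GLES2/gl2.h>   // <OpenGLES/ES2/gl.h> on iOS
    #include <cstddef>
    #include <cstdint>

    GLuint uploadCubeStrip(const uint8_t* pixels, int faceSize)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_CUBE_MAP, tex);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        // A vertical strip is exactly one face wide, so each face is a
        // contiguous block of faceSize*faceSize*4 bytes; just offset into it.
        const size_t faceBytes = size_t(faceSize) * faceSize * 4;
        for (int i = 0; i < 6; ++i) {
            // The six cube-face enums are sequential, starting at +X.
            glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA,
                         faceSize, faceSize, 0, GL_RGBA, GL_UNSIGNED_BYTE,
                         pixels + i * faceBytes);
        }
        return tex;
    }

Note how cheap the split is for a vertical strip: no pixel copying at all, just six pointer offsets into the decoded image, so the one-file path shouldn't cost much more than the uploads themselves.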
Note that the GLKTextureLoader class also defines cubeMapWithContentsOfFiles which allows you to specify 6 individual image files to define the face textures.
You could check the time it takes to set up the cube map using the 6-file input versus the strip input (cubeMapWithContentsOfFile). Which runs faster will depend on whether loading 6 files is faster than loading one big file and having the method split it up. Beyond that, I'd bet the rest of the code in the two methods is identical and uses the standard cube-map texture calls above.
Since GLKit is Apple proprietary, we can't just look at the source as we can with most open OpenGL code.
Related
My question pertains to the best way to handle multiple textures. First some context:
I'm using DirectX 11 in a non-gaming application; the GUI uses DirectX exclusively. I'm in the process of making the GUI skinnable, so the user can customize it to their liking.
I've written the code so that the GUI layout and the size of each GUI element can change based on a configuration file. The GUI currently uses only DirectX primitives via DrawIndexedInstanced, but I'd like to support user-supplied textures. The size of these textures can vary, and there can be as many as two dozen different ones.
I can solve this problem by either:
Dynamically putting together a texture atlas, or...
Forcing all of the textures into a 2D texture array (by making all of the textures the same size via padding as needed), or ...
Splitting up the DrawIndexedInstanced calls so that there's one draw call for each of the different textures (i.e. multiple binds / draws).
I spent the afternoon looking for consensus. I didn't find it. Penny for your thoughts?
The approach that runs fastest is the texture atlas; this is why 2D games use sprite sheets. Multiple binds / draws is the slowest approach.
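As a hedged illustration of the atlas route (the names and the trivial packing scheme here are mine, not from the question): pack the user-supplied textures into one big texture at skin-load time, record each one's normalized UV rectangle, and put that rectangle in the per-instance data so the whole GUI still renders with a single bind and a single DrawIndexedInstanced call.

    // Illustrative sketch only: a trivial "shelf" packer that copies each
    // user texture into one big RGBA atlas and records its UV rectangle.
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct AtlasRect { float u0, v0, u1, v1; };   // normalized UVs into the atlas

    struct Atlas {
        int width, height;
        std::vector<uint8_t> rgba;                // CPU-side pixels, uploaded once
        std::vector<AtlasRect> rects;

        Atlas(int w, int h) : width(w), height(h), rgba(size_t(w) * h * 4) {}

        // Place one w*h RGBA image on simple left-to-right shelves.
        // Returns false when the atlas is full (a real packer would grow/repack).
        bool add(const uint8_t* src, int w, int h) {
            if (w > width || h > height) return false;
            if (penX + w > width) { penX = 0; penY += shelfH; shelfH = 0; }
            if (penY + h > height) return false;
            for (int y = 0; y < h; ++y)           // copy one scanline at a time
                std::copy(src + size_t(y) * w * 4, src + size_t(y + 1) * w * 4,
                          rgba.begin() + (size_t(penY + y) * width + penX) * 4);
            rects.push_back({ float(penX) / width, float(penY) / height,
                              float(penX + w) / width, float(penY + h) / height });
            penX += w;
            if (h > shelfH) shelfH = h;
            return true;
        }

    private:
        int penX = 0, penY = 0, shelfH = 0;       // current shelf cursor
    };

A real implementation would also pad between regions to avoid bleeding under filtered sampling, but the principle stands: one texture, one bind, one draw call.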
I'm trying to extract some data (a map image) from a PNG file which is tiled somehow. The file itself is only 256x256 pixels (according to 'Get Info' on the Mac) but it is 23 MB. It is from an iPad app called Mud Map, and it contains a map that I purchased, but I've lost the original that I converted to this format. When I view this file (renamed to .PNG) I see one section of the map, 256x256 px.
I'm asking this question on Stack Overflow because I want to know more about these tiled images. How does one create a tiled PNG, and what software will open and/or create these things? I'm interested in what metadata is required too. I'm loving the outdoors and mapping!!
The answer to this question is that it cannot be done in the manner I have described.
The images in the PNG are not tiled; the files are just merged together, which is no doubt a feature particular to the program, as it does not appear to be any kind of standard.
I don't have access to the iPad application you mentioned, but let me share some thoughts on what might be going on here.
1) Map tiles are commonly used in GIS web applications such as Google Maps. They improve performance, especially when the user pans often: a typical map window is divided into, for instance, 4x4 separate tile requests, so when the user pans a little, perhaps only 4 new requests are made instead of fetching the whole map again across all 16 tiles (see the tile-math sketch after point 2).
The source for these tiles can be pre-generated tile images or a single static map.
2) Assembling separate images into one in GIS is called image mosaicking. A GIS server can read a collection of images and mosaic them into one, handling the overlapping parts according to a certain rule. When the images instead sit in a predefined grid, seamless and non-overlapping, they are called tiled images. The tiles can be pre-generated from one mosaicked image or served on the fly; some GIS servers/libraries/applications have a tile-server function built in.
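For context, here is the common "slippy map" tile convention used by OpenStreetMap-style servers; whether Mud Map's merged PNG follows this scheme is purely an assumption. Each zoom level z has 2^z by 2^z tiles of 256x256 pixels, indexed from longitude/latitude like this:

    // Sketch: Web-Mercator ("slippy map") tile index math. Assumed scheme,
    // not confirmed for the Mud Map file in question.
    #include <cmath>
    #include <cstdio>

    void lonLatToTile(double lonDeg, double latDeg, int zoom, int& x, int& y)
    {
        const double pi = 3.14159265358979323846;
        const double n = std::pow(2.0, zoom);        // tiles per axis at this zoom
        const double latRad = latDeg * pi / 180.0;
        x = int((lonDeg + 180.0) / 360.0 * n);
        y = int((1.0 - std::log(std::tan(latRad) + 1.0 / std::cos(latRad)) / pi) / 2.0 * n);
    }

    int main()
    {
        int x = 0, y = 0;
        lonLatToTile(151.2093, -33.8688, 12, x, y);  // Sydney at zoom 12
        std::printf("tile z/x/y = %d/%d/%d\n", 12, x, y);
        return 0;
    }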
I am new to OpenCV and I did a project with it.
My project tracks an object with a stereo camera, so I find where the object is, and now I want to visualize it (in Blender, with OpenGL, or something else). My situation is that I have 3D points in a YML file and I want to render them. I don't know what to use; can anyone help me?
It's possible to do in Blender, but for your simple purpose OpenGL should be enough. To get started with modern OpenGL, check this list of contents: link
In OpenGL, before drawing anything you must "send" your data (vertices) to the GPU. One part of this process is the Vertex Buffer Object (VBO); it's very simple once you've programmed it yourself. When you create a VBO, you specify how the data will be used: STATIC or DYNAMIC (GL_STATIC_DRAW or GL_DYNAMIC_DRAW). Dynamic means that you will change the data over time; the position of each vertex might change. And that is what you want, as in the sketch below.
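A minimal sketch of that dynamic pattern (this assumes your YML has already been parsed into an array of xyz floats, e.g. with OpenCV's FileStorage; the function names are mine):

    // Sketch: dynamic point VBO for tracked 3D positions (desktop GL / GLES 2+).
    #include <GLES2/gl2.h>   // or your platform's GL header
    #include <vector>

    struct Vec3 { float x, y, z; };

    GLuint createPointVBO(size_t maxPoints)
    {
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // GL_DYNAMIC_DRAW tells the driver we will rewrite this buffer often.
        glBufferData(GL_ARRAY_BUFFER, maxPoints * sizeof(Vec3), nullptr, GL_DYNAMIC_DRAW);
        return vbo;
    }

    void drawPoints(GLuint vbo, GLint posAttrib, const std::vector<Vec3>& pts)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // Re-upload this frame's tracked positions into the existing storage.
        glBufferSubData(GL_ARRAY_BUFFER, 0, pts.size() * sizeof(Vec3), pts.data());
        glEnableVertexAttribArray(posAttrib);
        glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, sizeof(Vec3), nullptr);
        glDrawArrays(GL_POINTS, 0, GLsizei(pts.size()));
    }

Call createPointVBO once at startup and drawPoints every frame with the latest tracked positions; glBufferSubData rewrites the buffer's contents without reallocating it.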
I am creating a photo slide show with complex transitions between images on iOS. Core Animation doesn't suit the purpose, as the possible transitions are limited, so I am resorting to OpenGL ES 2.0. The problem is that uploading images to the GPU and creating a texture is a time-consuming operation and takes roughly 200 ms even for a 960x640 image, which is not suitable for real-time playback. And it's not feasible to pre-create all the textures beforehand, as there could be hundreds of them. I wonder how Core Animation deals with this problem and stays smooth no matter how many CGImages you assign in animations (as long as the images are presented at different times, not all together).
Texture loading is time-consuming, and most applications dealing with a large number of textures load them at initialisation. That is the simplest approach, but surely the most resource-consuming. You must understand what goes on behind the scenes: reading an image file, decompressing it, creating raw RGB(A) data on the CPU, allocating memory on the GPU, and sending the raw data to the GPU.
The best approach for dealing with a large number of textures is loading them in the background, preferably before you need them. In your case, as already mentioned in the comments, you will need to build some smart cache of these textures. Even this is not enough, since the loading itself might make your thread unresponsive; you will need a background task to handle those images.
What I suggest is creating 2 additional threads. The first should load the image data on the CPU, while the second pushes the data to the GPU. The first thread is pretty straightforward; the second needs a bit of extra GL code. Each thread needs its own OpenGL context to communicate with the GPU, so when you create each thread you also need to create an extra context. By default, contexts are not aware of each other's resources, so a texture created in one context is unusable in the other. The fix is an extra parameter called a share group: first create the share group, then create both contexts with the same share group, and the textures will be accessible from both. Note that a context is preferably created on the thread that will use it (although it might be enough to simply make it current there).
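A sketch of the GPU-upload thread's work (this assumes a context sharing a group with the main context has already been made current on this thread; makeSharedContextCurrent and the two queue helpers are hypothetical placeholders, not real API):

    // Sketch: background texture-upload thread. The share-group/context setup
    // is platform-specific (EAGLSharegroup on iOS) and hidden behind the
    // hypothetical makeSharedContextCurrent() here.
    #include <GLES2/gl2.h>
    #include <thread>    // for the usage example at the bottom

    struct DecodedImage { int width, height; const void* rgba; };

    extern void makeSharedContextCurrent();           // hypothetical: platform glue
    extern DecodedImage waitForNextDecodedImage();    // hypothetical: fed by the CPU thread
    extern void publishTexture(GLuint tex);           // hypothetical: hands tex to the renderer

    void uploadThreadMain()
    {
        makeSharedContextCurrent();                   // must happen on this thread
        for (;;) {
            DecodedImage img = waitForNextDecodedImage();
            GLuint tex;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img.width, img.height, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, img.rgba);
            // Flush so the shared main context can see the finished texture.
            glFlush();
            publishTexture(tex);
        }
    }

    // Usage: std::thread(uploadThreadMain).detach();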
Currently I am developing an application for the Windows Store that does real-time image processing using Direct2D. It must support various image sizes. The first problem I have faced is how to handle situations where the image is larger than the maximum supported texture size. After some research and documentation reading I found VirtualSurfaceImageSource as a solution. The idea was to load the image as an IWICBitmap, then create a render target with CreateWICBitmapRenderTarget (which, as far as I know, is not hardware accelerated). After some drawing operations I wanted to display the result on screen by invalidating the corresponding region in the VirtualSurfaceImageSource, or when the NeedUpdate callback fires. I assumed I could do this by creating an ID2D1Bitmap (hardware accelerated) and calling CopyFromRenderTarget with the render target created via CreateWICBitmapRenderTarget and the invalidated region as bounds, but the method returns D2DERR_WRONG_RESOURCE_DOMAIN. Another reason for using IWICBitmap is that one of the algorithms in the application must be able to update the pixels of the image directly.
The question is: why doesn't this logic work? Is this the right way to achieve my goal using Direct2D? Also, given that the render target created with CreateWICBitmapRenderTarget is not hardware accelerated, what is the best solution if I want to do my image processing on the GPU with images larger than the maximum allowed texture size?
Thank you in advance.
You are correct that images larger than the texture limit must be handled in software.
However, the question to ask is whether or not you need that entire image every time you render.
You can use hardware acceleration to render just the portion of the large image you need, while the full image stays in a software target.
For example,
Use ID2D1RenderTarget::CreateSharedBitmap to make a bitmap that can be used by different resources.
Then create an ID2D1BitmapRenderTarget and render the large bitmap into it (making sure to call BeginDraw, Clear, DrawBitmap, EndDraw). Both the bitmap and the render target can be cached for use by successive calls.
Then copy the portion that will fit into texture memory from that render target into a regular ID2D1Bitmap, using the ID2D1Bitmap::CopyFromRenderTarget method.
Finally, draw that bitmap to the real render target with pRT->DrawBitmap().
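A condensed sketch of those steps (error handling omitted; the variable names and the region handling are mine, not the answerer's):

    // Assumes pRT is the on-screen hardware render target and sharedBitmap
    // was produced with CreateSharedBitmap, so it is usable in pRT's
    // resource domain.
    #include <d2d1.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    void drawPortion(ID2D1RenderTarget* pRT, ID2D1Bitmap* sharedBitmap,
                     const D2D1_RECT_U& srcRegion, const D2D1_RECT_F& destRect)
    {
        // Offscreen target compatible with pRT; in real code, render the
        // large content into it once and cache both objects across frames.
        ComPtr<ID2D1BitmapRenderTarget> offscreen;
        pRT->CreateCompatibleRenderTarget(&offscreen);
        offscreen->BeginDraw();
        offscreen->Clear();
        offscreen->DrawBitmap(sharedBitmap);
        offscreen->EndDraw();

        // Copy only the region that fits the texture limit into a plain bitmap.
        ComPtr<ID2D1Bitmap> portion;
        D2D1_SIZE_U size = { srcRegion.right - srcRegion.left,
                             srcRegion.bottom - srcRegion.top };
        pRT->CreateBitmap(size, D2D1::BitmapProperties(pRT->GetPixelFormat()),
                          &portion);
        D2D1_POINT_2U dest = { 0, 0 };
        portion->CopyFromRenderTarget(&dest, offscreen.Get(), &srcRegion);

        // Draw that portion to the real render target.
        pRT->BeginDraw();
        pRT->DrawBitmap(portion.Get(), destRect);
        pRT->EndDraw();
    }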