Avoid gaps between textures when scaling in OpenGL ES 1.1 - iOS

I am using glScale() to zoom out the whole game scene. But at some scales I get a gap between textures:
How can I avoid this gap?
I have already tried moving the upper texture slightly lower, but then I get a darker line between the textures (because my textures have an alpha channel).
I could scale down the whole scene manually on the CPU (by calculating vertices for the scaled textures), but then I couldn't take advantage of VBOs, because the vertices would change every frame (zooming is very dynamic in my case).
What can you suggest to avoid this gap between textures when I scale down the scene?

I wasn't able to find a solution that kept the textures separate. Instead, I created one more texture, big enough to contain the two textures I wanted to draw. I rendered those two textures into this additional texture (using an FBO), and finally I render the scene using this single big texture.
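For reference, the sizing for such a merged render target might be sketched like this (a hypothetical C++ helper, not the original code; ES 1.1 requires power-of-two texture dimensions, and the vertical-stacking layout here is an assumption):

```cpp
#include <utility>

// Smallest power-of-two extent >= n, since OpenGL ES 1.1 requires
// power-of-two texture dimensions.
static int nextPowerOfTwo(int n) {
    int p = 1;
    while (p < n) p <<= 1;
    return p;
}

// Size of a single power-of-two texture big enough to hold two tiles
// of the same width stacked vertically (heights add up). The FBO is
// then created at this size, both tiles are rendered into it once,
// and the scene draws this one texture afterwards.
static std::pair<int, int> combinedTextureSize(int w, int h1, int h2) {
    return { nextPowerOfTwo(w), nextPowerOfTwo(h1 + h2) };
}
```

Because the two tiles are composited once at their native scale, the seam between them is resolved inside the FBO and no longer appears when the combined texture is scaled.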

Related

Does the multisample texture need to be mipmapped if the main texture is mipmapped?

With Metal, does the multisample texture need to be mipmapped if the main texture is mipmapped? I read the Apple docs, but didn't find any information about it.
Mipmapping is for a texture from which you will take samples, typically a texture that is covering one of the models in your scene. Mipmapping is a mechanism for smoothing out sampling at varying levels of detail as an object moves closer or further from the camera (and so appears larger and more detailed, or smaller and less detailed).
Multisampling is for the texture that you will render to in a scene. This generally means the texture that is displayed on screen. Multisampling allows you to render to a texture that is larger than the screen, and then resolve that texture down to the screen resolution, in order to reduce aliasing (jagged lines).
So, in almost all cases, mipmapping and multisampling are mutually exclusive: mipmaps are for a texture that is used as a source, and multisampling is for a texture that is used as a destination.
Some textures might be used as both a source and a destination. These are textures that you render to dynamically (destination), say to create a pattern, and then sample from to cover a model in your scene (source).
So at first look it might seem conceivable that you might dynamically render to a texture using multisampling, and then want to sample from that texture using mipmapping. However, in this case, there is no point in making this texture multisampled. You would simply render to a larger texture, mipmap it, and sample from it. Multisampling this texture would take an additional resolve effort, and would not add anything.

D3D11 Copy one texture into one half of the other

I have a texture (size 1512x1680) that I would like to copy into the left half of my backbuffer (say in this case 920x540). Is there an easy way to do this?
CopySubresourceRegion can take a portion of my source texture, but it doesn't make it fit within my destination texture.
Nothing is easier than rendering a textured quad to draw the texture onto the correct portion of the surface. But for such a simple thing, there are as many solutions as stars in the night sky; pick your favorite.
It has to involve a vertex shader and a pixel shader; that's the only constant. After that, how you set up everything else is up to your taste and convenience: with or without source geometry, with or without a constant buffer, with or without a viewport tweak, etc.
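As one concrete flavor of the "viewport tweak" option, a hypothetical helper could compute the viewport that places the source into the left half of the backbuffer before drawing the textured quad (the aspect-preserving letterbox here is my assumption; drop the min() to stretch instead):

```cpp
#include <algorithm>

struct Viewport { float x, y, w, h; };

// Fit a srcW x srcH texture into the left half of a backW x backH
// backbuffer, preserving aspect ratio and centring within that half.
// The result would be handed to RSSetViewports before drawing a
// full-viewport textured quad.
static Viewport fitLeftHalf(float backW, float backH,
                            float srcW, float srcH) {
    float halfW = backW * 0.5f;
    float scale = std::min(halfW / srcW, backH / srcH);
    float w = srcW * scale, h = srcH * scale;
    return { (halfW - w) * 0.5f, (backH - h) * 0.5f, w, h };
}
```

With the viewport doing the placement, the quad itself can be a fixed full-viewport triangle pair and the shaders stay trivial.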

How to batch sprites in iOS/OpenGL ES 2.0

I have developed my own sprite library on top of OpenGL ES 2.0. Right now, I am not doing any batching of draw calls; instead, each sprite has its own VBO/VAO of four textured vertices, drawn as a triangle strip (The VAO/VBO itself is managed by the Texture atlas, so identical sprites reuse the same VAO/VBO, which is 'reference counted' and hence deleted when no sprite instances reference it).
Before drawing each sprite, I'll bind its texture, upload its uniforms/attributes to the shader (modelview matrix, opacity - Projection matrix stays constant all along), bind its Vertex Array Object (4 textured vertices + four indices), and call glDrawElements(). I do cull off-screen sprites (based on position and bounds), but still it is one draw call per sprite, even if all sprites share the same texture. The vertex positions and texture coordinates for each sprite never change.
I must say that, despite this inefficiency, I have never experienced performance issues, even when drawing many sprites on screen. I do split the sprites into opaque/non-opaque, draw the opaque ones first, and the non-opaque ones after, back to front. I have seen performance suffer only when I overdraw (tax the fill rate).
Nevertheless, the OpenGL instruments in Xcode will complain that I draw too many small meshes and that I should consolidate my geometry into fewer objects. And in the Unity world everyone talks about limiting the number of draw calls as if they were the plague.
So, how should I go about batching very many sprites, each with a different transform and opacity value (but the same texture), into one draw call? One thing that comes to mind is to modify the vertex data every frame and stream it: applying the modelview matrix of each sprite to all its vertices, assembling the transformed vertices for all sprites into one mesh, and submitting it to the GPU. This approach does not, however, solve the problem of varying opacity between sprites.
Another idea that comes to mind is to have all the textured vertices of all the sprites assembled into a single mesh (VBO), treated as 'static' (same vertex format I am using now), and a separate array with the stuff that changes per sprite every frame (transform matrix and opacity), and only stream that data each frame, and pull it/apply it on the vertex shader side. That is, have a separate array where the 'attribute' being represented is the modelview matrix/alpha for the corresponding vertices. Still have to figure out the exact implementation in terms of data format/strides etc. In any case, there is the additional complication that arises whenever a new sprite is created/destroyed, the whole mesh has to be modified...
Or perhaps there is an ideal, 'textbook' solution to this problem out there that I haven't figured out? What does cocos2d do?
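The first approach described above (transforming vertices on the CPU and streaming one mesh) can also carry opacity as a per-vertex attribute, which sidesteps the varying-opacity problem without touching uniforms mid-batch. A sketch, with illustrative names not taken from any real library:

```cpp
#include <cmath>
#include <vector>

// Interleaved vertex as the streamed batch might store it: position,
// texcoord, and per-vertex alpha (so opacity can vary per sprite
// inside a single draw call).
struct Vertex { float x, y, u, v, a; };

struct Sprite {
    float tx, ty;          // translation
    float rot;             // rotation in radians
    float sx, sy;          // scale
    float alpha;           // opacity
    float u0, v0, u1, v1;  // atlas region
};

// Append the sprite's four corners, pre-transformed on the CPU, to the
// batch. Drawing then needs a shared index pattern (0,1,2, 2,1,3 per
// quad) and a single glDrawElements for the whole batch.
static void appendSprite(std::vector<Vertex>& batch, const Sprite& s,
                         float halfW, float halfH) {
    const float c = std::cos(s.rot), sn = std::sin(s.rot);
    const float corners[4][2] = {
        {-halfW, -halfH}, {halfW, -halfH}, {-halfW, halfH}, {halfW, halfH}
    };
    const float uv[4][2] = {
        {s.u0, s.v0}, {s.u1, s.v0}, {s.u0, s.v1}, {s.u1, s.v1}
    };
    for (int i = 0; i < 4; ++i) {
        float x = corners[i][0] * s.sx, y = corners[i][1] * s.sy;
        batch.push_back({ c * x - sn * y + s.tx,
                          sn * x + c * y + s.ty,
                          uv[i][0], uv[i][1], s.alpha });
    }
}
```

The fragment shader then multiplies the sampled color's alpha by the interpolated per-vertex alpha, so one draw call covers sprites of any opacity.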
When I initially started reading your post I thought that each quad used a different texture (since you stated "Before drawing each sprite, I'll bind its texture"), but then you said that each sprite has "the same texture".
A possible easy win is to control the way you bind your textures during the draw, since each call is a burden for the OpenGL driver. If (and I am not really sure about this from your post) you use different textures, I suggest going for a simple texture atlas where all the sprites are inside a single picture (preferably a power-of-two texture with mipmapping); you then take the piece of the texture you need in the fragment shader using texture coordinates (this is the reason they exist, in the end).
Since the position of the sprites changes at each frame (of course it does), a possible approach is to pack the new vertex coordinates of all your sprites each frame and draw them directly from client memory (a streamed VBO could cost more, since you would need to rebuild it each frame; to be tested in a real scenario). This packs everything into one draw call, and I am pretty sure it would boost performance.
This option should be feasible, since we are talking about a very small amount of data, so memory bandwidth should not be a real bottleneck: each quad uses, I guess, 12 floats for vertex coordinates, 8 for texture coordinates, and 12 for normals, so 128 bytes; it shouldn't be a big problem.
About opacity: can't you pass a uniform to your fragment shader and modulate alpha with it? Am I wrong about that? It should work.
I hope this helps.
Ciao,
Maurizio

Loading texture in segments

I'm working on an OpenGL app that uses one particularly large texture, 2250x1000. Unfortunately, OpenGL ES 2.0 doesn't support textures larger than 2048x2048 here. When I try to draw my texture, it appears black. I need a way to load and draw the texture in two segments (left, right). I've seen a few questions that touch on libpng, but I really just need a straightforward solution for drawing large textures in OpenGL ES.
First of all, the supported texture size depends on the device; I believe the iPad 3 supports 4096x4096, but never mind that. There is no way to push all that data as-is into one texture on most devices. First you should ask yourself whether you really need such a large texture: would it really make a difference if you resampled it down to 2048x_? If the answer is no, you will need to break it up at some point. You could cut it in half by width and append one of the cut parts to the bottom, resulting in a 1125x2000 texture, or simply create two or more textures and push certain parts of the image into each. In either case you may have trouble with texture coordinates, but this all depends heavily on what you are trying to do and what is on that texture (a single image or parts of a sophisticated model; a color map or data you cannot interpolate; created at load time or modified as you go...). Some more info could help us address your situation more specifically.
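The "two or more textures" option might start from a tiling computation like this hypothetical sketch (column tiles capped at the device's maximum texture size; each tile then becomes its own texture, drawn as its own quad):

```cpp
#include <algorithm>
#include <vector>

// Sub-rectangle of the source image, in pixels.
struct Tile { int x, y, w, h; };

// Cut an image into column tiles no wider than maxSize (2048 for the
// device in question). A 2250x1000 image yields two tiles:
// 2048x1000 and 202x1000. Each tile is uploaded as a separate
// texture, and its quad's texture coordinates simply span 0..1.
static std::vector<Tile> splitIntoTiles(int imgW, int imgH, int maxSize) {
    std::vector<Tile> tiles;
    for (int x = 0; x < imgW; x += maxSize)
        tiles.push_back({ x, 0, std::min(maxSize, imgW - x), imgH });
    return tiles;
}
```

The quads are positioned edge to edge at the tiles' pixel offsets, so the seam is invisible as long as filtering does not sample across tile borders.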

Optimise OpenGL ES 2.0 2D drawing using dirty rectangles

Is it possible to optimise OpenGL ES 2.0 drawing by using dirty rectangles?
In my case, I have a 2D app that needs to draw a background texture (full screen on iPad), followed by the contents of several VBOs on each frame. The problem is that these VBOs can potentially contain millions of vertices, taking anywhere up to a couple of seconds to draw everything to the display. However, only a small fraction of the display would actually be updated each frame.
Is this optimisation possible, and how (or perhaps more appropriately, where) would this be implemented? Would some kind of clipping plane need to be passed into the vertex shader?
If you set a smaller area with glViewport, clipping is adjusted accordingly. This, however, happens after the vertex shader stage, just before rasterization. Since the GL cannot know the result of your vertex program, it cannot discard any vertex before running that program; only afterwards can it clip. How efficiently it does so depends on the actual GPU.
Thus, for full performance, you have to sort and split your objects into smaller (e.g. rectangularly bounded) tiles and test them against the field of view yourself.
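The per-tile test can be as simple as an axis-aligned rectangle overlap check against the dirty region, so that only intersecting tiles' VBOs are drawn (names are illustrative):

```cpp
// Axis-aligned rectangle, e.g. a tile's bounds or the dirty region.
struct Rect { float x, y, w, h; };

// True when the two rectangles overlap: draw a tile's VBO only when
// its bounds intersect the dirty rectangle for this frame.
static bool intersects(const Rect& a, const Rect& b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}
```

Tiles that fail the test are skipped entirely, so the per-frame vertex work scales with the updated area rather than with the whole scene.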
