D3D11: Copy one texture into one half of the other

I have a texture (size 1512x1680) that I would like to copy into the left half of my backbuffer (say in this case 920x540). Is there an easy way to do this?
CopySubresourceRegion can take a portion of my source texture, but it doesn't scale that region to fit within my destination texture.

There is nothing easier than rendering a textured quad to draw the texture into the correct portion of the surface. But for such a simple thing, there are as many solutions as stars in the night sky; pick your favorite.
It has to involve a vertex shader and a pixel shader, that's the only constant. After that, how you set up everything else is up to your taste and convenience: it can be with or without source geometry, with or without a constant buffer, with or without a viewport tweak, etc.
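For example, one of the simplest variants needs no vertex buffer at all: generate a full-screen triangle from SV_VertexID in the vertex shader, sample your texture in the pixel shader, and let a viewport that covers only the left half of the backbuffer do the placement and scaling. The following C++ sketch assumes the shaders, sampler and views it names were created elsewhere; everything here is illustrative rather than the one true way.

```cpp
#include <d3d11.h>

// HLSL, compiled elsewhere into vsBlit / psBlit (full-screen triangle, no vertex buffer):
//   void VSMain(uint id : SV_VertexID, out float4 pos : SV_Position, out float2 uv : TEXCOORD0)
//   {
//       uv  = float2((id << 1) & 2, id & 2);
//       pos = float4(uv * float2(2, -2) + float2(-1, 1), 0, 1);
//   }
//   Texture2D    src : register(t0);
//   SamplerState smp : register(s0);
//   float4 PSMain(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
//   {
//       return src.Sample(smp, uv);
//   }

void DrawTextureToLeftHalf(ID3D11DeviceContext* ctx,
                           ID3D11RenderTargetView* backbufferRTV,
                           ID3D11ShaderResourceView* sourceSRV,   // the 1512x1680 texture
                           ID3D11VertexShader* vsBlit,
                           ID3D11PixelShader* psBlit,
                           ID3D11SamplerState* linearSampler,
                           UINT backbufferWidth, UINT backbufferHeight)
{
    // The viewport maps the full-screen triangle onto the left half of the
    // backbuffer; the sampler handles shrinking the 1512x1680 source to fit
    // that rectangle.
    D3D11_VIEWPORT vp = {};
    vp.TopLeftX = 0.0f;
    vp.TopLeftY = 0.0f;
    vp.Width    = backbufferWidth * 0.5f;
    vp.Height   = static_cast<float>(backbufferHeight);
    vp.MinDepth = 0.0f;
    vp.MaxDepth = 1.0f;
    ctx->RSSetViewports(1, &vp);

    ctx->OMSetRenderTargets(1, &backbufferRTV, nullptr);
    ctx->IASetInputLayout(nullptr);      // no geometry: the VS invents the triangle
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    ctx->VSSetShader(vsBlit, nullptr, 0);
    ctx->PSSetShader(psBlit, nullptr, 0);
    ctx->PSSetShaderResources(0, 1, &sourceSRV);
    ctx->PSSetSamplers(0, 1, &linearSampler);

    ctx->Draw(3, 0);                     // one triangle covers the whole viewport
}
```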

Related

Making GL_POINTS look like a 3D Rectangle

Suppose one has an array of GL_POINTS and wants to make each appear to have a distinct "height" or "depth", so instead of appearing like a flat scatter of squares they appear to be a scatter of 3D rectangles / right rectangular prisms.
Is there a technique in WebGL that will allow one to achieve this effect? One could of course use vertices that actually articulate those 3D rectangles, but my goal is to optimize for performance as I have ~100,000 of these rectangles to render, and I thought points would be the best primitive to use.
Right now I am thinking one could probably use a series of point sprites each with varying depth, then assign each point the sprite that corresponds most closely with the desired depth (effectively quantizing the depth data field). But is there a way to keep the depth field continuous?
Any pointers on this question would be greatly appreciated!
In my experience POINTS are not faster than making your own vertices. Also, if you use instanced drawing you can get away with almost the same amount of data. You need one quad and then position, width, and height for each rectangle. Not sure instancing is as fast as just making all the vertices though. Might depend on the GPU/driver
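Roughly, the instanced version would look something like the sketch below. It is written against the OpenGL ES 3.0 C API; WebGL 2 exposes the same calls as vertexAttribDivisor / drawArraysInstanced, and WebGL 1 gets them through the ANGLE_instanced_arrays extension. Buffer names, attribute locations and the RectInstance layout are only illustrative.

```cpp
#include <GLES3/gl3.h>
#include <vector>

struct RectInstance { float x, y, width, height; };   // one entry per rectangle

// In the vertex shader the shared quad corner is scaled and offset per instance:
//   in vec2 corner;   // per-vertex, the unit quad
//   in vec4 rect;     // per-instance: (x, y, width, height)
//   gl_Position = vec4(corner * rect.zw + rect.xy, 0.0, 1.0);

void drawRects(GLuint quadVbo,       // 4 corners of a unit quad, triangle strip, static
               GLuint instanceVbo,   // per-rectangle data, streamed here every draw
               const std::vector<RectInstance>& rects)
{
    glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
    glEnableVertexAttribArray(0);                                   // "corner"
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);

    glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
    glBufferData(GL_ARRAY_BUFFER,
                 rects.size() * sizeof(RectInstance),
                 rects.data(), GL_DYNAMIC_DRAW);
    glEnableVertexAttribArray(1);                                   // "rect"
    glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(RectInstance), nullptr);
    glVertexAttribDivisor(1, 1);     // advance once per instance, not per vertex

    // 4 vertices, one shared quad, ~100,000 rectangles in a single draw call.
    glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4,
                          static_cast<GLsizei>(rects.size()));
}
```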
As pointed out 😄 in many other Q&As on points, the maximum point size is GPU/driver specific and is allowed to be as low as 1 pixel. There are plenty of GPUs that cap the point size at 256 pixels (no idea why) and a few that cap it at just 64. Yet another reason not to use POINTS.
Otherwise though, POINTS always draw a square so you'd have to draw a square large enough that contains your rectangle and then in the fragment shader, discard the pixels outside of the rectangle.
That's unlikely to be good for speed though. Every pixel of the square still has to be evaluated by the fragment shader, which is slower than drawing the rectangle with vertices, since then the pixels outside the rectangle are never even considered. Further, using discard in a shader is often slower than not using it. Take the depth buffer as an example: if there is no discard, nothing needs to be checked and the depth buffer can be updated unconditionally, separately from the shader; with discard, the depth buffer can't be updated until the GPU knows whether the shader kept or discarded the fragment.
As for making them appear 3D I'm not sure what you mean. Effectively points are just like drawing a square quad so you can put anything you want on that square. The majority of shaders on shadertoy can be adapted to draw themselves on points. I wouldn't recommend it as it would likely be slow but just pointing out that it's just a quad. Draw a texture on them, draw a procedural texture on them, draw a solid color on them, draw a procedural snail on them.
Another possible solution is you can apply a normal map to the quad and then do lighting calculations on those normals so each quad will have the correct lighting for its position relative to your light(s)

Does the multisample texture need to be mipmapped if the main texture is mipmapped?

With Metal, does the multisample texture need to be mipmapped if the main texture is mipmapped? I read the Apple docs, but didn't get any information about it.
Mipmapping is for a texture from which you will take samples, typically a texture that is covering one of the models in your scene. Mipmapping is a mechanism for smoothing out sampling at varying levels of detail as an object moves closer or further from the camera (and so appears larger and more detailed, or smaller and less detailed).
Multisampling is for the texture that you will render to in a scene. This generally means the texture that is displayed on screen. Multisampling lets you render to a texture that stores several samples per pixel and then resolve it down to one sample per pixel, in order to reduce aliasing (jagged edges).
So...in almost all cases, mipmapping and multi-sampling are mutually exclusive. Mipmaps are for a texture that is used as a source, and multisampling is for a texture that is used as a destination.
Some textures might be used as both a source and a destination. These are textures that you render to dynamically (destination), say to create a pattern, and then sample from to cover a model in your scene (source).
So at first glance it might seem conceivable that you would dynamically render to a texture using multisampling, and then want to sample from that texture using mipmapping. However, in this case there is no point in making the texture multisampled. You would simply render to a larger texture, mipmap it, and sample from it. Multisampling this texture would require an additional resolve step and would not add anything.

How to batch sprites in iOS/OpenGL ES 2.0

I have developed my own sprite library on top of OpenGL ES 2.0. Right now, I am not doing any batching of draw calls; instead, each sprite has its own VBO/VAO of four textured vertices, drawn as a triangle strip (The VAO/VBO itself is managed by the Texture atlas, so identical sprites reuse the same VAO/VBO, which is 'reference counted' and hence deleted when no sprite instances reference it).
Before drawing each sprite, I'll bind its texture, upload its uniforms/attributes to the shader (modelview matrix and opacity; the projection matrix stays constant all along), bind its Vertex Array Object (4 textured vertices + 4 indices), and call glDrawElements(). I do cull off-screen sprites (based on position and bounds), but it is still one draw call per sprite, even when all sprites share the same texture. The vertex positions and texture coordinates for each sprite never change.
I must say that, despite this inefficiency, I have never experienced performance issues, even when drawing many sprites on screen. I do split the sprites into opaque/non-opaque, draw the opaque ones first, and the non-opaque ones after, back to front. I have seen performance suffer only when I overdraw (tax the fill rate).
Nevertheless, the OpenGL instruments in Xcode complain that I draw too many small meshes and that I should consolidate my geometry into fewer objects. And in the Unity world everyone talks about limiting the number of draw calls as if they were the plague.
So, how should I go about batching very many sprites, each with a different transform and opacity value (but the same texture), into one draw call? One thing that comes to mind is to modify the vertex data every frame and stream it: apply the modelview matrix of each sprite to all of its vertices, assemble the transformed vertices of all sprites into one mesh, and submit that to the GPU. By itself, this approach does not solve the problem of varying opacity between sprites.
Another idea that comes to mind is to keep the textured vertices of all the sprites assembled into a single mesh (VBO), treated as 'static' (same vertex format I am using now), plus a separate array with the data that changes per sprite every frame (transform matrix and opacity); only that data would be streamed each frame and pulled/applied on the vertex shader side. That is, a separate array where the 'attribute' being represented is the modelview matrix/alpha for the corresponding vertices. I still have to figure out the exact implementation in terms of data format, strides, etc. In any case, there is the additional complication that whenever a new sprite is created or destroyed, the whole mesh has to be modified...
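Roughly, the first idea (transform on the CPU, stream everything, one draw call) could look like the sketch below, written against OpenGL ES 2.0. Putting the opacity into a per-vertex attribute is one possible way to carry the varying alpha along in the same batch; the structs, attribute locations and buffer names are only illustrative.

```cpp
#include <OpenGLES/ES2/gl.h>
#include <vector>

struct Sprite {
    float model[16];      // column-major modelview matrix
    float u0, v0, u1, v1; // texture rectangle in the shared atlas
    float halfW, halfH;   // sprite extents in local space
    float alpha;          // per-sprite opacity
};

struct BatchVertex { float x, y, u, v, a; };   // matches the shader's attributes

static void appendSprite(const Sprite& s, std::vector<BatchVertex>& out)
{
    const float corners[4][2] = { {-s.halfW, -s.halfH}, { s.halfW, -s.halfH},
                                  {-s.halfW,  s.halfH}, { s.halfW,  s.halfH} };
    const float uvs[4][2]     = { {s.u0, s.v0}, {s.u1, s.v0}, {s.u0, s.v1}, {s.u1, s.v1} };
    for (int i = 0; i < 4; ++i) {
        // 2D affine transform taken from the 4x4 modelview matrix.
        float x = s.model[0] * corners[i][0] + s.model[4] * corners[i][1] + s.model[12];
        float y = s.model[1] * corners[i][0] + s.model[5] * corners[i][1] + s.model[13];
        out.push_back({ x, y, uvs[i][0], uvs[i][1], s.alpha });
    }
}

void drawBatch(GLuint dynamicVbo,
               GLuint sharedIndexIbo,   // built once: 6 indices per sprite (0,1,2, 2,1,3 pattern)
               const std::vector<Sprite>& sprites)
{
    std::vector<BatchVertex> vertices;
    vertices.reserve(sprites.size() * 4);
    for (const Sprite& s : sprites) appendSprite(s, vertices);

    glBindBuffer(GL_ARRAY_BUFFER, dynamicVbo);
    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(BatchVertex),
                 vertices.data(), GL_DYNAMIC_DRAW);        // refilled every frame

    // Attribute locations 0/1/2 = position / texcoord / alpha, bound at link time.
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(BatchVertex), (void*)0);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(BatchVertex), (void*)(2 * sizeof(float)));
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 1, GL_FLOAT, GL_FALSE, sizeof(BatchVertex), (void*)(4 * sizeof(float)));

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, sharedIndexIbo);
    glDrawElements(GL_TRIANGLES, (GLsizei)(sprites.size() * 6), GL_UNSIGNED_SHORT, 0);
}
```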
Or perhaps there is an ideal, 'textbook' solution to this problem out there that I haven't figured out? What does cocos2d do?
When I initially started reading your post I thought that each quad used a different texture (since you stated "Before drawing each sprite, I'll bind its texture"), but then you said that each sprite has "the same texture".
A possible easy win is to control the way you bind your textures during the draw, since each call is a burden for the OpenGL driver. If (and I am not really sure about this from your post) you use different textures, I suggest going for a simple texture atlas where all the sprites are inside a single picture (preferably a power-of-two texture with mipmapping); you then take the piece of the texture you need in the fragment shader using texture coordinates (that is the reason they exist, in the end).
If the positions of the sprites change at each frame (and of course they do), a possible win would be to pack the new vertex coordinates of all your sprites each frame and draw directly from memory (possibly via a VAO; a VBO could cost more since you would need to rebuild it each frame? to be tested in a real scenario). That would pack the draw calls nicely, and I am pretty sure it would boost performance.
Consider that the VAO option could be feasible, since we are talking about a very small amount of data and memory bandwidth should not be a real bottleneck (each quad, I guess, uses 12 floats for vertex coordinates, 8 for texture coordinates and 12 for normals, so 128 bytes?); it shouldn't be a big problem over a VAO.
About opacity, can't you use a uniform in your fragment shader and play with the alpha there? Am I wrong about that? It should work.
I hope this helps.
Ciao,
Maurizio

Avoid gap between in textures while scaling in OpenGL ES 1.1

I am using glScale() to zoom out the whole game scene, but at some scales I get a gap between the textures.
How can I avoid this gap?
I have already tried moving the upper texture a little lower, but then I get a darker line between the textures (because my textures have an alpha channel).
I could scale down the whole scene manually on the CPU (by calculating the vertices for the scaled textures), but in that case I can't take advantage of VBOs, because the vertices would change every frame (zooming is very dynamic in my case).
What can you suggest to avoid this gap between the textures when I scale down the scene?
I wasn't able to find a solution that kept the textures separate. So I created one more texture, big enough to contain the two textures I wanted to draw, rendered those two textures into this additional texture (using an FBO), and finally rendered the scene using that one big texture.
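Roughly, the workaround looks like the sketch below, using the OES_framebuffer_object extension that iOS exposes for OpenGL ES 1.1 (sizes, formats and names are illustrative). The two original textures are drawn into the combined texture once, at 1:1 scale, so there is no seam left for glScale() to expose.

```cpp
#include <OpenGLES/ES1/gl.h>
#include <OpenGLES/ES1/glext.h>

GLuint createCombinedTexture(GLsizei width, GLsizei height, GLuint* outFbo)
{
    GLuint tex = 0, fbo = 0;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);   // empty texture, filled by rendering

    glGenFramebuffersOES(1, &fbo);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo);
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                              GL_TEXTURE_2D, tex, 0);

    // Now draw the two adjacent textures into this FBO exactly once (no scaling,
    // so no seam), rebind your main framebuffer, and from then on draw the scene
    // with `tex` as a single quad that glScale() can shrink safely.
    *outFbo = fbo;
    return tex;
}
```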

Distortion/Water in WebGL

I'm relatively new to WebGL, and to OpenGL too for that matter, but in recent days I've filled up most of my time writing a little game for it. However, when I wanted to implement something like heat waves, or any sort of distortion, I was left stuck.
Now, I can make a texture ripple using the fragment shader, but I feel like I'm missing something when it comes to distorting the content behind an object. Is there any way to grab the color of a pixel that's already been rendered within the fragment shader?
I've tried rendering to a texture and then using that texture on the object, but it appears that if you choose to render your scene to a texture, you cannot also render it to the screen. And beyond that, if you want to render to a texture, that texture must be a power of two (which many screen resolutions do not quite fit into).
Any help would be appreciated.
You're going to have to render to a texture and draw that texture onto the screen while distorting it. Also, there's no requirement that framebuffer objects must be of a power-of-two size in OpenGL ES 2.0 (which is the graphics API WebGL uses). But non-power-of-two textures can't have mipmapping or texture-wrapping.
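A minimal sketch of that setup, written here against the OpenGL ES 2.0 C API (WebGL exposes the same entry points, e.g. gl.createTexture, gl.framebufferTexture2D). The render target can match the canvas size exactly; a non-power-of-two texture just has to use CLAMP_TO_EDGE wrapping and a non-mipmapped filter.

```cpp
#include <GLES2/gl2.h>

void createSceneTarget(GLsizei width, GLsizei height,   // e.g. the canvas size
                       GLuint* outFbo, GLuint* outTex)
{
    glGenTextures(1, outTex);
    glBindTexture(GL_TEXTURE_2D, *outTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);    // no mipmaps
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // NPOT-safe
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glGenFramebuffers(1, outFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, *outFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, *outTex, 0);
}

// Per frame:
//   1. glBindFramebuffer(GL_FRAMEBUFFER, fbo);  draw the scene as usual.
//   2. glBindFramebuffer(GL_FRAMEBUFFER, 0);    bind the scene texture and draw a
//      full-screen quad whose fragment shader perturbs the texture coordinates
//      (e.g. with a sine wave or a noise texture) before sampling, producing the
//      heat-wave distortion.
```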
I believe you can modify individual canvas pixels directly. It might be a good way to ripple a small area, but it might not be GPU-accelerated.
