reflective texture for arbitrary shapes - webgl

I have been looking for a way to make a reflective (mirror) texture for arbitrary meshes. I understand that there are a couple of approaches; the most popular seems to be rendering the scene from the perspective of the six sides of a "reflective" cube and using the rendered images as textures for the object. Are there faster/more accurate approaches to the problem?
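For something concrete, here is a minimal sketch of that cube-map approach in WebGL 1, written in TypeScript. The gl context, the drawScene callback, and the lookAt/perspective helpers are assumed to exist in your own code (the names are hypothetical); the idea is simply to render the scene once into each face of a cube-map texture and then sample that texture with the reflection vector in the object's shader.

```typescript
// Minimal sketch, not production code. Assumed to exist elsewhere (hypothetical names):
declare const gl: WebGLRenderingContext;
declare function drawScene(view: Float32Array, proj: Float32Array): void;
declare function lookAt(eye: number[], target: number[], up: number[]): Float32Array;
declare function perspective(fovY: number, aspect: number, near: number, far: number): Float32Array;

const SIZE = 256; // cube-map face resolution (quality vs. speed trade-off)

// One cube-map texture with six empty faces.
const envMap = gl.createTexture()!;
gl.bindTexture(gl.TEXTURE_CUBE_MAP, envMap);
for (let face = 0; face < 6; face++) {
  gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA,
                SIZE, SIZE, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
}
gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR);

// Framebuffer and depth buffer shared by all six face renders.
const fbo = gl.createFramebuffer()!;
const depth = gl.createRenderbuffer()!;
gl.bindRenderbuffer(gl.RENDERBUFFER, depth);
gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, SIZE, SIZE);

// Standard per-face view directions and up vectors for cube maps.
const dirs = [[1,0,0], [-1,0,0], [0,1,0], [0,-1,0], [0,0,1], [0,0,-1]];
const ups  = [[0,-1,0], [0,-1,0], [0,0,1], [0,0,-1], [0,-1,0], [0,-1,0]];

// Render the surrounding scene (everything except the mirror object itself)
// into the six faces, as seen from the object's centre.
function renderEnvMap(center: number[]): void {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, depth);
  gl.viewport(0, 0, SIZE, SIZE);
  const proj = perspective(Math.PI / 2, 1, 0.1, 100); // 90 degree FOV, square aspect
  for (let face = 0; face < 6; face++) {
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, envMap, 0);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    const target = dirs[face].map((d, i) => center[i] + d);
    drawScene(lookAt(center, target, ups[face]), proj);
  }
  gl.bindFramebuffer(gl.FRAMEBUFFER, null); // restore your normal viewport after this
}
```

The reflective mesh's fragment shader then samples the map with something like textureCube(uEnvMap, reflect(viewDir, normal)). Cube maps are only exact for reflections of distant geometry, which is the usual accuracy trade-off of this approach.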

Related

If you have an object's orientation in an image, how do you find its translation?

I am attempting to estimate 6DOF pose of CAD models from a single image. The models are of simple, reflective objects (i.e. variously cut metal rectangular prisms). Using pose estimation code, I have its orientation in Euler angles but no translation. The images are incredibly simple, only the object on a monochromatic background, so they are easily segmented.
Challenges and Constraints:
The approach has to be generic enough to work on "simple" shapes such as rectangular prisms and pyramids.
The objects are highly reflective and the location of their light source may change.
It does not need to be quick, but it does need to be within 10% of the actual value.
All the objects have well defined corners.
Thanks for any help!

What is the fastest way to render subsets of a single texture onto a WebGL canvas?

If you have a single power-of-two width/height texture (say 2048) and you want to blit out scaled and translated subsets of it (say 64x92-sized tiles scaled down) as quickly as possible onto another texture (used as a buffer so it can be cached when not dirty), and then draw that texture onto a WebGL canvas, with no further requirements - what is the fastest strategy?
Is it first loading the source texture, binding an empty texture to a framebuffer, rendering the source with drawElementsInstancedANGLE to the framebuffer, then unbinding the framebuffer and rendering to the canvas?
I don't know much about WebGL and I'm trying to write a non-stateful version of https://github.com/kutuluk/js13k-2d (that just uses draw() calls instead of sprites that maintain state, since I would have millions of sprites). Before I get too far into the weeds, I'm hoping for some feedback.
There is no generic fastest way. The fastest way differs by GPU and also by the specifics of what you're drawing.
Are you drawing lots of things the same size?
Are the parts of the texture atlas the same size?
Will you be rotating or scaling each instance?
Can their movement be based on time alone?
Will their drawing order change?
Do the textures have transparency?
Is that transparency 100% or not (0 or 1) or is it various values in between?
I'm sure there are tons of other considerations. For every consideration I might choose a different approach.
In general your idea of using drawElementsInstancedANGLE seems fine, but without knowing exactly what you're trying to do and on which device it's hard to know.
Here are some tests of drawing lots of stuff.
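To make the drawElementsInstancedANGLE idea concrete, here is a rough sketch in TypeScript of drawing many atlas tiles in one instanced call. The gl context and compiled shader program are assumed to exist elsewhere, the attribute layout is hypothetical, per-instance scale is left out for brevity, and none of this is a claim about what is actually fastest on your hardware.

```typescript
// Minimal sketch, not production code. Assumed to exist elsewhere (hypothetical names):
declare const gl: WebGLRenderingContext;
declare const program: WebGLProgram; // shader reading aQuad (vec2) and aInstance (vec4)

const ext = gl.getExtension('ANGLE_instanced_arrays');
if (!ext) throw new Error('ANGLE_instanced_arrays not supported');

// Shared unit quad (two triangles), reused by every instance.
const quad = new Float32Array([0, 0, 1, 0, 0, 1, 1, 1]);
const indices = new Uint16Array([0, 1, 2, 2, 1, 3]);

const quadBuf = gl.createBuffer()!;
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuf);
gl.bufferData(gl.ARRAY_BUFFER, quad, gl.STATIC_DRAW);

const indexBuf = gl.createBuffer()!;
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuf);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);

// Per-instance data, 4 floats per sprite: destination x, y and the tile's
// u, v offset within the atlas (per-sprite scale omitted to keep this short).
const SPRITES = 10000;
const instanceData = new Float32Array(SPRITES * 4);
const instanceBuf = gl.createBuffer()!;
gl.bindBuffer(gl.ARRAY_BUFFER, instanceBuf);
gl.bufferData(gl.ARRAY_BUFFER, instanceData, gl.DYNAMIC_DRAW);

// Draw all sprites in a single instanced call.
function drawSprites(aQuad: number, aInstance: number): void {
  gl.useProgram(program);

  gl.bindBuffer(gl.ARRAY_BUFFER, quadBuf);
  gl.enableVertexAttribArray(aQuad);
  gl.vertexAttribPointer(aQuad, 2, gl.FLOAT, false, 0, 0);
  ext!.vertexAttribDivisorANGLE(aQuad, 0);            // advances per vertex

  gl.bindBuffer(gl.ARRAY_BUFFER, instanceBuf);
  gl.bufferSubData(gl.ARRAY_BUFFER, 0, instanceData); // upload this frame's sprite data
  gl.enableVertexAttribArray(aInstance);
  gl.vertexAttribPointer(aInstance, 4, gl.FLOAT, false, 16, 0);
  ext!.vertexAttribDivisorANGLE(aInstance, 1);        // advances per instance

  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuf);
  ext!.drawElementsInstancedANGLE(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0, SPRITES);
}
```

The same call works whether the current render target is the canvas or a texture bound to a framebuffer, so the caching step described in the question doesn't change the sketch.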

Monogame - how to have draw layer while on SpriteSortMode.Texture

I have a problem: in my game I have to use SpriteSortMode.Texture because I have a lot of objects sharing only a few textures, so I cannot afford to use SpriteSortMode.BackToFront.
The thing is, this means I cannot draw in layers unless I call SpriteBatch.Begin again with the exact same settings, which is what I'm currently doing.
I only need three draw layers - a tileset surface, objects like rocks or characters on the surface, and UI.
Other solutions I've found are using textured quads (which supposedly also improves tileset drawing performance) and going 3D with an orthographic view, which I haven't researched yet.
I'm hoping there's a better way to make this work.
Why would having a lot of objects with few textures mean you have to use SpriteSortMode.Texture?
"This can improve performance when drawing non-overlapping sprites of uniform depth." says the MSDN page, and this is clearly not what you are doing.
Just use the default SpriteSortMode.Deferred and draw things back to front in order.

Best way to draw a cube with solid-coloured faces

I'm completely new to DirectX (11) so this question will be extremely basic. Sorry about that.
I'd like to draw a cube on screen that has solid-coloured faces. All of the examples that I've seen have 8 vertices, with a colour defined at each vertex (red, green, blue). The pixel shader then interpolates between these vertices to give a spectrum of colours. This looks nice, but isn't what I'm trying to achieve. I'd just like a cube with six, coloured faces.
Two ideas come to mind:
use 24 vertices, and have each vertex referenced only a single time, i.e. no sharing. This way I can define three different colours at each 3D position, one for each face.
use a texture for each face that 'stretches' to give the face the correct colour. I'm not very familiar with textures right now, so not all that sure about this idea.
What's the typical/canonical way to achieve this effect? I'm sure this 'problem' has been solved many, many times before.
For your particular problem, vertex coloring might be the easiest and best solution. But the more complex your models become, the more complicated it is to create a proper vertex coloring, because you don't always want to limit your imagination to the underlying geometry.
In general, 3D objects are colored with one or more textures. For that you create a UV mapping (wiki), which unwraps your three-dimensional surface onto a 2D plane, the texture. You can then paint colors onto your object freely, at any resolution you want, which gives you the most freedom to make the model look the way you want.
Of course each application has its own characteristics, so some projects would choose another approach, but I think this is the most popular way to colorize models.
Option 1 is the way to go if:
You want zero color bleed between faces
You want zero texture bleed between faces
You later want to use the color as a lighting scheme, à la Minecraft
Caveats:
Uses more memory, since more vertices are being used (there are some techniques around this depending on how large your object is and its spatial resolution, e.g. using 1 byte for x/y/z instead of a float). A minimal sketch of the 24-vertex layout follows below.
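Here is that sketch of option 1, in TypeScript only to keep the example short; the resulting arrays map one-to-one onto a D3D11 vertex buffer (position + colour) and index buffer. The corner ordering and colours are illustrative, not taken from your code.

```typescript
// Minimal sketch: a cube built from 24 unshared vertices (4 per face), so each
// face carries exactly one flat colour. Positions and colours are illustrative.

type Vec3 = [number, number, number];

// Each face: 4 corners in counter-clockwise order (seen from outside) + a colour.
const faces: { corners: Vec3[]; color: Vec3 }[] = [
  { corners: [[-1,-1, 1], [ 1,-1, 1], [ 1, 1, 1], [-1, 1, 1]], color: [1, 0, 0] }, // +Z
  { corners: [[ 1,-1,-1], [-1,-1,-1], [-1, 1,-1], [ 1, 1,-1]], color: [0, 1, 0] }, // -Z
  { corners: [[ 1,-1, 1], [ 1,-1,-1], [ 1, 1,-1], [ 1, 1, 1]], color: [0, 0, 1] }, // +X
  { corners: [[-1,-1,-1], [-1,-1, 1], [-1, 1, 1], [-1, 1,-1]], color: [1, 1, 0] }, // -X
  { corners: [[-1, 1, 1], [ 1, 1, 1], [ 1, 1,-1], [-1, 1,-1]], color: [1, 0, 1] }, // +Y
  { corners: [[-1,-1,-1], [ 1,-1,-1], [ 1,-1, 1], [-1,-1, 1]], color: [0, 1, 1] }, // -Y
];

const vertices: number[] = []; // interleaved: x, y, z, r, g, b
const indices: number[] = [];

faces.forEach((face, f) => {
  const base = f * 4;                        // index of this face's first vertex
  for (const corner of face.corners) {
    vertices.push(...corner, ...face.color); // same colour on all 4 verts of the face
  }
  indices.push(base, base + 1, base + 2,     // two triangles per face
               base, base + 2, base + 3);
});

// 24 vertices and 36 indices; upload these as your vertex and index buffers.
// Note the winding here is counter-clockwise; with D3D11's default rasteriser
// state (clockwise = front face), flip each triangle's index order or the cull mode.
console.log(vertices.length / 6, indices.length); // 24 36
```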

Rendering multiple textures to a sphere based on game-generated values

I'm making a game with my friend that involves randomly generating planets based on certain properties. Originally this game was all 2D, but now we've decided to enhance the purpose of planets in the game and make it 2.5D, with planets being rendered as 3D spheres in an otherwise 2D world. Now, up to this point we had a pretty good thing going with the way planets looked. We used layered textures, one for each property (water, land, atmosphere) depending on how our algorithms created the planet. This looked pretty, but the planet surfaces were largely lame and didn't vary as they were all made from the same few textures.
Now that we are going 3D, I want to create a nice planetary map which will determine the topography of the planet based on its properties to make each planet have different bodies of water, land masses, etc. I also want to draw different textures on the surface of the planet based on that map, with them blending together at the edges.
I've considered two possibilities: rendering the textures based on the map to a RenderTarget and then wrapping that RenderTarget around my sphere model, or converting the map to vertex data and writing a shader to draw the textures with the proper weight.
The problem is, I'm a novice at both RenderTargets and HLSL (as a matter of fact, I don't even know if the RenderTarget method is possible), so I feel the need for some guidance here. What would be recommended for rendering multiple textures to a sphere model based on a generated terrain map? Also, are there any suggestions for what format to create the terrain map in (it would be some sort of data structure which would represent the type of terrain at any coordinate on the planet's surface)?
I have looked at other multi-texture tutorials, but they all seem based on a pre-determined texture or set of values. I need to be able to randomly generate the terrain in-game.
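One common answer to the terrain-map format question is a splat map: a texture the same size as the sphere's UV unwrap in which each channel stores the blend weight of one terrain type, and the pixel shader mixes the detail textures with those weights, which also gives the soft blending at the edges. Below is a rough sketch of generating such a map; it is in TypeScript purely to keep the example compact, the idea carries over directly to C#/XNA, and noise2 is a stand-in for whatever noise or planet-property function your generator already has.

```typescript
// Minimal sketch: build an RGBA "splat map" where each channel is the blend
// weight of one terrain type (here R = water, G = land, B = mountain; A unused).
// noise2 is a placeholder for your own noise / planet-property function.
declare function noise2(x: number, y: number): number; // expected to return 0..1

// Linear ramp from 0 to 1 between a and b, used to get soft transitions.
const ramp = (a: number, b: number, t: number): number =>
  Math.min(1, Math.max(0, (t - a) / (b - a)));

function buildSplatMap(width: number, height: number, seaLevel: number): Uint8Array {
  const splat = new Uint8Array(width * height * 4);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // Treat (x, y) as the UV coordinate of the sphere's unwrapped surface.
      const elevation = noise2(x / width, y / height);                       // 0..1
      const water = 1 - ramp(seaLevel - 0.05, seaLevel + 0.05, elevation);   // soft shoreline
      const mountain = ramp(0.7, 0.85, elevation) * (1 - water);
      const land = Math.max(0, 1 - water - mountain);
      const i = (y * width + x) * 4;
      splat[i + 0] = Math.round(255 * water);
      splat[i + 1] = Math.round(255 * land);
      splat[i + 2] = Math.round(255 * mountain);
      splat[i + 3] = 0; // free channel for a fourth terrain type
    }
  }
  return splat; // upload as a texture alongside the detail textures
}
```

The pixel shader side is then just a weighted sum, roughly finalColor = w.r * waterColor + w.g * landColor + w.b * mountainColor, where w is the splat map sampled at the same UV. The RenderTarget idea from the question is also workable (bake the blended result into one texture once, then wrap it around the sphere); the splat-map shader simply skips that intermediate step.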
