I want to create a function that takes multiple textures and tiles them next to each other. For example, if I had imgA, imgB, and imgC, I could get a texture like this:
A A B
C B B
B C A
Also, the images do not have to be the same size, so I might get something like this:
AAB C
C B B
BAC C
Does anyone know how I can do this in HLSL, and what functions I should be looking at? Do you have a syntax example?
Thank you :)
EDIT:
I am not quite satisfied with the answers yet; I will explore them more in depth and then come back to this question.
Running loops in HLSL pixel shaders is not the best idea. It's probably easier to stream the vertices corresponding to the desired tiled texture.
First, you would want to create a texture atlas, i.e., a big texture which contains all the textures you want to compose. Then you render one quad (2 triangles) after another in the desired arrangement.
You can use n Draw calls: one quad at a time.
You can make one big vertex buffer with pre-computed or partially computed tile positions and use one Draw call (see the sketch below).
Or you can do one DrawInstanced call. This is how tile-based maps are rendered in most games.
If you don't want to create a texture atlas, you could pass each of the base textures to a separate sampler and then map the texture coordinates to the appropriate sampler. However, this adds branching to the pixel shader which is also going to cost performance.
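To make the single-vertex-buffer option above concrete, here is a small CPU-side sketch. I've written it in TypeScript since the thread never settles on a host language, and every name in it is illustrative rather than taken from any particular API: it builds one quad (two triangles) per tile, with each quad's UVs pointing into the sub-rectangle of the atlas that holds the desired image.

```typescript
// One tile: where it goes on screen (in whatever units your vertex
// shader expects) and which sub-rectangle of the atlas it samples.
type Tile = {
  x: number; y: number; w: number; h: number;       // placement
  u0: number; v0: number; u1: number; v1: number;   // atlas rectangle
};

// Interleave position (x, y) and UV (u, v) for two triangles per tile.
// Upload the result as a vertex buffer and draw it in a single call.
function buildTileQuads(tiles: Tile[]): Float32Array {
  const out = new Float32Array(tiles.length * 6 * 4);  // 6 verts * (x, y, u, v)
  tiles.forEach((t, i) => {
    const quad = [
      t.x,       t.y,       t.u0, t.v0,
      t.x + t.w, t.y,       t.u1, t.v0,
      t.x + t.w, t.y + t.h, t.u1, t.v1,
      t.x,       t.y,       t.u0, t.v0,
      t.x + t.w, t.y + t.h, t.u1, t.v1,
      t.x,       t.y + t.h, t.u0, t.v1,
    ];
    out.set(quad, i * quad.length);
  });
  return out;
}
```

With this layout the pixel shader only ever samples the atlas once per fragment, with no loops and no branching.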
That question is a little too broad; there are far too many ways to do what you describe. Some solutions could probably use a large über-shader with most of the logic embedded in the HLSL, but that seems wrong, and complex for no good reason. It would be more affordable to mix in some generated geometry for each portion of the screen.
You are likely to see absolutely no performance penalty from binding each texture separately and rendering quads to the correct locations, even on the weakest hardware.
I'm using Python Kivy to render meshes with OpenGL onto a canvas. I want to return vertex data from the fragment shader so I can build a collider (to use in my CPU event listeners after doing the projection and model-view transforms). I could replicate the matrix multiplications on the CPU (I guess that's the easy way out), but then I would have to do the same calculations twice (not good).
The only way I can think of doing this (after some browsing) is to imprint an object ID into the alpha channel of my rendered mesh (it wouldn't affect much if I kept the encoded values near an alpha of 1), and then create some kind of 'color picker' on the CPU side to decode it (I'm guessing that's not hard to do using Kivy).
Does anyone have a better idea, or a better approach?
The first criterion here is: do you need collision for picking, or for physics simulation?
If it is for physics: you almost never want the same mesh for rendering and for physics collisions. Typically, you use a very rough approximation for the physics shape, nearly always a convex shape, or a union of convex shapes. (Colliding arbitrary concave meshes is something that no physics engine can do well, and if they attempt it at all, performance will be poor.)
If it is for the purpose of picking an object with a mouse-click: you can go two different ways for this:
You replicate the geometry on the CPU, and use the mouse-location plus camera-view to create a ray that intersects this geometry, to see what is hit first.
After rendering your scene, you read back a single pixel from the depth buffer. (The pixel that your mouse is over.) With the depth value you get back, plus camera info, you can reconstruct a corresponding 3D position in your world. Once you have a 3D location, you can query your world to see which object is the closest to this point, and you will have your hit.
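For the depth-readback route, the reconstruction itself is just an un-projection. Here is a sketch of that math in TypeScript (purely illustrative; it assumes you already have the inverse of projection * view as a 16-element column-major array, and the depth value in [0, 1] read back for the pixel under the mouse):

```typescript
// Reconstruct a world-space position from a window coordinate and a
// depth-buffer value in [0, 1]. invViewProj is inverse(projection * view),
// column-major, 16 numbers. This is an assumption-laden sketch, not a
// drop-in for any particular engine.
function unproject(mouseX: number, mouseY: number, depth: number,
                   width: number, height: number,
                   invViewProj: number[]): [number, number, number] {
  // Window coords -> normalized device coords in [-1, 1] (y is flipped).
  const ndc = [
    (2 * mouseX) / width - 1,
    1 - (2 * mouseY) / height,
    2 * depth - 1,
    1,
  ];
  // Multiply by the inverse view-projection matrix (column-major).
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += invViewProj[col * 4 + row] * ndc[col];
    }
  }
  // Perspective divide back to a 3D point.
  return [out[0] / out[3], out[1] / out[3], out[2] / out[3]];
}
```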
I wonder how to code a shader that outputs a texture T1 in which each texel stores the coordinates of the pixels of a given texture T0 that are (for example) not black.
It's a bit difficult to tell whether A, B, C in your question represent pixels or sub-images. If you are trying to locate pixels with certain values and group them together, you can implement that as a variation of a pixel-sort algorithm. There are many ways to do this, but see e.g.
https://bl.ocks.org/zz85/cafa1b8b3098b5a40e918487422d47f6
or
https://timseverien.com/posts/2017-08-17-sorting-pixels-with-webgl/
which uses a per-frame odd-even pairwise comparison.
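For reference, here is the compare-and-swap structure those demos run on the GPU, written as a plain CPU odd-even (transposition) sort in TypeScript. The Pixel record and the brightness key are assumptions for illustration; on the GPU, each iteration of the while loop becomes one rendered pass per frame.

```typescript
type Pixel = { x: number; y: number; value: number };

// One full odd-even transposition sort on the CPU. On the GPU, every
// pass compares disjoint pairs in parallel; here the passes run in a loop.
function oddEvenSort(pixels: Pixel[]): Pixel[] {
  const a = pixels.slice();
  let sorted = false;
  while (!sorted) {
    sorted = true;
    for (const start of [1, 0]) {             // odd pass, then even pass
      for (let i = start; i + 1 < a.length; i += 2) {
        if (a[i].value > a[i + 1].value) {    // compare-and-swap one pair
          [a[i], a[i + 1]] = [a[i + 1], a[i]];
          sorted = false;
        }
      }
    }
  }
  return a;
}
```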
I have seen WebGL demos that
color rectangular surfaces
attach textures to the rectangles
draw wireframes
have semitransparent textures
What I do not understand is how to combine these effects into a single program, and how to interact with objects to change their look.
Suppose I want to create a scene with all the above, and have the ability to change the color of any rectangle, or change the texture.
I am trying to understand the organization of the code. Here are some short, related questions:
I can create a vertex buffer with corresponding color buffer. Can I have some rectangles with texture and some without?
If not, I have to create one vertex buffer for all objects with colors, and another with textures. Can I attach a different texture to each rectangle in a vector?
For a case with some rectangles with colors, and others with textures, it requires two different shader programs. All the demos I see have only one, but clearly more complicated programs have multiple. How do you switch between shaders?
How to draw wireframe on and off? Can it be combined with textures? In other words, is it possible to write a shader that can turn features like wireframe on and off with a flag, or does it take two different calls to two different shaders?
All the demos I have seen use an index buffer with triangles. Are quads no longer supported in WebGL? Obviously for some things triangles would be needed, but if I have a bunch of rectangles it would be nice not to have to create an index of triangles.
For all three of the above scenarios, if I want to change the points, the color, the texture, or the transparency, am I correct in understanding the glSubBuffer will allow replacing data currently in the buffer with new data.
Is it reasonable to have a single object maintaining these kinds of objects and updating color and textures, or is this not a good design?
The question you ask is not just about WebGL, but also about OpenGL and 3D.
The most common way to interact is to set attributes once at the start, and to set uniforms both at the start and at run time.
In general, the answer to all of your questions is "use an engine".
Think of it like this: you have JavaScript, a CPU-based language; then you have WebGL, which is essentially a JS library that allows low-level communication with the GPU (remember, low level); and then you have the shader, which is the GPU program you must provide, but which only works with specific data.
Doing anything more than "simple" requires a tool that lets you skip using WebGL directly (and very often also skip writing shaders directly). That tool is what we call an engine. An engine usually bundles together one set of capabilities and skips others (the difference between a 2D and a 3D engine, for example). Engine functions call preset WebGL functions in a specific order, so you never have to touch the WebGL API yourself. An engine also provides the fairly complicated logic needed to build only a single pair, or a few pairs, of shaders from a few simple engine API calls. The reason is that switching shader programs at run time is costly.
Your questions
I can create a vertex buffer with corresponding color buffer. Can I
have some rectangles with texture and some without? If not, I have to
create one vertex buffer for all objects with colors, and another with
textures. Can I attach a different texture to each rectangle in a
vector?
Let's have a buffer that we call a vertex buffer. We put various data into the vertex buffer. The data does not go in as individual values but as sets; each distinct piece of data within a set we call an attribute. An attribute can have any meaning for its vertex that the vertex shader or fragment shader code decides.
If we have a buffer full of data for triangles, it is possible, for example, to add an attribute that says whether a specific vertex should texture its triangle or not, and to do the texturing logic in the shader. Note that the attribute layout, and therefore the data size, of every vertex must be the same (so textured triangles take up the same space as untextured ones).
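As a rough sketch of that idea (WebGL 1 GLSL held in TypeScript strings; every name here is mine, not from the question): a per-vertex flag is passed through to the fragment shader, which mixes a flat vertex color with a texture sample.

```typescript
// Sketch only: GLSL sources for a WebGL 1 program where a per-vertex
// flag (aUseTexture) chooses between a flat color and a texture sample.
const vertexSrc = `
attribute vec3 aPosition;
attribute vec4 aColor;
attribute vec2 aTexCoord;
attribute float aUseTexture;   // 0.0 = plain color, 1.0 = textured
uniform mat4 uMvp;
varying vec4 vColor;
varying vec2 vTexCoord;
varying float vUseTexture;
void main() {
  vColor = aColor;
  vTexCoord = aTexCoord;
  vUseTexture = aUseTexture;
  gl_Position = uMvp * vec4(aPosition, 1.0);
}`;

const fragmentSrc = `
precision mediump float;
uniform sampler2D uSampler;
varying vec4 vColor;
varying vec2 vTexCoord;
varying float vUseTexture;
void main() {
  vec4 texel = texture2D(uSampler, vTexCoord);
  // mix() avoids a branch: flag 0 gives the color, flag 1 gives the texel.
  gl_FragColor = mix(vColor, texel, vUseTexture);
}`;
```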
For a case with some rectangles with colors, and others with textures,
it requires two different shader programs. All the demos I see have
only one, but clearly more complicated programs have multiple. How do
you switch between shaders?
Not true; even very complicated programs might have only one pair of shaders (one WebGL program). But it is still possible to change the program at run time:
https://www.khronos.org/registry/webgl/specs/latest/1.0/#5.14.9
WebGL API function useProgram
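A minimal sketch of switching programs between batches (the program and function names are placeholders; only useProgram and drawElements are real WebGL calls):

```typescript
// Assume colorProgram and textureProgram were created earlier with
// gl.createProgram / gl.attachShader / gl.linkProgram.
function drawScene(gl: WebGLRenderingContext,
                   colorProgram: WebGLProgram,
                   textureProgram: WebGLProgram): void {
  // Batch 1: everything that only needs vertex colors.
  gl.useProgram(colorProgram);
  // ...bind buffers/attributes for the colored rectangles...
  // gl.drawElements(gl.TRIANGLES, coloredIndexCount, gl.UNSIGNED_SHORT, 0);

  // Batch 2: everything that samples a texture.
  gl.useProgram(textureProgram);
  // ...bind the texture and the attributes for the textured rectangles...
  // gl.drawElements(gl.TRIANGLES, texturedIndexCount, gl.UNSIGNED_SHORT, 0);
}
```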
How to draw wireframe on and off? Can it be combined with textures? In
other words, is it possible to write a shader that can turn features
like wireframe on and off with a flag, or does it take two different
calls to two different shaders?
WebGL has no polygon-mode switch like desktop OpenGL's glPolygonMode, so there is no shader-independent wireframe option you can toggle per draw call. The usual approaches are to issue a second draw call with gl.LINES over an edge index buffer, or to write a shader that draws the edges itself and control it with a flag (the flag can be either a uniform or an attribute).
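For the first option, a sketch that derives a line index buffer from an existing triangle index buffer and draws it with gl.LINES (names and layout are assumptions):

```typescript
// Turn triangle indices into line indices (3 edges per triangle).
// Shared edges are emitted twice here; deduplicate if that matters.
function triangleEdges(triIndices: Uint16Array): Uint16Array {
  const lines: number[] = [];
  for (let i = 0; i < triIndices.length; i += 3) {
    const [a, b, c] = [triIndices[i], triIndices[i + 1], triIndices[i + 2]];
    lines.push(a, b, b, c, c, a);
  }
  return new Uint16Array(lines);
}

// Usage sketch: upload once, then toggle wireframe per frame.
// const edgeBuf = gl.createBuffer();
// gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, edgeBuf);
// gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, triangleEdges(triIndices), gl.STATIC_DRAW);
// if (wireframe) gl.drawElements(gl.LINES, lineCount, gl.UNSIGNED_SHORT, 0);
```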
All the demos I have seen use an index buffer with triangles. Are
quads no longer supported in WebGL? Obviously for some things triangles
would be needed, but if I have a bunch of rectangles it would be nice
not to have to create an index of triangles.
WebGL supports only points, lines, and triangles; the quad primitive of old desktop OpenGL was dropped, presumably because triangles keep the hardware and shaders simpler. A rectangle is therefore drawn as two triangles, usually via the index buffer.
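For example, a small helper that emits the triangle indices for a list of rectangles whose four corners sit consecutively in the vertex buffer (a sketch; the layout is an assumption):

```typescript
// Build a triangle index buffer for quads whose corners are laid out
// consecutively in the vertex buffer (4 vertices per quad).
function quadIndices(quadCount: number): Uint16Array {
  const indices = new Uint16Array(quadCount * 6);
  for (let q = 0; q < quadCount; q++) {
    const v = q * 4;          // first corner of this quad
    indices.set([v, v + 1, v + 2,  v, v + 2, v + 3], q * 6);
  }
  return indices;
}
```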
For all three of the above scenarios, if I want to change the points,
the color, the texture, or the transparency, am I correct in
understanding the glSubBuffer will allow replacing data currently in
the buffer with new data.
Updating buffer data on the run is possible, though re-uploading large buffers every frame can slow a program down, so avoid it where you can. The call you are after does exist in WebGL, just under a different name: bufferSubData, which replaces a range of an existing buffer's contents with new data without reallocating the buffer.
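A sketch of what such an update looks like, assuming four RGBA floats per vertex and four vertices per rectangle (both assumptions of mine):

```typescript
// Sketch: replace one rectangle's four RGBA colors inside an existing
// color buffer. rectIndex, colorBuffer and the 4-floats-per-vertex
// layout are illustrative assumptions.
function recolorRect(gl: WebGLRenderingContext,
                     colorBuffer: WebGLBuffer,
                     rectIndex: number,
                     rgba: [number, number, number, number]): void {
  const data = new Float32Array(16);          // 4 vertices * RGBA
  for (let v = 0; v < 4; v++) data.set(rgba, v * 4);
  const byteOffset = rectIndex * data.byteLength;
  gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer);
  gl.bufferSubData(gl.ARRAY_BUFFER, byteOffset, data);
}
```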
Is it reasonable to have a single object maintaining these kinds of
objects and updating color and textures, or is this not a good design?
Yes. This is usually called a scene graph; it is widely used and can be combined with other techniques such as display lists.
Possible Duplicate:
Rendering meshes with multiple indices
This is about using index buffers to render custom geometry, like from an OBJ file. I know a bit about 3d graphics conventions, but I haven't done much of anything with WebGL.
The short form of my question is "how do you use Index Buffers in WebGL?"
What I would like to do is, for a piece of custom geometry, build a list of the position vectors at play and a list of the UV vectors at play (let's skip the normals). Then, when I go to draw the triangles, I just want to define each triangle with pointers to three of the existing position vectors and pointers to three existing UV vectors (simply because that's how OBJs are set up).
From what I've read (I swear I googled this a hundred different ways and couldn't get a conclusive answer), you have to lump the UV and the position together as a vertex, and then the triangles are defined as pointers to three of these vertices. But what happens when the list of UVs is a different length than the list of positions?
Let's say I have a cube. That's eight position vectors. But each face has the same square UV layout (each side should look the same when rendered), so that's only four unique UVs. Now what?
It's like I have to abandon this method, bite the bullet, and define each position and UV for all 12 triangles, at the "cost" of repeating position vectors along the cube edges and repeating UVs along the face diagonals. If this is the accepted practice, that's fine; I just want to be sure I'm going about this the right way.
Your arrays containing vertex positions, texture coordinates, normals, etc. must be the same length. That means redundant data in many cases. A cube is one example where the redundancy is especially bad. You'll actually have to pass in 24 vertices, and 24 texcoords.
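For what it's worth, here is a sketch of the usual conversion in TypeScript (all names are mine; it assumes OBJ-style, already-triangulated faces where each corner pairs a position index with a UV index): expand the separate index lists into one interleaved vertex array plus a single index buffer, de-duplicating repeated position/UV pairs so you only duplicate where you have to.

```typescript
// One OBJ-style corner: an index into the position list and an index
// into the UV list (both 0-based here for simplicity).
type Corner = { p: number; t: number };

// Expand faces with separate position/UV indices into a single
// interleaved vertex array (x, y, z, u, v) and one index buffer.
function deindex(positions: number[][], uvs: number[][],
                 faces: Corner[][]): { vertices: Float32Array; indices: Uint16Array } {
  const vertexData: number[] = [];
  const indexData: number[] = [];
  const seen = new Map<string, number>();   // "p/t" -> combined vertex index

  for (const face of faces) {
    for (const corner of face) {            // assumes triangulated faces
      const key = `${corner.p}/${corner.t}`;
      let index = seen.get(key);
      if (index === undefined) {
        index = vertexData.length / 5;      // 5 floats per combined vertex
        vertexData.push(...positions[corner.p], ...uvs[corner.t]);
        seen.set(key, index);
      }
      indexData.push(index);
    }
  }
  return { vertices: new Float32Array(vertexData), indices: new Uint16Array(indexData) };
}
```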
You've already heard that you can't reuse cube vertices, but for some additional context, note that modern 3D content has a lot of smooth non-flat surfaces; thus most joins between triangles can share vertices since they have the same normal and other properties. A sharp edge where the attributes are discontinuous is the less common case, which therefore should not be optimized for.
I'm relatively new to 3D development and am currently using Actionscript, Stage3D and AGAL to learn. I'm trying to create a scene with a simple procedural mesh that is flat shaded. However, I'm stuck on exactly how I should be passing surface normals to the shader for the lighting. I would really like to just use a single surface normal for each triangle and do flat, even shading for each. I know it's easy to achieve better looking lighting with normals for each vertex, but this is the look I'm after.
Since the shader normally processes every vertex, not every triangle, is it possible for me to just pass a single normal per triangle, rather than one per vertex? Is my thinking completely off here? If anyone had a working example of doing simple, flat shading I'd greatly appreciate it.
I'm digging up an old question here since I stumbled on it via google and can see there is no accepted answer.
Stage3D does not have an equivalent of the "GL_FLAT" option for its shader engine. What this means is that the fragment shader program always receives a "varying", or interpolated, value from the outputs of the three respective vertices (via the vertex program). If you want flat shading, you have basically only one option:
Create three unique vertices for each triangle and set the normal for each vertex to the face normal of the triangle. This way, each vertex will calculate the same lighting and result in the same vertex color. When the fragment shader interpolates, it will be interpolating three identical values, resulting in flat shading.
This is pretty lame. The requirement of unique vertices per triangle means you can't share vertices between triangles. This will definitely increase your vertex count, causing increased delays during your VertexBuffer3D uploads as well as overall lower frame rates. However, I have not seen a better solution anywhere.
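For illustration, here is a CPU-side sketch of that duplication step (written in TypeScript rather than ActionScript, with names of my own): expand indexed triangles into three unique vertices each, every one carrying the triangle's face normal.

```typescript
// Expand indexed triangles into 3 unique vertices per triangle, each
// carrying the triangle's face normal (x, y, z, nx, ny, nz interleaved).
function flatShadeVertices(positions: Float32Array, indices: Uint16Array): Float32Array {
  const out = new Float32Array(indices.length * 6);
  for (let t = 0; t < indices.length; t += 3) {
    // Fetch the three corners of this triangle.
    const p = [0, 1, 2].map(k => {
      const i = indices[t + k] * 3;
      return [positions[i], positions[i + 1], positions[i + 2]];
    });
    // Face normal = normalize(cross(p1 - p0, p2 - p0)).
    const e1 = [p[1][0] - p[0][0], p[1][1] - p[0][1], p[1][2] - p[0][2]];
    const e2 = [p[2][0] - p[0][0], p[2][1] - p[0][1], p[2][2] - p[0][2]];
    const n = [
      e1[1] * e2[2] - e1[2] * e2[1],
      e1[2] * e2[0] - e1[0] * e2[2],
      e1[0] * e2[1] - e1[1] * e2[0],
    ];
    const len = Math.hypot(n[0], n[1], n[2]) || 1;
    n[0] /= len; n[1] /= len; n[2] /= len;
    // Write the three duplicated vertices with the shared face normal.
    for (let k = 0; k < 3; k++) {
      out.set([...p[k], ...n], (t + k) * 6);
    }
  }
  return out;
}
```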