This is about using index buffers to render custom geometry, like from an OBJ file. I know a bit about 3d graphics conventions, but I haven't done much of anything with WebGL.
The short form of my question is "how do you use Index Buffers in WebGL?"
What I would like to do is, for a piece of custom geometry, build a list of the position vectors at play and a list of the UV vectors at play (let's skip the normals). Then, when I go to draw the triangles, I just want to define each triangle with pointers to three of the existing position vectors and pointers to three of the existing UV vectors (simply because that's how OBJs are set up).
From what I've read (I swear I googled this a hundred different ways and couldn't get a conclusive answer), you have to lump the UV and the position together as a vertex, and then the triangles are defined as pointers to three of these vertices. But what happens when the list of UVs is a different length than the list of positions?
Let's say I have a cube. That's eight position vectors. But each face has the same square UV layout (each side should look the same when rendered), so that's four (unique) UVs. Now what?
It's like I have to abandon this method, bite the bullet, and for all 12 triangles define each position and UV, at the "cost" of repeating position vectors along the cube edges and repeating UVs along the face diagonals. If this is the accepted practice, that's fine; I just want to be sure I'm going about this the right way.
Your arrays containing vertex positions, texture coordinates, normals, etc. must be the same length. That means redundant data in many cases. A cube is one example where the redundancy is especially bad. You'll actually have to pass in 24 vertices, and 24 texcoords.
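To make that concrete, here is a minimal sketch of the duplicated-vertex layout and an index buffer in WebGL, written in TypeScript. It assumes you already have a WebGLRenderingContext `gl` and a compiled shader program with position and UV attributes set up; only the front face's data is written out, the other five faces follow the same pattern.

```typescript
// One de-duplicated "vertex" is a (position, uv) pair.
// 4 vertices per face * 6 faces = 24 vertices, 36 indices total.
// Only the front face is spelled out; the other faces repeat the pattern
// with their own corner positions but the same 0..1 UV square.

const positions = new Float32Array([
  // front face (z = +1)
  -1, -1,  1,   1, -1,  1,   1,  1,  1,  -1,  1,  1,
  // ... 5 more faces, 4 corners each
]);

const uvs = new Float32Array([
  // the front face uses the full 0..1 square, as does every other face
  0, 0,   1, 0,   1, 1,   0, 1,
  // ... repeated for the 5 remaining faces
]);

const indices = new Uint16Array([
  0, 1, 2,   0, 2, 3,          // front face = 2 triangles
  // ... 4, 5, 6,  4, 6, 7,  and so on for the other faces
]);

function uploadCube(gl: WebGLRenderingContext) {
  const positionBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

  const uvBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, uvBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, uvs, gl.STATIC_DRAW);

  // The index buffer is the one bound to ELEMENT_ARRAY_BUFFER.
  const indexBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);

  return { positionBuffer, uvBuffer, indexBuffer };
}

// At draw time (with attribute pointers already set up):
//   gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);
```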
You've already heard that you can't reuse cube vertices, but for some additional context, note that modern 3D content has a lot of smooth non-flat surfaces; thus most joins between triangles can share vertices since they have the same normal and other properties. A sharp edge where the attributes are discontinuous is the less common case, which therefore should not be optimized for.
Thanks for taking the time to read this.
We have fixed stereo pairs of cameras looking into a closed volume. We know the dimensions of the volume and have the intrinsic and extrinsic calibration values for the camera pairs. The objective is to identify the 3D positions of multiple identical objects accurately.
This naturally leads to what is described in the literature as the correspondence problem. We need a fast technique to match ball A from image 1 with ball A from image 2, and so on.
At the moment we use the properties of epipolar geometry (the fundamental matrix) to match the balls from different views in a crude way. This works OK when the objects are sparse, but it gives a lot of false positives if the objects are densely scattered. Since ball A in image 1 can lie anywhere on the epipolar line going across image 2, it leads to mismatches when multiple objects lie on that line and look similar.
Is there a way to re-model this as a 3D line intersection problem or something similar? Since ball A in image 1 can only take a bounded range of 3D positions, is there a way to represent it as a line in 3D and do an intersection test to find the closest matching ball in image 2?
Or is there a way to generate a sparse list of 3D values corresponding to each 2D grid of pixels in images 1 and 2, and do an intersection test on these values to find the matching objects across the two cameras?
Because the objects can be identical, OpenCV feature matching algorithms like FLANN and ORB don't work.
Any ideas in the form of formulae or code are welcome.
Thanks!
Sak
You've set yourself quite a difficult task. Because one point can occlude another in a view, it's not generally possible even to count the number of points. If each view has two points, but those points fall on the same epipolar line on the other view, then you can count anywhere between 2 and 4 points.
Assuming you want to minimize the points, this starts to look like Minimum Vertex Cover in a dense bipartite graph, with each edge representing the association of a point from each view, and the weight of each edge taken from the registration error of associating the corresponding points (vertices) from each view. MVC is, of course, NP-hard, and if you treat the problem as a general MVC problem then you'll never do better than O(n^2) because that's how many edges there are to examine.
Your particular MVC problem might have structure that can be exploited to perform a more efficient approximation. In particular, I might suggest calculating the epipolar lines in one view, ordering them by angle from the epipole, and similarly sorting the points in that view from the epipole. You can then iterate over the two sorted lists roughly in parallel, greedily associating each point with a nearby epipolar line. Then you can do the same in the other view, but only looking at points in that view which had not yet been associated during the previous pass. I think that a more regimented and provably optimal approach might be possible with dynamic programming (particularly if you strictly bound the registration error) which wouldn't require the second pass, but I can't sketch it out offhand.
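To make the greedy idea concrete, here is a minimal TypeScript sketch of the simpler one-pass variant it builds on: for each detection in image 1, compute its epipolar line l = F·p in image 2 and greedily claim the nearest unclaimed detection within a pixel threshold. The fundamental matrix F, the detected centres, and the threshold are all assumed inputs; the angle-sorted two-pass refinement described above is not shown.

```typescript
type Point = { x: number; y: number };

// Epipolar line in image 2 for a point p in image 1: l = F * [x, y, 1]^T.
// F is the 3x3 fundamental matrix in row-major order.
function epipolarLine(F: number[][], p: Point): [number, number, number] {
  const v = [p.x, p.y, 1];
  return [
    F[0][0] * v[0] + F[0][1] * v[1] + F[0][2] * v[2],
    F[1][0] * v[0] + F[1][1] * v[1] + F[1][2] * v[2],
    F[2][0] * v[0] + F[2][1] * v[1] + F[2][2] * v[2],
  ];
}

// Perpendicular distance from a point to the line ax + by + c = 0.
function pointLineDistance(p: Point, [a, b, c]: [number, number, number]): number {
  return Math.abs(a * p.x + b * p.y + c) / Math.hypot(a, b);
}

// Greedy association: each detection in image 1 claims the nearest unclaimed
// detection in image 2 lying close enough to its epipolar line.
// Returns pairs of indices [i1, i2]; unmatched detections are skipped.
function greedyEpipolarMatch(
  F: number[][],
  pts1: Point[],
  pts2: Point[],
  maxDistPx = 3.0
): Array<[number, number]> {
  const claimed = new Set<number>();
  const matches: Array<[number, number]> = [];

  for (let i = 0; i < pts1.length; i++) {
    const line = epipolarLine(F, pts1[i]);
    let best = -1;
    let bestDist = maxDistPx;
    for (let j = 0; j < pts2.length; j++) {
      if (claimed.has(j)) continue;
      const d = pointLineDistance(pts2[j], line);
      if (d < bestDist) {
        bestDist = d;
        best = j;
      }
    }
    if (best >= 0) {
      claimed.add(best);
      matches.push([i, best]);
    }
  }
  return matches;
}
```

Note that this simple variant is still O(n·m) over the detections and inherits all the ambiguity problems discussed above; the sorting-by-angle scheme is one way to cut that cost down.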
For objects of different types it's easy to find the match using sum of absolute differences. For similar objects, a good solution could lead to a publishable paper. Anyway, here's one quick algorithm:
1. Detect the two balls in the first image (using object detection methods).
2. Divide the image into two segments, each containing one of the balls.
3. Repeat steps 1 & 2 for the second image.
4. The direction of the segments in the two images should give the correspondence of the two balls.
Try this, it should work for two balls.
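The answer above leaves the detection step to existing object-detection methods. As a rough sketch of step 4 only, assuming the two ball centres per image have already been detected by some external means, one could compare their ordering along the axis in which they are most separated; all names here are illustrative.

```typescript
type Center = { x: number; y: number };

// Sort the two centres along the axis in which they are farthest apart,
// a crude stand-in for "the direction of the segments" in the answer above.
function orderByDominantAxis(pair: Center[]): Center[] {
  const dx = Math.abs(pair[0].x - pair[1].x);
  const dy = Math.abs(pair[0].y - pair[1].y);
  return [...pair].sort((a, b) => (dx >= dy ? a.x - b.x : a.y - b.y));
}

// Pair "first with first" and "second with second" across the two images.
function pairTwoBalls(img1: Center[], img2: Center[]): Array<[Center, Center]> {
  const o1 = orderByDominantAxis(img1);
  const o2 = orderByDominantAxis(img2);
  return [[o1[0], o2[0]], [o1[1], o2[1]]];
}
```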
I want to create a function that takes multiple textures and appends and tiles them next to each other. For example, if I had imgA, imgB, imgC I could get a texture like this:
A A B
C B B
B C A
Also, the images do not have to be the same size, so I might get something like this:
AAB C
C B B
BAC C
Does anyone know how I can do this in HLSL? What functions should I be looking at? Do you have any syntax example?
Thank you :)
EDIT:
I am not quite satisfied with the answers yet; I will be exploring them more in depth, then coming back to this question.
Running loops in HLSL pixel shaders is not the best idea. It's probably easier to stream the vertices corresponding to the desired tiled texture.
First, you would want to create a texture atlas, i.e., a big texture which contains all the textures you want to compose. Then you render one quad (2 triangles) after another in the desired arrangement.
You can use n Draw calls: one quad at a time.
You can make one big vertex buffer with pre-computed or partially computed tile positions and use one Draw call.
Or you can do one DrawInstanced call. This is how tile-based maps are rendered in most games.
If you don't want to create a texture atlas, you could pass each of the base textures to a separate sampler and then map the texture coordinates to the appropriate sampler. However, this adds branching to the pixel shader which is also going to cost performance.
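As an API-agnostic sketch of the "one big vertex buffer" option: given each tile's destination rectangle on screen (in normalized device coordinates) and its source rectangle inside the atlas (in UV space), emit two triangles per tile. Buffer creation and the draw call are left to whichever D3D wrapper you're using; the rectangle and tile types here are assumptions, not part of any API.

```typescript
// One tile = a destination rectangle (where it goes on screen, in NDC)
// plus a source rectangle (which part of the atlas it samples, in UV space).
interface Rect { x: number; y: number; w: number; h: number; }
interface Tile { dest: Rect; src: Rect; }

// Interleaved vertex layout: x, y, u, v. Two triangles (6 vertices) per tile,
// so the whole arrangement can be drawn with a single draw call.
function buildTileVertexBuffer(tiles: Tile[]): Float32Array {
  const verts: number[] = [];
  const push = (x: number, y: number, u: number, v: number) => verts.push(x, y, u, v);

  for (const { dest, src } of tiles) {
    const x0 = dest.x, y0 = dest.y, x1 = dest.x + dest.w, y1 = dest.y + dest.h;
    const u0 = src.x, v0 = src.y, u1 = src.x + src.w, v1 = src.y + src.h;
    // triangle 1
    push(x0, y0, u0, v0); push(x1, y0, u1, v0); push(x1, y1, u1, v1);
    // triangle 2
    push(x0, y0, u0, v0); push(x1, y1, u1, v1); push(x0, y1, u0, v1);
  }
  return new Float32Array(verts);
}

// Example: an "A A B" row, where A and B occupy known atlas sub-rectangles.
const atlasA: Rect = { x: 0.0, y: 0.0, w: 0.5, h: 1.0 };
const atlasB: Rect = { x: 0.5, y: 0.0, w: 0.5, h: 1.0 };
const row = buildTileVertexBuffer([
  { dest: { x: -1.00, y: 0.0, w: 0.66, h: 1.0 }, src: atlasA },
  { dest: { x: -0.34, y: 0.0, w: 0.66, h: 1.0 }, src: atlasA },
  { dest: { x:  0.32, y: 0.0, w: 0.66, h: 1.0 }, src: atlasB },
]);
```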
That question is a little too broad; there are far too many ways to do what you describe. Some solutions could probably use a large uber-shader and embed most of the logic in the HLSL, but that does not seem right, and it is complex for nothing. It would be cheaper to mix in some generated geometry for each portion of the screen.
There is likely to be absolutely no performance penalty from binding each texture separately and rendering quads to the correct locations, even on the weakest hardware.
I'm completely new to DirectX (11) so this question will be extremely basic. Sorry about that.
I'd like to draw a cube on screen that has solid-coloured faces. All of the examples that I've seen have 8 vertices, with a colour defined at each vertex (red, green, blue). The pixel shader then interpolates between these vertices to give a spectrum of colours. This looks nice, but isn't what I'm trying to achieve. I'd just like a cube with six, coloured faces.
Two ideas come to mind:
use 24 vertices, and have each vertex referenced only a single time, i.e. no sharing. This way I can define three different colours at each 3D position, one for each face.
use a texture for each face that 'stretches' to give the face the correct colour. I'm not very familiar with textures right now, so not all that sure about this idea.
What's the typical/canonical way to achieve this effect? I'm sure this 'problem' has been solved many, many times before.
For your particular problem, vertex coloring might be the easiest and best solution. But the more complex your models become, the more complicated it is to create a proper vertex coloring, because you don't always want to limit your imagination to the underlying geometry.
In general, 3D objects are colored with one or more textures. For that you create a UV mapping (wiki), which unwraps your three-dimensional surface onto a 2D plane, the texture. You can then paint colors onto your object freely, at any resolution you want, which gives you the most freedom to make the model look the way you want.
Of course each application has its own characteristics, so some projects would choose another approach, but I think this is the most popular way to colorize models.
Option 1 is the way to go if:
You want zero color bleed between faces
You want zero texture bleed between faces
You later want to use the color as a lighting scheme ala Minecraft
Caveats:
Uses more memory, since more vertices are used (there are some techniques around this depending on how large your object is and its spatial resolution, e.g. using 1 byte for x/y/z instead of a float)
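For reference, here is an API-agnostic TypeScript sketch of option 1's data layout: 24 vertices (4 per face), each carrying a single flat color for its face, plus 36 indices. The corner tables and color values are placeholder assumptions.

```typescript
// 24-vertex cube with one flat colour per face (option 1).
// Each face gets its own 4 vertices, so no colour is shared across an edge.

type Vec3 = [number, number, number];

// Corner positions of a cube centred at the origin, listed per face.
const faceCorners: Vec3[][] = [
  [[-1, -1,  1], [ 1, -1,  1], [ 1,  1,  1], [-1,  1,  1]], // +Z
  [[ 1, -1, -1], [-1, -1, -1], [-1,  1, -1], [ 1,  1, -1]], // -Z
  [[ 1, -1,  1], [ 1, -1, -1], [ 1,  1, -1], [ 1,  1,  1]], // +X
  [[-1, -1, -1], [-1, -1,  1], [-1,  1,  1], [-1,  1, -1]], // -X
  [[-1,  1,  1], [ 1,  1,  1], [ 1,  1, -1], [-1,  1, -1]], // +Y
  [[-1, -1, -1], [ 1, -1, -1], [ 1, -1,  1], [-1, -1,  1]], // -Y
];

// One RGB colour per face (placeholder values).
const faceColors: Vec3[] = [
  [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0], [1, 0, 1], [0, 1, 1],
];

const positions: number[] = [];
const colors: number[] = [];
const indices: number[] = [];

faceCorners.forEach((corners, f) => {
  const base = f * 4;
  for (const c of corners) {
    positions.push(...c);
    colors.push(...faceColors[f]); // every vertex of this face has the same colour
  }
  indices.push(base, base + 1, base + 2, base, base + 2, base + 3);
});

// positions.length === 72, colors.length === 72, indices.length === 36
```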
This is an image from Apple's documentation. It shows a transform from a cube to a sphere and also to some random geometry.
Only a few lines lower they state:
A morpher and its target geometries may be loaded from a scene file or
created programmatically. The base geometry and all target geometries
must be topologically identical—that is, they must contain the same
number and structural arrangement of vertices.
Could someone explain this paragraph because apparently I don't understand it.
Since a sphere will never have the same structural arrangement of vertices as a cube (at least I think so), it should be impossible to make the transformation. But hey, we all see it in the picture. I also tried to do the transformation and I don't get the expected results. So how do you go from sphere to cube or vice versa?
"Topologically identical" means that the relationships between vertices in a mesh must be preserved, but their locations in space can change. Here's an example of that in 2D:
These two meshes have the same eight vertices, connected to each other in the same ways, but their positions (and thus the shape they form) differ.
To do the same in 3D with SceneKit, you need custom vertex data — the primitive shapes that SceneKit can generate for you (like SCNSphere, SCNBox, and whatnot) all have different topologies, so they can't be used as morpher targets.
If you want to morph a box into a sphere, you'll need to generate your own box and sphere with identical topology. The "some random shape" in Apple's illustration is a hint at how you might do that — it appears to be one of the variants of a superellipsoid. If you use the equations in that Wikipedia page you can generate a set of points that can be either on a sphere or on a cube depending on other parameters. Vary those parameters to generate a couple of meshes, create SCNGeometry from those meshes, and you've got valid SCNMorpher targets.
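For illustration, here is a TypeScript sketch of sampling a superellipsoid on a fixed (u, v) grid, so the same vertex count and ordering (i.e. the same topology) is produced for every choice of exponents. Exponents of 1 give a sphere; pushing them toward 0 squares the shape off toward a cube. Feeding two such point sets (same grid resolution, different exponents) into an SCNGeometry each would give topologically identical morph partners; the function names here are my own, not SceneKit API.

```typescript
type Vec3 = [number, number, number];

// Signed power, as used in the superellipsoid parametrisation:
// keeps the sign of the trig term while raising its magnitude to `e`.
function spow(base: number, e: number): number {
  return Math.sign(base) * Math.pow(Math.abs(base), e);
}

// Sample a superellipsoid on a fixed (u, v) grid. The grid resolution fixes
// the vertex count and ordering, so two calls with different exponents yield
// topologically identical meshes (the SCNMorpher requirement).
// e1 = e2 = 1 gives a sphere; exponents near 0 approach a cube.
function superellipsoidPoints(uSteps: number, vSteps: number, e1: number, e2: number): Vec3[] {
  const points: Vec3[] = [];
  for (let i = 0; i <= vSteps; i++) {
    const v = -Math.PI / 2 + (i / vSteps) * Math.PI;       // latitude
    for (let j = 0; j <= uSteps; j++) {
      const u = -Math.PI + (j / uSteps) * 2 * Math.PI;     // longitude
      points.push([
        spow(Math.cos(v), e1) * spow(Math.cos(u), e2),
        spow(Math.cos(v), e1) * spow(Math.sin(u), e2),
        spow(Math.sin(v), e1),
      ]);
    }
  }
  return points;
}

const spherePoints = superellipsoidPoints(32, 16, 1.0, 1.0); // round
const boxishPoints = superellipsoidPoints(32, 16, 0.1, 0.1); // nearly a cube
// Same length and ordering, so the two point sets are valid morph partners.
```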
You can see a simpler example of morphing in Apple's SceneKit WWDC 2014 Slides sample app.
You can't presume the locations of each vertex in the given images; the cube doesn't necessarily have eight, and the left-most shape isn't guaranteed to have six.
Admittedly, I've not played with SCNMorpher but from that description I imagine it will interpolate on a per-vertex basis (so they will have to match up).
If it helps, picture the sphere as having a lot of 'dots' spread equally along its surface, and those are pushed or squeezed to make the other surfaces.
Basically, I'm trying to cover a slot machine reel (white cylinder model) with multiple evenly spaced textures around the exterior. The program will be Windows only and the textures will be dynamically loaded at run-time instead of using the content pipeline. (Windows based multi-screen setup with XNA from the Microsoft example)
Most of the examples I can find online are for XNA3 and are still seemingly gibberish to me at this point.
So I'm looking for any help someone can provide on the subject of in-game texturing of objects like cylinders with multiple textures.
Maybe there is a good book out there that can properly describe how texturing works in XNA (4.0 specifically)?
Thanks
You have a few options. It depends on two things: whether the model is loaded or generated at runtime, and whether your multiple textures get combined into one or kept separate.
If you have art skills or know an artist, probably the easiest approach is to get them to texture map the cylinder with as many textures as you want (multiple materials). You'd want your Model to have one mesh (ModelMesh) and one material (ModelMeshPart) per texture required. This is assuming the cylinders always have a fixed number of textures! Then, to swap the textures at runtime, you'd iterate through the ModelMesh.Effects collection, cast each to a BasicEffect and set its Texture property.
If you can't modify the model, you'll have to generate it. There's an example of this on the AppHub site: http://create.msdn.com/en-US/education/catalog/sample/primitives_3d. It probably does not generate texture coordinates so you'd need to add them. If you wanted 5 images per cylinder, you should make sure the number of segments is a multiple of 5 and the V coordinate should go from 0 to 1, 5 times as it wraps around the cylinder. To keep your textures individual with this technique, you'd need to draw the cylinder in 5 chunks, each time setting the GraphicsDevice.Textures[0] to your current texture.
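Here is a sketch of just the texture-coordinate part, under the same assumption as above (5 images around the reel, segment count a multiple of 5): within each chunk of the circumference the wrap coordinate ramps from 0 to 1, so each fifth of the cylinder maps the full width of one image. The names are illustrative and not taken from the sample.

```typescript
// Vertex-column texture coordinates for a reel with `segments` slices and
// `images` textures tiled around it (segments must be a multiple of images).
// Within each chunk of segments/images columns the coordinate ramps 0 -> 1;
// the seam column is emitted twice (once at 1 for the chunk that ends there,
// once at 0 for the chunk that starts there) when the chunks are drawn separately.
function chunkTexCoords(segments: number, images: number): number[][] {
  const perImage = segments / images;
  const chunks: number[][] = [];
  for (let img = 0; img < images; img++) {
    const coords: number[] = [];
    for (let col = 0; col <= perImage; col++) {
      coords.push(col / perImage); // 0, ..., 1 across this image's chunk
    }
    chunks.push(coords);
  }
  return chunks;
}

// Example: 20 segments, 5 images -> 5 chunks, each with columns at 0, .25, .5, .75, 1.
console.log(chunkTexCoords(20, 5));
```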
With both techniques it would be possible to draw the cylinder in a single draw call, but you'd need to merge your textures into a single one using Texture2D.GetData and Texture2D.SetData. This would be more efficient, but it really isn't worth the trouble. Well, not unless you're making some kind of crazy slot machine particle system, anyway.