I'm trying to find a way to do something similar to this on iOS:
Does anyone know a simple way to do it?
I don't know of a one-liner to do this, but you can use OpenGL to render a textured grid of quads with the texture coordinates distributed evenly across the grid.
Example of a 2x2 grid of quads (3x3 vertices) with equally distributed texture coordinates:
{0.0,1.0} {0.5,1.0} {1.0,1.0}
{0.0,0.5} {0.5,0.5} {1.0,0.5}
{0.0,0.0} {0.5,0.0} {1.0,0.0}
If you move the shared vertices of adjacent quads (as in your example) while the texture coordinates stay fixed, you get a warp effect. When using OpenGL ES you need at least a trivial vertex and fragment shader, and a slightly smarter one if you want to smooth the warp effect, which in its simple form is only linearly interpolated per quad/triangle.
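Purely as an illustration (the struct and function names below are mine, not from any particular API), building such a grid could look roughly like this in C++:

    // Build an N x N grid of quads whose texture coordinates stay evenly spaced
    // while the positions can be displaced later to create the warp.
    #include <vector>

    struct GridVertex {
        float x, y;   // position, can be displaced for the warp
        float u, v;   // texture coordinate, stays fixed
    };

    std::vector<GridVertex> buildWarpGrid(int quadsX, int quadsY)
    {
        std::vector<GridVertex> verts;
        for (int j = 0; j <= quadsY; ++j) {
            for (int i = 0; i <= quadsX; ++i) {
                GridVertex v;
                v.u = float(i) / quadsX;   // evenly distributed texture coords
                v.v = float(j) / quadsY;
                v.x = v.u;                 // start with an undistorted grid
                v.y = v.v;
                verts.push_back(v);
            }
        }
        return verts;
    }
    // To warp: move verts[k].x / verts[k].y (shared by adjacent quads) and leave
    // u/v alone, then draw the grid as indexed, textured triangles.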
I have a WebGL scene that needs to draw both point and line primitives, and I'm wondering: is it possible to draw multiple WebGL primitive types inside a single draw call?
My hunch is this is not possible, but WebGL is constantly surprising me with tricks one can do to accomplish strange edge cases, and searching has not let me confirm whether this is possible or not.
I'd be grateful for any insight others can offer on this question.
You can't draw WebGL lines, points, and triangles in the same draw call. You can, however, generate points and lines from triangles and then, in one draw call, draw only triangles: some that form the points, some that form the lines, and some that draw everything else.
It's not a great example, but for fun, here's a vertex shader that generates points and lines from triangles on the fly.
There's also this as an example of making lines from triangles.
How creative you want to get with your shaders versus doing things on the CPU is up to you, but it's common to draw lines with triangles, as the previous article points out, since WebGL lines can generally only be a single pixel thick.
It's also common to draw points with triangles, since:
WebGL is only required to support points of size 1; by drawing with triangles, that limit is removed.
WebGL points are always aligned with the screen; triangle-based points are far more flexible. You can rotate the point, for example, or orient it in 3D. Here's a bunch of points made from triangles.
Triangle-based points can be scaled in 3D with no extra work. In other words, a triangle-based point in 3D space will scale with its distance from the camera using standard 3D math, whereas a WebGL point requires you to compute the size yourself and set gl_PointSize, so it takes extra work if you want it to scale with the scene.
It's not common to mix points, lines, and triangles in a single draw call but it's not impossible by any means.
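As a very rough sketch of the triangle-based point idea (my own illustration, not the linked shader; the attribute and uniform names are made up), each point is submitted as 6 vertices (2 triangles) sharing a center, and each vertex carries a unit-quad corner offset. The GLSL is the same under WebGL 1 / OpenGL ES 2:

    // GLSL ES 1.00 vertex shader stored as a C++ string constant.
    const char* kPointQuadVS = R"(
    attribute vec3 a_center;      // point position, duplicated for all 6 vertices
    attribute vec2 a_corner;      // (-1,-1), (1,-1), (1,1), ... unit-quad corner
    uniform mat4  u_viewProjection;
    uniform float u_pointSize;    // half-size of the point quad

    void main() {
        vec4 clip = u_viewProjection * vec4(a_center, 1.0);
        // Expanding in clip space and multiplying by clip.w keeps a constant
        // on-screen size (like gl_PointSize); drop the * clip.w factor if you
        // want the point to scale with distance instead.
        clip.xy += a_corner * u_pointSize * clip.w;
        gl_Position = clip;
    }
    )";

Line segments can be expanded from triangles in the same spirit, which is why everything can end up in one triangle draw call.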
I am working on an Android application that slims or fattens faces it detects. Currently, I have achieved that by using the thin-plate spline algorithm.
http://ipwithopencv.blogspot.com.tr/2010/01/thin-plate-spline-example.html
The problem is that the algorithm is not fast enough for me, so I decided to move it to OpenGL. After some research, I see that a lookup table texture is the best option for this. I have a set of control points for the source image and their new positions for the warp effect.
How should I create lookup table texture to get warp effect?
Are you really sure you need a lookup texture?
It seems it would be better if you had a textured rectangular mesh (or a non-rectangular mesh, of course, as the face detection algorithm you have most likely returns a face-like mesh) and warped it according to the algorithm:
Not only would you be able to do that in a vertex shader, processing each mesh node in parallel, but there would also be fewer values to process compared to generating a texture dynamically.
The most compatible way to achieve that is to give each mesh point a Y coordinate of 0 and an X coordinate that stores the mesh index, and then pass a texture (maybe even a buffer texture, if the target devices support it) to the vertex shader, where the R and G channels at that index contain the desired X and Y coordinates.
Inside the vertex shader, the coordinates are loaded from the texture.
This approach allows dynamic warping without reloading the geometry, as long as the data texture is kept up to date, for example by updating it in a pixel shader.
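A sketch of that scheme, assuming OpenGL ES 3.0 (for texelFetch); on ES 2.0 the same idea works with texture2D and a normalized index. The attribute and uniform names are purely illustrative:

    // GLSL ES 3.00 vertex shader stored as a C++ string constant.
    const char* kWarpVS = R"(#version 300 es
    in float a_meshIndex;            // the "X coordinate" that stores the node index
    in vec2  a_texCoord;             // face-image texture coordinate, unchanged by the warp
    uniform sampler2D u_positions;   // 1-row float texture: R,G = warped X,Y per node
    uniform mat4 u_mvp;
    out vec2 v_texCoord;

    void main() {
        vec2 warped = texelFetch(u_positions, ivec2(int(a_meshIndex), 0), 0).rg;
        v_texCoord  = a_texCoord;
        gl_Position = u_mvp * vec4(warped, 0.0, 1.0);
    }
    )";

Re-rendering u_positions (for example, writing the thin-plate-spline displacements into it with a fragment shader) re-warps the whole mesh without touching the geometry buffers.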
I am trying to write a little script to apply texture to rectangular cuboids. To accomplish this, I run through the scenegraph, and wherever I find an SoIndexedFaceSet node, I insert an SoTexture2 node before it. I put my image file in the SoTexture2 node. The problem I am facing is that the texture is applied correctly to 2 of the faces (say face 1 and face 2) in the Y-Z plane, but for the other 4 faces, it just stretches the texture from the boundaries of those two faces.
It looks something like this.
The front is how it should look, but as you can see, on the other two faces, it just extrapolates the corner values of the front face. Any ideas why this is happening and any way to avoid this?
Yep, assuming that you did not specify texture coordinates for your SoIndexedFaceSet, that is exactly the expected behavior.
If Open Inventor sees that you have applied a texture image to a geometry and did not specify texture coordinates, it will automatically compute some texture coordinates. Of course it's not possible to guess how you wanted the texture to be applied. So it computes the bounding box then computes texture coordinates that stretch the texture across the largest extent of the geometry (XY, YZ or XZ). If the geometry is a cuboid you can see the effect clearly as in your image. This behavior can be useful, especially as a quick approximation.
What you need to make this work the way you want, is to explicitly assign texture coordinates to the geometry such that the texture is mapped separately to each face. In Open Inventor you can actually still share the vertices between faces because you are allowed to specify different vertex indices and texture coordinate indices (of course this is only more convenient for the application because OpenGL doesn't support this and Open Inventor has to re-shuffle the data internally). If you applied the same texture to an SoCube node you would see that the texture is mapped separately to each face as expected. That's because SoCube defines texture coordinates for each face.
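A rough Open Inventor sketch of explicit per-face texture coordinates (only one face of the cuboid is shown; the coordinates, indices, and file name are placeholders):

    #include <Inventor/nodes/SoSeparator.h>
    #include <Inventor/nodes/SoTexture2.h>
    #include <Inventor/nodes/SoTextureCoordinate2.h>
    #include <Inventor/nodes/SoCoordinate3.h>
    #include <Inventor/nodes/SoIndexedFaceSet.h>

    SoSeparator* makeTexturedFace()
    {
        SoSeparator* sep = new SoSeparator;

        SoTexture2* tex = new SoTexture2;
        tex->filename = "myImage.png";             // placeholder file name
        sep->addChild(tex);

        SoTextureCoordinate2* tc = new SoTextureCoordinate2;
        const SbVec2f uv[4] = { SbVec2f(0,0), SbVec2f(1,0), SbVec2f(1,1), SbVec2f(0,1) };
        tc->point.setValues(0, 4, uv);
        sep->addChild(tc);

        SoCoordinate3* coords = new SoCoordinate3;  // 4 corners of one face
        const SbVec3f pts[4] = { SbVec3f(0,0,0), SbVec3f(1,0,0), SbVec3f(1,1,0), SbVec3f(0,1,0) };
        coords->point.setValues(0, 4, pts);
        sep->addChild(coords);

        SoIndexedFaceSet* face = new SoIndexedFaceSet;
        const int32_t ci[] = { 0, 1, 2, 3, SO_END_FACE_INDEX };
        face->coordIndex.setValues(0, 5, ci);
        // textureCoordIndex may differ from coordIndex, which is what lets faces
        // share vertices while each face still maps the full 0..1 texture range.
        face->textureCoordIndex.setValues(0, 5, ci);
        sep->addChild(face);

        return sep;
    }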
I am trying to draw a series of squares in XNA. I am looking at all these articles about TriangleStrips and DynamicVertexBuffers, but I'm not sure where to begin.
Current step
I am able to draw 1 square using VertexPositionColor, TriangleList and indices. Now I want to draw a series of squares with varying colors.
End Goal
Something to keep in mind is the number of such squares that I would like to be able to draw, eventually. If we assume a 5px width, on a 1920x1080 screen, we can calculate the number of squares to be (1920 * 1080) / 25 = 82944.
Any pointers on how to accomplish this would be great!
Generally, you can draw more squares in the same way you draw the first one. However, there will be a significant loss in performance.
Instead, you can add all triangles to one vertex buffer / index buffer. You are already able to draw two triangles as a triangle list, and you should be able to easily adjust this routine to draw more than two triangles. Just add the corresponding vertices and indices to the buffers and modify the draw call.
If you need vertices at the same position with different colors, add a separate vertex to the buffer for each color.
This way, the performance loss is very small, because you draw everything with only one draw call. Although this number of triangles should be no problem for most graphics cards, some smaller or older ones can get into trouble. If so, you should consider changing your drawing strategy. Maybe it is not even necessary to draw that many triangles. But you can think about that if the resulting performance is too low...
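A framework-agnostic sketch of the buffer layout (written in C++ here; in XNA you would fill a VertexPositionColor[] and an index array the same way and hand them to a single indexed triangle-list draw call such as DrawUserIndexedPrimitives):

    #include <vector>
    #include <cstdint>

    struct SquareVertex { float x, y, z; uint32_t color; };

    // Each square contributes 4 vertices and 6 indices (two triangles).
    void appendSquare(std::vector<SquareVertex>& vertices,
                      std::vector<uint32_t>& indices,
                      float x, float y, float size, uint32_t color)
    {
        const uint32_t base = static_cast<uint32_t>(vertices.size());
        vertices.push_back({ x,        y,        0.0f, color });
        vertices.push_back({ x + size, y,        0.0f, color });
        vertices.push_back({ x + size, y + size, 0.0f, color });
        vertices.push_back({ x,        y + size, 0.0f, color });

        // Vertices are NOT shared between squares, so neighbouring squares can
        // have different colors at the same position.
        const uint32_t quad[6] = { base, base + 1, base + 2, base, base + 2, base + 3 };
        indices.insert(indices.end(), quad, quad + 6);
    }

One thing to watch out for: ~83k squares means ~332k vertices, which exceeds the 65536 limit of 16-bit indices, so you would need 32-bit indices or several batches.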
If you don't care about 3D, just 2D - you can use SpriteBatch to draw squares/rectangles on the screen. This will handle batching all the vertex/index buffer management for you.
In Photoshop you can control how pictures are scaled up and down via the 'Image Interpolation' setting, which has different options like 'Bicubic', 'Bilinear', 'Nearest Neighbour' and such.
I was wondering if I could do something similar in DirectX? Basically if I slap a texture on a quad and stretch the quad how can I control how the texture on the quad is represented?
Thanks for any help!
If you are using the fixed-function pipeline:
http://msdn.microsoft.com/en-us/library/ee421769(VS.85).aspx
Set the D3DSAMP_MAGFILTER, D3DSAMP_MINFILTER and D3DSAMP_MIPFILTER sampler state values.
Otherwise, if you're using HLSL, set the FILTER option of the sampler object.
There are four types of filtering: NONE, POINT, LINEAR and ANISOTROPIC.
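For example, a minimal sketch for the Direct3D 9 fixed-function path, assuming device is your IDirect3DDevice9* and the texture is bound to sampler stage 0:

    #include <d3d9.h>

    void setBilinearFiltering(IDirect3DDevice9* device)
    {
        device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);  // when stretched up
        device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);  // when shrunk down
        device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);  // between mip levels
        // D3DTEXF_POINT gives the 'Nearest Neighbour' look; D3DTEXF_ANISOTROPIC
        // also requires setting D3DSAMP_MAXANISOTROPY.
    }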