Projective texture mapping in WebGL

I wrote two simple WebGL demos which use a 512x512 image as a texture, but the result is not what I want. I know the solution is to use projective texture mapping (or is there another solution?), but I have no idea how to implement it in my simple demos. Can anyone help?
The results are as follows (both of them are incorrect):
The code for the demos is here: https://github.com/jiazheng/WebGL-Learning/tree/master/texture
Note: neither the model nor the texture can be modified in my case.

In order to get perspective-correct texture mapping, you must actually be using perspective. That is, instead of narrowing the top of your polygon along the x axis, move it backwards along the z axis, and apply a standard perspective projection matrix.
I'm a little hazy on the details myself, but my understanding is that the way the perspective matrix maps the z coordinate into the w coordinate is the key to getting the GPU to interpolate along the surface “correctly”.
If you have already-perspective-warped 2D geometry, then you will have to implement some method of restoring it to 3D data, computing appropriate z values. There is no way in WebGL to draw a perspective quadrilateral directly: the primitives are triangles, and three points do not carry enough information to define the texture mapping you're looking for unambiguously, so your code must use all four points to work out the corresponding depths. Unfortunately, I don't have enough grasp of the math to advise you on the details.
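For illustration, a minimal GLSL ES vertex shader along these lines might look like the following; the attribute and uniform names (a_position, a_texCoord, u_modelView, u_projection) are placeholders, not anything from the linked demos.

attribute vec3 a_position;   // true 3D position, with the "narrow" edge pushed back in z
attribute vec2 a_texCoord;
uniform mat4 u_modelView;
uniform mat4 u_projection;   // a standard perspective projection matrix
varying vec2 v_texCoord;

void main() {
    v_texCoord = a_texCoord;
    // The projection maps z into w; the per-fragment division by w is what
    // makes the texture interpolation perspective-correct.
    gl_Position = u_projection * u_modelView * vec4(a_position, 1.0);
}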

You must specify vec4 texture coordinates, not vec2. The fourth field in each vec4 is the homogeneous w that, when divided into x and y, produces your desired coordinate. This in turn lets the hardware's perspective-correct division give you a non-affine mapping within the triangle, provided your numbers are correct. If you use a projection matrix to transform a vec4 with w = 1 in your vertex shader, you should get the correct vec4 values, ready for perspective correction, going into setup and rasterization for your fragment shader. If this is unclear, seek out tutorials on projective texture transformation and homogeneous coordinates in projection.
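As a rough GLSL ES sketch of this idea, using placeholder names (u_texMatrix for the projective transform, a_position, v_projTexCoord) rather than anything from the question's code:

// Vertex shader
attribute vec4 a_position;
uniform mat4 u_mvp;          // regular model-view-projection for the geometry
uniform mat4 u_texMatrix;    // projective transform that produces vec4 texture coordinates
varying vec4 v_projTexCoord;

void main() {
    v_projTexCoord = u_texMatrix * a_position;   // w is generally not 1 here
    gl_Position = u_mvp * a_position;
}

// Fragment shader
precision mediump float;
uniform sampler2D u_texture;
varying vec4 v_projTexCoord;

void main() {
    // texture2DProj divides .xy by .w before sampling; that division is what
    // gives the non-affine (projective) mapping within each triangle.
    gl_FragColor = texture2DProj(u_texture, v_projTexCoord);
}

Equivalently, you could divide manually in the fragment shader with texture2D(u_texture, v_projTexCoord.xy / v_projTexCoord.w); the important part is carrying the full vec4 into the fragment stage and dividing there.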

Related

OpenGL Image warping using lookup table

I am working on an Android application that slims or fattens faces after detecting them. Currently, I achieve this using the thin-plate spline algorithm.
http://ipwithopencv.blogspot.com.tr/2010/01/thin-plate-spline-example.html
The problem is that the algorithm is not fast enough for me, so I decided to switch to OpenGL. After some research, I see that a lookup table texture is the best option for this. I have a set of control points for the source image and their new positions for the warp effect.
How should I create the lookup table texture to get the warp effect?
Are you really sure you need a lookup texture?
It seems it would be better if you had a textured rectangular mesh (or a non-rectangular one, of course, since the face detection algorithm you have most likely returns a face-like mesh) and warped that mesh according to the algorithm.
Not only would you be able to do that in a vertex shader, processing each mesh node in parallel, but it is also fewer values to process compared to dynamic texture generation.
The most compatible way to achieve that is to give each mesh point a Y coordinate of 0 and an X coordinate that stores the mesh index, and then pass a texture (maybe even a buffer texture, if the target devices support it) to the vertex shader, where at the needed index the R and G channels contain the desired X and Y coordinates.
Inside the vertex shader, the coordinates are then loaded from that texture.
This approach allows for dynamic warping without reloading geometry, as long as the data texture is updated appropriately, for example inside a pixel shader.
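A minimal GLSL ES sketch of that vertex shader, assuming made-up names (a_meshIndex, u_positions, u_meshCount) and a one-row layout of the data texture; note that vertex texture fetch is optional in OpenGL ES 2.0, so check MAX_VERTEX_TEXTURE_IMAGE_UNITS on the target devices:

attribute vec2 a_meshIndex;      // x = mesh node index, y = 0, as described above
uniform sampler2D u_positions;   // R and G channels hold the warped X and Y for each node
uniform float u_meshCount;       // number of mesh nodes (width of the data texture)
uniform mat4 u_mvp;

void main() {
    float u = (a_meshIndex.x + 0.5) / u_meshCount;              // sample texel centers
    vec2 warped = texture2DLod(u_positions, vec2(u, 0.5), 0.0).rg;
    gl_Position = u_mvp * vec4(warped, 0.0, 1.0);
}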

What are the correct cube map coordinates for OpenGL ES 2.0 on iOS after loading the cube map with GLKit?

In order to debug my shader, I am trying to display just the front face of the cube map.
The cube map is a 125x750 image with the 6 faces on top of each other:
First, I load the cube map with GLKit:
_cubeTexture = [GLKTextureLoader cubeMapWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"uffizi_cube_map_ios" ofType:@"png"] options:kNilOptions error:&error];
Then I load it into the shader:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, self.cubeTexture.name);
glUniform1i( glGetUniformLocation( self.shaderProgram, "cube"), 0);
Then in the fragment shader:
gl_FragColor = textureCube(cube, vec3(-1.0+2.0*(gl_FragCoord.x/resolution.x),-1.0+2.0*(gl_FragCoord.y/resolution.y),1.0));
This displays a distorted image which seems to be a portion of the top of the cube map:
It shouldn't be distorted, and it should show the right face, not the top face.
I can't find any documentation that describes how the coordinates map to the cube, so what am I doing wrong?
It seems that there is a problem with cubeMapWithContentsOfFile. The cubeMapWithContentsOfFiles method (the one that takes an array of 6 images) works perfectly on the simulator. (There is a different issue with both methods on device).
To visualize how texture coordinates work for cube maps, picture a cube centered at the origin, with the faces at distance 1 from the origin, and with the specified cube map image on each face.
The texture coordinates can then be seen as direction vectors. Starting at the origin, the 3 components define a vector that can point in any direction. The ray defined by the vector will then intersect one of the 6 cube faces at a given point. This is the point where the corresponding cube map image is sampled during texturing.
For example, take a vector that points in a direction closest to the positive y axis. The ray defined by this vector intersects the top face of the cube. Therefore, the top (POSITIVE_Y) image of the cube map is sampled, at the point where the ray intersects that face.
Equivalent rules apply to all other directions. The vector component with the largest absolute value determines which face is sampled, and the intersection point determines the position within that face's image.
The exact rules and formula can be found in the spec document. For example in the latest spec (OpenGL 4.5), see Section 8.13 "Cube Map Texture Selection", with the matching table 8.19. But as long as you understand that the texture coordinates define a direction vector, you have the main aspect covered.
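Just to illustrate the selection rule (textureCube does this for you, so you would not normally write it yourself), here is a small GLSL helper that picks the face from the largest absolute component. The 0..5 return values follow the +X, -X, +Y, -Y, +Z, -Z order of the cube map face enums, and the tie-breaking at face edges is only one possible choice:

int selectCubeFace(vec3 dir) {
    vec3 a = abs(dir);
    if (a.x >= a.y && a.x >= a.z) {
        return dir.x > 0.0 ? 0 : 1;   // +X or -X
    } else if (a.y >= a.z) {
        return dir.y > 0.0 ? 2 : 3;   // +Y or -Y
    }
    return dir.z > 0.0 ? 4 : 5;       // +Z or -Z
}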
How you determine the texture coordinates really depends on what you want to achieve. Common cases include:
Using normal vector as the cube map texture coordinates. This can for example be used for pre-computed lighting effects, where the content of the cube map image contains pre-computed lighting results for each possible normal direction.
Using the reflection vector as the cube map texture coordinate. This supports the implementation of environment mapping. The content of the cube map is a picture of the environment.
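For the environment-mapping case, a fragment shader sketch in GLSL ES might look like this; the varyings v_normal and v_viewDir are placeholders and are assumed to be in the same space as the cube map:

precision mediump float;
uniform samplerCube u_envMap;
varying vec3 v_normal;    // surface normal
varying vec3 v_viewDir;   // vector from the eye to the surface point

void main() {
    // reflect() expects the incident vector pointing towards the surface.
    vec3 r = reflect(normalize(v_viewDir), normalize(v_normal));
    // The reflection vector is used directly as the cube map texture coordinate.
    gl_FragColor = textureCube(u_envMap, r);
}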

HLSL vertex shader

I've been studying shaders in HLSL for an XNA project (so no DX10-DX11), but almost all the resources I found were tutorials for effects where most of the work was done in the pixel shader. For instance, in lighting the vertex shader is used only to feed the pixel shader normals and other things like that.
I'd like to make some effects based on the vertex shader rather than the pixel one, like deformation for instance. Could someone suggest a book or a website? Even the bare effect names would be useful, since then I could google them.
A lot of lighting, etc. is done in the pixel shader because the resulting image quality will be much better.
Imagine a sphere that is created by subdividing a cube or icosahedron. If lighting calculations are done in the vertex shader, the resulting values will be interpolated between face edges, which can lead to a flat or faceted appearance.
Things like blending and morphing are done in the vertex shader because that's where you can manipulate the vertices.
For example:
matrix World;
matrix View;
matrix Projection;
float WindStrength;
float3 WindDirection;

VertexPositionColor VS(VertexPositionColor input)
{
    VertexPositionColor output;
    // Move the vertex into world space first so the wind offset happens there.
    float4 worldPosition = mul(input.Position, World);
    // Lean the vertex in the wind direction; the higher it sits (larger y),
    // the further it moves.
    worldPosition.xyz += WindDirection * WindStrength * worldPosition.y;
    output.Position = mul(mul(worldPosition, View), Projection);
    output.Color = input.Color;
    return output;
}
(Pseudo-ish code since I'm writing this in the SO post editor.)
In this case, I'm offsetting vertices that are "high" on the Y axis with a wind direction and strength. If I use this when rendering grass, for instance, the tops of the blades will lean in the direction of the wind, while the vertices that are closer to the ground (ideally with a Y of zero) will not move at all. The math here should be tweaked a bit to account for really tall things that would otherwise move unacceptably far, and the wind should not be applied uniformly to all blades, but it should be clear that here the vertex shader is modifying the mesh in a non-uniform way to get an interesting effect.
No matter the effect you are trying to achieve - morphing, billboards (so the item you're drawing always faces the camera), etc., you're going to wind up passing some parameters into the VS that are then selectively applied to vertices as they pass through the pipeline.
A fairly trivial example would be "inflating" a model into a sphere, based on some parameter.
Pseudocode again,
matrix World;
matrix View;
matrix Projection;
float LerpFactor;

VertexPositionColor VS(VertexPositionColor input)
{
    // Treat the normalized position as the matching point on a unit sphere
    // and blend towards it as LerpFactor goes from 0 to 1.
    float3 spherePosition = normalize(input.Position.xyz);
    float3 position = lerp(input.Position.xyz, spherePosition, LerpFactor);
    matrix wvp = mul(mul(World, View), Projection);
    float4 outputPosition = mul(float4(position, 1.0f), wvp);
    ....
}
By stepping the uniform LerpFactor from 0 to 1 across a number of frames, your mesh (ideally a convex polyhedron) will gradually morph from its original shape to a sphere. Of course, you could include more explicit morph targets in your vertex declaration and morph between two model shapes, collapse a model to a less complex version, open the lid on a box (or completely unfold it), etc. The possibilities are endless.
For more information, this page has some sample code on generating and using morph targets on the GPU.
If you need some good search terms, look for "xna bones," "blendweight" and "morph targets."

Is it possible to model polar coordinates in WebGL

I want to convert a set of coordinates into polar coordinates (the easy part), and then model them on a polar 3D grid. Is this possible using WebGL or are only Cartesian coordinates supported?
You can implement any type of coordinate transform you want in shaders. However, there is an important restriction:
If you draw two connected vertices (i.e. a straight line, or the edge of a triangle), the result will always be a straight line on the screen — it is not possible to do otherwise in OpenGL.
The Cartesian-polar transform turns straight lines into curved lines. This means that if you want to transform a straight-sided shape and get the “right” curved result, you must draw it using a sequence of closely-spaced vertices — as many as you need to produce the “resolution” of smooth curvature you want. This is generally not hard to program, but it is something to be aware of.
You can use shaders in WebGL, so you can give your inputs as polar coordinates and have your shaders transform them to Cartesian coordinates internally.
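A minimal sketch of such a vertex shader (the attribute and uniform names a_polar and u_mvp are made up); remember that edges between vertices still rasterize as straight lines, so curves need the densely placed vertices described above:

attribute vec2 a_polar;   // x = radius r, y = angle theta in radians
uniform mat4 u_mvp;

void main() {
    vec2 cartesian = vec2(a_polar.x * cos(a_polar.y),
                          a_polar.x * sin(a_polar.y));
    gl_Position = u_mvp * vec4(cartesian, 0.0, 1.0);
}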

Smooth textured line with OpenGL ES 2.0 shaders

We have an iOS drawing app. Currently, the drawing is implemented with OpenGL ES 1.1. We use some algorithms to smooth the lines, such as Bezier curves. So, when touch events occur, we derive a set of points from the touch-event points (based on those algorithms) and draw them. We also use a brush texture for the points to get a more natural look.
I wonder if it's possible to implement these algorithms in OpenGL ES 2.0 shaders. Something like calling an OpenGL function to draw a line made of touch points and having a smoothed, brush-textured curve rendered as output.
Points P0, P1, ... P4 here are the touch events, and the points on the red curve are the output points, with the step for T chosen so that the distance between two neighboring points on the curve is not greater than 1 pixel.
And here is the link with Bezier algorithm explanation:
Bézier curve - Wikipedia, the free encyclopedia
Any help is much appreciated.
Thanks.
You cannot generate new vertices inside the vertex shader (you can do that in the geometry shader, which ES doesn't have). The number of output vertices is always the same as the number of input vertices; you can only change their positions (and other attributes, of course).
So you would have to draw a line strip made out of enough vertices to guarantee a smooth enough curve. What you can do is always put in the same line strip, using the curve parameter values T as 1D vertex positions. In the shader you then use this input position (the parameter value) to compute the actual 2D/3D position on the curve using the De Casteljau algorithm (or whatever), together with the points P0 to P4, which you put into the shader as constants (uniform variables in GLSL terms).
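A sketch of that vertex shader in GLSL ES (WebGL style), with made-up names a_t for the parameter attribute and u_controlPoints for the five uniform points; the loops use constant bounds so they stay within GLSL ES 1.00 rules:

attribute float a_t;              // curve parameter stored as the 1D vertex position
uniform vec2 u_controlPoints[5];  // P0 .. P4
uniform mat4 u_mvp;

void main() {
    vec2 p[5];
    for (int i = 0; i < 5; i++) {
        p[i] = u_controlPoints[i];
    }
    // De Casteljau: repeatedly blend neighbouring points until one remains.
    for (int level = 4; level > 0; level--) {
        for (int i = 0; i < 4; i++) {
            if (i < level) {
                p[i] = mix(p[i], p[i + 1], a_t);
            }
        }
    }
    gl_Position = u_mvp * vec4(p[0], 0.0, 1.0);
}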
But I'm not sure if that would really buy you anything over just computing those points on the CPU and putting them into a dynamic VBO. What you save is the copying of the curve points from CPU to GPU and the computation on the CPU, but on the other hand your vertex shader is much more complex. It needs to be evaluated which is the better approach. If you need to compute the curve points each frame (because the control points change each frame) and the curve is rather high detail, it might not be that bad an idea. But otherwise I don't think it really pays. And also your shader won't be adaptable that easily to a changing number of control points/curve degree at runtime.
But once again, you cannot put in 5 control points and generate N curve points on the GPU. The vertex shader always works on a single vertex and results in a single vertex, the same way the fragment shader always works on a single fragment (say pixel, though it isn't one yet) and results in a single (or no) fragment.
