I have been learning OpenGL ES from the OpenGL ES 2.0 Programming Guide. It has a particle effect that looks like an explosion, and I am trying to enhance their example code by adding a mat4 projection matrix to the vertex shader. The shader compiles and works, but I am having problems getting the effect to position itself correctly once the projection is taken into account. The code I have is as follows:
const char* ParticleExplosionVertexShader = STRINGIFY (
uniform float u_time;
uniform vec3 u_centerPosition;
uniform mat4 Projection;
attribute float a_lifetime;
attribute vec3 a_startPosition;
attribute vec3 a_endPosition;
varying float v_lifetime;
void main()
{
    if ( u_time <= a_lifetime )
    {
        gl_Position.xyz = a_startPosition + (u_time * a_endPosition);
        gl_Position.xyz += u_centerPosition;
        gl_Position.w = 1.0;
    }
    else
        gl_Position = vec4( -1000, -1000, 0, 0 );
    v_lifetime = 1.0 - ( u_time / a_lifetime );
    v_lifetime = clamp ( v_lifetime, 0.0, 1.0 );
    gl_PointSize = ( v_lifetime * v_lifetime ) * 40.0;
}
);
I am able to add the projection to this line without any errors, but unfortunately it is not really needed here, as that code simply places the object off screen at the end of its lifetime:
gl_Position = Projection * vec4( -1000, -1000, 0, 0 );
I have also tried changing the line
gl_Position.xyz += u_centerPosition;
to
gl_Position += Projection * u_centerPosition;
But I have had no luck getting it to position the effect where I want it.
Am I doing something wrong? Or is there a reason the book didn't include a projection matrix, for example that it's not something you should do with point sprites?
Any help or pointers to what I should look into will be appreciated.
Thanks
Edit: Please let me know if you need more information from me
What about multiplying the whole gl_Position by modelview-projection matrix, as with any normal geometry?
Also, you will probably need to modify the line that calculates gl_PointSize, for example try to divide it by gl_Position.w (after multiplication by modelview-projection), otherwise the sprites will all have the same size (is that what you are trying to fix?).
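For reference, here is a minimal, untested sketch of that suggestion applied to the shader from the question (same uniform and attribute names as above; whether you also fold in a model-view matrix depends on the rest of your setup):

uniform float u_time;
uniform vec3 u_centerPosition;
uniform mat4 Projection;
attribute float a_lifetime;
attribute vec3 a_startPosition;
attribute vec3 a_endPosition;
varying float v_lifetime;
void main()
{
    v_lifetime = clamp( 1.0 - ( u_time / a_lifetime ), 0.0, 1.0 );
    gl_PointSize = ( v_lifetime * v_lifetime ) * 40.0;
    if ( u_time <= a_lifetime )
    {
        vec4 position = vec4( a_startPosition + ( u_time * a_endPosition ), 1.0 );
        position.xyz += u_centerPosition;
        // project the finished position, as with any other geometry
        gl_Position = Projection * position;
        // scale the point size by 1/w so nearer sprites appear larger
        gl_PointSize /= gl_Position.w;
    }
    else
    {
        // expired particle: park it off screen, as in the original shader
        gl_Position = vec4( -1000.0, -1000.0, 0.0, 0.0 );
    }
}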
Instead of giving my shaders values from -1 to 1, I would prefer to give them pixel values, like with the 2D canvas context. So, following what I read, I added a uniform variable which I set to the size of the canvas, and I divide by it.
But I must be missing something. The rendering is way too big...
gl_.resolutionLocation = gl.getUniformLocation( gl_.program , "u_resolution" );
gl.uniform4f(gl_.resolutionLocation , game.w , game.h , game.w , game.h );
My vertex shader:
attribute vec4 position;
attribute vec2 texcoord;
uniform vec4 u_resolution;
uniform mat4 u_matrix;
varying vec3 v_texcoord;
void main() {
    vec4 zeroToOne = position / u_resolution;
    gl_Position = u_matrix * zeroToOne;
    v_texcoord = vec3(texcoord.xy, 1) * abs(position.x);
    v_texcoord = v_texcoord / u_resolution.xyz;
}
My fragment shader:
precision mediump float;
varying vec3 v_texcoord;
uniform sampler2D tex;
uniform float alpha;
void main()
{
    gl_FragColor = texture2DProj(tex, v_texcoord);
    gl_FragColor.rgb *= gl_FragColor.a;
}
If you want to stay in pixels with code like the code you have then you'd want to apply the conversion to clip space after you've done everything in pixels.
In other words the code would be something like
rotatedPixelPosition = rotationMatrix * pixelPosition
clipSpacePosition = (rotatedPixelPosition / resolution) * 2.0 - 1.0;
So in other words you'd want
vec4 rotatedPosition = u_matrix * position;
vec2 zeroToOne = rotatedPosition.xy / u_resolution.xy;
vec2 zeroToTwo = zeroToOne * 2.0;
vec2 minusOneToPlusOne = zeroToTwo - 1.0;
vec2 clipspacePositiveYDown = minusOneToPlusOne * vec2(1, -1);
gl_Position = vec4(clipspacePositiveYDown, 0, 1);
If you do that and you set u_matrix to the identity, then if position is in pixels you should see those positions at pixel positions. If u_matrix is strictly a rotation matrix, the positions will rotate around the top left corner, since rotation always happens around 0 and the conversion above puts 0 at the top left corner.
But really there's no reason to convert from pixels to clip space by hand. You can instead convert and rotate all in the same matrix. This article covers that process: it starts with translation, rotation, scale, and converting from pixels to clip space with no matrices, and then converts it to something that does all of that combined using a single matrix.
Effectively
matrix = scaleYByMinusMatrix *
subtract1FromXYMatrix *
scaleXYBy2Matrix *
scaleXYBy1OverResolutionMatrix *
translationInPixelSpaceMatrix *
rotationInPixelSpaceMatrix *
scaleInPixelSpaceMatrix;
And then in your shader you only need
gl_Position = u_matrix * vec4(position, 0, 1);
Those top 4 matrices are easy to compute as a single matrix, often called an orthographic projection, in which case it simplifies to
matrix = projectionMatrix *
translationInPixelSpaceMatrix *
rotationInPixelSpaceMatrix *
scaleInPixelSpaceMatrix;
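For illustration, here is a rough sketch of that combined pixels-to-clip-space (orthographic) matrix, written as a GLSL helper (the function name is made up; in practice you would build the matrix in JavaScript and upload it as u_matrix):

// Combined "pixels -> clip space, Y down" matrix for a given resolution;
// equivalent to the first four matrices of the product above.
mat4 pixelToClipMatrix(vec2 resolution)
{
    // GLSL mat4 constructors are column-major
    return mat4(
        2.0 / resolution.x,  0.0,                 0.0, 0.0,
        0.0,                -2.0 / resolution.y,  0.0, 0.0,
        0.0,                 0.0,                 1.0, 0.0,
       -1.0,                 1.0,                 0.0, 1.0);
}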
There's also this article, which reproduces the matrix stack from canvas2D in WebGL.
I have a requirement to implement an iOS UIImage filter / effect which is a copy of Photoshop's Distort Wave effect. The wave has to have multiple generators and repeat in a tight pattern within a CGRect.
Photos of steps are attached.
I'm having problems creating the glsl code to reproduce the sine wave pattern. I'm also trying to smooth the edge of the effect so that the transition to the area outside the rect is not so abrupt.
I found some WebGL code that produces a water ripple. The waves produced before the center point look close to what I need, but I can't seem to get the math right to remove the water ripple (at center point) and just keep the repeating sine pattern before it:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp float time;
uniform highp vec2 center;
uniform highp float angle;
void main() {
    highp vec2 cPos = -1.0 + 2.0 * gl_FragCoord.xy / center.xy;
    highp float cLength = length(cPos);
    highp vec2 uv = gl_FragCoord.xy / center.xy + (cPos / cLength) * cos(cLength * 12.0 - time * 4.0) * 0.03;
    highp vec3 col = texture2D(inputImageTexture, uv).xyz;
    gl_FragColor = vec4(col, 1.0);
}
I have to process two Rect areas, one at the top and one at the bottom, so being able to process both Rect areas in one pass would be ideal. Plus the edge smoothing.
Thanks in advance for any help.
I've handled this in the past by generating an offset table on the CPU and uploading it as an input texture. So on the CPU, I'd do something like:
for (i = 0; i < tableSize; i++)
{
    table[i].x = amplitude * sin(i * frequency * 2.0 * M_PI / tableSize + phase);
    table[i].y = 0.0;
}
You may need to add in more sine waves if you have multiple "generators". Also, note that the above code offsets the x coordinate of each pixel. You could do Y instead, or both, depending on what you need.
Then in the glsl, I'd use that table as an offset for sampling. So it would be something like this:
uniform sampler2DRect table;
uniform sampler2DRect inputImage;
//... rest of your code ...
// Get the offset from the table
vec2 coord = gl_TexCoord[0].xy;
vec2 newCoord = coord + texture2DRect(table, coord).xy;

// Sample the input image at the offset coordinate
gl_FragColor = texture2DRect(inputImage, newCoord);
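If you would rather skip the table and compute the offset directly in the fragment shader, here is a rough, untested sketch in the style of the ripple shader from the question; u_rect (origin and size of the affected rect in texture coordinates), u_amplitude, u_frequency and u_phase are made-up uniforms, and the smoothstep factor fades the effect out near the rect edges:

varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp vec4 u_rect;       // xy = rect origin, zw = rect size (texture coords)
uniform highp float u_amplitude; // offset strength, in texture coords
uniform highp float u_frequency; // number of wave cycles across the rect
uniform highp float u_phase;
void main()
{
    highp vec2 uv = textureCoordinate;
    // 0..1 position inside the rect
    highp vec2 local = (uv - u_rect.xy) / u_rect.zw;
    if (all(greaterThanEqual(local, vec2(0.0))) && all(lessThanEqual(local, vec2(1.0))))
    {
        // repeating sine offset along x, like the CPU table above
        highp float offset = u_amplitude * sin(local.y * u_frequency * 6.2831853 + u_phase);
        // fade the effect near the rect edges so the transition is not abrupt
        highp float edge = smoothstep(0.0, 0.1, local.x) * smoothstep(1.0, 0.9, local.x)
                         * smoothstep(0.0, 0.1, local.y) * smoothstep(1.0, 0.9, local.y);
        uv.x += offset * edge;
    }
    gl_FragColor = texture2D(inputImageTexture, uv);
}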
I have just completed the first version of my iOS app, Corebox, and am now working on some new features.
One of the new features is a "small" tweak to the OpenGL rendering to force some objects to never be drawn smaller than a minimum size. All of the objects needing this treatment are simple 2 point lines drawn with GL_LINES.
This annotated screenshot explains what I'm after. Ignore the grey lines, the only objects I'm interested in altering are the yellow wider lines.
I have googled this extensively and it seems what I need to do is alter the geometry of the lines using a vertex shader. I'm quite new to GLSL and most shader examples I can find deal with applying lighting and other effects, eg: GLSL Heroku Editor and KicksJS shader editor.
My current vertex shader is extremely basic:
// GL_LINES vertex shader
uniform mat4 Projection;
uniform mat4 Modelview;
attribute vec4 Position;
attribute vec4 SourceColor;
varying vec4 DestinationColor;
void main(void) {
    DestinationColor = SourceColor;
    gl_Position = Projection * Modelview * Position;
}
As is my fragment shader:
// GL_LINES fragment shader
varying lowp vec4 DestinationColor;
void main(void) {
    gl_FragColor = DestinationColor;
}
My guess as to what is required:
Determine the distance between the viewer (camera position) and the object
Determine how big the object is on the screen, based on its size and distance from camera
If the object will be too small then adjust its vertices such that it becomes large enough to easily see on the screen.
Caveats and other notes:
But if you zoom out won't this cause the model to be just a blob of orange on the screen? Yes, this is exactly the effect I'm after.
Edit: Here is the final working version, implementing the suggestions by mifortin:
uniform mat4 Projection;
uniform mat4 Modelview;
uniform float MinimumHeight;
attribute vec4 Position;
attribute vec4 ObjectCenter;
attribute vec4 SourceColor;
varying vec4 DestinationColor;
void main(void) {
    // screen-space position of this vertex
    vec4 screenPosition = Projection * Modelview * Position;

    // screen-space mid-point of the object this vertex belongs to
    vec4 screenObjectCenter = Projection * Modelview * ObjectCenter;

    // Z should be 0 by this time and the projective transform in w.
    // scale so w = 1 (these two should be in screen-space)
    vec2 newScreenPosition = screenPosition.xy / screenPosition.w;
    vec2 newObjectCenter = screenObjectCenter.xy / screenObjectCenter.w;

    float d = distance(newScreenPosition, newObjectCenter);
    if (d < MinimumHeight && d > 0.0) {
        // Direction of this object, this really only makes sense in the context
        // of a line (eg: GL_LINES)
        vec2 towards = normalize(newScreenPosition - newObjectCenter);

        // Shift the center point then adjust the vertex position accordingly
        // Basically this converts: *--x--* into *--------x--------*
        newObjectCenter = newObjectCenter + towards * MinimumHeight;
        screenPosition.xy = newObjectCenter.xy * screenPosition.w;
    }

    gl_Position = screenPosition;
    DestinationColor = SourceColor;
}
Note that I didn't test the code, but it should illustrate the solution.
If you want to use shaders, add in another uniform vec4 that is the center position of your line. Then you can do something similar to (note center could be precomputed on the CPU once):
uniform float MIN; //Minimum size of blob on-screen
uniform vec4 center; //Center of the line / blob
...
vec4 screenPos = Projection * Modelview * Position;
vec4 screenCenter = Projection * Modelview * center;

// Z should be 0 by this time and the projective transform in w.
// scale so w = 1 (these two should be in screen-space)
vec2 nScreenPos = screenPos.xy / screenPos.w;
vec2 nCenter = screenCenter.xy / screenCenter.w;

float d = distance(nScreenPos, nCenter);

if (d < MIN && d > 0.0)
{
    vec2 towards = normalize(nScreenPos - nCenter);
    nCenter = nCenter + towards * MIN;
    screenPos.xy = nCenter.xy * screenPos.w;
}

gl_Position = screenPos;
Find where on the screen the vertex would be drawn, then from the center of the blob stretch it if needed to ensure a minimum size.
This example is for round objects. For corners, you could make MIN an attribute so the distance from the center varies on a per-vertex basis.
If you just want something more box-like, check the minimum distance of the x and y coordinates separately.
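A rough, untested sketch of that box-like variant, reusing the names from the snippet above and replacing the radial distance test:

vec2 delta = nScreenPos - nCenter;
// enforce the minimum size on x and y separately instead of on the distance;
// note sign() returns 0 for a component that is exactly 0, so a real
// implementation may need a guard for that case
vec2 stretched = sign(delta) * max(abs(delta), vec2(MIN));
screenPos.xy = (nCenter + stretched) * screenPos.w;
gl_Position = screenPos;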
On the CPU, you could compute the coordinates in screen-space and scale accordingly before submitting to the GPU.
I successfully implemented shadow volumes on iOS.
However, I have the following issue: how can I clamp the vertex position to the far plane in GLSL, like NV_depth_clamp does? This is my vertex shader code:
void main( void ) {
    highp vec3 eyepos = vec3( MODELVIEW * vec4( VERTEX, 1.0 ) );
    normal = normalize( NORMALMATRIX * NORMAL );
    highp vec3 ldir = normalize( LIGHTPOS - eyepos );
    highp float ndotl = max( dot( normal, ldir ), 0.0 );

    // How can I clip that to the far plane automatically!??!!?
    if( ndotl > 0.0 ) gl_Position = PROJECTION * vec4( eyepos + ( ldir * -2000.0 ), 1.0 );
    else gl_Position = PROJECTION * vec4( eyepos, 1.0 );
}
Second, while searching for the issue above, I found that the shadow volume z-fail method (which is what I implemented) is patented. Is that true? Does that mean I can't use it in a commercial application on the App Store?
TIA!
Cheers. At the far clip plane, z/w = 1, so you need to transform both eyepos and ldir by the projection, and then add just enough ldir to eyepos so that it ends up at the far plane. This might be tricky though, because the far clip plane may clip your polygons if they lie exactly on it, so some tweaking might be required.
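A rough, untested sketch of that idea, using the names from the vertex shader above; D is the extrusion direction (-ldir, as in the original shader) transformed into clip space, and t is chosen so the extruded vertex lands exactly on the far plane, where z/w = 1:

highp vec4 E = PROJECTION * vec4( eyepos, 1.0 );
highp vec4 D = PROJECTION * vec4( -ldir, 0.0 ); // extrusion direction, away from the light
if( ndotl > 0.0 ) {
    // solve ( E.z + t * D.z ) / ( E.w + t * D.w ) = 1 for t
    // (needs care if D.z == D.w, i.e. the direction is parallel to the far plane)
    highp float t = ( E.w - E.z ) / ( D.z - D.w );
    gl_Position = E + t * D;
} else {
    gl_Position = E;
}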
Is it possible to access the surface normal - the normal associated with the plane of a fragment - from within a fragment shader? Or perhaps this can be done in the vertex shader?
Is all knowledge of the associated geometry lost when we go down the shader pipeline, or is there some clever way of recovering that information in either the vertex or fragment shader?
Thanks in advance.
Cheers,
Doug
twitter: #dugla
The surface normal vector can be calculated approximately from the partial derivatives of the view-space position in the fragment shader. The partial derivatives can be obtained with the functions dFdx and dFdy. This requires OpenGL ES 3.0 or the OES_standard_derivatives extension:
in vec3 view_position;

void main()
{
    vec3 normalvector = cross(dFdx(view_position), dFdy(view_position));
    vec3 nv = normalize(normalvector * sign(normalvector.z));
    // ...
}
In general it is possible to calculate the normal vector of a surface in a geometry shader (since OpenGL ES 3.2). For example, if you draw triangles, you get all three points in the geometry shader. Three points define a plane, from which the normal vector can be calculated. You just have to be careful about whether the points are arranged clockwise or counterclockwise. The normal vector of a triangle is the normalized cross product of two vectors defined by the corner points of the triangle. See the following example, which works for counterclockwise triangles:
Vertex shader
#version 400
layout (location = 0) in vec3 inPos;
out vec3 vertPos;
uniform mat4 u_projectionMat44;
uniform mat4 u_modelViewMat44;
void main()
{
    vec4 viewPos = u_modelViewMat44 * vec4( inPos, 1.0 );
    vertPos = viewPos.xyz;
    gl_Position = u_projectionMat44 * viewPos;
}
Geometry shader
#version 400
layout( triangles ) in;
layout( triangle_strip, max_vertices = 3 ) out;
in vec3 vertPos[];
out vec3 geoPos;
out vec3 geoNV;
void main()
{
    vec3 leg1 = vertPos[1] - vertPos[0];
    vec3 leg2 = vertPos[2] - vertPos[0];
    geoNV = normalize( cross( leg1, leg2 ) );

    geoPos = vertPos[0];
    EmitVertex();
    geoPos = vertPos[1];
    EmitVertex();
    geoPos = vertPos[2];
    EmitVertex();
    EndPrimitive();
}
Fragment shader
#version 400
in vec3 geoPos;
in vec3 geoNV;
void main()
{
    // ...
}
Of course you can also calculate the normal vector in the tessellation shaders (since OpenGL ES 3.2). But this only makes sense if you already need the tessellation shaders for other reasons and calculate the normal vector of the face in addition:
Vertex shader
The vertex shader is the same as above.
Tessellation control shader
#version 400
layout( vertices=3 ) out;
in vec3 vertPos[];
out vec3 tctrlPos[];
void main()
{
    tctrlPos[gl_InvocationID] = vertPos[gl_InvocationID];
    if ( gl_InvocationID == 0 )
    {
        // placeholder values; set the tessellation levels as required
        gl_TessLevelOuter[0] = 1.0;
        gl_TessLevelOuter[1] = 1.0;
        gl_TessLevelOuter[2] = 1.0;
        gl_TessLevelInner[0] = 1.0;
    }
}
Tessellation evaluation shader
#version 400
layout(triangles, ccw) in;
in vec3 tctrlPos[];
out vec3 tevalPos;
out vec3 tevalNV;
void main()
{
    vec3 leg1 = tctrlPos[1] - tctrlPos[0];
    vec3 leg2 = tctrlPos[2] - tctrlPos[0];
    tevalNV = normalize( cross( leg1, leg2 ) );
    tevalPos = tctrlPos[0] * gl_TessCoord.x + tctrlPos[1] * gl_TessCoord.y + tctrlPos[2] * gl_TessCoord.z;
}
Fragment shader
#version 400
in vec3 tevalPos;
in vec3 tevalNV;
void main()
{
    // ...
}
You can get per-pixel normals interpolated from the vertex normals by just using a "varying" (in newer OpenGL it is just in/out) variable. But do not forget to normalize this normal! Interpolated normals no longer necessarily have a length of 1. These normals also give bad results on sharp edges.
If you want to use custom normals with a higher resolution, a commonly used technique is normal maps. You create a texture with baked normals for your object. Then you can access the normal in the fragment shader using a texture look-up.
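As a small illustration of the interpolated "varying" approach (untested; the attribute and uniform names are made up):

// vertex shader
attribute vec3 a_position;
attribute vec3 a_normal;
uniform mat4 u_modelViewProjection;
uniform mat3 u_normalMatrix;
varying vec3 v_normal;
void main()
{
    v_normal = u_normalMatrix * a_normal;
    gl_Position = u_modelViewProjection * vec4( a_position, 1.0 );
}

// fragment shader
precision mediump float;
varying vec3 v_normal;
void main()
{
    // interpolation shortens the vector, so normalize per fragment
    vec3 n = normalize( v_normal );
    gl_FragColor = vec4( n * 0.5 + 0.5, 1.0 ); // visualize the normal as a color
}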
If you pass the vertex normal through to the fragment shader in a "varying" then you will get an interpolated fragment normal.
EDIT: You will have to calculate the normals in your application, and pass them into your shader as an attribute for each vertex of your triangle.
The usual way to calculate the normal for a triangle is with a cross product.
Call the three points making up the triangle P1, P2, and P3.
Calculate V1, the vector from P1 to P2.
Calculate V2, the vector from P1 to P3.
Calculate the cross product of V1 and V2.
This will give you the normal to the plane of the triangle. V2 should be "to the left of" V1, or your normal will point "in" instead of "out". See the Wikipedia article on cross products for details.
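The same steps as a small helper function (written in GLSL here for consistency with the rest of this page; in practice you would run the equivalent on the CPU and upload the result as a per-vertex attribute):

// Normal of the plane through P1, P2, P3, assuming counterclockwise order as described above
vec3 triangleNormal( vec3 P1, vec3 P2, vec3 P3 )
{
    vec3 V1 = P2 - P1;                   // vector from P1 to P2
    vec3 V2 = P3 - P1;                   // vector from P1 to P3
    return normalize( cross( V1, V2 ) ); // the cross product gives the plane normal
}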
FURTHER EDIT: Right, I understand your problem now. Yes, it's true that with shared vertices you can't really have more than one normal per vertex.
The only other thing that I can think of is that maybe a geometry shader could help, because it gets passed all three vertices for a triangle. I don't have any experience with them though.