Computing texture coordinates in fragment shader (iOS/OpenGL ES 2.0)

I am finding that in my fragment shader, these 2 statements give identical output:
// #1
// pos is set from gl_Position in vertex shader
highp vec2 texc = ((pos.xy / pos.w) + 1.0) / 2.0;
// #2 - equivalent?
highp vec2 texc2 = gl_FragCoord.xy/uWinDims.xy;
If this is correct, could you please explain the math? I understand #2, which is what I came up with, but saw #1 in a paper. Is this an NDC (normalized device coordinate) calculation?
The context is that I am using the texture coordinates with an FBO the same size as the viewport. It's all working, but I'd like to understand the math.
Relevant portion of vertex shader:
attribute vec4 position;
attribute vec4 aColor;
uniform mat4 modelViewProjectionMatrix;
varying lowp vec4 vColor;
// transformed position
varying highp vec4 pos;
void main()
{
gl_Position = modelViewProjectionMatrix * position;
// for fragment shader
pos = gl_Position;
vColor = aColor;
}
Relevant portion of fragment shader:
// transformed position - from vsh
varying highp vec4 pos;
// viewport dimensions
uniform highp vec2 uWinDims;
void main()
{
highp vec2 texc = ((pos.xy / pos.w) + 1.0) / 2.0;
// equivalent?
highp vec2 texc2 = gl_FragCoord.xy/uWinDims.xy;
...
}

(pos.xy / pos.w) is the coordinate value in normalized device coordinates (NDC). This value ranges from -1 to 1 in each dimension.
(NDC + 1.0)/2.0 changes the range from (-1 to 1) to (0 to 1) (0 on the left of the screen, and 1 on the right, similar for top/bottom).
Alternatively, gl_FragCoord gives the coordinate in pixels, so it ranges from (0 to width) and (0 to height).
Dividing this value by width and height (uWinDims), gives the position again from 0 on the left side of the screen, to 1 on the right side.
So yes they appear to be equivalent.
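To see the equivalence concretely, here is a minimal sketch (ndc and windowPos are just illustrative names) of how GL's viewport transform ties the two together, assuming a viewport at (0,0) that covers the whole window; gl_FragCoord samples at pixel centers, so the two values can differ by half a pixel, which doesn't matter for a full-screen FBO lookup:
// NDC from the interpolated clip-space position (range -1..1)
highp vec2 ndc = pos.xy / pos.w;
// For a full-window viewport at (0,0), GL's viewport transform places
// the fragment at roughly this window position (in pixels):
highp vec2 windowPos = (ndc + 1.0) / 2.0 * uWinDims;
// Dividing window coordinates by the window size therefore recovers
// the same 0..1 value as ((pos.xy / pos.w) + 1.0) / 2.0:
highp vec2 texc = windowPos / uWinDims;        // == (ndc + 1.0) / 2.0
highp vec2 texc2 = gl_FragCoord.xy / uWinDims; // same, up to half a pixel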

Related

Converting pixels to clipspace

Instead of giving -1 to 1 values to my shaders, I would prefer giving them pixel values like for the 2D canvas context. So, following what I read, I added a uniform variable that I set to the size of the canvas, and I divide by it.
But I must be missing something. The rendering is way too big...
gl_.resolutionLocation = gl.getUniformLocation( gl_.program , "u_resolution" );
gl.uniform4f(gl_.resolutionLocation , game.w , game.h , game.w , game.h );
My vertex shader :
attribute vec4 position;
attribute vec2 texcoord;
uniform vec4 u_resolution;
uniform mat4 u_matrix;
varying vec3 v_texcoord;
void main() {
vec4 zeroToOne = position / u_resolution ;
gl_Position = u_matrix * zeroToOne ;
v_texcoord = vec3(texcoord.xy, 1) * abs(position.x);
v_texcoord = v_texcoord/u_resolution.xyz ;
}
My fragment shader :
precision mediump float;
varying vec3 v_texcoord;
uniform sampler2D tex;
uniform float alpha;
void main()
{
gl_FragColor = texture2DProj(tex, v_texcoord);
gl_FragColor.rgb *= gl_FragColor.a ;
}
If you want to stay in pixels with code like what you have, then you'd want to apply the conversion to clip space after you've done everything in pixels.
In other words, the code would be something like
rotatedPixelPosition = rotationMatrix * pixelPosition
clipSpacePosition = (rotatedPixelPosition / resolution) * 2.0 - 1.0;
So in other words you'd want
vec4 rotatedPosition = u_matrix * position;
vec2 zeroToOne = rotatedPosition.xy / u_resolution.xy;
vec2 zeroToTwo = zeroToOne * 2.0;
vec2 minusOneToPlusOne = zeroToTwo - 1.0;
vec2 clipspacePositiveYDown = minusOneToPlusOne * vec2(1, -1);
gl_Position = vec4(clipspacePositiveYDown, 0, 1);
If you do that and set u_matrix to the identity, then positions given in pixels should appear at those pixel positions. If u_matrix is strictly a rotation matrix, the positions will rotate around the top-left corner, since rotation always happens around 0 and the conversion above puts 0 at the top-left corner.
But really there's no reason to convert from pixels to clip space by hand. You can instead convert and rotate all in the same matrix. This article covers that process. It starts with translation, rotation, scale, and a pixel-to-clip-space conversion done with no matrices, and then converts it to something that does all of that combined using a single matrix.
Effectively
matrix = scaleYByMinusMatrix *
subtract1FromXYMatrix *
scaleXYBy2Matrix *
scaleXYBy1OverResolutionMatrix *
translationInPixelSpaceMatrix *
rotationInPixelSpaceMatrix *
scaleInPixelSpaceMatrix;
And then in your shader you only need
gl_Position = u_matrix * vec4(position, 0, 1);
Those top 4 matrices are easy to compute as a single matrix, often called an orthographic projection, in which case it simplifies to
matrix = projectionMatrix *
translationInPixelSpaceMatrix *
rotationInPixelSpaceMatrix *
scaleInPixelSpaceMatrix;
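As a rough sketch (not from the linked article), that projectionMatrix can even be written out in GLSL to show the math; the mat4 below maps pixel coordinates with y down into clip space, assuming u_resolution.xy is the canvas size in pixels and u_matrix holds the pixel-space translation/rotation/scale. In practice you would build it once on the CPU rather than per vertex:
// pixel space (y down) to clip space; GLSL mat4 constructors are column-major
vec2 res = u_resolution.xy;
mat4 pixelToClip = mat4(
    2.0 / res.x,  0.0,          0.0, 0.0,   // column 0: x -> 2x/w
    0.0,         -2.0 / res.y,  0.0, 0.0,   // column 1: y -> -2y/h
    0.0,          0.0,          1.0, 0.0,   // column 2: z passes through
   -1.0,          1.0,          0.0, 1.0);  // column 3: shift to (-1, 1)
gl_Position = pixelToClip * u_matrix * position; // u_matrix stays in pixel space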
There's also this article, which reproduces the matrix stack from canvas2D in WebGL.

WebGL custom shadow mapping shader code not working

I've seen different methods for shadow mapping all around the internet, but I've only seen one method for mapping the shadows cast by a point light source, i.e. cube mapping. Even though I've heard of it, I've never seen an actual explanation of it.
I started writing this code before I had heard of cube mapping. My goal with this code was to map the shadow depths from spherical coordinates to a 2D texture.
I've simplified the coloring of the fragments for now in order to better visualize what's happening.
But, basically the models are a sphere of radius 2.0 at coordinates (0.0, 0.0, -5.0) and a hyperboloid of height 1.0 at (0.0, 0.0, -2.0) with the light source at (0.0, 0.0, 8.0).
If I scale (as written in the code) the depth values by an inverse factor of less than 9.6, both models appear completely colored with the ambient color. Greater than 9.6 and they slowly become normally textured. I tried to make an example in jsfiddle but I couldn't get textures to work.
The method isn't working at all and I'm lost.
<script id="shadow-vs" type="x-shader/x-vertex">
attribute vec3 aVertexPosition;
varying float vDepth;
uniform vec3 uLightLocation;
uniform mat4 uMMatrix;
void main(void){
const float I_PI = 0.318309886183790671537767; //Inverse pi
vec4 aPos = uMMatrix * vec4(aVertexPosition, 1.0); //The actual position of the vertex
vec3 position = aPos.xyz - uLightLocation; //The position of the vertex relative to the light source i.e. "the vector"
float len = length(position);
float theta = 2.0 * acos(position.y/len) * I_PI - 1.0; //The angle of the vector from the xz plane bound between -1 and 1
float phi = atan(position.z, position.x) * I_PI; //The angle of the vector on the xz plane bound between -1 and 1
vDepth = len; //Divided by some scale. The depth of the vertex from the light source
gl_Position = vec4(phi, theta, len, 1.0);
}
</script>
<script id="shadow-fs" type="x-shader/x-fragment">
precision mediump float;
varying float vDepth;
void main(void){
gl_FragColor = vec4(vDepth, 0.0, 0.0, 1.0); //Records the depth in the red channel of the fragment color
}
</script>
<script id="shader-vs" type="x-shader/x-vertex">
attribute vec3 aVertexPosition;
attribute vec3 aVertexNormal;
attribute vec2 aTextureCoord;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform mat4 uMMatrix;
uniform mat3 uNMatrix;
varying vec2 vTextureCoord;
varying vec3 vTransformedNormal;
varying vec4 vPosition;
varying vec4 aPos;
void main(void) {
aPos = uMMatrix * vec4(aVertexPosition, 1.0); //The actual position of the vertex
vPosition = uMVMatrix * uMMatrix * vec4(aVertexPosition, 1.0);
gl_Position = uPMatrix * vPosition;
vTextureCoord = aTextureCoord;
vTransformedNormal = normalize(uNMatrix * mat3(uMMatrix) * aVertexNormal);
}
</script>
<script id="shader-fs" type="x-shader/x-fragment">
precision mediump float;
varying vec2 vTextureCoord;
varying vec3 vTransformedNormal;
varying vec4 vPosition;
varying vec4 aPos;
uniform sampler2D uSampler;
uniform sampler2D uShSampler;
uniform vec3 uLightLocation;
uniform vec3 uAmbientColor;
uniform vec4 uLightColor;
void main(void) {
const float I_PI = 0.318309886183790671537767;
vec3 position = aPos.xyz - uLightLocation; //The position of the vertex relative to the light source i.e. "the vector"
float len = length(position);
float theta = acos(position.y/len) * I_PI; //The angle of the vector from the xz axis bound between 0 and 1
float phi = 0.5 + 0.5 * atan(position.z, position.x) * I_PI; //The angle of the vector on the xz axis bound between 0 and 1
float posDepth = len; //Divided by some scale. The depth of the vertex from the light source
vec4 shadowMap = texture2D(uShSampler, vec2(phi, theta)); //The color at the texture coordinates of the current vertex
float shadowDepth = shadowMap.r; //The depth of the vertex closest to the light source
if (posDepth > shadowDepth){ //Check if this vertex is further away from the light source than the closest vertex
gl_FragColor = vec4(uAmbientColor, 1.0);
}
else{
gl_FragColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
}
}
</script>

How to modify GPUImageGaussianSelectiveBlurFilter to show a rectangular area rather than circular [duplicate]

I have used the GPUImage framework for a blur effect similar to that of the Instagram application, where I made a view for getting a picture from the photo library and then put an effect on it.
One of the effects is a selective blur in which only a small part of the image is clear and the rest is blurred. The GPUImageGaussianSelectiveBlurFilter chooses a circular part of the image to not be blurred.
How can I alter this to make the sharp region rectangular in shape instead?
Because Gill's answer isn't exactly correct, and since this seems to be getting asked over and over, I'll clarify my comment above.
The fragment shader for the selective blur by default has the following code:
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform lowp float excludeCircleRadius;
uniform lowp vec2 excludeCirclePoint;
uniform lowp float excludeBlurSize;
uniform highp float aspectRatio;
void main()
{
lowp vec4 sharpImageColor = texture2D(inputImageTexture, textureCoordinate);
lowp vec4 blurredImageColor = texture2D(inputImageTexture2, textureCoordinate2);
highp vec2 textureCoordinateToUse = vec2(textureCoordinate2.x, (textureCoordinate2.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
highp float distanceFromCenter = distance(excludeCirclePoint, textureCoordinateToUse);
gl_FragColor = mix(sharpImageColor, blurredImageColor, smoothstep(excludeCircleRadius - excludeBlurSize, excludeCircleRadius, distanceFromCenter));
}
This fragment shader takes in a pixel color value from both the original sharp image and a Gaussian blurred version of the image. It then blends between these based on the logic of the last three lines.
The first and second of these lines calculate the distance from the center coordinate that you specify ((0.5, 0.5) in normalized coordinates by default, for the dead center of the image) to the current pixel's coordinate. The last line uses the smoothstep() GLSL function to smoothly interpolate between 0 and 1 as the distance from the center point travels between two thresholds: the inner clear circle and the outer fully blurred circle. The mix() operator then takes the output from smoothstep() and fades between the blurred and sharp pixel colors to produce the appropriate output.
If you just want to modify this to produce a square shape instead of the circular one, you need to adjust the two center lines in the fragment shader to base the distance on linear X or Y coordinates, not a Pythagorean distance from the center point. To do this, change the shader to read:
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform lowp float excludeCircleRadius;
uniform lowp vec2 excludeCirclePoint;
uniform lowp float excludeBlurSize;
uniform highp float aspectRatio;
void main()
{
lowp vec4 sharpImageColor = texture2D(inputImageTexture, textureCoordinate);
lowp vec4 blurredImageColor = texture2D(inputImageTexture2, textureCoordinate2);
highp vec2 textureCoordinateToUse = vec2(textureCoordinate2.x, (textureCoordinate2.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
textureCoordinateToUse = abs(excludeCirclePoint - textureCoordinateToUse);
highp float distanceFromCenter = max(textureCoordinateToUse.x, textureCoordinateToUse.y);
gl_FragColor = mix(sharpImageColor, blurredImageColor, smoothstep(excludeCircleRadius - excludeBlurSize, excludeCircleRadius, distanceFromCenter));
}
The lines that Gill mentions are just input parameters for the filter, and don't control its circularity at all.
I leave modifying this further to produce a generic rectangular shape as an exercise for the reader, but this should provide a basis for how you could do this and a bit more explanation of what the lines in this shader do.
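For example, here is a sketch of one way to do it (not part of GPUImage itself): replace excludeCircleRadius with a hypothetical excludeRectSize uniform holding the half-width and half-height of the sharp region, and normalize the per-axis distances before taking the max. You would also have to add a matching property to the filter class to set it, and excludeBlurSize is treated here as a fraction of the rectangle size rather than an absolute texture-coordinate distance:
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
// hypothetical uniform: half-width and half-height of the sharp rectangle
uniform lowp vec2 excludeRectSize;
uniform lowp vec2 excludeCirclePoint;
uniform lowp float excludeBlurSize;
uniform highp float aspectRatio;
void main()
{
lowp vec4 sharpImageColor = texture2D(inputImageTexture, textureCoordinate);
lowp vec4 blurredImageColor = texture2D(inputImageTexture2, textureCoordinate2);
highp vec2 textureCoordinateToUse = vec2(textureCoordinate2.x, (textureCoordinate2.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
// per-axis distance from the center, scaled so 1.0 falls on the rectangle edge
highp vec2 fromCenter = abs(excludeCirclePoint - textureCoordinateToUse) / excludeRectSize;
highp float distanceFromCenter = max(fromCenter.x, fromCenter.y);
gl_FragColor = mix(sharpImageColor, blurredImageColor, smoothstep(1.0 - excludeBlurSize, 1.0, distanceFromCenter));
}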
Did it ... the code for the rectangular effect is just in these 2 lines
blurFilter = [[GPUImageGaussianSelectiveBlurFilter alloc] init];
[(GPUImageGaussianSelectiveBlurFilter*)blurFilter setExcludeCircleRadius:80.0/320.0];
[(GPUImageGaussianSelectiveBlurFilter*)blurFilter setExcludeCirclePoint:CGPointMake(0.5f, 0.5f)];
// [(GPUImageGaussianSelectiveBlurFilter*)blurFilter setBlurSize:0.0f];
// [(GPUImageGaussianSelectiveBlurFilter*)blurFilter setAspectRatio:0.0f];

Optimising GLSL code in fragment shader (iOS 5 + OpenGL ES 2.0)

I have some computations (below) in my fragment shader function, which is called a huge number of times. I'd like to know if it is possible to optimize this code. I took a look at the OpenGL.org GLSL optimisation page and made some modifications, but is it possible to make this code faster?
uniform int mn;
highp float Nx;
highp float Ny;
highp float Nz;
highp float invXTMax;
highp float invYTMax;
int m;
int n;
highp vec4 func(in highp vec3 texCoords3D)
{
// tile index
int Ti = int(texCoords3D.z * Nz);
// (r, c) position of tile within texture unit
int r = Ti / n; // integer division
int c = Ti - r * n;
// x/y offsets in pixels of tile origin within the texture unit
highp float xOff = float(c) * Nx;
highp float yOff = float(r) * Ny;
// 2D texture coordinates
highp vec2 texCoords2D;
texCoords2D.x = (Nx * texCoords3D.x + xOff)*invXTMax;
texCoords2D.y = (Ny * texCoords3D.y + yOff)*invYTMax;
return texture2D(uSamplerTex0, texCoords2D);
}
Edit:
To give some context, func() is used as part of a ray casting setup. It is called up to 300 times from main() for each fragment.
It is very easy to vectorize the code as follows:
highp vec3 N;
highp vec2 invTMax;
highp vec4 func(in highp vec3 texCoords3D)
{
// tile index
int Ti = int(texCoords3D.z * N.z);
// (r, c) position of tile within texture unit
int r = Ti / n;
int c = Ti - r * n;
// x/y offsets in pixels of tile origin within the texture unit
highp vec2 Off = vec2( float(c), float(r) ) * N.xy;
// 2D texture coordinates
highp vec2 texCoords2D = ( N.xy * texCoords3D.xy + Off ) * invTMax;
return texture2D(uSamplerTex0, texCoords2D);
}
This helps ensure that the similar calculations run in parallel.
Modifying the texture coordinates, instead of using the ones passed into the fragment shader, creates a dynamic texture read, which is the largest performance hit on earlier hardware.
Check the last section on Dynamic Texture Lookups
https://developer.apple.com/library/ios/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/BestPracticesforShaders/BestPracticesforShaders.html
They suggest moving the texture coordinate calculation up into the vertex shader. It looks like you can do that without much issue, if I understand the intent of the code correctly. You're adding offset and tile support for fine adjustments, scaling, and animation on your UVs (and thus textures)? Thought so. Use this.
//
// Vertex Shader
//
attribute vec4 position;
attribute vec2 texture;
uniform mat4 modelViewProjectionMatrix;
// tiling parameters:
// -- x and y components of the Tiling (x,y)
// -- x and y components of the Offset (z,w)
// a value of vec4(1.0, 1.0, 0.0, 0.0) means no adjustment
uniform vec4 texture_ST;
// UV calculated in the vertex shader, GL will interpolate over the pixels
// and prefetch the texel to avoid dynamic texture read on pre ES 3.0 hw.
// This should be highp in the fragment shader.
varying vec2 uv;
void main()
{
uv = ((texture.xy * texture_ST.xy) + texture_ST.zw);
gl_Position = modelViewProjectionMatrix * position;
}
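A matching fragment shader then just samples with the interpolated uv. This is only a minimal sketch reusing the uSamplerTex0 name from the original code; it shows the pattern of consuming a precomputed varying rather than the full tiled lookup:
//
// Fragment Shader
//
precision mediump float;
uniform sampler2D uSamplerTex0;
// interpolated by the rasterizer; highp here, per the note in the vertex shader
varying highp vec2 uv;
void main()
{
gl_FragColor = texture2D(uSamplerTex0, uv);
}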

Minimum size of rendered object using GL_LINES in iOS Open GL ES

I have just completed the first version of my iOS app, Corebox, and am now working on some new features.
One of the new features is a "small" tweak to the OpenGL rendering to force some objects to never be drawn smaller than a minimum size. All of the objects needing this treatment are simple 2 point lines drawn with GL_LINES.
This annotated screenshot explains what I'm after. Ignore the grey lines; the only objects I'm interested in altering are the wider yellow lines.
I have googled this extensively and it seems what I need to do is alter the geometry of the lines using a vertex shader. I'm quite new to GLSL, and most shader examples I can find deal with applying lighting and other effects, e.g. the GLSL Heroku Editor and KicksJS shader editor.
My current vertex shader is extremely basic:
// GL_LINES vertex shader
uniform mat4 Projection;
uniform mat4 Modelview;
attribute vec4 Position;
attribute vec4 SourceColor;
varying vec4 DestinationColor;
void main(void) {
DestinationColor = SourceColor;
gl_Position = Projection * Modelview * Position;
}
As is my fragment shader:
// GL_LINES fragment shader
varying lowp vec4 DestinationColor;
void main(void) {
gl_FragColor = DestinationColor;
}
My guess as to what is required:
Determine the distance between the viewer (camera position) and the object
Determine how big the object is on the screen, based on its size and distance from camera
If the object will be too small then adjust its vertices such that it becomes large enough to easily see on the screen.
Caveats and other notes:
But if you zoom out won't this cause the model to be just a blob of orange on the screen? Yes, this is exactly the effect I'm after.
Edit: Here is the final working version implementing suggestions by mifortin
uniform mat4 Projection;
uniform mat4 Modelview;
uniform float MinimumHeight;
attribute vec4 Position;
attribute vec4 ObjectCenter;
attribute vec4 SourceColor;
varying vec4 DestinationColor;
void main(void) {
// screen-space position of this vertex
vec4 screenPosition = Projection * Modelview * Position;
// screen-space mid-point of the object this vertex belongs to
vec4 screenObjectCenter = Projection * Modelview * ObjectCenter;
// Z should be 0 by this time and the projective transform in w.
// scale so w = 1 (these two should be in screen-space)
vec2 newScreenPosition = screenPosition.xy / screenPosition.w;
vec2 newObjectCenter = screenObjectCenter.xy / screenObjectCenter.w;
float d = distance(newScreenPosition, newObjectCenter);
if (d < MinimumHeight && d > 0.0) {
// Direction of this object, this really only makes sense in the context
// of a line (eg: GL_LINES)
vec2 towards = normalize(newScreenPosition - newObjectCenter);
// Shift the center point then adjust the vertex position accordingly
// Basically this converts: *--x--* into *--------x--------*
newObjectCenter = newObjectCenter + towards * MinimumHeight;
screenPosition.xy = newObjectCenter.xy * screenPosition.w;
}
gl_Position = screenPosition;
DestinationColor = SourceColor;
}
Note that I didn't test the code, but it should illustrate the solution.
If you want to use shaders, add in another uniform vec4 that is the center position of your line. Then you can do something similar to (note center could be precomputed on the CPU once):
uniform float MIN; //Minimum size of blob on-screen
uniform vec4 center; //Center of the line / blob
...
vec4 screenPos = Projection * Modelview * Position;
vec4 screenCenter = Projection * Modelview * center;
//Z should be 0 by this time and the projective transform in w.
//scale so w = 1 (these two should be in screen-space)
vec2 nScreenPos = screenPos.xy / screenPos.w;
vec2 nCenter = screenCenter.xy / screenCenter.w;
float d = distance(nScreenPos, nCenter);
if (d < MIN && d > 0.0)
{
vec2 towards = normalize(nScreenPos - nCenter);
nCenter = nCenter + towards * MIN;
screenPos.xy = nCenter.xy * screenPos.w;
}
gl_Position = screenPos;
Find where on the screen the vertex would be drawn, then, if needed, stretch it away from the center of the blob to ensure a minimum size.
This example is for round objects. For corners, you could make MIN an attribute so the distance from the center varies on a per-vertex basis.
If you just want something more box-like, check the distance of the x and y coordinates against the minimum separately.
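For instance, a sketch of that box-like variant, reusing the names from the snippet above and replacing the radial branch with a per-axis clamp:
// box-like variant: enforce the minimum extent on each axis separately
vec2 delta = nScreenPos - nCenter;
vec2 mag = abs(delta);
// push each coordinate out to at least MIN from the center, keeping its sign
// (sign() is 0.0 when delta is 0.0, so a degenerate axis is left at the center)
nScreenPos = nCenter + sign(delta) * max(mag, vec2(MIN));
screenPos.xy = nScreenPos * screenPos.w;
gl_Position = screenPos;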
On the CPU, you could compute the coordinates in screen-space and scale accordingly before submitting to the GPU.
