I'm using WebGL to bind a matte image to a regular image; the matte gives the regular image its transparency.
I was able to successfully upload both images as textures, but when I run the program my color texture and matte texture don't line up. As you can see in the picture below, the black should be all gone, but instead it seems scaled and flipped.
I find that very strange as both the "color" channel and the "alpha" channel are using the same texture.
My question is: how can I rotate/resize the alpha channel within the shader? Or will I have to make a new texture coordinate plane to map the alpha channel onto?
For reference, my code is below:
vertexShaderScript = [
'attribute vec4 vertexPos;',
'attribute vec4 texturePos;',
'varying vec2 textureCoord;',
'void main()',
'{',
' gl_Position = vertexPos;',
' textureCoord = texturePos.xy;',
'}'
].join('\n');
fragmentShaderScript = [
'precision highp float;',
'varying highp vec2 textureCoord;',
'uniform sampler2D ySampler;',
'uniform sampler2D uSampler;',
'uniform sampler2D vSampler;',
'uniform sampler2D aSampler;',
'uniform mat4 YUV2RGB;',
'void main(void) {',
' highp float y = texture2D(ySampler, textureCoord).r;',
' highp float u = texture2D(uSampler, textureCoord).r;',
' highp float v = texture2D(vSampler, textureCoord).r;',
' highp float a = texture2D(aSampler, textureCoord).r;',
' gl_FragColor = vec4(y, u, v, a);',
'}'
].join('\n');
If all you want to do is flip the alpha texture coordinate, you can do it directly in the fragment shader:
const vertexShaderScript = `
attribute vec4 vertexPos;
attribute vec4 texturePos;
varying vec2 textureCoord;
void main()
{
gl_Position = vertexPos;
textureCoord = texturePos.xy;
}
`;
const fragmentShaderScript = `
precision highp float;
varying highp vec2 textureCoord;
uniform sampler2D ySampler;
uniform sampler2D uSampler;
uniform sampler2D vSampler;
uniform sampler2D aSampler;
uniform mat4 YUV2RGB;
void main(void) {
highp float y = texture2D(ySampler, textureCoord).r;
highp float u = texture2D(uSampler, textureCoord).r;
highp float v = texture2D(vSampler, textureCoord).r;
highp float a = texture2D(aSampler, vec2(textureCoord.x, 1. - textureCoord.y)).r;
gl_FragColor = vec4(y, u, v, a);
}
`;
But in the general case it's up to you to decide how to supply or generate texture coordinates; you can manipulate them any way you want. It's a bit like asking "how do I make the value 3?" — the answer could be 3, 1 + 1 + 1, 2 + 1, 5 - 2, 15 / 5, 300 / 100, 7 * 30 / 70, or 4 ** 2 - (3 * 4 + 1).
The most generic way to change texture coordinates is to multiply them by a matrix, just as you would with positions:
uniform mat3 texMatrix;
attribute vec2 texCoords;
...
vec2 newTexCoords = (texMatrix * vec3(texCoords, 1)).xy;
You then use the same kind of matrix math for texture coordinates that you'd use for positions.
This article gives some examples of manipulating texture coordinates with a texture matrix.
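As a concrete example (a sketch, not taken from the article), here is JavaScript that uploads a mat3 flipping the v coordinate, assuming a compiled program whose vertex shader declares uniform mat3 texMatrix and computes newTexCoords as shown above:

// Column-major data, the layout uniformMatrix3fv expects.
// texMatrix * vec3(u, v, 1) evaluates to (u, 1 - v)
const flipY = new Float32Array([
  1,  0, 0,   // column 0
  0, -1, 0,   // column 1: negate v
  0,  1, 1,   // column 2: translate v by +1, back into the 0..1 range
]);
gl.useProgram(program);
gl.uniformMatrix3fv(gl.getUniformLocation(program, 'texMatrix'), false, flipY);

The same mechanism handles rotation and scaling: build the mat3 from whatever translate/rotate/scale product you need and upload it, without touching the shader again.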
Related
I have been trying to program a basic WebGL spotlight shader, but no matter how hard I try, I cannot get the spotlight's position to be relative to the world. The code I am currently using is below. I have tried almost every coordinate frame I can to get this working, but no matter what I do, I only get partially correct results.
For example, if I switch to world coordinates, the spotlight's position will be correct, but it will only reflect off one object; if I use view space, the light will work, but its position is relative to the camera.
In its current state, the spotlight seems to be relative to each object's frame. (Not sure why.) Any help in solving this issue is greatly appreciated.
Vertex Shader:
attribute vec4 vPosition;
attribute vec4 vNormal;
attribute vec2 vTexCoord;
varying vec3 L, E, N, D;
varying vec2 fTexCoord;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 NormalMatrix;
uniform mat4 View;
uniform vec4 lightPosition;
uniform vec4 lightDirection;
uniform vec3 Eye;
void main() {
    L = (modelViewMatrix * lightPosition).xyz; // light position in eye coordinates
    E = (modelViewMatrix * vPosition).xyz;     // vertex position in eye coordinates
    // normal in eye coordinates: transpose(inverse(modelViewMatrix)) * vNormal
    N = (NormalMatrix * vNormal).xyz;
    D = lightDirection.xyz; // light direction
    fTexCoord = vTexCoord;
    gl_Position = projectionMatrix * modelViewMatrix * vPosition;
}
Fragment Shader:
precision mediump float;
uniform vec4 lDiffuseColor;
uniform vec4 lSpecular;
uniform vec4 lAmbientColor;
uniform float lShininess;
varying vec3 L, E, N, D;
const float lExponent = 2.0;
const float lCutoff = 0.867;
vec3 lWeight = vec3(0, 0, 0);
void main() {
    vec3 vtoLS = normalize(L - E); // vector to light source from vertex
    float Ks = pow(max(dot(normalize(N), vtoLS), 0.0), lShininess);
    vec3 specular = Ks * lSpecular.xyz;
    float diffuseWeight = max(dot(normalize(N), -vtoLS), 0.0);
    vec3 diffuse = diffuseWeight * lDiffuseColor.xyz;
    if (diffuseWeight > 0.0) {
        float lEffect = dot(normalize(D), normalize(-vtoLS));
        if (lEffect > lCutoff) {
            lEffect = pow(lEffect, Ks);
            vec3 reflection = normalize(reflect(-vtoLS, normalize(N)));
            vec3 vEye = -normalize(E);
            float rdotv = max(dot(reflection, vEye), 0.0);
            float specularWeight = pow(rdotv, lShininess);
            lWeight = (lEffect * diffuse.xyz + lEffect * specular.xyz) + vec3(0.5, 0, 0);
        }
    }
    lWeight += lAmbientColor.xyz;
    gl_FragColor = vec4(lWeight.rgb, 1);
}
Current Output: http://sta.sh/012uh5hwwlse
I have used the GPUImage framework for a blur effect similar to that of the Instagram application: I made a view for picking a picture from the photo library, and then I apply an effect to it.
One of the effects is a selective blur, in which only a small part of the image is clear and the rest is blurred. The GPUImageGaussianSelectiveBlurFilter keeps a circular region of the image unblurred.
How can I alter this to make the sharp region rectangular instead?
Because Gill's answer isn't exactly correct, and since this seems to be getting asked over and over, I'll clarify my comment above.
The fragment shader for the selective blur by default has the following code:
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform lowp float excludeCircleRadius;
uniform lowp vec2 excludeCirclePoint;
uniform lowp float excludeBlurSize;
uniform highp float aspectRatio;
void main()
{
    lowp vec4 sharpImageColor = texture2D(inputImageTexture, textureCoordinate);
    lowp vec4 blurredImageColor = texture2D(inputImageTexture2, textureCoordinate2);
    highp vec2 textureCoordinateToUse = vec2(textureCoordinate2.x, (textureCoordinate2.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
    highp float distanceFromCenter = distance(excludeCirclePoint, textureCoordinateToUse);
    gl_FragColor = mix(sharpImageColor, blurredImageColor, smoothstep(excludeCircleRadius - excludeBlurSize, excludeCircleRadius, distanceFromCenter));
}
This fragment shader takes in a pixel color value from both the original sharp image and a Gaussian blurred version of the image. It then blends between these based on the logic of the last three lines.
The first two of these lines calculate the distance from the center coordinate you specify ((0.5, 0.5) in normalized coordinates by default, the dead center of the image) to the current pixel's coordinate, correcting for the image's aspect ratio. The last line uses the smoothstep() GLSL function to smoothly interpolate between 0 and 1 as the distance from the center crosses the band between two thresholds: the edge of the inner clear circle and the edge of the outer, fully blurred circle. The mix() function then takes the smoothstep() output and fades between the sharp and blurred pixel colors to produce the final output.
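To get a concrete feel for that blend, take illustrative values (not the filter's defaults) of excludeCircleRadius = 0.25 and excludeBlurSize = 0.10:

smoothstep(0.15, 0.25, 0.10) == 0.0  // inside the clear circle: fully sharp
smoothstep(0.15, 0.25, 0.20) == 0.5  // halfway through the transition band
smoothstep(0.15, 0.25, 0.30) == 1.0  // outside the band: fully blurred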
If you just want to modify this to produce a square shape instead of the circular one, you need to adjust the two center lines in the fragment shader to base the distance on linear X or Y coordinates, not a Pythagorean distance from the center point. To do this, change the shader to read:
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform lowp float excludeCircleRadius;
uniform lowp vec2 excludeCirclePoint;
uniform lowp float excludeBlurSize;
uniform highp float aspectRatio;
void main()
{
    lowp vec4 sharpImageColor = texture2D(inputImageTexture, textureCoordinate);
    lowp vec4 blurredImageColor = texture2D(inputImageTexture2, textureCoordinate2);
    highp vec2 textureCoordinateToUse = vec2(textureCoordinate2.x, (textureCoordinate2.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
    textureCoordinateToUse = abs(excludeCirclePoint - textureCoordinateToUse);
    highp float distanceFromCenter = max(textureCoordinateToUse.x, textureCoordinateToUse.y);
    gl_FragColor = mix(sharpImageColor, blurredImageColor, smoothstep(excludeCircleRadius - excludeBlurSize, excludeCircleRadius, distanceFromCenter));
}
The lines that Gill mentions are just input parameters for the filter, and don't control its circularity at all.
I leave modifying this further to produce a generic rectangular shape as an exercise for the reader, but this should provide a basis for how you could do this and a bit more explanation of what the lines in this shader do.
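As a starting point for that exercise, here is one possible sketch of a non-square rectangle. The excludeHalfSize uniform is hypothetical, not part of GPUImage:

// Hypothetical uniform, not part of GPUImage: half-width and half-height
// of the sharp rectangle in the aspect-corrected coordinate space.
uniform lowp vec2 excludeHalfSize; // e.g. vec2(0.3, 0.15)

// Inside main(), replacing the distance calculation:
textureCoordinateToUse = abs(excludeCirclePoint - textureCoordinateToUse) / excludeHalfSize;
highp float distanceFromCenter = max(textureCoordinateToUse.x, textureCoordinateToUse.y);
// distanceFromCenter is now exactly 1.0 on the rectangle's edge
gl_FragColor = mix(sharpImageColor, blurredImageColor, smoothstep(1.0 - excludeBlurSize, 1.0, distanceFromCenter));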
Did it ... the code for the rectangular effect is just these two lines:
blurFilter = [[GPUImageGaussianSelectiveBlurFilter alloc] init];
[(GPUImageGaussianSelectiveBlurFilter*)blurFilter setExcludeCircleRadius:80.0/320.0];
[(GPUImageGaussianSelectiveBlurFilter*)blurFilter setExcludeCirclePoint:CGPointMake(0.5f, 0.5f)];
// [(GPUImageGaussianSelectiveBlurFilter*)blurFilter setBlurSize:0.0f];
// [(GPUImageGaussianSelectiveBlurFilter*)blurFilter setAspectRatio:0.0f];
I am finding that in my fragment shader, these 2 statements give identical output:
// #1
// pos is set from gl_Position in vertex shader
highp vec2 texc = ((pos.xy / pos.w) + 1.0) / 2.0;
// #2 - equivalent?
highp vec2 texc2 = gl_FragCoord.xy/uWinDims.xy;
If this is correct, could you please explain the math? I understand #2, which is what I came up with, but saw #1 in a paper. Is this an NDC (normalized device coordinate) calculation?
The context is that I am using the texture coordinates with an FBO the same size as the viewport. It's all working, but I'd like to understand the math.
Relevant portion of vertex shader:
attribute vec4 position;
uniform mat4 modelViewProjectionMatrix;
varying lowp vec4 vColor;
// transformed position
varying highp vec4 pos;
void main()
{
gl_Position = modelViewProjectionMatrix * position;
// for fragment shader
pos = gl_Position;
vColor = aColor;
}
Relevant portion of fragment shader:
// transformed position - from vsh
varying highp vec4 pos;
// viewport dimensions
uniform highp vec2 uWinDims;
void main()
{
highp vec2 texc = ((pos.xy / pos.w) + 1.0) / 2.0;
// equivalent?
highp vec2 texc2 = gl_FragCoord.xy/uWinDims.xy;
...
}
(pos.xy / pos.w) is the coordinate value in normalized device coordinates (NDC). This value ranges from -1 to 1 in each dimension.
(NDC + 1.0)/2.0 changes the range from (-1 to 1) to (0 to 1) (0 on the left of the screen, and 1 on the right, similar for top/bottom).
Alternatively, gl_FragCoord gives the coordinate in pixels, so it ranges from (0 to width) and (0 to height).
Dividing this value by width and height (uWinDims), gives the position again from 0 on the left side of the screen, to 1 on the right side.
So yes, they are equivalent.
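Spelling out the math (assuming the viewport origin is (0, 0) and it spans the full window):

// The viewport transform that produces window coordinates from NDC:
//   gl_FragCoord.xy = uWinDims * (NDC * 0.5 + 0.5)
//                   = uWinDims * ((pos.xy / pos.w) + 1.0) / 2.0
// Divide both sides by uWinDims:
//   gl_FragCoord.xy / uWinDims = ((pos.xy / pos.w) + 1.0) / 2.0
// which is exactly texc, so texc2 == texc.

Both expressions are evaluated at the same fragment sample point, so the half-pixel offset built into gl_FragCoord is matched by the interpolated varying.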
In my simple 2D game I see a 2x framerate drop when using the ES 2.0 implementation for drawing. Is that expected, or should ES 2.0 be faster if used properly?
P.S. If you are interested in the details: I use very simple shaders.
vertex program:
uniform vec2 u_xyscale;
uniform vec2 u_st_to_uv;
attribute vec2 a_vertex;
attribute vec2 a_texcoord;
attribute vec4 a_diffuse;
varying vec4 v_diffuse;
varying vec2 v_texcoord;
void main(void)
{
    v_diffuse = a_diffuse;
    // convert texture coordinates from ST space to UV.
    v_texcoord = a_texcoord * u_st_to_uv;
    // transform XY coordinates from screen space to clip space.
    gl_Position.xy = a_vertex * u_xyscale + vec2( -1.0, 1.0 );
    gl_Position.zw = vec2( 0.0, 1.0 );
}
fragment program:
precision mediump float;
uniform sampler2D t_bitmap;
varying lowp vec4 v_diffuse;
varying vec2 v_texcoord;
void main(void)
{
    vec4 color = texture2D( t_bitmap, v_texcoord );
    gl_FragColor = v_diffuse * color;
}
"color" is a mediump variable, due to the default precision that you specified. This forces the implementation to convert the lowp sampled result to mediump. It also requires that diffuse be converted to mediump to perform the multiplication.
You can fix this by declaring "color" as lowp.
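For concreteness, the fragment program with that one change applied:

precision mediump float;
uniform sampler2D t_bitmap;
varying lowp vec4 v_diffuse;
varying vec2 v_texcoord;
void main(void)
{
    // lowp matches both the sampled result and v_diffuse, so the
    // multiply can stay in low precision with no conversions
    lowp vec4 color = texture2D( t_bitmap, v_texcoord );
    gl_FragColor = v_diffuse * color;
}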