In my simple 2D game I see a 2x framerate drop when using an OpenGL ES 2.0 implementation for drawing. Is that expected, or should ES 2.0 be faster if used properly?
P.S. If you are interested in the details, I use very simple shaders:
vertex program:
uniform vec2 u_xyscale;
uniform vec2 u_st_to_uv;
attribute vec2 a_vertex;
attribute vec2 a_texcoord;
attribute vec4 a_diffuse;
varying vec4 v_diffuse;
varying vec2 v_texcoord;
void main(void)
{
v_diffuse = a_diffuse;
// convert texture coordinates from ST space to UV.
v_texcoord = a_texcoord * u_st_to_uv;
// transform XY coordinates from screen space to clip space.
gl_Position.xy = a_vertex * u_xyscale + vec2( -1.0, 1.0 );
gl_Position.zw = vec2( 0.0, 1.0 );
}
fragment program:
precision mediump float;
uniform sampler2D t_bitmap;
varying lowp vec4 v_diffuse;
varying vec2 v_texcoord;
void main(void)
{
vec4 color = texture2D( t_bitmap, v_texcoord );
gl_FragColor = v_diffuse * color;
}
"color" is a mediump variable, due to the default precision that you specified. This forces the implementation to convert the lowp sampled result to mediump. It also requires that diffuse be converted to mediump to perform the multiplication.
You can fix this by declaring "color" as lowp.
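A minimal sketch of the fragment shader with that one change applied (everything else as in your code):
precision mediump float;
uniform sampler2D t_bitmap;
varying lowp vec4 v_diffuse;
varying vec2 v_texcoord;
void main(void)
{
// "color" is now lowp, so neither the sampled result nor v_diffuse needs converting
lowp vec4 color = texture2D( t_bitmap, v_texcoord );
gl_FragColor = v_diffuse * color;
}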
I'm using WebGL to bind a matte image to a regular image. The matte image gives the regular image transparency.
I was able to successfully upload both images as textures, but when I run the program my color texture and matte texture don't line up. As you can see in the picture below, the black should be all gone, but instead it seems scaled and flipped.
I find that very strange as both the "color" channel and the "alpha" channel are using the same texture.
My question is: how can I rotate/resize the alpha channel within the shader? Or will I have to make a new texture coordinate plane to map the alpha channel onto?
For reference this is my code below:
vertexShaderScript = [
'attribute vec4 vertexPos;',
'attribute vec4 texturePos;',
'varying vec2 textureCoord;',
'void main()',
'{',
' gl_Position = vertexPos;',
' textureCoord = texturePos.xy;',
'}'
].join('\n');
fragmentShaderScript = [
'precision highp float;',
'varying highp vec2 textureCoord;',
'uniform sampler2D ySampler;',
'uniform sampler2D uSampler;',
'uniform sampler2D vSampler;',
'uniform sampler2D aSampler;',
'uniform mat4 YUV2RGB;',
'void main(void) {',
' highp float y = texture2D(ySampler, textureCoord).r;',
' highp float u = texture2D(uSampler, textureCoord).r;',
' highp float v = texture2D(vSampler, textureCoord).r;',
' highp float a = texture2D(aSampler, textureCoord).r;',
' gl_FragColor = vec4(y, u, v, a);',
'}'
].join('\n');
If all you want to do is flip the alpha texture coordinate, you can do it in the fragment shader:
const vertexShaderScript = `
attribute vec4 vertexPos;
attribute vec4 texturePos;
varying vec2 textureCoord;
void main()
{
gl_Position = vertexPos;
textureCoord = texturePos.xy;
}
`;
const fragmentShaderScript = `
precision highp float;
varying highp vec2 textureCoord;
uniform sampler2D ySampler;
uniform sampler2D uSampler;
uniform sampler2D vSampler;
uniform sampler2D aSampler;
uniform mat4 YUV2RGB;
void main(void) {
highp float y = texture2D(ySampler, textureCoord).r;
highp float u = texture2D(uSampler, textureCoord).r;
highp float v = texture2D(vSampler, textureCoord).r;
highp float a = texture2D(aSampler, vec2(textureCoord.x, 1. - textureCoord.y)).r;
gl_FragColor = vec4(y, u, v, a);
}
`;
But in the general case it's up to you to decide how to supply or generate texture coordinates; you can manipulate them any way you want. It's kind of like asking how to make the value 3: I can answer 3, 1 + 1 + 1, 2 + 1, 5 - 2, 15 / 5, 300 / 100, 7 * 30 / 70, or 4 ** 2 - (3 * 4 + 1).
The most generic way to change texture coordinates is to multiply them by a matrix just like positions
uniform mat3 texMatrix;
attribute vec2 texCoords;
...
vec2 newTexCoords = (texMatrix * vec3(texCoords, 1)).xy;
Then you use the same kind of matrix math for texture coordinates that you'd use for positions.
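For example, here is a minimal sketch of a matrix value that reproduces the V-flip from the answer above (flipV is a hypothetical name; normally you'd compute the matrix on the CPU and upload it as the texMatrix uniform):
attribute vec2 texCoords;
...
// GLSL matrix constructors take columns: (1,0,0), (0,-1,0), (0,1,1),
// so (s, t, 1) maps to (s, 1.0 - t, 1)
mat3 flipV = mat3(
    1.0,  0.0, 0.0,
    0.0, -1.0, 0.0,
    0.0,  1.0, 1.0
);
vec2 newTexCoords = (flipV * vec3(texCoords, 1.0)).xy;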
This article gives some examples of manipulating texture coordinates with a texture matrix.
I have just included OpenGL ES 3.0 in my iOS app and it is working fine.
I have a working shader below:
#version 300 es
precision mediump float;
uniform sampler2D texSampler;
uniform float fExposure;
in vec2 fTexCoord;
in vec3 fColor;
out vec4 fragmentColor;
void main()
{
fragmentColor = texture(texSampler, fTexCoord) * vec4(fColor, 1.0) * fExposure;
}
Now, I want to use a sampler3D so I have:
#version 300 es
precision mediump float;
uniform sampler3D texSampler;
uniform float fExposure;
in vec3 fTexCoord;
in vec3 fColor;
out vec4 fragmentColor;
void main()
{
fragmentColor = texture(texSampler, fTexCoord) * vec4(fColor, 1.0) * fExposure;
}
and it doesn't compile. Also, I changed the vec2 texCoord to vec3 texCoord in the vertex shader.
Actually, sampler3D is not recognized, but as far as I know it exists in OpenGL ES 3.0.
Any ideas?
Similar to float, sampler3D does not have a default precision. Add this at the start of your fragment shader, where you also specify the default float precision:
precision mediump sampler3D;
Of course you can use lowp instead if that gives you sufficient precision.
The only sampler types that have a default precision in ES 3.0/3.1 are sampler2D and samplerCube (both default to lowp). For all others, the precision has to be specified either as a default precision, or as part of the variable declaration.
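Applied to the shader above, the preamble would then look like this (a sketch; mediump chosen to match your float default):
#version 300 es
precision mediump float;
precision mediump sampler3D; // required: sampler3D has no default precision
uniform sampler3D texSampler;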
I have been trying to program a basic WebGL spotlight shader, but no matter how hard I try, I cannot get the spotlight's position to be relative to the world. The code that I am currently using is below. I have tried almost every coordinate frame I can think of to get this working, but no matter what I do, I only get partially correct results.
For example, if I switch to world coordinates the spotlight's position will be correct, but it will only reflect off one object; and if I use view space the light works, but its position is relative to the camera.
In its current state, the spotlight seems to be relative to each object's frame. (Not sure why.) Any help in solving this issue is greatly appreciated.
Vertex Shader:
attribute vec4 vPosition;
attribute vec4 vNormal;
attribute vec2 vTexCoord;
varying vec3 L, E, N,D;
varying vec2 fTexCoord;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 NormalMatrix;
uniform mat4 View;
uniform vec4 lightPosition;
uniform vec4 lightDirection;
uniform vec3 Eye;
void main(){
L= (modelViewMatrix * lightPosition).xyz; //Light position in eye coordinates
E = (modelViewMatrix * vPosition).xyz; //Vertex position in eye coordinates.
//Normal in eye coordinates: Transpose(Inverse(modelViewMatrix)) * vNormal.
N=(NormalMatrix * vNormal).xyz;
D=lightDirection.xyz;//Light direction
fTexCoord=vTexCoord;
gl_Position = projectionMatrix * modelViewMatrix * vPosition;
}
Fragment Shader:
precision mediump float;
uniform vec4 lDiffuseColor;
uniform vec4 lSpecular;
uniform vec4 lAmbientColor;
uniform float lShininess;
varying vec3 L,E,N,D;
const float lExponent=2.0;
const float lCutoff=0.867;
vec3 lWeight=vec3(0,0,0);
void main(){
    vec3 vtoLS = normalize(L - E); //Vector to light source from vertex.
    float Ks = pow(max(dot(normalize(N), vtoLS), 0.0), lShininess);
    vec3 specular = Ks * lSpecular.xyz;
    float diffuseWeight = max(dot(normalize(N), -vtoLS), 0.0);
    vec3 diffuse = diffuseWeight * lDiffuseColor.xyz;
    if(diffuseWeight > 0.0){
        float lEffect = dot(normalize(D), normalize(-vtoLS));
        if(lEffect > lCutoff){
            lEffect = pow(lEffect, Ks);
            vec3 reflection = normalize(reflect(-vtoLS, normalize(N)));
            vec3 vEye = -normalize(E);
            float rdotv = max(dot(reflection, vEye), 0.0);
            float specularWeight = pow(rdotv, lShininess);
            lWeight = (lEffect * diffuse.xyz + lEffect * specular.xyz) + vec3(0.5, 0.0, 0.0);
        }
    }
    lWeight += lAmbientColor.xyz;
    gl_FragColor = vec4(lWeight.rgb, 1.0);
}
Current Output: http://sta.sh/012uh5hwwlse
When I render my content onto an FBO with a texture bound to it and then render that texture to a fullscreen quad using a basic shader, the performance drops ridiculously.
For example:
Render to the screen directly (with the basic shader): [framerate screenshot]
Render to a texture first, then draw that texture with a fullscreen quad (same basic shader; normally this would be something like blur or bloom): [framerate screenshot]
Does anyone have an idea how to speed this up? The current performance is not usable. Also, I'm using GLKit for the basic OpenGL stuff.
You need to use precision qualifiers where they matter:
lowp - for colors, texture coordinates, normals, etc.
highp - for matrices and vertices/positions
Quick reference: check the ranges of the precisions on page 3, under "Qualifiers".
// BasicShader.vsh
precision mediump float;
attribute highp vec2 position;
attribute lowp vec2 texCoord;
attribute lowp vec4 color;
varying lowp vec2 textureCoord;
varying lowp vec4 textureColor;
uniform highp mat4 projectionMat;
uniform highp mat4 worldMat;
void main() {
highp mat4 worldProj = projectionMat * worldMat; // projection applied last
gl_Position = worldProj * vec4(position, 0.0, 1.0);
textureCoord = texCoord;
textureColor = color;
}
// BasicShader.fsh
precision mediump float;
varying lowp vec2 textureCoord;
varying lowp vec4 textureColor;
uniform sampler2D sampler;
void main() {
lowp vec4 Color = texture2D(sampler, textureCoord);
gl_FragColor = Color * textureColor;
}
This is very likely caused by ill-performing OpenGL ES API calls.
You should attach a real device and do an OpenGL ES frame capture. (It really needs a real device; the frame-capture option won't be available with a simulator.)
The frame capture will indicate memory and other warnings, along with suggestions to fix them, alongside each API call. Step through and fix each one; the performance should improve considerably.
Here are a couple of references to get this done:
Debugging an OpenGL ES frame
Xcode tools overview
I'm currently trying to write a shader that should include a simple point light in OpenGL ES 2.0, but it's not quite working.
I built my own small scene graph, and each object (currently only boxes) can have its own translation/rotation/scale; rendering works fine. Each of the boxes assigns its own modelView and normal matrix, and all of them use the same projection matrix.
For each object I pass the matrices and the light position to the shader as a uniform.
If the object does not rotate, the light works fine, but as soon as the object rotates, the light seems to rotate with it instead of staying at the same position.
Here is some code.
First, creating the matrices:
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);
GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 0.0f);
Each of the nodes computes its own transformation matrix containing the translation/rotation/scale and multiplies it with the modelViewMatrix:
modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, transformation);
This matrix is passed to the shader and after the object has been rendered the old matrix is recovered.
The normal matrix is calculated as follows:
GLKMatrix3 normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL);
Vertex-Shader:
attribute vec4 Position;
attribute vec2 TexCoordIn;
attribute vec3 Normal;
uniform mat4 modelViewProjectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat3 normalMatrix;
uniform vec3 lightPosition;
varying vec2 TexCoordOut;
varying vec3 n, PointToLight;
void main(void) {
gl_Position = modelViewProjectionMatrix * Position;
n = normalMatrix * Normal;
PointToLight = ((modelViewMatrix * vec4(lightPosition,1.0)) - (modelViewMatrix * Position)).xyz;
// Pass texCoord
TexCoordOut = TexCoordIn;
}
Fragment-Shader:
varying lowp vec2 TexCoordOut;
varying highp vec3 n, PointToLight;
uniform sampler2D Texture;
void main(void) {
gl_FragColor = texture2D(Texture, TexCoordOut);
highp vec3 nn = normalize(n);
highp vec3 L = normalize(PointToLight);
lowp float NdotL = clamp(dot(nn, L), -0.8, 1.0);
gl_FragColor *= (NdotL+1.)/2.;
}
I guess the PointToLight is computed wrong, but I can't figure out what's going wrong.
I finally figured out what went wrong.
Instead of multiplying the lightPosition with the modelViewMatrix, I just need to multiply it with the viewMatrix, which only contains the transformations of the camera and not the transformations for the box:
PointToLight = ((viewMatrix * vec4(lightPosition,1.0)) - (viewMatrix * modelMatrix * Position)).xyz;
Now it works fine.
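For reference, here is a minimal sketch of the corrected vertex shader, assuming the app now uploads viewMatrix and modelMatrix as separate uniforms (these uniform names are my own; everything else follows the shader above):
attribute vec4 Position;
attribute vec2 TexCoordIn;
attribute vec3 Normal;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;    // camera transform only (assumed uniform)
uniform mat4 modelMatrix;   // per-object translation/rotation/scale (assumed uniform)
uniform mat3 normalMatrix;
uniform vec3 lightPosition; // given in world space
varying vec2 TexCoordOut;
varying vec3 n, PointToLight;
void main(void) {
    vec4 eyePosition = viewMatrix * modelMatrix * Position;
    gl_Position = projectionMatrix * eyePosition;
    n = normalMatrix * Normal;
    // The light position only goes through the view transform, so it stays fixed
    // in the world instead of rotating with each object.
    PointToLight = (viewMatrix * vec4(lightPosition, 1.0)).xyz - eyePosition.xyz;
    TexCoordOut = TexCoordIn;
}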