I am using WebGL with JavaScript. Is there a way to render without antialiasing? I need every pixel to be a solid color.
My current fragment shader is very simple:
precision mediump float;
varying highp vec3 lighting;
void main(void)
{
    gl_FragColor = vec4(lighting, 1.0);
}
UPDATE
Based on Moormanly's answer, I have achieved the following by setting the antialias attribute in the getContext call:
Default aliasing:
Antialias = false:
You can set attributes when creating a context using the (optional) second parameter of the getContext method.
Code:
var context = canvas.getContext('webgl', {antialias: false});
Check out chapter 5.2 of the WebGL specification for more information.
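Note that the antialias attribute is only a request, and the browser may not honour it. A minimal sketch of how you could check what was actually created, using the standard getContextAttributes method:
var canvas = document.querySelector('canvas');
var context = canvas.getContext('webgl', {antialias: false});
// getContextAttributes reports the attributes of the context you actually got
console.log(context.getContextAttributes().antialias); // false if the request was honoured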
I am new to WebGL. I am trying to set a time uniform so I can change the output of my fragment shader as time passes. I thought this would be fairly simple to implement, but I am struggling. I am aware that these two methods are probably involved:
https://developer.mozilla.org/en-US/docs/Web/API/WebGLUniformLocation
https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/uniform
Is some kind of rendering loop required?
Any help here would be really appreciated, thanks.
This is my current solution...
In my webgl JS file I create a time uniform, then set it every animation frame with an updated value.
// look up the location of the time uniform (after the program has been linked)
var timeLocation = gl.getUniformLocation(program, "u_time");

function renderLoop(timeStamp) {
    // timeStamp is in milliseconds; convert to seconds for the shader
    gl.uniform1f(timeLocation, timeStamp / 1000.0);
    gl.drawArrays(...);
    // schedule the next frame
    window.requestAnimationFrame(renderLoop);
}

// start the loop
window.requestAnimationFrame(renderLoop);
Then in my fragment shader:
precision mediump float;
uniform float u_time;
void main() {
    gl_FragColor = vec4(abs(sin(u_time)), 0.0, 0.0, 1.0);
}
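One detail worth making explicit: gl.uniform1f writes to whichever program is currently in use, so gl.useProgram must have been called before the loop starts. A minimal sketch, assuming program is the linked program object from the setup code:
gl.useProgram(program);  // uniforms are set on the program currently in use
var timeLocation = gl.getUniformLocation(program, "u_time");
window.requestAnimationFrame(renderLoop);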
I've created a subclass of GLKViewController to render some waveforms using GLKBaseEffect which all works fine.
I want to use my own vertex and fragment shaders, so I used the boilerplate code from the default OpenGL Game project. Both shaders compile, link, and are usable. I can alter the color in the vertex shader without issue.
The problem I'm having is passing my own uniforms into the fragment shader. I can pass one to the vertex shader and use it there, but when I declare the same variable in the fragment shader, it fails to compile.
Code:
// Vertex shader
attribute vec4 position;
uniform float amplitudeMax;
void main() {
    gl_Position = position + amplitudeMax; // This works and offsets the drawing
}
// Fragment shader
uniform float amplitudeMax; // Fails to compile
//uniform float amplitudeMax; // Compiles fine
void main()
{
    gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0);
}
If you're curious, here is how I set up the uniforms and shaders:
func loadShaders() -> Bool {
    // Copied from default OpenGL Game project
    // link program ...
    uniforms[UNIFORM_AMPLITUDE_MAX] = glGetUniformLocation(program, "amplitudeMax")
    // release shaders ...
}

// The draw loop
func render() {
    configureShaders()
    // ... draw
}

func configureShaders() {
    glUseProgram(program)
    let max = AudioManager.sharedInstance.bufferManager.currentPower
    glUniform1f(uniforms[UNIFORM_AMPLITUDE_MAX], max)
}
I had another idea of passing the value from the vertex shader to the fragment shader using a varying float. Again, I can declare and use the variable in the vertex shader, but declaring it in the fragment shader causes it to fail to compile.
Edit: --------------------------------------------------
Through trial and error I found that if (in my fragment shader) I qualify my uniform declaration with a precision, it works for both uniforms and varyings (I can pass values from the vertex shader to the fragment shader).
uniform lowp float amplitudeMax;
void main() {
    gl_FragColor = vec4(amplitudeMax, 0.0, 1.0, 1.0);
}
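That matches how GLSL ES works: fragment shaders have no default precision for float, so every float declaration needs a precision qualifier unless you declare a default one at the top of the shader. A sketch of the equivalent fix using a default precision statement:
precision mediump float;      // default precision for all floats in this shader

uniform float amplitudeMax;   // now compiles without a per-declaration qualifier

void main() {
    gl_FragColor = vec4(amplitudeMax, 0.0, 1.0, 1.0);
}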
My code works, but I am wondering why!
I have 2 textures:
uniform sampler2D uSampler0;
uniform sampler2D uSampler1;
void main() {
    vec4 color0 = texture2D(uSampler0, vTexCoord);
    vec4 color1 = texture2D(uSampler1, vTexCoord);
    gl_FragColor = color0 * color1;
}
and my JS code:
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D,my_texture_ZERO);
gl.uniform1i(program.uSampler0,0);
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D,my_texture_ONE);
gl.uniform1i(program.uSampler1);
// uncomment one of the 3, it works.
// gl.bindTexture(gl.TEXTURE_2D, my_texture_ZERO);
// gl.bindTexture(gl.TEXTURE_2D, my_texture_ONE);
// gl.bindTexture(gl.TEXTURE_2D, texture_FOR_PURPOSE_ONLY);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
Before gl.drawArrays, I have tested the 3 bindings; each one works!
So I do not understand the real underlying pipeline.
Thanks for some explanations.
This line is invalid:
gl.uniform1i(program.uSampler1);
You're not passing a value to the sampler uniform.
The way WebGL texture units work is that they are global state inside WebGL.
gl.activeTexture sets the texture unit that all other texture commands affect. For each texture unit there are 2 bind points, TEXTURE_2D and TEXTURE_CUBE_MAP.
You can think of it like this
gl = {
  activeTextureUnit: 0,
  textureUnits: [
    { TEXTURE_2D: null, TEXTURE_CUBE_MAP: null, },
    { TEXTURE_2D: null, TEXTURE_CUBE_MAP: null, },
    { TEXTURE_2D: null, TEXTURE_CUBE_MAP: null, },
    ...
  ],
};
gl.activeTexture just does this
gl.activeTexture = function(unit) {
  gl.activeTextureUnit = unit - gl.TEXTURE0;
};
gl.bindTexture does this
gl.bindTexture = function(bindPoint, texture) {
  gl.textureUnits[gl.activeTextureUnit][bindPoint] = texture;
};
gl.texImage2D and gl.texParameteri look up which texture to work with like this
gl.texImage2D = function(bindPoint, .....) {
  var texture = gl.textureUnits[gl.activeTextureUnit][bindPoint];
  // now do something with texture
};
In other words, inside WebGL there is a global array of texture units. gl.activeTexture and gl.bindTexture manipulate that array.
gl.texXXX manipulate the textures themselves but they reference the textures indirectly through that array.
gl.uniform1i(someSamplerLocation, unitNumber) sets the shader's uniform to look at a particular index in that array of texture units.
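Applied to the code in the question, the fix is to pass the texture unit index as the second argument of gl.uniform1i (a sketch, assuming program.uSampler0 and program.uSampler1 hold the uniform locations returned by getUniformLocation, as in the question):
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, my_texture_ZERO);
gl.uniform1i(program.uSampler0, 0);   // uSampler0 reads from texture unit 0

gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, my_texture_ONE);
gl.uniform1i(program.uSampler1, 1);   // uSampler1 reads from texture unit 1

gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);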
It's working correctly because in the presented code you are sending the appropriate uniforms for the samplers.
The first texture unit was made active by calling gl.activeTexture(gl.TEXTURE0) and the first texture was bound afterwards. Then a switch was made to unit 1 and the second texture was bound there.
At that point there were two separate textures bound, one in each unit.
At the end, these unit indices were passed as the uniforms for the samplers, which is how you indicate which texture a sampler should read from: in this case passing 0 (corresponding to the gl.TEXTURE0 unit) to the first uniform, and analogously for the second uniform.
Probably even without uncommenting those lines, things should work.
I'm writing an iOS app that allows free-style drawing (using a finger) and drawing an image on screen. I use OpenGL ES to implement it. I have 2 functions: one draws free style, the other draws a texture.
--- Code drawing free style
- (void)drawFreeStyle:(NSMutableArray *)pointArray {
    //Prepare vertex data
    .....
    // Load data to the Vertex Buffer Object
    glBindBuffer(GL_ARRAY_BUFFER, vboId);
    glBufferData(GL_ARRAY_BUFFER, vertexCount*2*sizeof(GLfloat), vertexBuffer, GL_DYNAMIC_DRAW);

    glEnableVertexAttribArray(ATTRIB_VERTEX);
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, GL_FALSE, 0, 0);

    GLuint a_ver_flag_drawing_type = glGetAttribLocation(program[PROGRAM_POINT].id, "a_drawingType");
    glVertexAttrib1f(a_ver_flag_drawing_type, 0.0f);
    GLuint u_fra_flag_drawing_type = glGetUniformLocation(program[PROGRAM_POINT].id, "v_drawing_type");
    glUniform1f(u_fra_flag_drawing_type, 0.0);

    glUseProgram(program[PROGRAM_POINT].id);
    glDrawArrays(GL_POINTS, 0, (int)vertexCount);

    // Display the buffer
    glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];
}
--- Code drawing texture
- (void)drawTexture:(UIImage *)image atRect:(CGRect)rect {
    GLuint a_ver_flag_drawing_type = glGetAttribLocation(program[PROGRAM_POINT].id, "a_drawingType");
    GLuint u_fra_flag_drawing_type = glGetUniformLocation(program[PROGRAM_POINT].id, "v_drawing_type");
    GLuint a_position_location = glGetAttribLocation(program[PROGRAM_POINT].id, "a_Position");
    GLuint a_texture_coordinates_location = glGetAttribLocation(program[PROGRAM_POINT].id, "a_TextureCoordinates");
    GLuint u_texture_unit_location = glGetUniformLocation(program[PROGRAM_POINT].id, "u_TextureUnit");

    glUseProgram(PROGRAM_POINT);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texName);
    glUniform1i(u_texture_unit_location, 0);
    glUniform1f(u_fra_flag_drawing_type, 1.0);

    const float textrect[] = {-1.0f, -1.0f, 0.0f, 0.0f,
                              -1.0f,  1.0f, 0.0f, 1.0f,
                               1.0f, -1.0f, 1.0f, 0.0f,
                               1.0f,  1.0f, 1.0f, 1.0f};

    glBindBuffer(GL_ARRAY_BUFFER, vboId);
    glBufferData(GL_ARRAY_BUFFER, sizeof(textrect), textrect, GL_STATIC_DRAW);

    glVertexAttrib1f(a_ver_flag_drawing_type, 1.0f);
    glVertexAttribPointer(a_position_location, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(0));
    glVertexAttribPointer(a_texture_coordinates_location, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(2 * sizeof(float)));
    glEnableVertexAttribArray(a_ver_flag_drawing_type);
    glEnableVertexAttribArray(a_position_location);
    glEnableVertexAttribArray(a_texture_coordinates_location);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];
}
Notice the 2 variables a_ver_flag_drawing_type (attribute) and u_fra_flag_drawing_type (uniform). They're used to set flags in the vertex shader and fragment shader to determine whether to draw free style or a texture.
--- Vertex shader
//Flag
attribute lowp float a_drawingType;
//For drawing
attribute vec4 inVertex;
uniform mat4 MVP;
uniform float pointSize;
uniform lowp vec4 vertexColor;
varying lowp vec4 color;
//For texture
attribute vec4 a_Position;
attribute vec2 a_TextureCoordinates;
varying vec2 v_TextureCoordinates;
void main()
{
    if (abs(a_drawingType - 1.0) < 0.0001) {
        //Draw texture
        v_TextureCoordinates = a_TextureCoordinates;
        gl_Position = a_Position;
    } else {
        //Draw free style
        gl_Position = MVP * inVertex;
        gl_PointSize = pointSize;
        color = vertexColor;
    }
}
--- Fragment shader
precision mediump float;
uniform sampler2D texture;
varying lowp vec4 color;
uniform sampler2D u_TextureUnit;
varying vec2 v_TextureCoordinates;
uniform lowp float v_drawing_type;
void main()
{
    if (abs(v_drawing_type - 1.0) < 0.0001) {
        //Draw texture
        gl_FragColor = texture2D(u_TextureUnit, v_TextureCoordinates);
    } else {
        //Drawing free style
        gl_FragColor = color * texture2D(texture, gl_PointCoord);
    }
}
My idea is to set these flags from the drawing code at draw time. The attribute a_drawingType is used in the vertex shader and the uniform v_drawing_type is used in the fragment shader; depending on these flags, the shaders know whether to draw free style or a texture.
If I run each mode independently (when drawing free style, I comment out the texture-drawing code in the vertex and fragment shader files, and vice versa), it draws as I want. But if I combine them, it not only fails to draw but also crashes the app at:
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
I'm new to OpenGL ES and the GLSL language, so I'm not sure whether my idea of setting flags like this is right or wrong. Can anyone help me?
So why don't you just build 2 separate shader programs and call useProgram() on one of them, instead of sending flag values to GL and making an expensive conditional branch in the vertex and especially the fragment shader?
Attributes are per-vertex, uniforms are per-shader program.
You might see a crash if you supplied only one value for an attribute then asked OpenGL to draw, say, 100 points. In that case OpenGL is going to do an out-of-bounds array access when it attempts to fetch the attributes for vertices 2–100.
It'd be more normal to use two separate programs. Conditionals are very expensive on GPUs because GPUs try to maintain one program counter while processing multiple fragments. They're SIMD units. Any time the evaluation of an if differs between two neighbouring fragments you're probably reducing parallelism. The idiom is therefore not to use if statements where possible.
If you switch to a uniform there's a good chance you won't lose any performance through absent parallelism because the paths will never diverge. Your GLSL compiler may even be smart enough to recompile the shader every time you reset the constant, effectively performing constant folding. But you'll be paying for the recompilation every time.
If you just had two programs and switched between them you wouldn't pay the recompilation fee.
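A minimal sketch of that approach, assuming two program objects are compiled and linked at setup time (freestyleProgram, textureProgram, and linkProgram are hypothetical names, not part of the code in the question):
// at setup time: build one program per drawing mode
GLuint freestyleProgram = linkProgram(freestyleVertexSrc, freestyleFragmentSrc);
GLuint textureProgram   = linkProgram(textureVertexSrc,   textureFragmentSrc);

// free-style pass
glUseProgram(freestyleProgram);
glDrawArrays(GL_POINTS, 0, (int)vertexCount);

// texture pass
glUseProgram(textureProgram);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);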
It's not completely relevant in this case but e.g. you'd also often see code like:
if (abs(v_drawing_type - 1.0) < 0.0001) {
//Draw texture
gl_FragColor = texture2D(u_TextureUnit, v_TextureCoordinates);
} else {
//Drawing free style
gl_FragColor = color * texture2D(texture, gl_PointCoord);
}
Written more like:
gl_FragColor = mix(color * texture2D(texture, gl_PointCoord),
                   texture2D(u_TextureUnit, v_TextureCoordinates),
                   v_drawing_type);
... because that avoids the conditional entirely. In this case you'd want to adjust the texture2D calls so that both were identical, and probably factor the common sample out of the mix call, to ensure you don't end up always doing two samples instead of one.
The previously posted answers explain certain pieces, but an important part is missing in all of them. It is perfectly legal to specify a single attribute value that is applied to all vertices in a draw call. What you did here was basically valid:
glVertexAttrib1f(a_ver_flag_drawing_type, 1.0f);
The direct problem was this call that followed shortly after:
glEnableVertexAttribArray(a_ver_flag_drawing_type);
There are two main ways to specify the value of a vertex attribute:
1. Use the current value, as specified in your case by glVertexAttrib1f().
2. Use values from an array, as specified with glVertexAttribPointer().
You select which of the two options is used for any given attribute by enabling/disabling the array, which is done by calling glEnableVertexAttribArray()/glDisableVertexAttribArray().
In the posted code, the vertex attribute was specified as only a current value, but the attribute was then enabled to fetch from an array with glEnableVertexAttribArray(). This conflict caused the crash, because the attribute values would have been fetched from an array that was never specified. To use the specified current value, the call simply has to be changed to:
glDisableVertexAttribArray(a_ver_flag_drawing_type);
Or, if the array was never enabled, the call could be left out completely. But just in case another part of the code might have enabled it, it's safer to disable it explicitly.
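Side by side, the two modes look like this (a sketch reusing the attribute location from the question):
// Option 1: a single constant value, used for every vertex
glDisableVertexAttribArray(a_ver_flag_drawing_type);
glVertexAttrib1f(a_ver_flag_drawing_type, 1.0f);

// Option 2: one value per vertex, fetched from the currently bound array buffer
glEnableVertexAttribArray(a_ver_flag_drawing_type);
glVertexAttribPointer(a_ver_flag_drawing_type, 1, GL_FLOAT, GL_FALSE, 0, 0);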
As a side note, the following statement sequence from the first draw function also looks suspicious: glUniform*() sets a value on the currently active program, so this will set the value on the previously active program, not the one specified in the second statement. If you want to set the value on the new program, the order of the statements has to be reversed.
glUniform1f(u_fra_flag_drawing_type, 0.0);
glUseProgram(program[PROGRAM_POINT].id);
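That is, the corrected order would be:
glUseProgram(program[PROGRAM_POINT].id);
glUniform1f(u_fra_flag_drawing_type, 0.0);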
On the whole thing, I think there are at least two approaches that are better than the one chosen:
Use separate shader programs for the two different types of rendering. While using a single program with switchable behavior is a valid option, it looks artificial, and using separate programs seems much cleaner.
If you want to stick with a single program, use a single uniform to do the switching, instead of using an attribute and a uniform. You could use the one you already have, but you might just as well make it a boolean while you're at it. So in both the vertex and fragment shader, use the same uniform declaration:
uniform bool v_use_texture;
Then the tests become:
if (v_use_texture) {
Getting the uniform location is the same as before, and you can set the value, which will then be available in both the vertex and fragment shader, with one of:
glUniform1i(loc, 0);
glUniform1i(loc, 1);
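Put together, the switch could look like this (a sketch, assuming the program from the question is already in use and v_use_texture is the uniform declared above):
GLint useTextureLoc = glGetUniformLocation(program[PROGRAM_POINT].id, "v_use_texture");

// free-style pass
glUniform1i(useTextureLoc, 0);
glDrawArrays(GL_POINTS, 0, (int)vertexCount);

// texture pass
glUniform1i(useTextureLoc, 1);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);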
I found the problem: just change the variable a_drawingType from an attribute to a uniform, then use glGetUniformLocation and glUniform1f to get the location and pass the value. I think an attribute has to be supplied for every vertex, so use a uniform to pass the value once.