WebGL render antialias

I am using WebGL with JavaScript. Is there a way to render without antialiasing? I need every pixel to be a solid color.
My current fragment shader is very simple:
precision mediump float;
varying highp vec3 lighting;

void main(void) {
    gl_FragColor = vec4(lighting, 1.0);
}
UPDATE
Based on @Moormanly's answer, I have achieved the following by setting the antialias attribute in getContext:
Default (antialiasing on):
With antialias: false:

You can set context attributes when creating a context using the (optional) second parameter of the getContext method.
Code:
var context = canvas.getContext('webgl', {antialias: false});
Check out section 5.2 of the WebGL specification for more information.
From forums.tigsource.com
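One caveat worth adding: the antialias flag is only a request, and an implementation is allowed to ignore it. A minimal sketch that verifies what the browser actually granted, using the standard getContextAttributes() call (the helper name is made up):

```javascript
// Sketch: request a non-antialiased context, then verify what was granted.
// The antialias flag is only a hint; check getContextAttributes() afterwards.
function createCrispContext(canvas) {
    var gl = canvas.getContext('webgl', { antialias: false });
    if (!gl) {
        return null; // WebGL unavailable
    }
    var attrs = gl.getContextAttributes();
    if (attrs.antialias) {
        console.warn('antialias: false was requested but not honored');
    }
    return gl;
}
```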

Related

How to bind a player position to a shader with melonJS?

I've created a GLSL shader as:
<script id="player-fragment-shader" type="x-shader/x-fragment">
precision highp float;
varying vec3 fNormal;
uniform vec2 resolution;

float circle(in vec2 _pos, in float _radius) {
    vec2 dist = _pos - vec2(0.5);
    return 1. - smoothstep(_radius - (_radius * 0.5),
                           _radius + (_radius * 0.5),
                           dot(dist, dist) * 20.0);
}

void main() {
    vec2 pos = gl_FragCoord.xy / resolution.xy;
    // Subtract the inverse of orange from white to get an orange glow
    vec3 color = vec3(circle(pos, 0.8)) - vec3(0.0, 0.25, 0.5);
    gl_FragColor = vec4(color, 0.8);
}
</script>
<script id="player-vertex-shader" type="x-shader/x-vertex">
precision highp float;
attribute vec3 position;
attribute vec3 normal;
uniform mat3 normalMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;

void main() {
    vec4 pos = modelViewMatrix * vec4(position, 0.25);
    gl_Position = projectionMatrix * pos;
}
</script>
I initialize it in the game load by running:
var vertShader = document.getElementById("player-vertex-shader").text;
var fragShader = document.getElementById("player-fragment-shader").text;
var shader = me.video.shader.createShader(me.video.renderer.compositor.gl, vertShader, fragShader);
This is done after video is initialized, and seems to compile the shader program and load fine. The shader also seems to work fine when loading it up in shaderfrog.com and other similar sites.
The problem is, it's leaving me with a totally black screen until I move the character and it redraws. I've read over the webgl fundamentals site, and it seems what I'm missing is binding the character position to the GL buffer.
How do I do this in melonJS?
Hi, I wrote the original WebGL compositor for melonJS.
tl;dr: Force the frame to redraw by returning true from your character's entity.update() method. (Or alternatively, increase the animation frame rate to match the game frame rate.)
Example overriding the update method:
update: function (dt) {
    this._super(me.Entity, "update", [dt]);
    return true;
}
This allows the update to continue operating normally (e.g. updating animation state), while returning true forces the frame to redraw every time.
It might help to understand how the compositor works, and how your shader is interacting with melonJS entities. This describes the inner workings of WebGL integration with melonJS. In short, there is no explicit step to bind positions to the shader. Positions are sent via the vertex attribute buffer, which is batched up (usually for an entire frame) and sent as one big array to WebGL.
The default compositor can be replaced if you need more control over building the vertex buffer, or if you want to do other custom rendering passes. This is done by passing a class reference to me.video.init in the options.compositor argument. The default is me.WebGLRenderer.Compositor:
me.video.init(width, height, {
    wrapper: "screen",
    renderer: me.video.WEBGL,
    compositor: me.WebGLRenderer.Compositor
});
During the draw loop, the default compositor adds a new quad element to the vertex attribute array buffer for every me.WebGLRenderer.drawImage call. This method emulates the DOM canvas method of the same name. The implementation is very simple; it just converts the arguments into a quad and calls the compositor's addQuad method. This is where the vertex attribute buffer is actually populated.
After the vertex attribute buffer has been completed, the flush method is called, which sends the vertex buffer to the GPU with gl.drawElements.
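As a rough illustration of the batching idea (this is not the actual melonJS source; the class and method names below are made up), a quad-batching compositor can be sketched like this:

```javascript
// Hypothetical sketch of a quad-batching compositor: addQuad() appends
// vertex data to a CPU-side array; flush() would upload the whole batch
// and issue a single draw call.
function QuadBatch(maxQuads) {
    this.positions = new Float32Array(maxQuads * 8); // 4 vertices * (x, y)
    this.count = 0;                                  // quads queued so far
}

QuadBatch.prototype.addQuad = function (x, y, w, h) {
    var o = this.count * 8;
    var p = this.positions;
    p[o]     = x;     p[o + 1] = y;     // top-left
    p[o + 2] = x + w; p[o + 3] = y;     // top-right
    p[o + 4] = x;     p[o + 5] = y + h; // bottom-left
    p[o + 6] = x + w; p[o + 7] = y + h; // bottom-right
    this.count++;
};

QuadBatch.prototype.flush = function (gl) {
    // One buffer upload and one draw call for the whole batch, e.g.:
    // gl.bufferData(gl.ARRAY_BUFFER, this.positions.subarray(0, this.count * 8), gl.STREAM_DRAW);
    // gl.drawElements(gl.TRIANGLES, this.count * 6, gl.UNSIGNED_SHORT, 0);
    this.count = 0;
};
```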
melonJS takes drawing optimization to the extreme. Not only does it batch like-renderables to reduce the number of draw calls (as described above) but it also doesn't send any draw calls if there is nothing to draw. This condition occurs when the frame is identical to the last frame drawn. For example, no entity has moved, the viewport has not scrolled, idle animations have not advanced to the next state, on-screen timer has not elapsed a full second, etc.
It is possible to force the frame to redraw by having any entity in the scene return true from its update method. This is a signal to the game engine that the frame needs to be redrawn. The process is described in more detail on the wiki.

How to set a time uniform in WebGL

I am new to webgl. I am trying to set a time uniform, so I can change the output of my fragment shader as time passes. I thought this would be fairly simple to implement but I am struggling. I am aware that these two methods are probably involved:
https://developer.mozilla.org/en-US/docs/Web/API/WebGLUniformLocation
https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/uniform
Is some kind of rendering loop required?
Any help here would be really appreciated, thanks.
This is my current solution...
In my webgl JS file I create a time uniform, then set it every animation frame with an updated value.
// look up the time uniform location once, after linking the program
var timeLocation = gl.getUniformLocation(program, "u_time");

function renderLoop(timeStamp) {
    // set the time uniform (requestAnimationFrame passes milliseconds)
    gl.uniform1f(timeLocation, timeStamp / 1000.0);
    gl.drawArrays(...);
    // schedule the next frame
    window.requestAnimationFrame(renderLoop);
}

// start the loop
window.requestAnimationFrame(renderLoop);
Then in my fragment shader:
precision mediump float;
uniform float u_time;

void main() {
    gl_FragColor = vec4(abs(sin(u_time)), 0.0, 0.0, 1.0);
}
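One practical wrinkle (an assumption beyond the solution above, not part of it): timeStamp grows without bound, and very large values lose precision in a mediump float, which can make sin(u_time) stutter after the page has been running for a long time. A common workaround is to wrap the time to one period of the effect before uploading it:

```javascript
// Wrap elapsed time to one period of sin() so the value uploaded to the
// u_time uniform stays small and keeps full float precision.
var PERIOD = 2.0 * Math.PI; // period of sin(), in seconds

function wrappedTimeSeconds(timeStampMs) {
    return (timeStampMs / 1000.0) % PERIOD;
}
```

You would then call `gl.uniform1f(timeLocation, wrappedTimeSeconds(timeStamp))` in the render loop.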

glUniform: fragment shader fails to compile with uniform or varying declarations

I've created a subclass of GLKViewController to render some waveforms using GLKBaseEffect which all works fine.
I want to use my own vertex and fragment shaders, so I used the boilerplate code from the default OpenGL Game project. Both shaders compile, link, and are usable. I can alter the color in the vertex shader without issue.
The problem I'm having is passing my own uniforms into the fragment shader. I can pass one to the vertex shader and use it there, but when I declare the same variable in the fragment shader, it fails to compile.
Code:
// Vertex shader
attribute vec4 position;
uniform float amplitudeMax;

void main() {
    gl_Position = position + amplitudeMax; // This works and offsets the drawing
}

// Fragment shader
uniform float amplitudeMax; // Fails to compile
//uniform float amplitudeMax; // Compiles fine when commented out

void main() {
    gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0);
}
If curious, here is how I set up the uniforms and shaders
func loadShaders() -> Bool {
    // Copied from default OpenGL Game project
    // link program ...
    uniforms[UNIFORM_AMPLITUDE_MAX] = glGetUniformLocation(program, "amplitudeMax")
    // release shaders ...
}

// The draw loop
func render() {
    configureShaders()
    // ... draw
}

func configureShaders() {
    glUseProgram(program)
    let max = AudioManager.sharedInstance.bufferManager.currentPower
    glUniform1f(uniforms[UNIFORM_AMPLITUDE_MAX], max)
}
I had another idea of passing the value from the vertex shader to the fragment shader using a varying float. Again, I can declare and use the variable in the vertex shader, but declaring it in the fragment shader causes it to fail compiling.
Edit: --------------------------------------------------
Through trial and error I found that if (in my fragment shader) I qualify my uniform declaration with a precision, it works for both uniforms and varyings (I can pass from vertex to fragment shader). This is because GLSL ES fragment shaders have no default precision for float: every float declaration must either carry its own precision qualifier or be covered by a default precision statement such as precision mediump float; at the top of the shader.
uniform lowp float amplitudeMax;

void main() {
    gl_FragColor = vec4(amplitudeMax, 0.0, 1.0, 1.0);
}

OpenGL ES: gl_FragColor depends on an if condition in the fragment shader on iOS

I'm writing an iOS app that allows free-style drawing (using a finger) and drawing an image on screen, implemented with OpenGL ES. I have two functions: one draws free style, the other draws a texture.
--- Code drawing free style
- (void)drawFreeStyle:(NSMutableArray *)pointArray {
//Prepare vertex data
.....
// Load data to the Vertex Buffer Object
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, vertexCount*2*sizeof(GLfloat), vertexBuffer, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, GL_FALSE, 0, 0);
GLuint a_ver_flag_drawing_type = glGetAttribLocation(program[PROGRAM_POINT].id, "a_drawingType");
glVertexAttrib1f(a_ver_flag_drawing_type, 0.0f);
GLuint u_fra_flag_drawing_type = glGetUniformLocation(program[PROGRAM_POINT].id, "v_drawing_type");
glUniform1f(u_fra_flag_drawing_type, 0.0);
glUseProgram(program[PROGRAM_POINT].id);
glDrawArrays(GL_POINTS, 0, (int)vertexCount);
// Display the buffer
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
}
--- Code drawing texture
- (void)drawTexture:(UIImage *)image atRect:(CGRect)rect {
GLuint a_ver_flag_drawing_type = glGetAttribLocation(program[PROGRAM_POINT].id, "a_drawingType");
GLuint u_fra_flag_drawing_type = glGetUniformLocation(program[PROGRAM_POINT].id, "v_drawing_type");
GLuint a_position_location = glGetAttribLocation(program[PROGRAM_POINT].id, "a_Position");
GLuint a_texture_coordinates_location = glGetAttribLocation(program[PROGRAM_POINT].id, "a_TextureCoordinates");
GLuint u_texture_unit_location = glGetUniformLocation(program[PROGRAM_POINT].id, "u_TextureUnit");
glUseProgram(PROGRAM_POINT);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texName);
glUniform1i(u_texture_unit_location, 0);
glUniform1f(u_fra_flag_drawing_type, 1.0);
const float textrect[] = {-1.0f, -1.0f, 0.0f, 0.0f,
-1.0f, 1.0f, 0.0f, 1.0f,
1.0f, -1.0f, 1.0f, 0.0f,
1.0f, 1.0f, 1.0f, 1.0f};
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, sizeof(textrect), textrect, GL_STATIC_DRAW);
glVertexAttrib1f(a_ver_flag_drawing_type, 1.0f);
glVertexAttribPointer(a_position_location, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(0));
glVertexAttribPointer(a_texture_coordinates_location, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(2 * sizeof(float)));
glEnableVertexAttribArray(a_ver_flag_drawing_type);
glEnableVertexAttribArray(a_position_location);
glEnableVertexAttribArray(a_texture_coordinates_location);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
}
Notice the two variables a_ver_flag_drawing_type (attribute) and u_fra_flag_drawing_type (uniform). They're used as flags in the vertex shader and fragment shader to determine whether to draw free style or a texture.
--- Vertex shader
//Flag
attribute lowp float a_drawingType;
//For drawing
attribute vec4 inVertex;
uniform mat4 MVP;
uniform float pointSize;
uniform lowp vec4 vertexColor;
varying lowp vec4 color;
//For texture
attribute vec4 a_Position;
attribute vec2 a_TextureCoordinates;
varying vec2 v_TextureCoordinates;
void main()
{
if (abs(a_drawingType - 1.0) < 0.0001) {
//Draw texture
v_TextureCoordinates = a_TextureCoordinates;
gl_Position = a_Position;
} else {
//Draw free style
gl_Position = MVP * inVertex;
gl_PointSize = pointSize;
color = vertexColor;
}
}
--- Fragment shader
precision mediump float;
uniform sampler2D texture;
varying lowp vec4 color;
uniform sampler2D u_TextureUnit;
varying vec2 v_TextureCoordinates;
uniform lowp float v_drawing_type;
void main()
{
if (abs(v_drawing_type - 1.0) < 0.0001) {
//Draw texture
gl_FragColor = texture2D(u_TextureUnit, v_TextureCoordinates);
} else {
//Drawing free style
gl_FragColor = color * texture2D(texture, gl_PointCoord);
}
}
My idea is to set these flags from the drawing code at draw time. The attribute a_drawingType is used in the vertex shader and the uniform v_drawing_type in the fragment shader; depending on these flags, the shaders know whether to draw free style or a texture.
If I run each mode independently (commenting out the texture-drawing configuration in the vertex and fragment shader files when drawing free style, and vice versa), it draws as I want. But if I combine them, it not only fails to draw but also crashes the app at:
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
I'm new to OpenGL ES and the GLSL language, so I'm not sure whether setting flags like this is right or wrong. Can anyone help me?
So why don't you just build two separate shader programs and call glUseProgram() on one of them, instead of sending flag values to GL and making an expensive conditional branch in the vertex and especially the fragment shader?
Attributes are per-vertex, uniforms are per-shader program.
You might see a crash if you supplied only one value for an attribute then asked OpenGL to draw, say, 100 points. In that case OpenGL is going to do an out-of-bounds array access when it attempts to fetch the attributes for vertices 2–100.
It'd be more normal to use two separate programs. Conditionals are very expensive on GPUs because GPUs try to maintain one program counter while processing multiple fragments. They're SIMD units. Any time the evaluation of an if differs between two neighbouring fragments you're probably reducing parallelism. The idiom is therefore not to use if statements where possible.
If you switch to a uniform there's a good chance you won't lose any performance through absent parallelism because the paths will never diverge. Your GLSL compiler may even be smart enough to recompile the shader every time you reset the constant, effectively performing constant folding. But you'll be paying for the recompilation every time.
If you just had two programs and switched between them you wouldn't pay the recompilation fee.
It's not completely relevant in this case but e.g. you'd also often see code like:
if (abs(v_drawing_type - 1.0) < 0.0001) {
//Draw texture
gl_FragColor = texture2D(u_TextureUnit, v_TextureCoordinates);
} else {
//Drawing free style
gl_FragColor = color * texture2D(texture, gl_PointCoord);
}
Written more like:
gl_FragColor = mix(color * texture2D(texture, gl_PointCoord),
                   texture2D(u_TextureUnit, v_TextureCoordinates),
                   v_drawing_type);
... because that avoids the conditional entirely. In this case you'd want to make the two texture2D calls identical, and probably factor the sampling out of the mix call, to ensure you don't end up doing two texture samples instead of one.
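For reference, GLSL's mix(x, y, a) computes x * (1.0 - a) + y * a, so a = 0.0 selects the first argument and a = 1.0 selects the second. The same semantics modeled in plain JavaScript:

```javascript
// JavaScript model of GLSL's mix(): linear interpolation between x and y by a.
function mix(x, y, a) {
    return x * (1.0 - a) + y * a;
}
```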
The previously posted answers explain certain pieces, but an important part is missing in all of them. It is perfectly legal to specify a single attribute value that is applied to all vertices in a draw call. What you did here was basically valid:
glVertexAttrib1f(a_ver_flag_drawing_type, 1.0f);
The direct problem was this call that followed shortly after:
glEnableVertexAttribArray(a_ver_flag_drawing_type);
There are two main ways to specify the value of a vertex attribute:
Use the current value, as specified in your case by glVertexAttrib1f().
Use values from an array, as specified with glVertexAttribPointer().
You select which of the two options is used for any given attribute by enabling/disabling the array, which is done by calling glEnableVertexAttribArray()/glDisableVertexAttribArray().
In the posted code, the vertex attribute was specified as only a current value, but the attribute was then enabled to fetch from an array with glEnableVertexAttribArray(). This conflict caused the crash, because the attribute values would have been fetched from an array that was never specified. To use the specified current value, the call simply has to be changed to:
glDisableVertexAttribArray(a_ver_flag_drawing_type);
Or, if the array was never enabled, the call could be left out completely. But just in case another part of the code might have enabled it, it's safer to disable it explicitly.
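In WebGL terms (the question's code is OpenGL ES in Objective-C, but the API maps one-to-one; the helper name below is made up), the fix looks like:

```javascript
// Use the attribute's constant "current value" for every vertex:
// set the value, then make sure the attribute is NOT sourced from an array.
function useConstantAttribute(gl, location, value) {
    gl.vertexAttrib1f(location, value);
    gl.disableVertexAttribArray(location);
}
```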
As a side note, the following statement sequence from the first draw function also looks suspicious. glUniform*() sets a value on the currently active program, so this sets a value on the previously active program, not the one specified in the second statement. If you want to set the value on the new program, the order of the statements has to be reversed.
glUniform1f(u_fra_flag_drawing_type, 0.0);
glUseProgram(program[PROGRAM_POINT].id);
On the whole thing, I think there are at least two approaches that are better than the one chosen:
Use separate shader programs for the two different types of rendering. While using a single program with switchable behavior is a valid option, it looks artificial, and using separate programs seems much cleaner.
If you want to stick with a single program, use a single uniform to do the switching, instead of using an attribute and a uniform. You could use the one you already have, but you might just as well make it a boolean while you're at it. So in both the vertex and fragment shader, use the same uniform declaration:
uniform bool v_use_texture;
Then the tests become:
if (v_use_texture) {
Getting the uniform location is the same as before, and you can set the value, which will then be available in both the vertex and fragment shader, with one of:
glUniform1i(loc, 0);
glUniform1i(loc, 1);
I found the problem: just change the a_drawingType variable from an attribute to a uniform, then use glGetUniformLocation and glUniform1f to get its location and pass the value. An attribute is passed per vertex, so use a uniform to pass the value once.

GPUImage two-pass filter - second frag shader never runs?

It's my impression (and the answer to this question seems to confirm it) that I can subclass from GPUImageTwoPassFilter to effectively run two fragment shaders in succession on an image but keep all the code and such confined into a single class. However, in my experimentation, it doesn't appear that the second fragment shader is ever compiled, much less executed; the example below builds and runs without complaint. The resulting image looks the same as if only the first fragment shader were run in a single-shader class.
What could be going wrong here? It doesn't help that all the examples I can find in the GPUImage code base that subclass GPUImageTwoPassFilter are simply using the same fragment shader program for each pass (as in GPUImageGaussianBlurFilter).
#import "BFTwoPassTest.h"
NSString *const kBFTwoPassTestFirstFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
gl_FragColor = vec4(1.0, textureColor.g, textureColor.b, 1.0);
}
);
NSString *const kBFTwoPassTestSecondFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
This should be an obvious syntax error.
}
);
@implementation BFTwoPassTest
- (id)init {
self = [self initWithFirstStageFragmentShaderFromString:kBFTwoPassTestFirstFragmentShaderString secondStageFragmentShaderFromString:kBFTwoPassTestSecondFragmentShaderString];
if (self) {
}
return self;
}
@end
Oops, there was a bug on line 55 of GPUImageTwoPassFilter.m. The following line:
if (!(self = [self initWithFirstStageVertexShaderFromString:kGPUImageVertexShaderString firstStageFragmentShaderFromString:firstStageFragmentShaderString secondStageVertexShaderFromString:kGPUImageVertexShaderString secondStageFragmentShaderFromString:firstStageFragmentShaderString]))
should have been
if (!(self = [self initWithFirstStageVertexShaderFromString:kGPUImageVertexShaderString firstStageFragmentShaderFromString:firstStageFragmentShaderString secondStageVertexShaderFromString:kGPUImageVertexShaderString secondStageFragmentShaderFromString:secondStageFragmentShaderString]))
Thanks for pointing this out, which should be fixed in the repository now. However, in the future may I suggest posting specific framework issues like this to the GitHub issues page for the project instead of here?
