I'm trying to simulate things like glPolygonMode( GL_BACK, GL_LINE) in WebGL. I can draw with mode LINES instead of TRIANGLES, but I can't see how WebGL could automatically determine whether a particular segment is back-facing, because segments don't face anywhere.
To solve this, I pass the normal of the original triangle to the shader. It's easy to transform this normal under the usual modelview transformations (rotations, scaling, translations): I just multiply it by the transpose of the inverse of the modelview matrix. Based on whether the transformed normal points into the screen or out of it, I can decide whether to cull a line segment.
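Concretely, what I have in mind is a vertex shader along these lines (just a sketch; aNorm and normalMatrix are my own names, and the normal matrix is computed on the CPU because GLSL ES 1.00 has no inverse() or transpose() built-ins):
attribute vec3 aPos;
attribute vec3 aNorm;       // face normal of the original triangle, stored on each vertex
uniform mat4 mvMatrix;
uniform mat4 prMatrix;
uniform mat3 normalMatrix;  // transpose of the inverse of mvMatrix's upper-left 3x3, computed on the CPU
varying float normz;
void main(void) {
  gl_Position = prMatrix * (mvMatrix * vec4(aPos, 1.));
  normz = (normalMatrix * aNorm).z;  // sign indicates whether the original triangle faces the viewer
}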
However, this isn't quite right. I need the inverse of the projection matrix as well to take perspective distortions into account.
I'm using the CanvasMatrix4.frustum() or CanvasMatrix4.ortho() functions to get the projection matrix. Are there formulas or functions available for their inverses?
Alternatively, is there a better way to simulate things like glPolygonMode( GL_BACK, GL_LINE)?
The idea of sending the other vertices of the triangle as extra attributes seems to work.
Here's the key part of the vertex shader. Attribute aPos is the position of the vertex being used in the line segment; v1 is the next vertex going around the triangle, and v2 is the one after that.
I've edited the code to leave out the lighting-related stuff. This probably isn't the most efficient implementation, but it does appear to work.
attribute vec3 aPos;
uniform mat4 mvMatrix;
uniform mat4 prMatrix;
attribute vec3 v1;
attribute vec3 v2;
varying float normz;
void main(void) {
gl_Position = prMatrix * (mvMatrix * vec4(aPos, 1.));
vec4 v1p = (prMatrix*(mvMatrix*vec4(v1, 1.)));
// perspective divide, then form the edge vector relative to the current vertex
v1p = v1p/v1p.w - gl_Position/gl_Position.w;
vec4 v2p = (prMatrix*(mvMatrix*vec4(v2, 1.)));
v2p = v2p/v2p.w - gl_Position/gl_Position.w;
normz = v1p.x*v2p.y - v1p.y*v2p.x; // z component of cross product
}
In the fragment shader, discard fragments based on the sign of normz. I use
if ((normz <= 0.) != front) discard;
where front is a bool indicating whether I want to show the front or not.
If using WebGL extensions is an option, you might want to look into OES_standard_derivatives. This extension was introduced for the specific purpose of solving your perspective-correction issue. According to WebGL Stats, support is pretty universal.
In your main code base, enable this extension:
gl.getExtension("OES_standard_derivatives");
The extension introduces the dFdx and dFdy functions, allowing you to do a front/back test like this:
// The fragment shader must also enable the extension at the top of its source:
// #extension GL_OES_standard_derivatives : enable
// 'position' and 'normal' are assumed to be varyings interpolated from the vertex shader.
vec3 dx = dFdx(position);
vec3 dy = dFdy(position);
vec3 faceNormal = normalize(cross(dx, dy));
if (dot(normal, faceNormal) > 0.0) {
    // Front facing
}
Take the signed area of the projected triangle. If it is positive, draw it; if it is negative, cull it.
Given some vertices with xyz coordinates it is easy to obtain an xyz aligned bounding box, just take the min/max xyz values from the vertices. Ideally, these values only have to be found once and can be found before any rendering takes place.
My question is: if rotating, scaling, or translating the object, what's the best way to calculate the new xyz values which bound the object? Do I have to go through all the vertices after each transform and find new min/max xyz values?
Given this GLSL code:
in vec4 a_position;
in vec4 a_color;
// transformation matrix
uniform mat4 u_matrix;
out vec4 v_color;
void main() {
gl_Position = u_matrix * a_position;
v_color = a_color;
}
My idea: would adding new out variables for bounding box coordinates work? Or is there a better way?
in vec4 a_position;
in vec4 a_color;
// transformation matrix
uniform mat4 u_matrix;
out vec4 v_color;
out vec3 max_bounds;
out vec3 min_bounds;
void main() {
vec4 position = u_matrix * a_position;
if(position.x > max_bounds.x){
max_bounds.x = position.x;
}
if(position.y > max_bounds.y){
max_bounds.y = position.y;
}
if(position.z > max_bounds.z){
max_bounds.z = position.z;
}
// ...
gl_Position = position;
v_color = a_color;
}
You can't, since your vertex shader code (and all other shader code) is executed in parallel and the outputs only go to the next stage (fragment shader in your case).
An exception is transform feedback, where the outputs of a vertex shader can be written to buffers; however, you can only use that to map data, not to gather/reduce it. A significant chunk of the performance advantage of GPUs comes from executing code in parallel. The ability to share data among those parallel threads is very limited, and not accessible via WebGL to begin with.
On top of all that, your task (finding the min/max extents of a vertex array) is inherently sequential, as it requires the shared data (the running min and max values) to be available and up to date for all threads.
Since AABBs are inherently rather loose fitting, one common approach (if not the most common) is to transform the 8 corner vertices of the AABB of the untransformed mesh and gather the new AABB from those.
Theoretically speaking, you could store the vertex positions in a floating-point texture, transform them with a fragment (instead of vertex) shader, write the result back to a texture, then do a series of gather passes that reduce the min/max values over chunks of X by X size (e.g. 64x64), writing to a set of increasingly smaller textures until you reach a 1x1 pixel texture, from which you read the result using readPixels. That said, this is simply not worth the effort (and probably slower for meshes with lower vertex counts) just to get a slightly better-fitting AABB. If you really need that, you'd be better off creating a compound volume composed of better-fitting bounding shapes and then gathering a combined AABB from those.
I'd like to know how to create custom filters for GPUImage. Right now I can create a sepia custom filter from this code (this was provided by Brad Larson on GitHub as an example):
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
lowp vec4 outputColor;
outputColor.r = (textureColor.r * 0.393) + (textureColor.g * 0.769) + (textureColor.b * 0.189);
outputColor.g = (textureColor.r * 0.349) + (textureColor.g * 0.686) + (textureColor.b * 0.168);
outputColor.b = (textureColor.r * 0.272) + (textureColor.g * 0.534) + (textureColor.b * 0.131);
outputColor.a = 1.0;
gl_FragColor = outputColor;
}
I'm curious to know how Brad Larson knew these numbers and what they mean. I have searched everywhere and did not find a tutorial on this. Can someone please guide me in creating my own custom photo filters in Photoshop and then translating them into a .fsh code file?
For example, if I change an image to a pink tone in Photoshop, how do I get the corresponding numbers for the code above?
Your question is a little broad, as you can literally write an entire book on how to create custom shaders, but it's a commonly asked one, so I can at least point people in the right direction.
Filters in GPUImage are written in the OpenGL Shading Language (GLSL). There are slight differences between the OpenGL targets (Mac, desktop Linux) and OpenGL ES ones (iOS, embedded Linux) in that shaders on the latter use precision qualifiers that are missing on the former. Beyond that, the syntax is the same.
Shader programs are composed of a matched pair of a vertex shader and a fragment shader. A vertex shader operates over each vertex and usually handles geometric manipulations. A fragment shader operates over each fragment (pixel, generally) and calculates the color to be output to the screen at that fragment.
GPUImage deals with image processing, so most of the time you'll be working only with fragment shaders and relying on one of the stock vertex shaders. The above is an example of a fragment shader that takes in each pixel of an input texture (the image from the previous step in the processing pipeline), manipulates its color values, and writes out the result to the gl_FragColor builtin.
The first line in the main() function:
lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
uses the texture2D() function to read the pixel color in the inputImageTexture at a given coordinate (passed in from the vertex shader in the first stage). These coordinates are normalized to the 0.0-1.0 range, and therefore are independent of the input image size.
The values are loaded into a vector type (vec4) that contains multiple components within a single type. In this case, the color components for red, green, blue, and alpha are stored in this four-component vector and can be accessed via .r, .g, .b, and .a. The color component values are also normalized to the 0.0-1.0 range, in case you're used to working with 0-255 values.
In the particular case of a sepia tone filter, I'm attempting to apply a well-known color matrix for a sepia tone effect to convert the incoming color to an outgoing one. That requires matrix multiplication, which I do explicitly in the code above. In the actual framework, this is done as matrix math using builtin types within the shader.
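To illustrate, the same sepia transform could be written as a single matrix multiply, something like the sketch below (not the framework's exact code; GLSL matrix constructors are column-major, so each group of four values is one column, built from the coefficients above):
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
const mediump mat4 sepiaMatrix = mat4(0.393, 0.349, 0.272, 0.0,   // applied to the red input
                                      0.769, 0.686, 0.534, 0.0,   // applied to the green input
                                      0.189, 0.168, 0.131, 0.0,   // applied to the blue input
                                      0.0,   0.0,   0.0,   1.0);  // alpha passes through (the code above forces it to 1.0)
void main()
{
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    gl_FragColor = sepiaMatrix * textureColor;
}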
There are many, many ways to manipulate colors to achieve certain effects. The GPUImage framework is full of them, based largely on things like color conversion standards published by Adobe or other organizations. For a given effect, you should identify if one of these existing implementations will do what you want before setting out to write your own.
If you do want to write your own, first figure out the math required to translate incoming colors into whatever output you want. Once you have that, writing the shader code is easy.
The lookup filter in GPUImage takes another approach, and that's to start with a lookup image that you manipulate in Photoshop under the filter conditions you wish to mimic. You then take that filtered lookup image and attach it to the lookup filter. It translates between the incoming colors and their equivalents in the lookup to provide arbitrary color manipulation.
Once you have your shader, you can create a new filter around that in a few different ways. I should say that my new GPUImage 2 framework greatly simplifies that process, if you're willing to forgo some backwards compatibility.
I have started experimenting with the info-beamer software for Raspberry Pi. It appears to have support for displaying PNGs, text, and video, but when I see GLSL primitives, my first instinct is to draw a texture-mapped polygon.
Unfortunately, I can't find the documentation that would allow me to draw so much as a single triangle using the shaders. I have made a few toys using GLSL, so I'm familiar with the pipeline of setting transform matrices and drawing triangles that are filtered by the vertex and fragment shaders.
I have grepped around in info-beamer-nodes on GitHub for examples of GL drawing, but the relevant examples have so far escaped my notice.
How do I use info-beamer's GLSL shaders on arbitrary UV mapped polygons?
Based on the comment by the author of info-beamer it is clear that functions to draw arbitrary triangles are not available in info-beamer 0.9.1.
The specific effect I was going to attempt was a rectangle that faded to transparent at the margins. Fortunately the 30c3-room/ example in the info-beamer-nodes sources illustrates a technique where we draw an image as a rectangle that is filtered by the GL fragment shader. The 1x1 white PNG is a perfectly reasonable template whose color can be replaced by the calculations of the shader in my application.
While arbitrary triangles are not available, UV-mapped rectangles (and rotated rectangles) are supported and are suitable for many use cases.
I used the following shader:
uniform sampler2D Texture;
varying vec2 TexCoord;
uniform float margin_h;
uniform float margin_v;
void main()
{
// distance to the nearer left/right edge, in units of the horizontal margin
float q = min((1.0-TexCoord.s)/margin_h, TexCoord.s/margin_h);
// distance to the nearer top/bottom edge, in units of the vertical margin
float r = min((1.0-TexCoord.t)/margin_v, TexCoord.t/margin_v);
// alpha ramps from 0 at the edge up to 1 one margin-width in (larger values simply clamp)
float p = min(q,r);
gl_FragColor = vec4(0,0,0,p);
}
and this Lua in my node.render():
y = phase * 30 + center.y
shader:use {
margin_h=0.03;
margin_v=0.2;
}
white:draw(x-20,y-20,x+700,y+70)
shader:deactivate()
font:write(x, y, "bacon "..(phase), 50, 1,1,0,1)
This question has been asked before, but quite a few years ago in my searches. The answer was always to use texture mapping, but what I really want to do is represent the star as a single vertex - you may think I'm copping out with a simplistic method, but in fact a single point source of light actually looks pretty good and realistic. However, I want to process that point of light with something like a Gaussian blur to give it a little more body when zooming in, or for brighter stars. I was going to texture map a Gaussian blur image, but if I understand things correctly I would then have to draw each star with four vertices. Maybe not so difficult, but I don't want to go there if I can just process a single vertex. Would a vertex shader do this? Can GLKBaseEffects get me there? Any suggestions?
Thanks.
You can use point sprites.
Draw Calls
You use a texture containing the image of the star, and use the typical setup to bind a texture, bind it to a sampler uniform in the shader, etc.
You draw a single vertex for each star, with GL_POINTS as the primitive type passed as the first argument to glDrawArrays()/glDrawElements(). No texture coordinates are needed.
Vertex Shader
In the vertex shader, you transform the vertex as you normally would, and also set the built-in gl_PointSize variable:
uniform float PointSize;
attribute vec4 Position;
void main() {
gl_Position = ...; // Transform Position attribute;
gl_PointSize = PointSize;
}
For the example, I used a uniform for the point size, which means that all stars will have the same size. Depending on the desired effect, you could also calculate the size based on the distance, or use an additional vertex attribute to specify a different size for each star.
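For instance, a distance-based version might look roughly like this (a sketch; the matrix/uniform names and the attenuation constant are made up for illustration):
uniform mat4 ModelViewMatrix;
uniform mat4 ProjectionMatrix;
uniform float BasePointSize;
attribute vec4 Position;
void main() {
    vec4 eyePosition = ModelViewMatrix * Position;
    gl_Position = ProjectionMatrix * eyePosition;
    // shrink the sprite as the star gets farther from the camera
    gl_PointSize = BasePointSize / (1.0 + 0.05 * length(eyePosition.xyz));
}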
Fragment Shader
In the fragment shader, you can now access the built-in gl_PointCoord variable to get the relative coordinates of the fragment within the point sprite. If your point sprite is a simple texture image, you can use it directly as the texture coordinates.
uniform sampler2D SpriteTex;
void main() {
gl_FragColor = texture2D(SpriteTex, gl_PointCoord);
}
Additional Material
I answered a somewhat similar question here: Render large circular points in modern OpenGL. Since it was for desktop OpenGL, and not for a textured sprite, this seemed worth a separate answer. But some of the steps are shared, and might be explained in more detail in the other answer.
I've been busy educating myself on this and trying it, but I'm getting strange results. It seems to work with regard to the vertex transform - I see the points moved out on the screen - but point size and colour are not being affected. The colour seems to be some sort of default yellow with some shading between vertices.
What bothers me too is that I get error messages on built-ins in the vertex shader. Here are the vertex/fragment code and the error messages:
#Vertex shader
precision mediump float;
precision lowp int;
attribute float Pointsize;
varying vec4 color_out;
void main()
{
gl_PointSize = Pointsize;
gl_Position = gl_ModelViewMatrix * gl_Vertex;
color_out = vec4(0.0, 1.0, 0.0, 1.0); // output only green for test
}
#Fragment shader
precision mediump float;
precision lowp int;
varying vec4 color_out;
void main()
{
gl_FragColor = color_out;
}
Here's the error message:
ERROR: 0:24: Use of undeclared identifier 'gl_ModelViewMatrix'
ERROR: 0:24: Use of undeclared identifier 'gl_Vertex'
ERROR: One or more attached shaders not successfully compiled
It seems the transform is being passed from my iOS code where I'm using GLKBaseEffects such as in the following lines:
self.effect.transform.modelviewMatrix = modelViewMatrix;
[self.effect prepareToDraw];
But I'm not sure exactly what's happening, especially with the shader compile errors.
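For reference, gl_ModelViewMatrix and gl_Vertex are desktop GLSL built-ins that don't exist in OpenGL ES; an ES-style vertex shader has to declare its own uniform and attribute instead, along these lines (a sketch with made-up names):
precision mediump float;
attribute vec4 Vertex;               // replaces gl_Vertex
attribute float Pointsize;
uniform mat4 ModelViewProjection;    // replaces the fixed-function matrices
varying vec4 color_out;
void main()
{
    gl_PointSize = Pointsize;
    gl_Position = ModelViewProjection * Vertex;
    color_out = vec4(0.0, 1.0, 0.0, 1.0);
}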
I'm trying to implement the Atkinson dithering algorithm in a fragment shader in GLSL using our own Brad Larson's GPUImage framework. (This might be one of those things that is impossible but I don't know enough to determine that yet so I'm just going ahead and doing it anyway.)
The Atkinson algo dithers grayscale images into pure black and white as seen on the original Macintosh. Basically, I need to investigate a few pixels around my pixel and determine how far away from pure black or white each is and use that to calculate a cumulative "error;" that error value plus the original value of the given pixel determines whether it should be black or white. The problem is that, as far as I could tell, the error value is (almost?) always zero or imperceptibly close to it. What I'm thinking might be happening is that the texture I'm sampling is the same one that I'm writing to, so that the error ends up being zero (or close to it) because most/all of the pixels I'm sampling are already black or white.
Is this correct, or are the textures that I'm sampling from and writing to distinct? If the former, is there a way to avoid that? If the latter, then might you be able to spot anything else wrong with this code? 'Cuz I'm stumped, and perhaps don't know how to debug it properly.
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp vec3 dimensions;
void main()
{
highp vec2 relevantPixels[6];
relevantPixels[0] = vec2(textureCoordinate.x, textureCoordinate.y - 2.0);
relevantPixels[1] = vec2(textureCoordinate.x - 1.0, textureCoordinate.y - 1.0);
relevantPixels[2] = vec2(textureCoordinate.x, textureCoordinate.y - 1.0);
relevantPixels[3] = vec2(textureCoordinate.x + 1.0, textureCoordinate.y - 1.0);
relevantPixels[4] = vec2(textureCoordinate.x - 2.0, textureCoordinate.y);
relevantPixels[5] = vec2(textureCoordinate.x - 1.0, textureCoordinate.y);
highp float err = 0.0;
for (mediump int i = 0; i < 6; i++) {
highp vec2 relevantPixel = relevantPixels[i];
// #todo Make sure we're not sampling a pixel out of scope. For now this
// doesn't seem to be a failure (?!).
lowp vec4 pointColor = texture2D(inputImageTexture, relevantPixel);
err += ((pointColor.r - step(.5, pointColor.r)) / 8.0);
}
lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
lowp float hue = step(.5, textureColor.r + err);
gl_FragColor = vec4(hue, hue, hue, 1.0);
}
There are a few problems here, but the largest one is that Atkinson dithering can't be performed in an efficient manner within a fragment shader. This kind of dithering is a sequential process, dependent on the results of fragments above and behind it. A fragment shader can only write to one fragment in OpenGL ES, not neighboring ones as is required in that Python implementation you point to.
For potential shader-friendly dither implementations, see the question "Floyd–Steinberg dithering alternatives for pixel shader."
You also normally can't write to and read from the same texture, although Apple did add some extensions in iOS 6.0 that let you write to a framebuffer and read from that written value in the same render pass.
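Assuming the extension in question is EXT_shader_framebuffer_fetch, using it in a fragment shader looks roughly like this sketch:
#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;
void main()
{
    // gl_LastFragData[0] holds the colour already written to the framebuffer
    // at this fragment in the current render pass
    vec4 previous = gl_LastFragData[0];
    gl_FragColor = vec4(1.0) - previous;  // e.g. invert whatever was drawn underneath
}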
As to why you're seeing odd error results, the coordinate system within a GPUImage filter is normalized to the range 0.0 - 1.0. When you try to offset a texture coordinate by adding 1.0, you're reading past the end of the texture (which is then clamped to the value at the edge by default). This is why you see me using texelWidth and texelHeight values as uniforms in other filters that require sampling from neighboring pixels. These are calculated as a fraction of the overall image width and height.
I'd also not recommend doing texture coordinate calculation within the fragment shader, as that will lead to a dependent texture read and really slow down the rendering. Move that up to the vertex shader, if you can.
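Put together, a neighbor-sampling setup might look something like this sketch (illustrative names; texelWidth and texelHeight would be set from the host code to 1.0/width and 1.0/height of the image):
// Vertex shader: precompute neighbour coordinates so the fragment shader
// can sample them without any dependent texture reads.
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
uniform float texelWidth;    // 1.0 / image width
uniform float texelHeight;   // 1.0 / image height
varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 topTextureCoordinate;
void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
    leftTextureCoordinate = inputTextureCoordinate.xy - vec2(texelWidth, 0.0);
    topTextureCoordinate = inputTextureCoordinate.xy - vec2(0.0, texelHeight);
}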
Finally, to answer your title question, usually you can't modify a texture as it is being read, but the iOS texture cache mechanism sometimes allows you to overwrite texture values as a shader is working its way through a scene. This leads to bad tearing artifacts usually.
@GarrettAlbright For the 1-Bit Camera app I ended up simply iterating over the image data using raw memory pointers and (rather) tightly optimized C code. I looked into NEON intrinsics and the Accelerate framework, but any parallelism really screws up an algorithm of this nature, so I didn't use it.
I also toyed around with the idea of doing a decent enough approximation of the error distribution on the GPU first, and then doing the thresholding in another pass, but I never got anything but a rather ugly noise dither from those experiments. There are some papers around covering other ways of approaching diffusion dithering on the GPU.