WebGL - Get fragment coordinates within a shape in triangle mode? gl_FragCoord doesn't work

I'm trying to create a WebGL shader that can both output solid rectangles as well as hollow rectangles (with a fixed border width) within the same draw call, and so far, the best way I've thought of how to do it is as follows:
In the vertex shader, send in a uniform value, uniform float borderWidth,
and then inside the fragment shader, I need a coordinate space with x = [0, 1] and y = [0, 1], where x=0 at the leftmost edge and y=0 at the topmost edge of the shape's borders, or something like that. After I have that, drawing the lines is straightforward and I can figure it out from there; I can use something like:
1a - Have a smoothstep from the fragment's x=0 coordinate to x=borderWidth for the left vertical line, and from x=1-borderWidth to x=1 for the right vertical line
1b - Something similar for the horizontal lines and the y coordinate
The Problem
The problem I'm facing is I can't create that coordinate space. I tried using gl_FragCoord but I think it's undefined for shapes rendered in TRIANGLES mode. So I'm kinda lost. Anyone have any suggestions?

gl_FragCoord is never undefined; it is the position of the fragment in the output buffer (like your screen). If you're rendering to the center of a Full HD screen, gl_FragCoord.xy would be around (960, 540), with the depth in gl_FragCoord.z. However, this data is of no use for what you're trying to do.
What you describe sounds like you need barycentric coordinates, which you have to define as additional attributes next to your vertex positions and then pass through to the fragment shader as varyings so they're interpolated across each triangle. If you render non-indexed geometry and use WebGL 2, you can derive the barycentrics from gl_VertexID % 3 instead.
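For the rectangle case specifically, a per-vertex local coordinate that runs 0..1 across each quad works the same way and makes the border test simple. A minimal sketch, assuming an attribute named a_local that holds (0,0), (1,0), (0,1), (1,1) at the four corners of each rectangle and that borderWidth is given in the same 0..1 local units (all names are illustrative):

// vertex shader
attribute vec2 a_position;
attribute vec2 a_local;    // 0..1 across each rectangle
varying vec2 v_local;

void main() {
  v_local = a_local;
  gl_Position = vec4(a_position, 0.0, 1.0);
}

// fragment shader
precision mediump float;
uniform float borderWidth;  // in the same 0..1 local units
varying vec2 v_local;

void main() {
  vec2 d = min(v_local, 1.0 - v_local);      // distance to the nearest edge
  if (min(d.x, d.y) > borderWidth) discard;  // keep only the border; skip this test for solid rectangles
  gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
}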

Related

How can I make my WebGL Coordinate System "Top Left" Oriented?

Because of computation efficiency, I use a fragment shader to implement a simple 2D metaballs algorithm. The data of the circles to render is top-left oriented.
I have everything working, except that the origin of WebGL's coordinate system (bottom-left) is giving me a hard time: Obviously, the rendered output is mirrored along the horizontal axis.
Following https://webglfundamentals.org/webgl/lessons/webgl-2d-rotation.html (and others), I tried to rotate things using a vertex shader. Without any success unfortunately.
What is the most simple way of achieving the reorientation of WebGL's coordinate system?
I'd appreciate any hints and pointers, thanks! :)
Please find a working (not working ;) ) example here:
https://codesandbox.io/s/gracious-fermat-znbsw?file=/src/index.js
Since you are using gl_FragCoord in your fragment shader, you can't fix this from the vertex shader, because gl_FragCoord is in canvas coordinates but with the origin at the bottom left, i.e. upside down relative to your data. You could easily invert the y coordinate in JavaScript when you pass the circle data through to WebGL:
gl.uniform3fv(gl.getUniformLocation(program, `u_circles[${i}]`), [
circles[i].x,
canvas.height - circles[i].y - 1,
circles[i].r
]);
If you want to do it in the shader and keep using gl_FragCoord, then you should pass the height of the canvas to the shader as a uniform and do the y conversion there, with something like
vec2 screenSpace = vec2(gl_FragCoord.x, canvasHeight - gl_FragCoord.y - 1.0);
The -1.0 is because the coordinates start at 0.
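A minimal fragment-shader sketch of that approach, assuming the uniform is named canvasHeight and is set from JavaScript with gl.uniform1f (names are illustrative):

precision highp float;
uniform float canvasHeight;

void main() {
  // Flip y so (0, 0) is the top-left corner, matching the top-left-oriented circle data.
  vec2 topLeft = vec2(gl_FragCoord.x, canvasHeight - gl_FragCoord.y - 1.0);
  gl_FragColor = vec4(vec3(topLeft.y / canvasHeight), 1.0);  // placeholder output: visualize the flipped y
}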

Vertex Shader: compute the leftmost vertex

Target: OpenGL ES >= 3.0
My app:
1) creates several complicated Meshes
2) for each Mesh, renders it:
a) runs a vertex shader which distorts the Mesh's vertices in nontrivial ways
b) nothing special in fragment shader
3) Again for each Mesh:
a) postprocess the area taken by it
Now, in order for the postprocessing to be efficient, I call glScissor and make only the smallest rectangle containing the Mesh pass the scissor test. In order to do that, I need to know the bounding rectangle, and to compute that, I need to know the Mesh's
a) leftmost
b) rightmost
c) topmost
d) bottom-most
vertices in window coordinates. It wouldn't be such a big problem if not for the Vertex Shader, which distorts the Mesh's vertices (step 2a above).
I deal with that by setting up Transform Feedback so that after step 2, I have the transformed vertices on the CPU. I then compute the leftmost one (and the 3 others) with a single loop through all of them.
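For reference, a minimal GLSL ES 3.00 sketch of the shader side of that capture: the distorted position is written to an out variable that transform feedback records (the distortion function here is only a stand-in for the app's real one):

#version 300 es
in vec3 a_position;
out vec3 v_distorted;   // captured by transform feedback

vec3 distort(vec3 p) {
  // placeholder for the app's nontrivial distortion
  return p + vec3(0.1 * sin(10.0 * p.y), 0.0, 0.0);
}

void main() {
  v_distorted = distort(a_position);
  gl_Position = vec4(v_distorted, 1.0);
}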
There are hundreds of thousands of vertices, though, and I was wondering whether this job couldn't be done by the Vertex Shader itself.
Question: can a Vertex Shader - one which modifies the vertex positions - figure out the leftmost one and pass back only that (and the 3 other 'extreme' vertices)?

How to determine if point lies inside shape?

I need to determine whether a point lies inside a shape. If the shape is a circle, it's easy:
highp vec2 textureCoordinateToUse = vec2(textureCoordinate.x, (textureCoordinate.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
highp float dist = distance(center, textureCoordinateToUse);
textureCoordinateToUse = textureCoordinate;
if (dist < radius) {
...
}
But what if my shape is a star, a hexagon, a spiral, etc.? Does somebody know a fast way to do it? Can I use images with alpha channels as shapes? How would I do that?
UPDATE: I have just realized that the best option now is to pass another texture to the shader. How can I do it? Right now the shader has 2 properties: varying highp vec2 textureCoordinate; and uniform sampler2D inputImageTexture;. And I want to pass another texture and check its alpha channel inside the shader code.
UPDATE 2: I have tried to load the shape into the shader (I think so). I'm using the GPUImage framework, so I have set a sampler2D with my shape as a uniform and tried to check the alpha channel there. Is that okay? On my iPhone 5s it looks fine, but what about performance?
A shader alone won't give you anything, because the result of a shader's routine is an image.
In an image-based approach the problem needs to be reformulated. Let's say you have a grayscale image with rendered shapes, where white and gray pixels define shapes and black pixels define nothing. You must know the center of each shape and the bounding circle of each shape. Note that the bounding circles of the shapes must not intersect each other.
Then you can probe a point against the shapes, first by bounding circles (this probe is necessary to distinguish the shapes, because by peeking at a pixel in the image you can only know whether your point intersects some shape), and second by peeking at the corresponding pixel. If both probes are positive, then your point is inside a shape.
If you have an analytic shape representation, such as a circle, all you need to find is an equation that describes that shape.
If you have a pre-drawn shape and you can pack it into a texture, you can do that as well. All you need is to treat the object as a rectangle (the whole texture image), do a rectangle check just like for the circle, then fetch the colour of that texture and do a colour check. What to check in the colour really depends on you: it can be black-and-white, the alpha channel... anything really.
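A minimal sketch of that texture-mask check, assuming a second sampler named maskTexture and that the shape's opaque pixels have alpha above 0.5 (the name and threshold are illustrative):

varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform sampler2D maskTexture;   // the pre-drawn shape packed into a texture

void main() {
  highp vec4 mask = texture2D(maskTexture, textureCoordinate);
  if (mask.a > 0.5) {
    // this fragment falls inside the shape
    gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
  } else {
    gl_FragColor = vec4(0.0);
  }
}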
If you have a complex drawn object, such as a 3D model, you need to get its projection (silhouette), which can be drawn to a framebuffer object and again used as a texture; or, better yet, draw it directly to the scene using an additional buffer such as the stencil buffer, which you can then again use with the fragment shader to check for a specific value.
For an arbitrary polygonal shape:
1. Triangulate your shape (for example using Delaunay triangulation).
2. Check your point against every triangle; this is trivial (see the sketch after this list).
3. Improve performance by using bounding shapes around the original polygons and spatial partitioning for the triangles.
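A minimal sketch of the per-triangle check using the same-side (sign) test; the function name and parameter layout are illustrative:

bool pointInTriangle(vec2 p, vec2 a, vec2 b, vec2 c) {
  // Signed areas of the sub-triangles; the point is inside when they all share a sign.
  float d1 = (p.x - b.x) * (a.y - b.y) - (a.x - b.x) * (p.y - b.y);
  float d2 = (p.x - c.x) * (b.y - c.y) - (b.x - c.x) * (p.y - c.y);
  float d3 = (p.x - a.x) * (c.y - a.y) - (c.x - a.x) * (p.y - a.y);
  bool hasNeg = (d1 < 0.0) || (d2 < 0.0) || (d3 < 0.0);
  bool hasPos = (d1 > 0.0) || (d2 > 0.0) || (d3 > 0.0);
  return !(hasNeg && hasPos);   // inside the triangle, or exactly on an edge
}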

Texture Sampling Coordinates to Render a Sprite

Let's say we have a texture (in this case 8x8 pixels) we want to use as a sprite sheet. One of the sub-images (sprite) is a subregion of 4x3 inside the texture, like in this image:
(Normalized texture coordinates of the four corners are shown)
Now, there are basically two ways to assign texture coordinates to a 4px x 3px-sized quad so that it effectively becomes the sprite we are looking for. The first and most straightforward is to sample the texture at the corners of the subregion:
// Texture coordinates
GLfloat sMin = (xIndex0 ) / imageWidth;
GLfloat sMax = (xIndex0 + subregionWidth ) / imageWidth;
GLfloat tMin = (yIndex0 ) / imageHeight;
GLfloat tMax = (yIndex0 + subregionHeight) / imageHeight;
Although when first implementing this method, ca. 2010, I realized the sprites looked slightly 'distorted'. After a bit of search, I came across a post in the cocos2d forums explaining that the 'right way' to sample a texture when rendering a sprite is this:
// Texture coordinates
GLfloat sMin = (xIndex0 + 0.5) / imageWidth;
GLfloat sMax = (xIndex0 + subregionWidth - 0.5) / imageWidth;
GLfloat tMin = (yIndex0 + 0.5) / imageHeight;
GLfloat tMax = (yIndex0 + subregionHeight - 0.5) / imageHeight;
...and after fixing my code, I was happy for a while. But somewhere along the way, and I believe it was around the introduction of iOS 5, I started feeling that my sprites weren't looking good. After some testing, I switched back to the 'blue' method (second image) and now they seem to look good, but not always.
Am I going crazy, or something changed with iOS 5 related to GL ES texture mapping? Perhaps I am doing something else wrong? (e.g., the vertex position coordinates are slightly off? Wrong texture setup parameters?) But my code base didn't change, so perhaps I am doing something wrong from the beginning...?
I mean, at least with my code, it feels as if the "red" method used to be correct but now the "blue" method gives better results.
Right now, my game looks OK, but I feel there is something half-wrong that I must fix sooner or later...
Any ideas / experiences / opinions?
ADDENDUM
To render the sprite above, I would draw a quad measuring 4x3 in orthographic projection, with each vertex assigned the texture coords implied in the code mentioned before, like this:
// Top-Left Vertex
{ sMin, tMin };
// Bottom-Left Vertex
{ sMin, tMax };
// Top-Right Vertex
{ sMax, tMin };
// Bottom-right Vertex
{ sMax, tMax };
The original quad is created from (-0.5, -0.5) to (+0.5, +0.5); i.e., it is a unit square at the center of the screen, which is then scaled to the size of the subregion (in this case, 4x3) and positioned so that its center lies at integer (x, y) coordinates. I suspect this has something to do with it too, especially when the width, the height, or both are not even?
ADDENDUM 2
I also found this article, but I'm still trying to put it together (it's 4:00 AM here)
http://www.mindcontrol.org/~hplus/graphics/opengl-pixel-perfect.html
There's slightly more to this picture than meets the eye: the texture coordinates are not the only factor in where the texture gets sampled. In your case I believe the blue is probably what you want to have.
What you ultimately want is to sample each texel at its center. You don't want to be taking samples on the boundary between two texels, because that either combines them with linear sampling, or arbitrarily chooses one or the other with nearest sampling, depending on which way the floating point calculations round.
Having said that, you might think that you don't want your texcoords at (0,0), (1,1) and the other corners, because those lie on texel boundaries. However, an important thing to note is that OpenGL samples textures at the center of a fragment.
For a super simple example, consider a 2 by 2 pixel monitor, with a 2 by 2 pixel texture.
If you draw a quad from (0,0) to (2,2), this will cover 4 pixels. If you texture map this quad, it will need to take 4 samples from the texture.
If your texture coordinates go from 0 to 1, then OpenGL will interpolate this and sample from the center of each pixel, with the lower left texcoord starting at the bottom left corner of the bottom left pixel. This will ultimately generate texcoord pairs of (0.25, 0.25), (0.75, 0.75), (0.25, 0.75), and (0.75, 0.25), which puts the samples right in the middle of each texel, which is what you want.
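Another way to see it: with texcoords running 0 to 1 across a quad that covers N pixels, the fragment at pixel index i is rasterized at its center, so its interpolated texcoord is (i + 0.5) / N. For N = 2 that gives 0.25 and 0.75, exactly the texel centers of a 2-texel-wide texture.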
If you offset your texcoords by half a pixel, as in the red example, then the interpolation will be off, and you'll end up sampling the texture away from the texel centers.
So long story short, you want to make sure that your pixels line up correctly with your texels (don't draw sprites at non-integer pixel locations), and don't scale sprites by arbitrary amounts.
If the blue square is giving you bad results, can you give an example image, or describe how you're drawing it?

Water surface sample for iOS

I'm looking for a water surface effect sample like Pocket pond HD. I have found some tutorials:
iPhone OpenGL demo water waves
Waves effect
However, they're sketchy.
It is very simple.
You just have to make a 2D heightmap (a 2D array of the water height at each point). With the heightmap, you can calculate (approximate, interpolate) a normal at each point from the nearest height samples.
Then you perform a "simple raytracing". You "refract each ray" according to the normal, intersect it with the bottom plane, and fetch a color from the texture at that point.
Practically: you make a triangle mesh from the heightmap and render those triangles. You can send the normals in the Vertex Buffer or compute them in the Vertex Shader. The raytracing is done in the Fragment Shader. The direction of each ray can be (0, 0, 1). You refract it by the current normal and scale the result so that its Z coordinate equals the water depth. The new X and Y coordinates are the texture coordinates.
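A minimal fragment-shader sketch of that lookup; variable names are illustrative, the surface XY is assumed to already map to 0..1 texture space, and the sign convention follows GLSL's refract(), so the incoming ray points down:

varying mediump vec3 v_normal;        // water-surface normal, interpolated from the mesh
varying mediump vec2 v_position;      // XY of this point on the surface, in 0..1 texture space
uniform mediump float u_waterDepth;   // distance from the surface to the bottom plane
uniform sampler2D u_bottomTexture;

void main() {
  mediump vec3 ray = refract(vec3(0.0, 0.0, -1.0), normalize(v_normal), 1.0 / 1.33); // air -> water
  ray *= u_waterDepth / abs(ray.z);      // scale so the ray reaches the bottom plane
  mediump vec2 uv = v_position + ray.xy; // offset the lookup by the refracted displacement
  gl_FragColor = texture2D(u_bottomTexture, uv);
}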
To make an animation, just update the heightmap over time.
