I need to determine whether a point lies inside a shape. If the shape is a circle, it's easy:
// aspect-corrected coordinate so the distance test describes a true circle
highp vec2 textureCoordinateToUse = vec2(textureCoordinate.x, (textureCoordinate.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
highp float dist = distance(center, textureCoordinateToUse);
textureCoordinateToUse = textureCoordinate;
if (dist < radius) {
    // the point is inside the circle
    ...
}
But what if my shape is a star, a hexagon, a spiral, etc.? Does anyone know a fast way to do it? Can I use images with alpha channels as shapes? How would I do that?
UPDATE: I have just realized that the best option is to pass another texture to the shader. How can I do that? Right now the shader has two properties: varying highp vec2 textureCoordinate; and uniform sampler2D inputImageTexture;. I want to pass in another texture and check its alpha channel inside the shader code.
UPDATE 2: I have tried to load the shape into the shader (I think). I'm using the GPUImage framework, so I set a sampler2D with my shape as a uniform and checked its alpha channel in the shader. Is that okay? On my iPhone 5s it looks fine, but what about performance?
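For reference, here is a minimal sketch of such a fragment shader. The names inputImageTexture2 and textureCoordinate2 are what GPUImage's two-input filters typically use, but treat them as assumptions; the 0.5 threshold and the "effect" applied inside the shape are placeholders:

varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;  // coordinate for the mask texture
uniform sampler2D inputImageTexture;    // source image
uniform sampler2D inputImageTexture2;   // shape mask with an alpha channel

void main()
{
    lowp vec4 sourceColor = texture2D(inputImageTexture, textureCoordinate);
    lowp float maskAlpha = texture2D(inputImageTexture2, textureCoordinate2).a;
    if (maskAlpha > 0.5) {
        // inside the shape: apply the effect (placeholder: darken)
        gl_FragColor = vec4(sourceColor.rgb * 0.5, sourceColor.a);
    } else {
        // outside the shape: pass the pixel through unchanged
        gl_FragColor = sourceColor;
    }
}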
A shader by itself won't give you anything, because the result of a shader's work is an image.
With an image-based approach the problem needs to be reformulated. Let's say you have a grayscale image of rendered shapes, where white and gray pixels define the shapes and black pixels define nothing. You must know the center of each shape and its bounding circle. Note that the bounding circles of different shapes must not intersect each other.
Then you can probe a point against the shapes, first by the bounding circles (this probe is needed to distinguish shapes, because by peeking a pixel from the image you only learn whether the point hits some shape), and second by peeking the pixel itself. If both probes are positive, your point is inside the shape.
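A sketch of that two-stage probe as a GLSL helper, assuming the grayscale shape image is bound as shapeMask and the shape's bounding circle is passed in (all names are made up for illustration):

uniform sampler2D shapeMask;  // grayscale image: non-black pixels are "shape"

bool pointInsideShape(highp vec2 point, highp vec2 shapeCenter, highp float boundRadius)
{
    // Probe 1: bounding circle. Cheap rejection, and it identifies which shape we might be in.
    if (distance(point, shapeCenter) > boundRadius) {
        return false;
    }
    // Probe 2: peek the pixel itself. Non-black means the point is inside the shape.
    return texture2D(shapeMask, point).r > 0.01;
}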
If you have an analytic representation of the shape, such as a circle, all you need to find is an equation (or inequality) that describes that shape.
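For example, a regular hexagon has such an equation; a sketch of the inside test (flat-topped hexagon, apothem = distance from the center to the middle of an edge; as with the circle, aspect-correct the coordinate first):

bool insideHexagon(highp vec2 point, highp vec2 center, highp float apothem)
{
    highp vec2 q = abs(point - center);
    // two edge-normal tests cover all six edges once the point is folded into the first quadrant
    return q.y <= apothem && dot(q, vec2(0.866025, 0.5)) <= apothem;
}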
If you have a pre-drawn shape, you can pack it into a texture as well. Treat the object as a rectangle (the whole texture image), do a rectangle check analogous to the circle check, then fetch the texture colour at that coordinate and do a colour check. What exactly to test in the colour is up to you: black vs. white, the alpha channel... anything, really.
If you have a complex drawn object such as a 3D model, you need its projection (silhouette), which can be rendered into a framebuffer object and again used as a texture; or better yet, draw it directly into the scene using an additional buffer such as the stencil buffer, which you can then test against a specific value.
For an arbitrary polygonal shape:
1. Triangulate your shape (for example with a Delaunay triangulation).
2. Check your point against every triangle; that test is trivial (see the sketch after this list).
3. Improve performance with bounding shapes around the original polygons and spatial partitioning of the triangles.
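A sketch of the step-2 test in GLSL: the point is inside a triangle when it lies on the same side of all three edges (all three 2D cross products have the same sign):

highp float edgeSign(highp vec2 p, highp vec2 a, highp vec2 b)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

bool pointInTriangle(highp vec2 p, highp vec2 a, highp vec2 b, highp vec2 c)
{
    highp float d1 = edgeSign(p, a, b);
    highp float d2 = edgeSign(p, b, c);
    highp float d3 = edgeSign(p, c, a);
    bool hasNeg = (d1 < 0.0) || (d2 < 0.0) || (d3 < 0.0);
    bool hasPos = (d1 > 0.0) || (d2 > 0.0) || (d3 > 0.0);
    return !(hasNeg && hasPos);  // all on one side (points on an edge count as inside)
}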
Related
I'm trying to create a WebGL shader that can output both solid rectangles and hollow rectangles (with a fixed border width) within the same draw call, and so far the best approach I've come up with is this:
In the vertex shader, send in a uniform value uniform float borderWidth
and then inside the fragment shader, I need a coordinate space with x in [0, 1] and y in [0, 1], where x = 0 at the leftmost edge and y = 0 at the topmost edge of the shape, or something like that. Once I have that, drawing the lines is straightforward and I can figure it out from there; I could use something like:
1a - Have a smoothstep from x = 0 to x = borderWidth for the left vertical line, and from x = 1 - borderWidth to x = 1 for the right vertical line
1b - Something similar with the y coordinate for the horizontal lines
The Problem
The problem is that I can't create that coordinate space. I tried using gl_FragCoord, but I think it's undefined for shapes rendered in TRIANGLES mode. So I'm a bit lost. Does anyone have any suggestions?
gl_FragCoord is never undefined: it is the position of the fragment in the output buffer (e.g. your screen). If you're rendering to the center of a Full HD screen, its xyz would be roughly vec3(960, 540, depth). However, that data is of no use for what you're trying to do.
What you describe sounds like you need barycentric coordinates, which you define as additional attributes next to your vertex positions and pass to the fragment shader as varyings so they're interpolated across each triangle. If you render non-indexed geometry and use WebGL 2, you can derive the barycentrics from gl_VertexID % 3 instead.
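For the rectangle case the extra attribute can simply be a [0,1]x[0,1] quad coordinate per corner, which gives exactly the coordinate space described in the question. A sketch, with made-up attribute and uniform names:

// --- vertex shader (sketch) ---
attribute vec4 a_position;
attribute vec2 a_quadCoord;   // (0,0), (1,0), (0,1), (1,1) at the four corners
varying vec2 v_quadCoord;

void main() {
    v_quadCoord = a_quadCoord;
    gl_Position = a_position;
}

// --- fragment shader (sketch) ---
precision mediump float;
uniform float borderWidth;    // measured in the same [0,1] quad space
varying vec2 v_quadCoord;

void main() {
    // distance to the nearest rectangle edge, in quad space
    vec2 d = min(v_quadCoord, 1.0 - v_quadCoord);
    float edge = min(d.x, d.y);
    // 1.0 inside the border band, 0.0 in the interior; borderWidth >= 0.5 fills the whole rectangle
    float border = 1.0 - step(borderWidth, edge);
    gl_FragColor = vec4(vec3(border), 1.0);
}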
I'm implementing a paint app using OpenGL/GLSL.
There is a feature where the user draws a "mask" with a brush that uses a pattern image; meanwhile, the background changes according to the brush position. Take a look at the video to see what I mean: video
I used CALayer's mask (iOS) to achieve the effect shown in the video, but that implementation is very costly and the fps is pretty low, so I decided to use OpenGL instead.
For the OpenGL implementation, I use the stencil buffer for masking:
glEnable(GL_STENCIL_TEST);

// First pass: always pass the stencil test and write 1 wherever the brush pattern renders
glStencilFunc(GL_ALWAYS, 1, 0);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
// Draw mask (brush pattern)

// Second pass: only draw where the stencil value equals 1
glStencilFunc(GL_EQUAL, 1, 255);
// Draw gradient background
// Display the buffer
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
The problem: the stencil buffer doesn't work with alpha, which is why I can't use semi-transparent patterns for the brushes.
The question: how can I achieve the effect from the video using OpenGL/GLSL but without the stencil buffer?
Since your background is already generated (per the comments), you can simply use two textures in the shader to draw each of the segments. You will need to redraw all of them until the user lifts their finger, though.
So assume you have a texture with a white footprint and an alpha channel, footprintTextureID, and a background texture, backgroundTextureID. You need to bind each texture to its own texture unit (e.g. active texture units 1 and 2) and pass the two as sampler uniforms to the shader.
Now, in your vertex shader, you need to generate the background texture coordinates from the position. There should already be a line similar to gl_Position = computedPosition;, so you just need to assign another varying value:
backgroundTextureCoordinates = vec2((computedPosition.x+1.0)*0.5, (computedPosition.y+1.0)*0.5);
or if you need to flip vertically
backgroundTextureCoordinates = vec2((computedPosition.x+1.0)*0.5, (-computedPosition.y+1.0)*0.5);
(The reason for this equation is that the output vertex positions are in the interval [-1, 1] but texture coordinates use [0, 1]: [-1, 1] + 1 = [0, 2], then [0, 2] * 0.5 = [0, 1].)
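Putting that together, the relevant part of the vertex shader might look roughly like this (a sketch; position, inputTextureCoordinate and the missing transform are assumptions about your existing shader):

attribute vec4 position;
attribute vec2 inputTextureCoordinate;   // brush/footprint UVs

varying vec2 footprintTextureCoordinate;
varying vec2 backgroundTextureCoordinates;

void main() {
    vec4 computedPosition = position;    // or position multiplied by whatever transform you already use
    footprintTextureCoordinate = inputTextureCoordinate;
    // map clip space [-1,1] to texture space [0,1]
    backgroundTextureCoordinates = vec2((computedPosition.x + 1.0) * 0.5,
                                        (computedPosition.y + 1.0) * 0.5);
    gl_Position = computedPosition;
}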
OK, so assuming you bound all of these correctly, you now only need to multiply the colors in the fragment shader to get the blended color:
uniform sampler2D footprintTexture;
varying lowp vec2 footprintTextureCoordinate;
uniform sampler2D backgroundTexture;
varying lowp vec2 backgroundTextureCoordinates;
void main() {
    lowp vec4 footprintColor = texture2D(footprintTexture, footprintTextureCoordinate);
    lowp vec4 backgroundColor = texture2D(backgroundTexture, backgroundTextureCoordinates);
    gl_FragColor = footprintColor * backgroundColor;
}
If you wanted, you could multiply by the footprint's alpha value instead, but that only loses flexibility. As long as the footprint texture is white it makes no difference, so it's your choice.
Stencil is a boolean on/off test, so as you say it can't cope with alpha.
The only GL technique that works with alpha is blending, but because the background color changes between frames you can't simply flatten this into a single layer in a single pass.
To my mind it sounds like you need to maintain multiple independent layers in off-screen buffers, and then blend them together per frame to form what is shown on screen. This gives you complete freedom in how you update each layer per frame.
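As a rough illustration, the per-frame composite of two such layers can itself be a tiny fragment shader, which is where the semi-transparent brush alpha finally gets honoured. This is only a sketch for this particular effect; the layer texture names are placeholders:

precision mediump float;

uniform sampler2D backgroundLayer;  // e.g. the animated gradient, rendered off-screen
uniform sampler2D brushLayer;       // accumulated semi-transparent brush strokes
varying vec2 textureCoordinate;

void main() {
    vec4 background = texture2D(backgroundLayer, textureCoordinate);
    float maskAlpha = texture2D(brushLayer, textureCoordinate).a;
    // reveal the background only where the brush layer has been painted,
    // respecting semi-transparent brush alpha (the part the stencil test could not do)
    gl_FragColor = vec4(background.rgb, background.a * maskAlpha);
}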
I have started experimenting with the info-beamer software for Raspberry Pi. It appears to support displaying PNGs, text, and video, but when I see GLSL primitives my first instinct is to draw a texture-mapped polygon.
Unfortunately, I can't find the documentation that would allow me to draw so much as a single triangle using the shaders. I have made a few toys using GLSL, so I'm familiar with the pipeline of setting transform matrices and drawing triangles that are filtered by the vertex and fragment shaders.
I have grepped around in info-beamer-nodes on GitHub for examples of GL drawing, but the relevant examples have so far escaped my notice.
How do I use info-beamer's GLSL shaders on arbitrary UV mapped polygons?
Based on the comment by the author of info-beamer it is clear that functions to draw arbitrary triangles are not available in info-beamer 0.9.1.
The specific effect I was going to attempt was a rectangle that fades to transparent at the margins. Fortunately, the 30c3-room/ example in the info-beamer-nodes sources illustrates a technique where an image is drawn as a rectangle and filtered by the GL fragment shader. A 1x1 white PNG is a perfectly reasonable template whose color can be replaced by the shader's calculations in my application.
While arbitrary triangles are not available, UV-mapped rectangles (and rotated rectangles) are supported and are suitable for many use cases.
I used the following shader:
uniform sampler2D Texture;
varying vec2 TexCoord;
uniform float margin_h;
uniform float margin_v;
void main()
{
    // alpha ramps from 0 at the edge to 1 one margin-width in; values above 1 are clamped on output
    float q = min((1.0-TexCoord.s)/margin_h, TexCoord.s/margin_h);
    float r = min((1.0-TexCoord.t)/margin_v, TexCoord.t/margin_v);
    float p = min(q,r);
    gl_FragColor = vec4(0,0,0,p);
}
and this Lua in my node.render():
y = phase * 30 + center.y
shader:use {
    margin_h = 0.03;
    margin_v = 0.2;
}
white:draw(x-20,y-20,x+700,y+70)
shader:deactivate()
font:write(x, y, "bacon "..(phase), 50, 1,1,0,1)
I wrote two simple WebGL demos that use a 512x512 image as a texture, but the result is not what I want. I know the solution is to use projective texture mapping (or is there another solution?), but I have no idea how to implement it in my simple demos. Can anyone help?
The results are as follows (both of them are incorrect):
Codes of demos are here: https://github.com/jiazheng/WebGL-Learning/tree/master/texture
Note: neither the model nor the texture can be modified in my case.
In order to get perspective-correct texture mapping, you must actually be using perspective. That is, instead of narrowing the top of your polygon along the x axis, move it backwards along the z axis, and apply a standard perspective projection matrix.
I'm a little hazy on the details myself, but my understanding is that the way the perspective matrix maps the z coordinate into the w coordinate is the key to getting the GPU to interpolate along the surface “correctly”.
If you have already-perspective-warped 2D geometry, then you will have to implement some method of restoring it to 3D data, computing appropriate z values. There is no way in WebGL to get a perspective quadrilateral, because the primitives are triangles and there is not enough information in three points to define the texture mapping you're looking for unambiguously; your code must use the four points to work out the corresponding depths. Unfortunately, I don't have enough grasp of the math to advise you on the details.
You must specify vec4 texture coordinates, not vec2. The 4th component of each vec4 is the homogeneous w which, when divided into x and y, produces your desired coordinate. This in turn lets the hardware's perspective-correct division give you a non-affine mapping within the triangle, provided your numbers are correct. If you transform a vec4 with w = 1 by a projection matrix in your vertex shader, you should get the correct vec4 values, ready for perspective correction going into setup and rasterization for your fragment shader. If this is unclear, look for tutorials on projective texture transformation and homogeneous coordinates in projection.
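A sketch of the fragment-shader side of this, with a vec4 texture coordinate passed in as a varying (names assumed); the divide by the 4th component is what makes the mapping non-affine, and texture2DProj does the same divide for you:

precision mediump float;

uniform sampler2D u_texture;
varying vec4 v_texCoord;   // (s*q, t*q, 0, q) set up per vertex

void main() {
    // projective divide of the texture coordinate
    vec2 st = v_texCoord.xy / v_texCoord.w;
    gl_FragColor = texture2D(u_texture, st);
    // equivalent: gl_FragColor = texture2DProj(u_texture, v_texCoord);
}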
Could someone explain the math behind the tex2D function in HLSL?
For example: given a quad with 4 vertices whose texture coordinates are (0,0), (0,1), (1,0), and (1,1), and a texture whose width and height are 640 and 480, how does the shader determine how many samples to take? If it maps texels to pixels directly, does that mean the shader needs to sample 640*480 times, with the texture coordinates increasing in some kind of gradient? Also, I would appreciate references and articles on this topic.
Thanks.
After the vertex shader, the rasterizer "converts" triangles into pixels. Each pixel is associated with a screen position, and the vertex attributes of the triangle (e.g. texture coordinates) are interpolated across the triangle, with the interpolated value stored for each pixel according to its position.
The pixel shader runs once per pixel (in most cases).
The number of times the texture is read per pixel depends on the sampler used: once with a point sampler, four times with a bilinear sampler, and a few more with more complex samplers.
So if you're drawing a fullscreen quad, the texture you're sampling is the same size as the render target, and you're using a point sampler, the texture will be sampled width*height times (once per pixel).
You can think of a texture as a 2-dimensional array of texels. tex2D simply returns the texel at the requested position, performing some kind of interpolation depending on the sampler used (texture coordinates are usually normalized relative to the texture size, so the hardware converts them to absolute texel coordinates).
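To make "some kind of interpolation" concrete, this is roughly what a bilinear fetch does, written out by hand in GLSL (texture2D stands in for tex2D here; u_texSize is an assumed uniform holding the texture size, e.g. 640x480, and the underlying sampler is assumed to use point filtering so we don't filter twice):

precision mediump float;

uniform sampler2D u_texture;
uniform vec2 u_texSize;        // e.g. vec2(640.0, 480.0)

// Roughly what a bilinear sampler does: fetch the 4 nearest texels
// and weight them by the fractional position between them.
vec4 sampleBilinear(vec2 uv) {
    vec2 texelPos = uv * u_texSize - 0.5;   // position in texel units, relative to texel centers
    vec2 base = floor(texelPos);            // lower-left of the 2x2 texel neighborhood
    vec2 f = texelPos - base;               // fractional part = blend weights
    vec2 invSize = 1.0 / u_texSize;

    vec4 t00 = texture2D(u_texture, (base + vec2(0.5, 0.5)) * invSize);
    vec4 t10 = texture2D(u_texture, (base + vec2(1.5, 0.5)) * invSize);
    vec4 t01 = texture2D(u_texture, (base + vec2(0.5, 1.5)) * invSize);
    vec4 t11 = texture2D(u_texture, (base + vec2(1.5, 1.5)) * invSize);

    return mix(mix(t00, t10, f.x), mix(t01, t11, f.x), f.y);
}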
This link might be useful: Rasterization