How can I blend edges with shaders - WebGL

Let me introduce my problem.
This is a triangle rendered with WebGL. Well, it is a little enlarged...
And this is the triangle I want to have:
So I'm looking for a shader that can blend the edges of a primitive triangle. I have an idea of how to realize one, but I'm probably not good enough to write it yet.
My idea is something like:
Based on the positions of the 3 vertices, calculate for each fragment how much the primitive covers its pixel, and then set the transparency of that pixel based on the calculated coverage...
I can get 2D coordinates from the vertex shader and use them in the fragment shader. Now I probably want to use gl_FragCoord.xy or gl_PointCoord.xy and calculate the % pixel coverage, but I'm not able to compare these values (it seems the units are different, as if I were comparing miles with millimetres, and the origin is also somewhere else for each of these vectors), so I can't calculate the final transparency value.
Can anyone help me, please? Just point me in the right direction.

There are lots of ways to achieve this.
You can render at a higher resolution. Make your canvas larger than the size it's displayed at; the browser will almost certainly bilinearly interpolate the result. Example:
<canvas width="400" height="400" style="width: 200px; height: 200px" />
declares a canvas with a 400x400 backing store that is scaled to 200x200 when displayed.
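The same setup from script, in case you size the canvas dynamically (a minimal sketch; the element and numbers are placeholders):

// Supersample 2x: the backing store is twice the displayed size in each axis.
const canvas = document.querySelector('canvas');
canvas.width = 400;            // drawing-buffer (backing store) size
canvas.height = 400;
canvas.style.width = '200px';  // CSS display size; the browser filters on scale-down
canvas.style.height = '200px';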
Here's a fiddle.
Another technique would be to compute an alpha value in the shader such that you get the blending you want along the edge of the polygon.
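As a rough sketch of that idea, assuming you pass a per-vertex "edge" varying from the vertex shader (the names and the fade width here are made up):

// Fragment shader sketch: fade alpha near the triangle's edges.
// v_edge holds (1,0,0), (0,1,0), (0,0,1) at the three vertices, so
// its smallest component approaches 0 at any edge of the triangle.
precision mediump float;
varying vec3 v_edge;
uniform vec4 u_color;

void main() {
  float d = min(min(v_edge.x, v_edge.y), v_edge.z);
  // Fade over a small fixed band; with OES_standard_derivatives you
  // could use fwidth(d) for a screen-space-constant band instead.
  float alpha = smoothstep(0.0, 0.02, d);
  gl_FragColor = vec4(u_color.rgb, u_color.a * alpha);
}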
I'm sure there are others. Most Canvas2D implementations are GPU accelerated and anti-aliased even if the GPU does not support anti-aliasing, so you could try digging through one of those.

The problem with your plan is that OpenGL applies its own test first to decide which pixels to draw: if the centre of the fragment lies inside the geometry boundary then it is rasterised; if it lies outside then it is not; if it lies exactly on the boundary then rasterisation depends on whether it is at the start or end of a horizontal or vertical run. The boundary condition ensures that where two triangles exactly meet, they never both contain the same fragments.
So if you compute coverage per fragment you're almost never going to get a number less than 50% (corners and other very thin bits of geometry being the exception). You're not going to get the complete anti-aliasing you desire. You'll get the antialiased version clipped by the aliased version.
Hardware achieves this by sampling multiple fragments per output pixel. You can simulate that by rendering to a texture at a multiple of your output size, then scaling down. The mipmap generation will filter the input image.
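A minimal WebGL sketch of that render-to-texture approach (drawScene and drawFullscreenQuad are assumed helpers, and 1024 is an arbitrary 2x of a 512x512 canvas; WebGL1 needs power-of-two sizes for mipmaps):

// 1. Render the scene into a texture larger than the canvas.
const size = 1024;
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);

const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
gl.viewport(0, 0, size, size);
drawScene();  // your existing draw calls (assumed)

// 2. Filter and draw to the canvas at output resolution.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.generateMipmap(gl.TEXTURE_2D);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
drawFullscreenQuad(tex);  // textured full-screen quad (assumed)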
That all being said, have you tried just passing antialias as true when calling canvas.getContext? That will use the hardware capabilities, subject to hardware and browser support.
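In code, that is just a context-creation attribute (note the hint may still be ignored by some browsers/drivers):

const gl = canvas.getContext('webgl', { antialias: true });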

Related

Making GL_POINTS look like a 3D Rectangle

Suppose one has an array of GL_POINTS and wants to make each appear to have a distinct "height" or "depth", so instead of appearing like a flat scatter of squares they appear to be a scatter of 3D rectangles / right rectangular prisms.
Is there a technique in WebGL that will allow one to achieve this effect? One could of course use vertices that actually articulate those 3D rectangles, but my goal is to optimize for performance as I have ~100,000 of these rectangles to render, and I thought points would be the best primitive to use.
Right now I am thinking one could probably use a series of point sprites each with varying depth, then assign each point the sprite that corresponds most closely with the desired depth (effectively quantizing the depth data field). But is there a way to keep the depth field continuous?
Any pointers on this question would be greatly appreciated!
In my experience POINTS are not faster than making your own vertices. Also, if you use instanced drawing you can get away with almost the same amount of data: you need one quad, and then a position, width, and height for each rectangle. Not sure instancing is as fast as just making all the vertices though; it might depend on the GPU/driver.
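A rough sketch of that instanced approach with the WebGL1 ANGLE_instanced_arrays extension (the buffers and attribute locations are placeholders):

const ext = gl.getExtension('ANGLE_instanced_arrays');

// One shared unit quad (6 vertices).
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);   // assumed: unit-quad positions
gl.enableVertexAttribArray(posLoc);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);

// Per-instance data: x, y, width, height for each of the ~100k rectangles.
gl.bindBuffer(gl.ARRAY_BUFFER, rectBuffer);   // assumed: Float32Array, 4 floats per rect
gl.enableVertexAttribArray(rectLoc);
gl.vertexAttribPointer(rectLoc, 4, gl.FLOAT, false, 0, 0);
ext.vertexAttribDivisorANGLE(rectLoc, 1);     // advance once per instance, not per vertex

ext.drawArraysInstancedANGLE(gl.TRIANGLES, 0, 6, numRects);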
As pointed out in many other Q&As on points, the maximum point size is GPU/driver specific and allowed to be as low as 1 pixel. There are plenty of GPUs that only allow sizes up to 256 pixels (no idea why) and a few that only allow up to 64. Yet another reason not to use POINTS.
Otherwise though, POINTS always draw a square, so you'd have to draw a square large enough to contain your rectangle and then, in the fragment shader, discard the pixels outside of the rectangle.
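For example, a fragment-shader sketch (u_rectSize is a made-up uniform giving the rectangle's size as a fraction of the point's square):

precision mediump float;
uniform vec2 u_rectSize;  // e.g. (1.0, 0.5) for a half-height rectangle

void main() {
  // gl_PointCoord runs 0..1 across the point's square.
  vec2 d = abs(gl_PointCoord - 0.5) * 2.0;  // 0 at the center, 1 at the edges
  if (d.x > u_rectSize.x || d.y > u_rectSize.y) discard;
  gl_FragColor = vec4(1.0);
}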
That's unlikely to be good for speed though. Every pixel of the square still needs to be evaluated by the fragment shader, which is slower than drawing a rectangle with vertices, since then the pixels outside the rectangle are not even considered. Further, using discard in a shader is often slower than not using it, because of things like depth-buffer updates: if there is no discard, nothing needs to be checked and the depth buffer can be updated unconditionally, separately from the shader. With discard, the depth buffer can't be updated until the GPU knows whether the shader kept or discarded the fragment.
As for making them appear 3D I'm not sure what you mean. Effectively points are just like drawing a square quad so you can put anything you want on that square. The majority of shaders on shadertoy can be adapted to draw themselves on points. I wouldn't recommend it as it would likely be slow but just pointing out that it's just a quad. Draw a texture on them, draw a procedural texture on them, draw a solid color on them, draw a procedural snail on them.
Another possible solution is to apply a normal map to the quad and then do lighting calculations using those normals, so each quad will have the correct lighting for its position relative to your light(s).
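A sketch of that normal-mapped lighting in the point's fragment shader (the normal map, light direction, and color uniforms are assumptions):

precision mediump float;
uniform sampler2D u_normalMap;  // face normals encoded into 0..1
uniform vec3 u_lightDir;        // normalized, pointing toward the light
uniform vec4 u_color;

void main() {
  vec3 n = texture2D(u_normalMap, gl_PointCoord).xyz * 2.0 - 1.0;
  float light = max(dot(normalize(n), u_lightDir), 0.0);
  gl_FragColor = vec4(u_color.rgb * light, u_color.a);
}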

Colors not blending properly in OpenGL ES

I'm trying to render 2 (light) circles in OpenGL ES in 2D. The middle is white, the border is black. It works fine, as long as they don't overlap:
But as soon as they do, I get this artifact:
I'm using glBlendFunc(GL_ONE, GL_ONE) with blending enabled of course.
What could be causing this? Is there a way to fix it?
I'd like them to blend more like this:
Thanks!
Are your circles currently linear gradients? You might get less of an artifact if you have a different curve.
Based on your example, though, it looks like you want the maximum intensity of the two circles, not the sum of the intensities. It appears that Apple's OpenGL ES 2.0 implementation supports the EXT_blend_minmax extension, which lets you specify that the resulting fragment values should be the maximum of the inbound and existing values. Maybe try that?
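For illustration, here is what max blending looks like through WebGL's version of the same extension (on iOS the equivalent C call would be glBlendEquationEXT(GL_MAX_EXT)):

// Take the max of source and destination instead of adding them.
const ext = gl.getExtension('EXT_blend_minmax');
if (ext) {
  gl.enable(gl.BLEND);
  gl.blendFunc(gl.ONE, gl.ONE);   // factors are still applied before MAX
  gl.blendEquation(ext.MAX_EXT);
}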
The result you're seeing is exactly what should come out for linear gradients. Hint: open up Photoshop or GIMP, draw two radial gradients into two layers, and set them to "Addition" blending mode. It will look exactly like your picture.
An effect like the one you desire is obtained with squared gradients. If your gradient is in the range 0…1, take the square of the value and draw that. You may apply a sqrt later if you want to linearize the individual gradients.
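In shader terms the suggestion is simply this (a sketch; v_gradient is whatever 0…1 radial value you already compute):

precision mediump float;
varying float v_gradient;  // assumed: 0 at the border, 1 at the center

void main() {
  float g = v_gradient * v_gradient;  // square the gradient before additive blending
  gl_FragColor = vec4(vec3(g), 1.0);
}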
Note that this is not easily done using the blending stage; it can be done using multiple passes, but then it's actually more straightforward to use a shader to combine passes from two FBOs.

Direct3D line thickness, with a slightly different take

I realise that Direct3D doesn't properly support line thickness, and in fact on most graphics hardware lines are actually just collapsed rectangles.
At least I thought that was the case, until I tried to actually implement line thickness by rendering rectangles instead of lines, and found that they lost detail and eventually became invisible as I zoomed out; whereas line primitive types seem to be guaranteed to always be 1 pixel wide regardless of scale.
I'm creating an AutoCAD viewer, in which lines are a fairly staple entity and need to support a thickness; but regardless of zoom level they must always be at least one pixel wide.
Can anyone suggest a strategy for achieving this; ideally a render-settings adjustment, as opposed to working out whether it should render lines instead of rectangles?
[Edit] Should have mentioned; it's Direct3D 9 in .Net via SlimDX.
The simplest approach I can think of would be to render the lines as simple quads in 2D, and have the pixel shader write an oDepth value containing the correct 3D perspective depth.
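A GLSL sketch of the same idea, since the shader logic is identical (in D3D9 HLSL you would write to the DEPTH output semantic instead; v_depth is an assumed varying carrying the perspective-correct depth):

// Desktop GLSL fragment shader for a screen-space line quad.
varying float v_depth;  // perspective-correct depth computed per vertex (assumed)

void main() {
  gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);  // line color
  gl_FragDepth = v_depth;                   // override the quad's flat 2D depth
}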

XNA Adding Craters (via GPU) with a "Burn" Effect

I am currently working on a 2D "Worms" clone in XNA, and one of the features is "deformable" terrain (e.g. when a rocket hits the terrain, there is an explosion and a chunk of the terrain disappears).
How I am currently doing this is by using a texture that has a progressively higher red value as it approaches the center. I cycle through every pixel of that "Deform" texture, and if the current pixel overlaps a terrain pixel and has a high enough red value, I set that pixel in the color array representing the terrain to transparent. If the current pixel does NOT have a high enough red value, I blacken the terrain color (it gets blacker the closer the red value is to the threshold). At the end of this operation I use SetData to update my terrain texture.
I realize this is not a good way to do it, not only because I have read about pipeline stalls and such, but also because it can become quite laggy if lots of craters are being added at the same time. I want to remake my crater generation on the GPU instead, using render targets "ping-ponging" between being the target and the texture to modify. That isn't the problem; I know how to do that. The problem is that I don't know how to keep my burn effect using this method.
Here's how the burn effect looks right now:
Does anybody have an idea how I would create a similar burn effect (darkening the edges around the formed crater)? I am completely unfamiliar with Shaders but if it requires it I would be really thankful if someone walked me through on how to do it. If there are any other ways that'd be great too.
Sounds like you're headed in the right direction. But you're doing a lot of things by hand that can also be done by just drawing sprites and applying the right formulas.
For example:
Suppose your terrain is saved in the alpha channel of a giant texture: 1 is terrain, 0 is nothing.
An explosion happens and the terrain has to be deformed. Update your texture easily by just drawing a black transparent sphere (or explosion area) onto it. The terrain is gone, because the alpha value of the black sphere is 0. Your texture is now up to date; everything was done by the SpriteBatch, and nothing had to be checked.
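In shader form this is the same stamp done during the render-target ping-pong (a GLSL sketch; XNA itself would do this with SpriteBatch or an HLSL effect, and the uniforms are made up):

precision mediump float;
varying vec2 v_uv;
uniform sampler2D u_terrain;  // previous terrain state (ping-pong source)
uniform vec2 u_crater;        // crater center in texture coordinates
uniform float u_radius;       // crater radius in texture coordinates

void main() {
  vec4 t = texture2D(u_terrain, v_uv);
  // Punch the crater: zero the alpha inside the explosion radius.
  if (distance(v_uv, u_crater) < u_radius) t.a = 0.0;
  gl_FragColor = t;
}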
I don't know if you wanted a solution for this as well, but now you have one.
For the burn effect
Now that we have our terrain in a texture, we can do a post effect on the drawing by using a shader (just like you said). The shader obtains the texture's alpha channel and can now do different things to get our burn effect.
The first option is to do edge detection. Check a few pixels in all 4 directions and see if the pixel is at the edge. If so, do a burn by, for example, multiplying it with the distance to the edge (or any other function you like), as sketched below.
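A GLSL sketch of that first option (XNA would use the HLSL equivalent; the texture names, sample offsets, and burn ramp are all made up):

precision mediump float;
varying vec2 v_uv;
uniform sampler2D u_terrain;  // terrain in the alpha channel: 1 = ground
uniform vec2 u_texelSize;     // 1.0 / texture dimensions

void main() {
  vec4 c = texture2D(u_terrain, v_uv);
  if (c.a < 0.5) discard;  // no terrain at this pixel

  // Sample alpha in the 4 directions; missing neighbors mean we are near an edge.
  float n = 0.0;
  n += texture2D(u_terrain, v_uv + vec2( u_texelSize.x, 0.0)).a;
  n += texture2D(u_terrain, v_uv + vec2(-u_texelSize.x, 0.0)).a;
  n += texture2D(u_terrain, v_uv + vec2(0.0,  u_texelSize.y)).a;
  n += texture2D(u_terrain, v_uv + vec2(0.0, -u_texelSize.y)).a;

  float burn = n / 4.0;  // 1 deep inside the ground, < 1 near a crater edge
  gl_FragColor = vec4(c.rgb * burn, c.a);  // darken toward the edge
}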
Another way is quite similar to the first one, but does it in two steps. First you do the same kind of edge detection, but you save the edges in a separate texture. Now, when you are drawing your texture, you can overlay your edges, so it's much the same as just drawing the ground at once.
The main difference with the second option is that you can also choose to just draw your normal ground, since you are not adjusting the pixels of the ground texture during rendering.
I know this is a long story, but it is a nice technique. Have a look at toon shaders, they do edge detection as well, even though it is 3D.
Keywords: Toon shading, HLSL, Post effects, edge detection, image processing.
Recommended reading: http://rbwhitaker.wikidot.com/xna-tutorials

Rendering a Photoshop-style brush in OpenGL

I have lines that are programmatically defined by my program. What I want to do is render a brush stroke along them.
The way I think the type of brush I want works is: it simply has a mostly transparent texture, and you render this texture centered on EVERY PIXEL in the path, so the copies blend together to create the stroke.
Now, assuming this even works, I'm going to make a bet that it will be WAY too expensive (I'm targeting the iPad and other mobile chips, which HATE fill rate and alpha blending).
So, what other options are there?
If it could be done in real time (that is, with the path spline updating every frame) that would be ideal, but if not, within a fraction of a second on the iPad would be good too (the splines connect nodes; the user can drag nodes around, thus transforming the spline, but it would be acceptable to revert to a simpler fill for the spline while it is being moved around, then recalculate the brush once they release it).
For those wondering, I'm trying to get it so the thick lines look like they have been made with a pencil. It should look as close to real life as possible.
I considered just rendering the brushed spline to a texture, but as the spline can be any length, in any direction, dedicating a WHOLE rectangular texture to encompass the whole spline would be way too costly...
The spline is inevitably broken up into quads for rendering, so I thought of initially rendering the brush to a texture, then generating an optimized texture with each of the quads separated and packed as neatly as possible into it.
But two render-to-texture passes, an algorithm to create the optimized texture, making sure the quads still blend seamlessly with each other... it sounds like a nightmare, and that's not even making it real-time.
So yeah: any ideas on how to draw thick, pencil-like lines that follow a spline in real time on the iPad in OpenGL?
From my point of view, what you want is to render a line that:
is textured
has the edges fade off (i.e. no sharp edge to it)
follows a spline
To achieve these goals I would first of all break the spline up into a series of line segments that closely approximate the curve (you can make it more or less fine-grained depending on how accurate you want it to be versus how fast you want it to render).
Once you have these, you will need to make each segment into 3 quads: one that goes over the middle of the line segment and serves as the fully opaque part of the line, and one on each edge of the line that fades out to be totally transparent.
You will need to use a little bit of math to make sure that you extrude the quads along a vector that bisects the two segments equally (i.e. so that the angles between each segment and the extrusion vector are equal). This ensures that you don't have gaps on the obtuse side of a join or overlaps on the acute side.
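A small JavaScript sketch of that bisector math (dir1 and dir2 are the normalized directions of the two segments meeting at a joint; all names here are mine):

// Extrusion ("miter") vector at a joint, bisecting the two segments.
function miter(dir1, dir2, halfWidth) {
  // Average the two directions, then take the perpendicular of the result.
  let tx = dir1.x + dir2.x, ty = dir1.y + dir2.y;
  const len = Math.hypot(tx, ty) || 1.0;
  tx /= len; ty /= len;
  const nx = -ty, ny = tx;  // perpendicular of the averaged direction
  // Scale so the stroke keeps a constant width through the joint:
  // halfWidth divided by the cosine between miter normal and segment normal.
  const scale = halfWidth / Math.max(nx * -dir1.y + ny * dir1.x, 1e-4);
  return { x: nx * scale, y: ny * scale };
}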
After all of this, you just need to use the vertex positions as the UV coordinates (probably scaled, though) and allow the texture to wrap around.
Using this method, you should end up with a mesh that has a solid thick line running through the middle of your spline, with "fins" that taper off into complete transparency. This should approximate the effect you want quite closely while only rendering the relevant pixels (i.e. no giant areas of completely transparent pixels) and with very little memory overhead.
I've been a little vague here as it's kind of hard to explain with text alone, without writing an in-depth tutorial. If you need more info, just comment on what you're stuck on and I'll elaborate further.
