I'm trying to render 2 (light) circles in OpenGL ES in 2D. The middle is white, the border is black. It works fine, as long as they don't overlap:
But as soon as they do, I get this artifact:
I'm using glBlendFunc(GL_ONE, GL_ONE) with blending enabled, of course.
What could be causing this? Is there a way to fix it?
I'd like them to blend more like this:
Thanks!
Are your circles currently linear gradients? You might get less of an artifact if you have a different curve.
Based on your example, though, it looks like you want the maximum intensity of the two circles, not the sum of the intensities. It appears that Apple's OpenGL ES 2.0 implementation supports the EXT_blend_minmax extension, which lets you specify that the resulting fragment values should be the maximum of the incoming and existing values. Maybe try that?
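A minimal sketch of what that might look like, assuming the extension is advertised in the GL_EXTENSIONS string:

/* Take the per-channel maximum instead of the sum when blending.
   GL_MAX_EXT comes from EXT_blend_minmax; check for it before use. */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);  /* the factors are effectively ignored by MAX */
glBlendEquation(GL_MAX_EXT);  /* ES 2.0 entry point, extension token */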
The result you're seeing is exactly what should come out for linear gradients. Hint: open up Photoshop or The GIMP, draw two radial gradients into two layers, and set them to "Addition" blending mode. It will look exactly like your picture.
An effect like the one you want is achieved with squared gradients. If your gradient is in the range 0…1, take the square of the value and draw that. You can apply a sqrt later if you want to linearize the individual gradients.
Note that this is not something easily done using the blending stage; it can be done with multiple passes, but then it's actually more straightforward to use a shader to combine the passes from two FBOs.
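For instance, something along these lines (a sketch; v_dist is an assumed varying running from 0 at the circle centre to 1 at the rim):

/* Hypothetical GLSL ES fragment shader, stored as a C string: squares the
   linear radial falloff before it reaches the additive blend stage. */
static const char *circleFragSrc =
    "precision mediump float;\n"
    "varying float v_dist;\n"
    "void main() {\n"
    "    float g = 1.0 - v_dist;  // linear gradient, 1 at the centre\n"
    "    float i = g * g;         // squared gradient\n"
    "    gl_FragColor = vec4(vec3(i), 1.0);\n"
    "}\n";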
I'm completely new to DirectX (11) so this question will be extremely basic. Sorry about that.
I'd like to draw a cube on screen that has solid-coloured faces. All of the examples that I've seen have 8 vertices, with a colour defined at each vertex (red, green, blue). The pixel shader then interpolates between these vertices to give a spectrum of colours. This looks nice, but isn't what I'm trying to achieve. I'd just like a cube with six, coloured faces.
Two ideas come to mind:
use 24 vertices, and have each vertex referenced only a single time, i.e. no sharing. This way I can define three different colours at each 3D position, one for each face.
use a texture for each face that 'stretches' to give the face the correct colour. I'm not very familiar with textures right now, so not all that sure about this idea.
What's the typical/canonical way to achieve this effect? I'm sure this 'problem' has been solved many, many times before.
For your particular problem, vertex coloring might be the easiest and best solution. But the more complex your models become, the more complicated it gets to create a proper vertex coloring, because you don't always want your imagination to be limited to the underlying geometry.
In general, 3D objects are colored with one or more textures. For that you create a UV mapping (wiki), which unwraps your three-dimensional surface onto a 2D plane, the texture. Now you can freely paint colors onto your object at whatever resolution you want, which gives you the most freedom to make the model look the way you want.
Of course each application has its own characteristics, so some projects would choose another approach, but I think this is the most popular way to colorize models.
Option 1 is the way to go if:
You want zero color bleed between faces
You want zero texture bleed between faces
You later want to use the color as a lighting scheme, à la Minecraft
Caveats:
Could use more memory, as more verts are being used (there are some techniques around this, depending on how large your object is and its spatial resolution, e.g. using 1 byte for x/y/z instead of a float). A sketch of the 24-vertex layout follows below.
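Something like this (a sketch; the struct and names are made up):

/* The no-sharing layout: 4 vertices per face, 24 in total. */
typedef struct {
    float pos[3];
    float color[3];
} Vertex;

/* Front face, solid red. The other five faces repeat the pattern with
   their own colour, so each corner position appears three times. */
static const Vertex frontFace[4] = {
    { { -1.0f, -1.0f, 1.0f }, { 1.0f, 0.0f, 0.0f } },
    { {  1.0f, -1.0f, 1.0f }, { 1.0f, 0.0f, 0.0f } },
    { {  1.0f,  1.0f, 1.0f }, { 1.0f, 0.0f, 0.0f } },
    { { -1.0f,  1.0f, 1.0f }, { 1.0f, 0.0f, 0.0f } },
};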
Let me introduce my problem.
This is a triangle rendered with WebGL. Well, it is a little enlarged...
And this is the triangle I want to have:
So I'm looking for a shader that will be able to blend the edges of a primitive triangle. I have an idea of how to realize one, but I'm probably not good enough to write it yet.
My idea is something like:
Based on the positions of the 3 vertices, calculate for each fragment how much of its pixel the primitive covers, and then set the transparency of that pixel based on the calculated coverage...
I can get 2D coordinates from the vertex shader and use them in the fragment shader. Now I probably want to use gl_FragCoord.xy or gl_PointCoord.xy and calculate the percentage of pixel coverage, but I'm not able to compare these values (it seems the units are different, as if I were comparing miles with millimetres, and the 'point zero' is somewhere else for each of these vectors), so I can't calculate the final transparency value.
Can anyone help me, please? Just point me in the right direction.
There are lots of ways to achieve this.
You can render at a higher resolution. Make your canvas larger than the size it's displayed at; the browser will almost certainly bilinearly interpolate the result. Example:
<canvas width="400" height="400" style="width: 200px; height: 200px" />
declares a canvas with a 400x400 backing store that is scaled to 200x200 when displayed.
Here's a fiddle.
Another technique would be to compute an alpha value in the shader such that you get the blending you want along the edge of the polygon.
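As a sketch of that idea (v_edge_dist is an assumed varying carrying the interpolated distance, in pixels, to the nearest triangle edge):

/* Hypothetical GLSL ES fragment shader: alpha fades to zero over roughly
   one pixel at the polygon edge. */
static const char *aaFragSrc =
    "precision mediump float;\n"
    "varying float v_edge_dist;\n"
    "void main() {\n"
    "    float alpha = clamp(v_edge_dist, 0.0, 1.0);\n"
    "    gl_FragColor = vec4(1.0, 0.5, 0.0, alpha);\n"
    "}\n";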
I'm sure there are others. Most Canvas2D implementations are GPU accelerated and anti-aliased even if the GPU does not support anti-aliasing, so you could try digging through one of those.
The problem with your plan is that OpenGL applies its own test to decide which pixels to draw before your shader ever runs: if the centre of a fragment lies inside the geometry boundary it is rasterised; if it lies outside, it is not; if it lies exactly on the boundary, rasterisation depends on whether it is at the start or end of a horizontal or vertical run. The boundary rule ensures that where two triangles exactly meet, they never both contain the same fragments.
So if you compute coverage per fragment you're almost never going to get a number less than 50% (corners and other very thin bits of geometry being the exception). You're not going to get the complete anti-aliasing you desire. You'll get the antialiased version clipped by the aliased version.
Hardware achieves this by sampling multiple fragments per output pixel. You can simulate that by rendering to a texture at a multiple of your output size, then scaling down. The mipmap generation will filter the input image.
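Roughly like this (a sketch; width and height are assumed variables, error checking is omitted, and the texture sizes should be powers of two for glGenerateMipmap under ES 2.0):

/* Render the scene into a texture at 2x the output size, then let mipmap
   generation filter it down for the final draw. */
GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width * 2, height * 2, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
glViewport(0, 0, width * 2, height * 2);
/* ... draw the scene here ... */

glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, tex);
glGenerateMipmap(GL_TEXTURE_2D);
/* ... now draw a quad at the real size, sampling tex ... */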
That all being said, have you tried just passing antialias as true when calling canvas.getContext? That will use the hardware's capabilities, subject to hardware and browser support.
I want to simulate stroking a carpet, so you would have a graphic of a furry carpet, and with your finger you can move around and stroke it. I need to shift pixels and create some fake distortion around where I am touching.
Anyone have any tips?
Firstly, I guess: do I have enough to work with, assuming I have one JPEG of the material? No skeleton or 3D file, just a flat image.
This can also be improved with 'fur rendering'.
I have some examples:
http://www.ozone3d.net/benchmarks/fur/
http://www.xbdev.net/directx3dx/specialX/Fur/index.php
or a new demo from NVIDIA:
http://www.youtube.com/watch?v=2Fp5N-pOxKA - around 35 sec
Sounds like a typical task to be solved with OpenGL shaders.
As MrTJ says: shaders are your key here.
Apart from your diffuse texture, use a second texture as your "carpet" map that you modify. Maybe use it like a normal map, storing a directional vector per texel.
Use your "carpet" map in your shader and distort the lookup however you like to create your desired carpet effect.
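A sketch of that idea (names and the 0.05 distortion strength are made up; the red/green channels of the carpet map encode a direction, with 0.5 as neutral):

/* Hypothetical GLSL ES fragment shader: the "carpet" texture bends the
   lookup into the diffuse image to fake the pile being pushed around. */
static const char *carpetFragSrc =
    "precision mediump float;\n"
    "varying vec2 v_uv;\n"
    "uniform sampler2D u_diffuse;\n"
    "uniform sampler2D u_carpet;\n"
    "void main() {\n"
    "    vec2 dir = texture2D(u_carpet, v_uv).rg - 0.5;\n"
    "    gl_FragColor = texture2D(u_diffuse, v_uv + dir * 0.05);\n"
    "}\n";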
I have an application which requires that a solid black outline be drawn around a partly-transparent UIImage. Not around the frame of the image, but rather around all the opaque parts of the image itself. I.e., think of a transparent PNG with an opaque white "X" on it -- I need to outline the "X" in black.
To make matters trickier, AFTER the outline is drawn, the opacity of the original image will be adjusted, but the outline must remain opaque -- so the outline I generate has to include only the outline, and not the original image.
My current technique is this:
Create a new UIView which has the dimensions of the original image.
Duplicate the UIImage 4 times and add the duplicates as subviews of the UIView, with each UIImage offset diagonally from the original location by a couple pixels.
Turn that UIView into an image (via the typical UIGraphicsGetImageFromCurrentImageContext method).
Using CGImageMaskCreate and CGImageCreateWithMask, subtract the original image from this new image, so only the outline remains.
It works. Even with only the 4 offset images, the result looks quite good. However, it's horribly inefficient, and causes a good solid 4-second delay on an iPhone 4.
So what I need is a nice, speedy, efficient way to achieve the same thing, which is fully supported by iOS 4.0.
Any great ideas? :)
I would like to point out that whilst a few people have suggested edge detection, this is not an appropriate solution. Edge detection is for finding edges within image data where there is no obvious exact edge representation in the data.
For you, the edges are more well defined; you are looking for a well defined outline. An edge in your case is any fully transparent pixel that is next to a pixel which is not fully transparent, simple as that! Iterate through every pixel in the image and set it to black if it fulfils these conditions.
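In C terms the loop is as simple as this sketch (assuming raw RGBA8 data, 4 bytes per pixel with alpha last; not optimised):

#include <stdint.h>

/* Writes an opaque black outline into `out` wherever a fully transparent
   pixel of `in` touches a non-transparent neighbour. Both buffers are
   width * height * 4 bytes. */
static void traceOutline(const uint8_t *in, uint8_t *out, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int i = (y * w + x) * 4;
            if (in[i + 3] != 0)
                continue;  /* only fully transparent pixels can be outline */
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h)
                        continue;
                    if (in[(ny * w + nx) * 4 + 3] != 0) {
                        out[i + 0] = 0;    /* black... */
                        out[i + 1] = 0;
                        out[i + 2] = 0;
                        out[i + 3] = 255;  /* ...and opaque */
                    }
                }
            }
        }
    }
}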
Alternatively, for an anti-aliased result, get a boolean representation of the image, and pass over it a small anti-aliased circle kernel. I know you said custom filters are not supported, but if you have direct access to image data this wouldn't be too difficult to implement by hand...
Cheers, hope this helps.
For the sake of contributing new ideas:
A variant on your current implementation would use CALayer's support for shadows, which it calculates from the actual pixel contents of the layer rather than merely its bounding rectangle, and for which it uses the GPU. You can try amping up the shadowOpacity to some massive value to try to eliminate the feathering; failing that, you could render into a suitable CGContext, extract the alpha channel only and process it manually to apply a threshold test on alpha values, pushing them either to fully opaque or fully transparent.
You can achieve that final processing step on the GPU, even under ES 1, in a variety of ways. You'd use the alpha test to apply the actual threshold; you could then, say, prime the depth buffer to 1.0, disable colour output and the depth test, draw the version with the shadow at a depth of 0.5, draw the version without the shadow at a depth of 1.0, then enable colour output and the depth test and draw a solid black full-screen quad at a depth of 0.75. So it's like using the depth buffer to emulate stencil (since the GPUs Apple used before the first ES 2 capable device didn't support a stencil buffer).
That, of course, assumes that CALayer shadows appear outside of the compositor, which I haven't checked.
Alternatively, if you're willing to limit your support to ES 2 devices (everything 3GS and up) then you could upload your image as a texture and do the entire process on the GPU. But that would technically leave some iOS 4 capable devices unsupported, so I assume it isn't an option.
You just need to implement an edge detection algorithm, but instead of using brightness or color to determine where the edges are, use opacity. There are a number of different ways to go about that. For example, you can look at each pixel and its neighbors to identify areas where the opacity crosses whatever threshold you've set. Whenever you need to look at every pixel of an image in Mac OS X or iOS, think Core Image. There's a helpful series of blog posts starting with this one that looks at implementing a custom Core Image filter; I'd start there to build an edge detection filter.
Instead of using a UIView, I suggest just pushing a context like the following:
UIGraphicsBeginImageContextWithOptions(image.size, NO, 0.0);
// Draw your image 4 times and mask it however you like; you can just
// copy and paste your current drawing code here.
....
UIImage *outlinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This will be much faster than your UIView approach.
I have lines that are programmatically defined by my program. What I want to do is render a brush stroke along them.
The way I think the type of brush I want works is: it simply has a mostly transparent texture, and you render this texture centered on EVERY PIXEL along the path, so the copies blend together to create the stroke.
Now, assuming this even works, I'll bet that it will be WAY too expensive (I'm targeting the iPad and other mobile chips, which HATE fill rate and alpha blending).
So, what other options are there?
If it could be done in real time (that is, with the path spline updating every frame) that would be ideal. But if not, within a fraction of a second on the iPad would be good too (the splines connect nodes; the user can drag nodes around, thus transforming the spline, but it would be acceptable to revert to a simpler fill for the spline while it is being moved around, then recalculate the brush once they release it).
For those wondering, I'm trying to get thick lines that look like they have been made with a pencil. It should look as close to real life as possible.
I considered just rendering the brushed spline to a texture, but as the spline can be any length, in any direction, dedicating a WHOLE rectangular texture to encompass the whole spline would be way too costly...
The spline is inevitably broken up into quads for rendering, so I thought of initially rendering the brush to a texture, then generating an optimized texture with each of the quads separated and packed as neatly as possible into it.
But two renders to texture... an algorithm to create the optimized texture... making the quads still blend seamlessly with each other... it sounds like a nightmare, and that's not even making it real-time.
So yeah, any ideas on how to draw thick, pencil-like lines that follow a spline in real time on the iPad in OpenGL?
From my point of view, what you want is to render a line that:
is textured
has the edges fade off (i.e. no sharp edge to it)
follows a spline
To achieve these goals I would first of all break the spline up into a series of line segments that closely approximate the curve (you can make it more or less fine-grained depending on how accurate you want it to be versus how fast you want it to render).
Once you have these, you will need to turn each segment into 3 quads: one that goes over the middle of the line segment and serves as the fully opaque part of the line, and one on each edge of the line that fades out to totally transparent.
You will need to use a little bit of math to make sure that you extrude the quads along a vector that bisects the two segments equally (i.e. so that the angles between each segment and the extrusion vector are equal). This will ensure that you don't have gaps on the obtuse side of a join or overlaps on the acute side.
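A small sketch of that calculation (names made up; d0 and d1 are the normalised direction vectors of the two segments meeting at a joint):

#include <math.h>

typedef struct { float x, y; } Vec2;

/* Direction along which to extrude the joint vertices; it is perpendicular
   to the averaged tangent, so the angle to both segments is equal. The
   180-degree case is handled crudely. */
static Vec2 miterDirection(Vec2 d0, Vec2 d1)
{
    Vec2 t = { d0.x + d1.x, d0.y + d1.y };  /* averaged tangent */
    float len = sqrtf(t.x * t.x + t.y * t.y);
    if (len < 1e-6f) {                      /* segments double back */
        Vec2 n = { -d0.y, d0.x };
        return n;
    }
    Vec2 m = { -t.y / len, t.x / len };     /* perpendicular bisector */
    return m;
}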
After all of this, you just need to use the vertex positions as the UV co-ordinates (probably scaled though) and allow the texture to wrap around.
Using this method, you should end up with a mesh that has a solid thick line running through the middle of your spline, with "fins" that taper off into complete transparency. This should approximate the effect you want quite closely while only rendering the relevant pixels (i.e. no giant areas of completely transparent pixels) and with very little memory overhead.
I've been a little vague here, as it's kind of hard to explain with text alone and without writing an in-depth tutorial. If you need more info, just comment on what you're stuck on and I'll elaborate further.