GLSL tapered lines - iOS

I'm drawing lots of GL_LINES primitives, shading them using vertex and fragment shaders written in GLSL. What I'd like is for the lines to taper off at the ends in alpha value. That is, at the centre of the line the alpha value should be 1 but at each end it should taper off to 0.
I'm wondering if there is a nice solution that doesn't involve breaking the lines into several vertices first. That is, something done purely using shaders.

Well, just pass a value to each vertex in the line: 0 for the start, 1 for the end. Let the interpolator interpolate between them, then take the absolute distance of this value from 0.5, double it, and subtract it from 1 to get the alpha. Or, in GLSL:
gl_FragColor.a = 1.0 - abs(value - 0.5) * 2.0;
Where value is the varying passed from the vertex shader. To do this, you can't render a GL_LINE_STRIP or GL_LINE_LOOP; it has to be GL_LINES, because in a strip adjacent lines share vertices, so a shared vertex couldn't carry a 0 for one line and a 1 for the next.
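A minimal sketch of the full shader pair (GLSL ES; the attribute, uniform, and varying names are illustrative, not from the question):
// Vertex shader
attribute vec4 a_position;
attribute float a_t;   // pass 0.0 for the line's start vertex, 1.0 for its end
uniform mat4 u_mvp;    // assumed modelview-projection matrix
varying float v_t;
void main() {
    v_t = a_t;
    gl_Position = u_mvp * a_position;
}
// Fragment shader
precision mediump float;
uniform vec4 u_color;
varying float v_t;
void main() {
    // alpha is 1.0 at the centre (v_t == 0.5) and tapers to 0.0 at both ends
    float alpha = 1.0 - abs(v_t - 0.5) * 2.0;
    gl_FragColor = vec4(u_color.rgb, u_color.a * alpha);
}
Blending also has to be enabled (e.g. glEnable(GL_BLEND) with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)) for the taper to actually show.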

Related

WebGL - Get fragment coordinates within a shape in triangle mode? gl_FragCoord doesn't work

I'm trying to create a WebGL shader that can output both solid rectangles and hollow rectangles (with a fixed border width) within the same draw call, and so far the best way I've thought of to do it is as follows:
In the vertex shader, send in a uniform value, uniform float borderWidth,
and then inside the fragment shader I need a coordinate space where x = [0, 1] and y = [0, 1], with x = 0 at the leftmost edge and y = 0 at the topmost edge of the shape's borders, or something like that. Once I have that, drawing the lines is straightforward and I can figure it out from there; I can use something like:
1a - Have a smoothstep from the fragment's x = 0 coordinate to x = borderWidth for the left vertical line, and from x = 1 - borderWidth to x = 1 for the right vertical line
1b - Something similar for the horizontal lines and the y coordinate (a sketch follows below)
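For example, assuming that coordinate space arrives in the fragment shader as a varying (called v_uv here; all names are illustrative), 1a and 1b might look roughly like this:
precision mediump float;
uniform float u_borderWidth;   // border width in the same 0..1 unit space
varying vec2 v_uv;             // 0..1 across the rectangle
void main() {
    // each term is 1.0 inside that side's border band, 0.0 elsewhere
    float left   = 1.0 - smoothstep(0.0, u_borderWidth, v_uv.x);
    float right  = smoothstep(1.0 - u_borderWidth, 1.0, v_uv.x);
    float top    = 1.0 - smoothstep(0.0, u_borderWidth, v_uv.y);
    float bottom = smoothstep(1.0 - u_borderWidth, 1.0, v_uv.y);
    float border = max(max(left, right), max(top, bottom));
    gl_FragColor = mix(vec4(1.0), vec4(0.0, 0.0, 0.0, 1.0), border);
}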
The Problem
The problem I'm facing is I can't create that coordinate space. I tried using gl_FragCoord but I think it's undefined for shapes rendering in TRIANGLES mode. So I'm kinda lost. Anyone have any suggestions?
gl_FragCoord is never undefined; it is the position of the fragment in the output buffer (like your screen). If you're rendering to the center of a Full HD screen, gl_FragCoord.xy would be vec2(960.0, 540.0). However, this data is of no use for what you're trying to do.
What you describe sounds like you need barycentric coordinates, which you define as additional attributes next to your vertex positions and then pass through to the fragment shader as varyings, so they're linearly interpolated across each triangle. If you render non-indexed geometry and use WebGL 2, you can instead derive the barycentrics using gl_VertexID % 3.
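A minimal sketch of the barycentric approach (WebGL 1 / GLSL ES 1.00; the names are illustrative, and note that the border width here is measured in barycentric units rather than pixels):
// Vertex shader
attribute vec2 a_position;
attribute vec3 a_barycentric;  // (1,0,0), (0,1,0), (0,0,1) at a triangle's three corners
varying vec3 v_bc;
void main() {
    v_bc = a_barycentric;
    gl_Position = vec4(a_position, 0.0, 1.0);
}
// Fragment shader
precision mediump float;
uniform float u_borderWidth;   // in barycentric units, not pixels
varying vec3 v_bc;
void main() {
    // distance to the nearest triangle edge in barycentric space
    float edgeDistance = min(v_bc.x, min(v_bc.y, v_bc.z));
    gl_FragColor = edgeDistance < u_borderWidth
        ? vec4(0.0, 0.0, 0.0, 1.0)   // border
        : vec4(1.0, 0.0, 0.0, 1.0);  // fill
}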

Depth Buffer Clear Behavior between Draw Calls?

I have been testing WebGL to see whether I can batch-draw polygons in a particular way. I am going to simplify the use case, but it goes something along the lines of the following:
First, my vertices are simply:
vertices[v0_xyz, v1_xyz, ... vn_xyz]
In my case, each vertex must have a z value in the range (0 - 100) (I pick 100 arbitrarily) because I want all of those vertices to be depth tested against each other using those z values. On batch N + 1, I am limited to depth values (0 - 100) again, but I need the vertices in this batch to be guaranteed to be drawn atop all previous batches (layers of vertices). In other words, vertices within each batch are depth tested against each other, but each batch is just drawn atop the previous one as if there were no depth testing.
At first I was going to try drawing to a texture with a framebuffer and depthbuffer attachment, draw to the canvas, repeat for the next group of vertices, but I realized that I might be able to do just this:
// pseudocode
function drawBuffers()
    // clear both the color and the depth
    gl.clearDepth(1.0);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    // iterate over all vertex batches
    for each vertexBatch in vertexBatches do
        // draw the batch with depth testing
        gl.draw(vertexBatch);
        // clear the depth buffer
        /* QUESTION: does this guarantee that subsequent batches
           will be drawn atop previous batches, or will the pixels be written at
           random (sometimes underneath, sometimes above)? */
        gl.clearDepth(1.0);
        gl.clear(gl.DEPTH_BUFFER_BIT);
    endfor
end drawBuffers
I tested the above by drawing two overlapping quads, clearing the depth buffer, translating left and in negative z (in an attempt to "go under" the previous batch), and drawing the two overlapping quads again. I think that this works because I see that the second pair of quads is drawn in front of the first pair even though their z values are behind the previous pair's z values.
I am not certain that my test is reliable though. Could there be some undefined behavior involved? Is it just a coincidence that my test works as a result of the clearDepth setting and shapes?
May I have clarification so I can confirm whether my method will work for sure?
Thank you.
Since WebGL is based on OpenGL ES, see the OpenGL ES 1.1 Full Specification, section 4.1.6 Depth Buffer Test, page 104:
The depth buffer test discards the incoming fragment if a depth comparison fails.
....
The comparison is specified with
void DepthFunc( enum func );
This command takes a single symbolic constant: one of NEVER, ALWAYS, LESS, LEQUAL, EQUAL, GREATER, GEQUAL, NOTEQUAL. Accordingly, the depth buffer test passes never, always, if the incoming fragment’s zw value is less than, less than or equal to, equal to, greater than, greater than or equal to, or not equal to the depth value stored at the location given by the incoming fragment’s (xw, yw) coordinates.
This means, if the clear value for the depth buffer glClearDepth is 1.0 (1.0 is the initial value)
gl.clearDepth(1.0);
and the depth buffer is cleared
gl.clear(gl.DEPTH_BUFFER_BIT);
and the depth function glDepthFunc is LESS or LEQUAL (LESS is the initial value)
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LEQUAL);
then the next fragment which is drawn to any (xw, yw) coordinates, will pass the depth test and will overwrite the fragment stored at the location (xw, yw).
(Of course, blending (gl.BLEND) has to be disabled and the fragment has to lie inside the clip volume.)

Why the sphere texture map does not actually match

I have a sphere in my 3D project and an earth texture; I use the algorithm from the wiki to calculate the texture coordinates.
The code in my effect file looks like this:
float pi = 3.14159265359f;
output.uvCoords.x = 0.5 + atan2(input.normal.z, input.normal.x) / (2 * pi);
output.uvCoords.y = 0.5f - asin(input.normal.y) / pi;
The result is shown in the screenshots below:
1. view from the left (there is a line here; this is my question)
2. view from the front
3. view from the right
This doesn't pretend to be a complete answer at all, but here are some ideas:
Try 6.28 instead of 6.18, because 3.14 * 2 = 6.28. It is always a good idea to create named constants or macros instead of plain numbers, to prevent such sad mistakes in the future.
Try to use a more precise value of pi (more digits to the right of the decimal point).
Try to normalize the normal vector before the calculations.
Even better, calculate the texcoords on the CPU once and for all, instead of on each shader invocation. You can use any asset library for this purpose or just quickly move your HLSL to your main code.
#define PI 3.14159265359f
#define PImul2 6.28318530718f // pi*2
#define PIdiv2 1.57079632679f // pi/2
#define PImul3div2 4.71238898038f // 3*pi/2
#define PIrev 0.31830988618f // 1/pi
...
Hope it helps.
Finally I figured it out myself. The problem lies in the fact that I was calculating the texture coordinates in the vertex shader. One vertex of a triangle can sit on the far right of the texture while the other two vertices sit on the far left, which results in almost the whole texture being squeezed into that triangle, so there is a line of jumbled texture coordinates. The solution is to send the normal to the pixel shader and calculate the texture coordinates in the pixel shader.
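The question's shaders are HLSL, but as a sketch, the per-pixel version of the same mapping looks like this in GLSL (names illustrative):
precision mediump float;
uniform sampler2D u_earthTexture;
varying vec3 v_normal;   // passed through unmodified from the vertex shader
const float PI = 3.14159265359;
void main() {
    vec3 n = normalize(v_normal);   // re-normalize after interpolation
    float u = 0.5 + atan(n.z, n.x) / (2.0 * PI);
    float v = 0.5 - asin(n.y) / PI;
    gl_FragColor = texture2D(u_earthTexture, vec2(u, v));
}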

DirectX9: How to find the intersection point?

I have found the intersection point's distance with the function D3DXIntersectTri.
Now, using the distance value, how can I find the point's position?
IDE: Delphi - JEDI
Language: Pascal
DirectX 9
EDIT:
Actually I have 2 cylinders and want to render only the intersected part in 3D. See image:
As explained in the MSDN article, you can calculate the point with the barycentric coordinates:
p = p1 + pU * (p2 - p1) + pV * (p3 - p1)
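As a sketch, the same reconstruction as a small helper function (written in GLSL to match the other snippets on this page; the question itself uses Pascal, and the names are illustrative):
// Rebuild the hit point from the triangle's corners and the barycentric
// pU/pV values returned by the intersection test.
vec3 intersectionPoint(vec3 p1, vec3 p2, vec3 p3, float pU, float pV) {
    return p1 + pU * (p2 - p1) + pV * (p3 - p1);
}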
Rendering to certain parts of the screen is the task of the stencil buffer. Unless you want to create a new vertex buffer from the intersection (which could be created by clipping parts away, which is not that easy), using the stencil buffer is more efficient.
The stencil buffer is a buffer that holds integer values. You have to create it with the depth buffer, specifying the correct format (e.g. D24S8). You can then specify when pixels are discarded. Here is the idea:
Clear stencil buffer to 0
Enable solid rendering
Enable stencil buffer
Set blend states to not draw anything (Source: 0, Destination: 1)
Disable depth testing, enable backface culling
Set the following stencil states:
CompareFunc to Always
StencilRef to 1
StencilWriteMask to 255
StencilFail to Replace
StencilPass to Replace
//this will set value 1 to every pixel that will be drawn
Draw the first cylinder
Now set the following stencil states:
CompareFunc to Equal
StencilFail to Keep //this keeps the value where the stencil test fails
StencilPass to Increment //this increments the value to 2 where stencil test passes
Draw the second cylinder
//Now there is a 2 in the stencil buffer where the cylinders intersect
Reset blend states
Reenable depth testing
Set StencilRef to 2 //render only pixels where stencil value == 2
Draw both cylinders
You might need to change the compare function to GreaterEqual before the last render pass. If pixels overlap, there can be values greater than two.

OpenGL ES 2 (iOS) Morph / Animate between two sets of vertexes

I have two sets of vertexes used as a line strip:
Vertexes1
Vertexes2
It's important to know that these vertexes have previously unknown values, as they are dynamic.
I want to make an animated transition (morph) between these two. I have come up with two different ways of doing this:
Option 1:
Set a Time uniform in the vertex shader that goes from 0 to 1, where I can do something like this:
// Inside main() in the vertex shader
float originX = Position.x;
float destinationX = DestinationVertexPosition.x;
float interpolatedX = originX + (destinationX - originX) * Time;
gl_Position.x = interpolatedX;
As you probably see, this has one problem: How do I get the "DestinationVertexPosition" in there?
Option 2:
Make the interpolation calculation outside the vertex shader, where I loop through each vertex and create a third vertex set for the interpolated values, and use that to render:
// Pre render
// Use this vertex set to render
InterpolatedVertexes
for (unsigned int i = 0; i < vertexCount; i++) {
float originX = Vertexes1[i].x;
float destinationX = Vertexes2[i].x;
float interpolatedX = originX + (destinationX - originX) * Time;
InterpolatedVertexes[i].x = interpolatedX;
}
I have highly simplified these two code snippets, just to make the idea clear.
Now, from the two options, I feel like the first one is definitely better in terms of performance, given that everything happens at the shader level AND I don't have to create a new set of vertexes each time Time is updated.
So, now that the introduction to the problem has been covered, I would appreciate any of the following three things:
A discussion of better ways of achieving the desired results in OpenGL ES 2 (iOS).
A discussion about how Option 1 could be implemented properly, either by providing the "DestinationVertexPosition" or by modifying the idea somehow, to better achieve the same result.
A discussion about how Option 2 could be implemented.
In ES 2 you specify whatever attributes you like, so there's no problem with specifying attributes for both the origin and the destination and doing the linear interpolation between them in the vertex shader. However, you really shouldn't do it component by component as your code suggests: GPUs are vector processors, and the mix GLSL function will do the linear blend you want. So, e.g. (with obvious inefficiencies and assumptions):
GLint sourceAttribute = glGetAttribLocation(shader, "sourceVertex");
glEnableVertexAttribArray(sourceAttribute);   // the array must be enabled before use
glVertexAttribPointer(sourceAttribute, 3, GL_FLOAT, GL_FALSE, 0, sourceLocations);
GLint destAttribute = glGetAttribLocation(shader, "destVertex");
glEnableVertexAttribArray(destAttribute);
glVertexAttribPointer(destAttribute, 3, GL_FLOAT, GL_FALSE, 0, destLocations);
And:
gl_Position = vec4(mix(sourceVertex, destVertex, Time), 1.0);
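A fuller sketch of the corresponding vertex shader (u_mvp is an assumed modelview-projection uniform; everything else uses the names above):
attribute vec3 sourceVertex;
attribute vec3 destVertex;
uniform float Time;   // animation progress, 0.0 to 1.0
uniform mat4 u_mvp;   // assumed combined modelview-projection matrix
void main() {
    // one vector mix instead of interpolating component by component
    gl_Position = u_mvp * vec4(mix(sourceVertex, destVertex, Time), 1.0);
}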
Your two options here have a trade-off: supply twice the geometry once and interpolate between that, or supply only one set of geometry but do so for each frame. You have to weigh geometry size vs. upload bandwidth.
Given my experience with iOS devices, I'd highly recommend option 1. Uploading new geometry on every frame can be extremely expensive on these devices.
If the vertices are constant, you can upload them once into one or two vertex buffer objects (VBOs) with the GL_STATIC_DRAW flag set. The PowerVR SGX series has hardware optimizations for dealing with static VBOs, so they are very fast to work with after the initial upload.
As far as how to upload two sets of vertices for use in a single shader, geometry is just another input attribute for your shader. You could have one, two, or more sets of vertices fed into a single vertex shader. You just define the attributes using code like
attribute vec3 startingPosition;
attribute vec3 endingPosition;
and interpolate between them using code like
vec3 finalPosition = startingPosition * (1.0 - fractionalProgress) + endingPosition * fractionalProgress;
Edit: Tommy points out the mix() operation, which I'd forgotten about and is a better way to do the above vertex interpolation.
In order to inform your shader program as to where to get the second set of vertices, you'd use pretty much the same glVertexAttribPointer() call for the second set of geometry as the first, only pointing to that VBO and attribute.
Note that you can perform this calculation as a vector, rather than breaking out all three components individually. This doesn't get you much with a highp default precision on current PowerVR SGX chips, but could be faster on future ones than doing this one component at a time.
You might also want to look into other techniques used for vertex skinning, because there might be other ways of animating vertices that don't require two full sets of vertices to be uploaded.
The one case that I've heard where option 2 (uploading new geometry on each frame) might be preferable is in specific cases where using the Accelerate framework to do vector manipulation of the geometry ends up being faster than doing the skinning on-GPU. I remember the Unity folks were talking about this once, but I can't remember if it was for really small or really large sets of geometry. Option 1 has been faster in all the cases I've worked with myself.
