I'm reading through learningwebgl.com, and what confuses me is that it draws the first buffer I bound as if it were the last (topmost) element?
http://jsfiddle.net/Cx8gG/1/
red triangle
green square
blue square
I expected to see only the blue square, because everything else should get overdrawn; the output seems to be in reverse order?
I've also read about stencil buffers, so what I tried to do is create a mask (red), after which there should be a green triangle on the blue square.
The mask works ( http://jsfiddle.net/D3QNg/3/ ), but I don't know if it's right or if I'm just lucky.
Would appreciate some help.
It does this because you enabled the depth buffer at line 203:
gl.enable(gl.DEPTH_TEST);
The depth buffer holds the depth for each pixel drawn. In the default mode, when trying to draw a pixel, WebGL will check the depth of the pixel already there; only if the new pixel's depth is LESS than the previous pixel's will the new pixel be drawn.
Since all your shapes have a depth of 0.0, the first one fills in the depth buffer for those pixels with 0.0. Each pixel of the next shape you draw also has a depth of 0.0, which is not LESS than the 0.0 already there, so those pixels do not get overwritten.
If you comment out the line that enables depth testing you'll get the results you were expecting.
Note: with depth testing enabled, you can set the comparison WebGL uses to decide whether or not to draw the pixel by calling gl.depthFunc (docs).
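For example, a minimal sketch (these are standard WebGL calls; gl.LEQUAL is just one of the available comparison functions):

// Keep depth testing enabled, but let fragments at the SAME depth pass,
// so later draws win ties instead of being rejected.
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LEQUAL); // the default is gl.LESS

// Alternatively, disable depth testing entirely to get pure painter's order:
// gl.disable(gl.DEPTH_TEST);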
I'm new to action recognition and anything related to image processing. I'm studying a paper about action recognition based on human pose estimation. Here is a summary of how it works:
We first run a state-of-the-art human pose estimator [4] in every
frame and obtain heatmaps for every human joint. These heatmaps encode
the probabilities of each pixel to contain a particular joint. We
colorize these heatmaps using a color that depends on the relative
time of the frame in the video clip. For each joint, we sum the
colorized heatmaps over all frames to obtain the PoTion representation
for the entire video clip.
So for each joint j in frame t, it extracts a heatmap H^t_j[x, y] that is the likelihood of pixel (x, y) containing joint j at frame t. The resolution of this heatmap is denoted by W*H.
My first question: what is a heatmap exactly? I want to be sure whether the heatmap is a probability matrix in which, for example, the element at (1,1) is the probability that pixel (1,1) contains the joint.
In the next step this heatmap is colorized with C channels, where C is the number of color channels used for visualizing each pixel. The idea is to use the same colorization for all joint heatmaps of a given frame.
We start by presenting the proposed colorization scheme for 2 channels
(C = 2). For visualization we can for example use red and green colors
for channel 1 and 2. The main idea is to colorize the first frame in
red, the last one in green, and the middle one with equal proportion
(50%) of green and red. The exact proportion of red and green is a
linear function of the relative time t, i.e., (t−1)/(T−1), see Figure 2
(left). For C = 2, we have o(t) = ((t−1)/(T−1), 1−(t−1)/(T−1)). The
colorized heatmap of joint j for a pixel (x, y) and a channel c at
time t is given by:
C^t_{j,c}[x, y] = o_c(t) · H^t_j[x, y]
Figure 2, which is mentioned in the quote, plots the proportion of each color as a function of the relative time of the frame (the figure itself is not reproduced here).
My problem is that I cannot figure out whether this equation, o(t) = ((t−1)/(T−1), 1−(t−1)/(T−1)), represents the degree of one color (i.e. red) in a frame, or the proportion of both of these colors. If it is used for each color channel separately, what does o_red(t) = (1/6, 5/6) mean when the number of frames (T) is equal to 7?
Or, if it is used for both channels: since the article says that the first frame is colored red and the last frame is colored green, how can we interpret o(1) = (0, 1) if the first element indicates the proportion of red and the second one the proportion of green? As far as I can understand, it would mean the first frame is colored green, not red!
In this concept there is a subtle relationship between time and pixel positions.
As far as I know, this kind of heatmap is for encoding time in your image. The purpose is to show the movement of a moving object, captured in a video, in only one image. Every pixel of the image that corresponds to the fixed (unmoving) objects of the scene (like background pixels) is zero (black). In contrast, if the moving object passes through a pixel position in the video, the corresponding pixel in the image will be colorful, and its color depends on the number (time) of the frame in which the moving object was seen at that pixel.
For example, suppose we have a completely black curtain in front of the camera and we are filming. We get a 1-second video made of 10 frames. At the first moment (frame 1), a very tiny white ball comes into the scene and gets captured at pixel (1,1). Then at frame 2 the small ball is captured at pixel (1,2), and so on. At the end, when we stop filming at frame 10, the ball is seen at pixel (1,10). Now we have 10 frames, each of which has a white pixel at a different position, and we want to show the whole process in only one image, so 10 pixels of that image will be colorful (pixels (1,1), (1,2), (1,3), ..., (1,10)) and the other pixels are black.
With the formula you mentioned, the color of each pixel is computed according to the frame number at which the ball was captured:
T=10 # 10 frames
pixel (1,1) got the white ball at frame 1, so its color would be (0/9, 1−0/9) = (0, 1), which means the green channel has value 0 at that pixel and the red channel has value 1, so this pixel looks completely red.
pixel (1,2) got the white ball at frame 2, so its color would be (1/9, 8/9), and this pixel is more red than green.
... # continue the same way for the other 7 pixels
pixel (1,10) got the white ball at frame 10, so its color would be (1, 0), and this pixel is completely green.
Now, at the end, if you look at the image, you see a colorful line which is 10 pixels long; it is red at the beginning and its color gradually changes to green as it goes to the end (the 10th pixel). This means the ball moved from pixel 1 to pixel 10 during that 1-second video.
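Here is a minimal sketch of that colorization in JavaScript (hypothetical helper names; it assumes each per-frame heatmap is given as a flat Float32Array of size W*H, which the paper excerpt does not specify):

// o(t) for C = 2 channels: first component green, second red,
// following the interpretation used in the example above.
function colorProportions(t, T) {
  const s = (t - 1) / (T - 1);
  return [s, 1 - s]; // [green, red]
}

// Sum the colorized heatmaps of one joint over all T frames.
// heatmaps[t - 1] is the W*H likelihood map H^t_j for frame t.
function potionForJoint(heatmaps, W, H) {
  const T = heatmaps.length;
  const out = { green: new Float32Array(W * H), red: new Float32Array(W * H) };
  for (let t = 1; t <= T; t++) {
    const [g, r] = colorProportions(t, T);
    const h = heatmaps[t - 1];
    for (let i = 0; i < W * H; i++) {
      out.green[i] += g * h[i]; // C^t_{j,green} = o_green(t) * H^t_j
      out.red[i]   += r * h[i]; // C^t_{j,red}   = o_red(t)   * H^t_j
    }
  }
  return out;
}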
(If I were unclear at any point of the explanation, please comment and I will elaborate)
I am rendering a simple box:
MDLMesh(boxWithExtent: ...)
In my draw loop when I turn off back-face culling:
renderCommandEncoder.setCullMode(.none)
all depth comparison seems to be disabled, and the sides of the box are drawn completely wrong, with back-facing quads in front of front-facing ones.
Huh?
My intent is to include back-facing surfaces in the depth comparison, not to ignore them. This is important when I have, for example, a shape with semi-transparent textures that reveal the shape's internals, which have a different shading style. How do I force depth comparison?
UPDATE
So Warren's suggestion is an improvement but it is still not correct.
My depthStencilDescriptor:
let depthStencilDescriptor = MTLDepthStencilDescriptor()
depthStencilDescriptor.depthCompareFunction = .less
depthStencilDescriptor.isDepthWriteEnabled = true
depthStencilState = device.makeDepthStencilState(descriptor: depthStencilDescriptor)
Within my draw loop I set depth stencil state:
renderCommandEncoder.setDepthStencilState(depthStencilState)
The resultant rendering (screenshot not reproduced here) is described as follows. This is a box mesh; each box face uses a shader that paints a disk texture. The texture is transparent outside the body of the disk. The shader paints a red/white spiral texture on front-facing quads and a blue/black spiral texture on back-facing quads. The box sits in front of a camera-aligned quad textured with a mobil image.
Notice how one of the textures paints over the rear back-facing quad with the background texture color. Notice also that the rear-most back-facing quad is not drawn at all.
Actually, it is not possible to achieve the effect I am after. I basically want to do a simple composite (Porter/Duff) here, but that is order-dependent. Order cannot be guaranteed here, so I am basically hosed.
I'm working on a WebGL project that resembles a particle system. For aesthetic purposes, my single rendering pass is configured to blend additively, and I've disabled depth testing. I'm also clearing my viewport buffer to 50% gray, for the sake of argument.
gl.enable(gl.BLEND);
gl.blendFunc(gl.ONE, gl.ONE);
gl.disable(gl.DEPTH_TEST);
gl.clearColor(0.5, 0.5, 0.5, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
I've uploaded a vertex buffer and index buffer to the GPU representing two partially overlapping triangles. Their vertices have a vec3 color attribute. I've assigned each vertex a color of 50% gray (0.5, 0.5, 0.5).
When I draw the triangles with my shaders, I'm relieved to report that my viewport buffer now looks 50% gray with two overlapping triangular regions that are white. The triangles are white because their fragments' color values were additively blended with those already in the color buffer.
Now, I re-upload the vertex buffer with the following change: the color of the vertices of the second triangle is now -50% gray (-0.5, -0.5, -0.5).
What I hope to accomplish is that my viewport buffer would look 50% gray with two overlapping triangular regions– one white, one black– which intersect, and produce 50% gray at their intersection. After all, adding a negative number should be the same as subtracting a positive number of the same magnitude.
What I see instead is a 50% gray viewport with only one triangular region– the white one.
I assume that this is because the output of my fragment shader is being clamped to a range whose lower bound is zero, before it's blended with the color buffer. I would like to know how to circumvent this clamping– ideally in WebGL, without requiring multiple render passes.
I'll be testing solutions in the source of the page at this URL: http://rezmason.net/scourge/issues/positive_negative_fragments.html
UPDATE
As an investigation, I've experimented with performing my additive blending in a frame buffer, and then drawing the frame buffer to the viewport buffer by texturing it to a unit quad– that's two separate draw calls, of course, which ideally I'd like to avoid.
That said, because I can explicitly set the format of the frame buffer to floating point, no clamping occurs with any value while I perform operations within that buffer. When I draw the buffer to the viewport, I assume that clamping finally occurs, but by then all the blending is already done.
My question is now substantially simpler: is there any way to initialize a WebGL or OpenGL context, so that its viewport is formatted as a floating point buffer?
Use gl.blendEquation( gl.FUNC_REVERSE_SUBTRACT ), which computes destination minus source (gl.FUNC_SUBTRACT would compute source minus destination). Then use positive values in your shader.
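A minimal sketch of that setup (standard WebGL calls; it assumes you draw the "negative" triangle in its own pass, with positive color values):

// Pass for the triangle that should darken the buffer:
gl.enable(gl.BLEND);
gl.blendEquation(gl.FUNC_REVERSE_SUBTRACT); // result = dst - src
gl.blendFunc(gl.ONE, gl.ONE);
// ... draw the triangle with color (0.5, 0.5, 0.5) ...

// Pass for the additive triangle:
gl.blendEquation(gl.FUNC_ADD); // result = dst + src
// ... draw the other triangle ...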
If you want to do something in between, you can do some hacky things:
gl.enable(gl.BLEND);
gl.blendFuncSeparate(gl.ONE_MINUS_SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ONE);
gl.blendEquation(gl.FUNC_ADD);
It will give you this equation:
color_out = (1 − A_src) × C_src + (1 − A_src) × C_dst
alpha_out = A_src + A_dst
You can now draw the white triangle if you set its color to (0.5, 0.5, 0.5, 0), and the black triangle with color (0.5, 0.5, 0.5, 1).
If you want different colors I hope you get the point. You can compare different blending functions here: http://www.andersriggelsen.dk/glblendfunc.php
Edit:
Sorry, my mistake. You should change
gl.blendFuncSeparate(gl.ONE_MINUS_DST_ALPHA, gl.ONE_MINUS_DST_ALPHA, gl.ONE, gl.ONE);
to
gl.blendFuncSeparate(gl.ONE_MINUS_SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ONE);
I've forgotten which one is source and which one is destination. I've updated my answer.
I'm using an orthographic projection to draw my objects.
Each object's items are added to different buffers and drawn in several passes.
Let's say that each object has an outline square and a fill for the square (in a different color).
So I'm drawing all the fills first, and then the outlines.
I'm using the depth buffer to make sure that the outlines will not be drawn over all the fills, as shown in the picture.
Now I'm facing a problem: each object contains another drawing item on it (such as text, or points) which can extend beyond its square. So I'm using the stencil buffer to clip this additional drawing to the square. However, when doing this the depth buffer is not taken into consideration.
That means one text item can be drawn over another object's square, as shown below.
Is there any way or trick to make this work?
You should be able to set the stencil buffer to a different value for each of the squares (provided there are at most 255 squares, as you won't be able to get more than an 8-bit stencil buffer). Configure the stencil operation to KEEP for pixels that fail the depth test, so that stencil values written by quads that are further in front but were drawn earlier are retained.
This will allow clipping each text individually.
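In WebGL terms, a minimal sketch of that setup could look like this (squareId is a hypothetical per-square index between 1 and 255):

// While drawing square number squareId: always pass the stencil test,
// but only write squareId where both depth and stencil tests pass.
gl.enable(gl.STENCIL_TEST);
gl.stencilFunc(gl.ALWAYS, squareId, 0xFF);
gl.stencilOp(gl.KEEP,    // stencil fail: keep old value
             gl.KEEP,    // depth fail: keep the front-most quad's value
             gl.REPLACE); // both pass: tag the pixel with squareId
// ... draw the square ...

// While drawing that square's text: only draw where the pixel is
// tagged with this square's id, and never modify the stencil.
gl.stencilFunc(gl.EQUAL, squareId, 0xFF);
gl.stencilOp(gl.KEEP, gl.KEEP, gl.KEEP);
// ... draw the text ...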
Another way is to use only the depth buffer and pass the pixel extents of the current quad into the text pixel shader, where you can discard any pixels that fall outside them. This requires fewer state changes.
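A sketch of that second approach in GLSL (embedded here as a WebGL shader string; quadMin, quadMax and textColor are hypothetical uniforms):

// Fragment shader for the text pass: discard pixels outside the
// current quad's extents, given in the same space as gl_FragCoord.
const textFragmentShader = `
  precision mediump float;
  uniform vec2 quadMin;
  uniform vec2 quadMax;
  uniform vec4 textColor;
  void main() {
    if (gl_FragCoord.x < quadMin.x || gl_FragCoord.x > quadMax.x ||
        gl_FragCoord.y < quadMin.y || gl_FragCoord.y > quadMax.y) {
      discard; // clip the text to the quad without touching the stencil
    }
    gl_FragColor = textColor;
  }
`;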
I have found the intersection point's distance with the function D3DXIntersectTri.
Now, using the distance value, how can I find that point's coordinates?
IDE: Delphi - JEDI
Language: Pascal
DirectX 9
EDIT:
Actually, I have 2 cylinders and want to render only the intersected part in 3D, as shown in the image (not reproduced here).
As explained in the MSDN article, you can calculate the point with the barycentric coordinates:
p = p1 + pU * (p2 - p1) + pV * (p3 - p1)
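For example, a generic sketch of that computation (plain JavaScript rather than Delphi/D3DX; p1, p2, p3 are the triangle's vertices and pU, pV are the barycentric values returned by D3DXIntersectTri):

// Compute the intersection point from barycentric coordinates.
function pointFromBarycentric(p1, p2, p3, pU, pV) {
  return {
    x: p1.x + pU * (p2.x - p1.x) + pV * (p3.x - p1.x),
    y: p1.y + pU * (p2.y - p1.y) + pV * (p3.y - p1.y),
    z: p1.z + pU * (p2.z - p1.z) + pV * (p3.z - p1.z),
  };
}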
Rendering to certain parts of the screen is the task of the stencil buffer. Unless you want to create a new vertex buffer from the intersection (which could be created by clipping parts away, which is not that easy), using the stencil buffer is more efficient.
The stencil buffer is a buffer that holds integer values. You have to create it with the depth buffer, specifying the correct format (e.g. D24S8). You can then specify when pixels are discarded. Here is the idea:
Clear stencil buffer to 0
Enable solid rendering
Enable stencil buffer
Set blend states to not draw anything (Source: 0, Destination: 1)
Disable depth testing, enable backface culling
Set the following stencil states:
CompareFunc to Always
StencilRef to 1
StencilWriteMask to 255
StencilFail to Replace
StencilPass to Replace
//this will set value 1 to every pixel that will be drawn
Draw the first cylinder
Now set the following stencil states:
CompareFunc to Equal
StencilFail to Keep //this keeps the value where the stencil test fails
StencilPass to Increment //this increments the value to 2 where stencil test passes
Draw the second cylinder
//Now there is a 2 in the stencil buffer where the cylinders intersect
Reset blend states
Reenable depth testing
Set StencilRef to 2 //render only pixels where stencil value == 2
Draw both cylinders
You might need to change the compare function before the last render pass so that stencil values greater than two also pass; if pixels overlap, there can be values greater than two. Since the stencil test compares the reference value against the buffer value, that comparison is LessEqual (StencilRef 2 passes wherever the buffer value is >= 2).
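For illustration, here is the same state sequence sketched in WebGL (the original answer targets Direct3D 9 render states; drawCylinder1 and drawCylinder2 are hypothetical draw helpers):

// Pass 1: tag every pixel covered by cylinder 1 with stencil value 1.
gl.clearStencil(0);
gl.clear(gl.STENCIL_BUFFER_BIT);
gl.enable(gl.STENCIL_TEST);
gl.enable(gl.CULL_FACE);                  // backface culling on
gl.disable(gl.DEPTH_TEST);
gl.colorMask(false, false, false, false); // equivalent of the "don't draw" blend state
gl.stencilFunc(gl.ALWAYS, 1, 0xFF);       // CompareFunc Always, StencilRef 1
gl.stencilOp(gl.REPLACE, gl.REPLACE, gl.REPLACE); // Fail and Pass: Replace
drawCylinder1();

// Pass 2: increment to 2 where cylinder 2 overlaps cylinder 1.
gl.stencilFunc(gl.EQUAL, 1, 0xFF);        // CompareFunc Equal
gl.stencilOp(gl.KEEP, gl.KEEP, gl.INCR);  // Fail: Keep, Pass: Increment
drawCylinder2();

// Pass 3: draw normally, but only where the stencil value reached 2.
gl.colorMask(true, true, true, true);     // reset color/blend state
gl.enable(gl.DEPTH_TEST);                 // re-enable depth testing
gl.stencilFunc(gl.LEQUAL, 2, 0xFF);       // passes where buffer value >= 2
gl.stencilOp(gl.KEEP, gl.KEEP, gl.KEEP);
drawCylinder1();
drawCylinder2();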