I'm using Orthographic projection to draw my objects.
Each object's items are added to different buffers and drawn in several passes.
Let's say that each object has an outline square and a fill for the square (in a different color).
So I'm drawing all the fills first, and then the outlines.
I'm using the depth buffer to make sure that the outlines will not be drawn over all the fills, as shown in the picture.
Now I'm facing a problem: each object contains another drawing item on top of it (such as text or points) which can be longer than these squares. So I'm using the stencil buffer to clip this additional drawing to the square. However, when doing this the depth buffer is not taken into account.
This means that one text item can be drawn over another square, as shown below.
Is there any way/trick to make this work?
You should be able to set the stencil buffer to a different value for each of the squares (provided there are <= 255 squares, as you won't be able to get more than an 8-bit stencil buffer). Configure the stencil operation to KEEP for pixels that fail the depth test, so that any stencil values written by quads that are nearer but were drawn earlier are retained.
This will allow each piece of text to be clipped individually.
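A minimal sketch of that state setup in C (desktop-GL style; drawQuad(), drawText() and objectCount are placeholders for however you actually submit your geometry):

    /* Pass 1: draw the quads and tag each one's pixels with its own stencil id. */
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_STENCIL_TEST);
    for (int id = 1; id <= objectCount && id <= 255; ++id) {
        glStencilFunc(GL_ALWAYS, id, 0xFF);
        /* KEEP on depth fail, so ids written by nearer quads drawn earlier survive. */
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
        drawQuad(id);
    }

    /* Pass 2: draw each text only where the stencil still holds its quad's id. */
    for (int id = 1; id <= objectCount && id <= 255; ++id) {
        glStencilFunc(GL_EQUAL, id, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        drawText(id);
    }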
Another way is to use only the depth buffer and pass the pixel extents of the current quad into the text pixel shader, where you can discard any extra pixels. This requires fewer state changes.
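A rough sketch of that second approach, assuming a hypothetical u_quadRect uniform holding the quad's window-space extents (left, bottom, right, top) in pixels; the names and the varying are illustrative, and ES precision qualifiers are omitted:

    /* Fragment shader for the text pass: discard anything outside the quad. */
    static const char *textFragSrc =
        "uniform vec4 u_quadRect;    // left, bottom, right, top in pixels\n"
        "uniform sampler2D u_glyphTex;\n"
        "varying vec2 v_uv;\n"
        "void main() {\n"
        "    if (gl_FragCoord.x < u_quadRect.x || gl_FragCoord.y < u_quadRect.y ||\n"
        "        gl_FragCoord.x > u_quadRect.z || gl_FragCoord.y > u_quadRect.w)\n"
        "        discard;\n"
        "    gl_FragColor = texture2D(u_glyphTex, v_uv);\n"
        "}\n";

    /* Per object, before drawing its text: */
    glUniform4f(glGetUniformLocation(textProgram, "u_quadRect"),
                quadLeft, quadBottom, quadRight, quadTop);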
I've been making progress in a fan-replicated game I'm coding, but I'm stuck with this problem.
Right now I'm drawing a texture pixel by pixel along the curve path, but this cuts the frame rate from 4000 to 50 FPS on long curves.
I need to store per-pixel Vector2 + length data anyway so I can produce constant-speed movement along the curve, and I loop through that data to draw the curve as well.
The curves I need to be able to draw are Bezier, circular and Catmull.
Any ideas of how to make it more efficient?
Maybe I have misunderstood the question but I did this once:
Create the curve and sample x points on it. (Red dots)
Create a mesh from it by calculating the perpendicular (cross) vector at each point. (Green lines)
Build a quad between each consecutive pair of these, so basically 5 of them in my picture.
Set the U coordinate to run along the perpendicular direction, and let the V coordinate follow the curve length: 0 at the start and 1 at the end.
You can of course scale V if you want your texture to repeat (see the sketch below).
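A minimal sketch of that construction in C; evalCurve(t) is a placeholder for evaluating whichever curve type (Bezier, circular, Catmull) at parameter t in [0, 1]:

    #include <math.h>

    typedef struct { Vec2 pos; float u, v; } Vertex;
    typedef struct { float x, y; } Vec2;

    /* Placeholder: evaluate the curve at t in [0,1]. */
    Vec2 evalCurve(float t);

    /* Build a ribbon of 'samples' cross sections, 2*halfWidth wide,
       suitable for rendering as a single GL_TRIANGLE_STRIP. */
    void buildCurveStrip(Vertex *out, int samples, float halfWidth)
    {
        for (int i = 0; i < samples; ++i) {
            float t = (float)i / (float)(samples - 1);
            Vec2  p = evalCurve(t);

            /* Approximate the tangent with a small step, then rotate it
               90 degrees to get the perpendicular (cross) vector. */
            Vec2 a = evalCurve(fmaxf(t - 0.001f, 0.0f));
            Vec2 b = evalCurve(fminf(t + 0.001f, 1.0f));
            float dx = b.x - a.x, dy = b.y - a.y;
            float len = sqrtf(dx * dx + dy * dy);
            if (len < 1e-6f) len = 1.0f;
            float nx = -dy / len, ny = dx / len;

            /* Two vertices per sample: U runs across the ribbon, V along it. */
            out[2 * i + 0] = (Vertex){ { p.x + nx * halfWidth, p.y + ny * halfWidth }, 0.0f, t };
            out[2 * i + 1] = (Vertex){ { p.x - nx * halfWidth, p.y - ny * halfWidth }, 1.0f, t };
        }
    }

Drawing the ribbon is then a single GL_TRIANGLE_STRIP call over 2*samples vertices, independent of the curve's pixel length; scale V afterwards if you want the texture to repeat along the curve.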
Any ideas of how to make it more efficient?
Assuming the texture needs to be dynamic, draw the texture on the GPU-side using a shader. Drawing it on the CPU-side is not only slow, but bogs down both the CPU and GPU when you need to send it back to the GPU every frame. Much better to draw it GPU-side.
I need to store pixel by pixel Vector2 + length data anyway
The shader can store additional information in the texture. E.g. even though you may allocate an RGBA texture, that doesn't mean it has to store color information, since it is your shaders that will interpret the data.
I'm reading through learningwebgl.com, and what confuses me is that the first buffer I bound ends up drawn as the topmost element.
http://jsfiddle.net/Cx8gG/1/
red triangle
green square
blue square
I expected to see only the blue square, because everything else should get overdrawn, but the output seems to be in reverse order.
I've also read about stencil buffers, so what I tried to do is create a mask (red) and then there should be a green triangle on the blue square.
The mask works ( http://jsfiddle.net/D3QNg/3/ ), but I don't know if it's right or if I'm just lucky.
Would appreciate some help.
It does this because you enabled depth testing at line 203:
gl.enable(gl.DEPTH_TEST);
The depth buffer holds the depth of each pixel drawn. In the default mode, when trying to draw a pixel, WebGL checks the depth of the pixel already there; only if the new pixel's depth is LESS than the previous pixel's will the new pixel be drawn.
Since all your shapes have a depth of 0.0, the first one fills the depth buffer with 0.0 for the pixels it covers. The next shape you draw also has a depth of 0.0 for each pixel, which is not LESS than the 0.0 already there, so those pixels do not get overwritten.
If you comment out the line that enables depth testing, you'll get the results you were expecting.
Note that with depth testing enabled, you can set the comparison WebGL uses to decide whether or not to draw a pixel by calling gl.depthFunc (docs).
I'll get straight to the point :)
From the above 480 x 320 diagram, I am thinking I can detect collisions at the pixel level, like in a worm game.
What I want to know is how to sample pixels on separate layers. As you can see in the diagram, as the worm is falling I want to sample only the black pixels with glReadPixels() to see if the worm is standing on (colliding with) any terrain, but when I last tried it, glReadPixels() sampled all the pixels on screen, with no notion of "layers".
The white pixels are the background and should not be part of the sampling.
Am I perhaps supposed to have a black-and-white copy of my terrain in a separate buffer and call glReadPixels() on that separate buffer, so that the background image (white pixels) won't get sampled?
Until now I have been drawing my terrain to the screen in the same buffer/context where I draw my background image.
Any ideas?
What glReadPixels() does is read back the currently bound buffer. Since that buffer is the output of all your compositing, it obviously contains all the data you wrote and knows nothing about your logical arrangement into layers. You can try drawing your terrain into the stencil buffer and reading back only that, using GL_DEPTH_STENCIL as the format parameter.
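A rough sketch of that idea in C. drawBackground()/drawTerrain() are placeholders, FOOT_W/FOOT_H/footX/footY describe the worm's footprint, and reading GL_DEPTH_STENCIL back requires a context/framebuffer that actually exposes a packed depth-stencil (on plain ES you may need to render the mask into a color texture instead):

    #define FOOT_W 8
    #define FOOT_H 2

    /* While compositing the scene, mark terrain pixels in the stencil buffer. */
    glEnable(GL_STENCIL_TEST);

    glStencilFunc(GL_ALWAYS, 0, 0xFF);            /* background: leave stencil at 0 */
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawBackground();

    glStencilFunc(GL_ALWAYS, 1, 0xFF);            /* terrain: write 1 where it covers */
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    drawTerrain();

    /* Read back the packed depth+stencil under the worm's feet; the stencil
       value sits in the low 8 bits of each GL_UNSIGNED_INT_24_8 value. */
    GLuint pixels[FOOT_W * FOOT_H];
    glReadPixels(footX, footY, FOOT_W, FOOT_H,
                 GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, pixels);
    for (int i = 0; i < FOOT_W * FOOT_H; ++i) {
        if ((pixels[i] & 0xFF) != 0) {
            /* The worm is touching terrain at this pixel. */
        }
    }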
I have a concave polygon I need to draw in OpenGL.
The polygon is defined as a list of points which form its exterior ring, and a list of lists-of-points that define its interior rings (exclusion zones).
I can already deal with the exclusion zones, so a solution for how to draw a polygon without interior rings will be good too.
A solution with Boost.Geometry will be good, as I already use it heavily in my application.
I need this to work on the iPhone, namely OpenGL ES (the older version with fixed pipeline).
How can I do that?
Try OpenGL's tessellation facilities (the GLU tessellator). You can use them to convert a complex polygon into a set of triangles, which you can render directly.
EDIT (in response to comment): OpenGL ES doesn't support tessellation functions. In this case, and if the polygon is static data, you could generate the tessellation offline using OpenGL on your desktop or notebook computer.
If the shape is dynamic, then you are out of luck with OpenGL ES. However, there are numerous libraries (e.g., CGAL) that will perform the same function.
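For the offline route, a minimal sketch using the GLU tessellator on the desktop (one exterior ring plus one hole; the callbacks just print the resulting triangle vertices so they can be exported as a vertex array for the device):

    #include <GL/glu.h>
    #include <stdio.h>

    /* GLU feeds the tessellated geometry back through callbacks. Registering
       an edge-flag callback forces the output to be plain GL_TRIANGLES. */
    static void onBegin(GLenum type)   { (void)type; }
    static void onEnd(void)            {}
    static void onEdgeFlag(GLboolean f){ (void)f; }
    static void onVertex(void *data)
    {
        const GLdouble *v = (const GLdouble *)data;
        printf("%f %f\n", v[0], v[1]);   /* or append to an array / write to a file */
    }

    /* outer: exterior ring, hole: one interior ring; both as (x, y, 0) triples. */
    void tessellate(GLdouble (*outer)[3], int nOuter, GLdouble (*hole)[3], int nHole)
    {
        GLUtesselator *tess = gluNewTess();
        gluTessCallback(tess, GLU_TESS_BEGIN,     (void (*)(void))onBegin);
        gluTessCallback(tess, GLU_TESS_VERTEX,    (void (*)(void))onVertex);
        gluTessCallback(tess, GLU_TESS_END,       (void (*)(void))onEnd);
        gluTessCallback(tess, GLU_TESS_EDGE_FLAG, (void (*)(void))onEdgeFlag);

        gluTessBeginPolygon(tess, NULL);
        gluTessBeginContour(tess);
        for (int i = 0; i < nOuter; ++i)
            gluTessVertex(tess, outer[i], outer[i]);
        gluTessEndContour(tess);

        /* Each interior ring is just another contour; the default odd
           winding rule turns it into a hole. */
        gluTessBeginContour(tess);
        for (int i = 0; i < nHole; ++i)
            gluTessVertex(tess, hole[i], hole[i]);
        gluTessEndContour(tess);
        gluTessEndPolygon(tess);

        gluDeleteTess(tess);
    }

Dump the emitted triangles into a file shipped with the app; on the device the polygon then becomes a single glDrawArrays(GL_TRIANGLES, ...) call.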
It's a somewhat complicated and resource-costly method, but any concave polygon can be drawn with the following steps (note that this method is only guaranteed to work for flat polygons, and I also assume you are drawing onto a flat surface, or in 2D orthographic mode):
- Enable the stencil test and use glStencilFunc(GL_ALWAYS, 1, 0xFFFF).
- Disable the color mask to prevent unwanted draws: glColorMask(0, 0, 0, 0).
- I assume you have the vertices in an array of doubles, or in some other form (strongly recommended, since this method draws the same polygon multiple times), but a display list or glBegin/glEnd can be used as well.
- Set glStencilOp(GL_KEEP, GL_KEEP, GL_INCR).
- Draw the polygon as a GL_TRIANGLE_FAN.
Now the stencil buffer has values > 0 wherever triangles of the fan were drawn. The trick is that every pixel truly inside the polygon ends up with a value where value mod 2 = 1. This is because the triangle fan sweeps across the polygon surface, and whenever a fan triangle covers area outside the polygon, that area gets covered again later, when the valid areas are drawn. This can happen many times, but in every case pixels outside the polygon are covered an even number of times and pixels inside an odd number of times.
Some exceptions can occur when the vertex order causes outside areas not to be covered again. To filter these cases out, the polygon must also be drawn with the vertex order reversed (all such cases behave correctly when the order is switched):
- Set glStencilFunc(GL_EQUAL, 1, 1) to prevent these errors from happening in the reverse direction (only areas marked as inside by the first pass can be drawn, so errors from the other direction won't appear; logically this produces the intersection of the two half-solutions).
- Draw the polygon in reverse vertex order, keeping glStencilOp set to increment the swept pixel values.
Now we have a correct stencil buffer with pixel_value % 2 = 1 exactly where a pixel is truly inside the polygon. The last step is to draw the polygon itself:
- Set glColorMask(1, 1, 1, 1) so the polygon is actually visible.
- Keep glStencilFunc(GL_EQUAL, 1, 1) so only the correct pixels are drawn.
- Draw the polygon in the same mode (vertex arrays etc.), or, if you draw without lighting/texturing, a single whole-screen rectangle can be drawn instead (faster than submitting all the vertices, and only the valid polygon pixels will be set).
If everything goes well, the polygon is drawn correctly. Make sure that after this you disable the stencil test and/or clear the stencil buffer if you also use it for another purpose. A condensed code sketch of the stencil setup follows below.
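For reference, here is a condensed C sketch of the same odd-even idea in the form it is often given: a variant that uses GL_INVERT instead of GL_INCR, so a single stencil pass is enough and no reverse-order pass is needed. drawPolygonFan() stands in for submitting your vertex array as a GL_TRIANGLE_FAN:

    /* Pass 1: build coverage parity in the stencil buffer, color writes off. */
    glEnable(GL_STENCIL_TEST);
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, 1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INVERT);   /* toggle parity on every covering triangle */
    drawPolygonFan();

    /* Pass 2: draw the visible polygon only where the coverage count was odd. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, 1);              /* lowest stencil bit set = inside */
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawPolygonFan();                           /* or a single bounding rectangle */

    glDisable(GL_STENCIL_TEST);

Interior rings can be handled in the same first pass: drawing each hole contour as its own fan toggles the parity back to even inside the hole, so the second pass automatically skips it.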
Check out glues, which has tessellation functions that can handle concave polygons.
I wrote a Java class for a small graphics library that does exactly what you are looking for; you can check it here:
https://github.com/DzzD/TiGL/blob/main/android/src/fr/dzzd/tigl/PolygonTriangulate.java
It receives as input two float arrays (vertices & uvs) and returns the same vertices and uvs reordered, ready to be drawn as a list of triangles.
If you want to exclude a zone (or several), you can simply merge the two polygons (the main one + the hole) into one by connecting them through a vertex; you will end up with a single polygon that can be triangulated like any other with the same function.
Like this:
Zoomed in, it will look like this:
In the end it is just a single polygon.
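A minimal sketch of that bridging step in C, positions only (UVs would be carried along the same way). Which outer vertex and hole vertex to bridge through is up to you, and for a robust triangulation afterwards the hole should be wound opposite to the outer ring:

    #include <stdlib.h>

    typedef struct { float x, y; } Vec2;

    /* Merge an outer ring and one hole into a single ring by walking the
       outer ring up to 'cut', detouring around the whole hole starting at
       'holeStart', and bridging back through a duplicated pair of vertices.
       Caller frees the result; *nOut = nOuter + nHole + 2. */
    Vec2 *bridgeHole(const Vec2 *outer, int nOuter,
                     const Vec2 *hole, int nHole,
                     int cut, int holeStart, int *nOut)
    {
        Vec2 *ring = malloc(sizeof(Vec2) * (nOuter + nHole + 2));
        int k = 0;

        for (int i = 0; i <= cut; ++i)          /* outer ring up to the cut vertex */
            ring[k++] = outer[i];
        for (int i = 0; i < nHole; ++i)         /* the whole hole ring             */
            ring[k++] = hole[(holeStart + i) % nHole];
        ring[k++] = hole[holeStart];            /* close the hole loop...          */
        ring[k++] = outer[cut];                 /* ...and bridge back to the outer */
        for (int i = cut + 1; i < nOuter; ++i)  /* rest of the outer ring          */
            ring[k++] = outer[i];

        *nOut = k;
        return ring;
    }

The result is a single ring with a zero-width slit where the bridge is, which can then be fed to the triangulation function like any other polygon.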
What is the purpose of the source rectangle parameter in the SpriteBatch.Draw() method?
MSDN says: A rectangle that specifies (in texels) the source texels from a texture. Use null to draw the entire texture.
What does that mean?
The idea of the sourceRectangle is to allow you to implement what is both a performance optimisation and an artist convenience by arranging multiple sprites into a single texture. This is known as a "Texture Atlas" or a "Sprite Sheet".
(source: andrewrussell.net)
I explain why it is a performance optimisation in this answer. Basically it lets you reduce the number of texture-swaps. (So in the case of my illustration, if you're only drawing an animated character once, using a sprite-sheet will not improve performance.)
It also lets you implement tacky 2D special effects, like having a sprite "wipe" in:
(source: andrewrussell.net)
A texel is more-or-less the same thing as a pixel in the texture (a "texture pixel", if you will). So, when you draw your sprite, you specify the top-left corner of your sprite within the texture, along with its width and height. (The same as if you selected it in an image editor.)
If you pass in null for your source rectangle, XNA will assume a source rectangle that covers the entire texture.
The origin you specify to Draw is also measured in texels from the upper-left corner of the source rectangle.
In a situation where you have a single texture that contains different frames (animated textures), you will want to specify the source rectangle, so that you can draw a single frame from a texture.
i.e.
Look at this spritesheet here
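The frame-selection arithmetic itself is language independent; here is a small sketch in C, assuming a hypothetical sheet laid out as a grid of equally sized cells (in XNA you would put the same numbers into the Rectangle you pass as sourceRectangle):

    /* Source rectangle (in texels) for animation frame 'frame' of a sprite
       sheet with 'columns' cells per row, each frameWidth x frameHeight. */
    typedef struct { int x, y, width, height; } Rect;

    Rect frameSourceRect(int frame, int columns, int frameWidth, int frameHeight)
    {
        Rect r;
        r.x      = (frame % columns) * frameWidth;   /* column within the sheet */
        r.y      = (frame / columns) * frameHeight;  /* row within the sheet    */
        r.width  = frameWidth;
        r.height = frameHeight;
        return r;
    }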
The source rectangle defines the area of the texture that will be displayed. So if you have a 40x40 texture, and your rectangle is (0, 0, 20, 20), only the top left corner of the texture will be displayed. If you specify null for the rectangle, you will draw the entire texture.
This can be helpful when drawing from a spritesheet (a collection of textures that are all put into one bigger texture), and also in image manipulation programs.