My application shows a large number of objects in orthographic projection.
Each object is composed of a few sprites at the same position, each with a different texture.
To simplify:
Each object has 2 sprites: a filled rectangle and an outline rectangle.
I have one mesh (vertices and attributes) per sprite type - one for all the objects' fills and one for all the objects' outlines. Two meshes in total.
I draw my objects (sprites) in 2 phases - the fills mesh (which contains all fill sprites), then the outlines mesh (which contains all outline sprites).
For each object's sprites (across all the meshes) I assign the same z-value so the object is drawn correctly.
I use depthFunc(LEQUAL) so sprites with the same z value can still draw over each other, roughly as in the sketch below.
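A minimal sketch of the setup, in desktop-GL C++ terms since the WebGL calls map one-to-one; the two draw functions are hypothetical placeholders for the real draw calls:

    #include <GLES2/gl2.h>

    // Hypothetical placeholders: each one issues a single draw call
    // (e.g. glDrawElements) over its whole mesh.
    void drawFillsMesh()    {}
    void drawOutlinesMesh() {}

    void drawScene()
    {
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LEQUAL);   // fragments with an equal z still pass
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        drawFillsMesh();          // phase 1: every object's fill, at its z
        drawOutlinesMesh();       // phase 2: every object's outline, same z
    }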
The problem is when I want to change the transparency of all the objects. The depth test blocks some objects from being drawn, so I can't see the objects behind them. If I disable the depth test, all the outlines suddenly jump to the front (it looks much worse in the real application, where an object is composed of many more than 2 sprites).
Sorting the objects won't work here, because I draw each mesh (fills and outlines) one after the other.
Is there any trick to make this work?
I'm working with 3D meshes (mostly triangle meshes, though occasionally quad- or general polygonal meshes) for which I compute a value for each edge. This value I'd like to visualise using a colour map, i.e. render each edge in a colour corresponding to its associated value.
Is there a way to assign values to edges in WebGL that is more efficient than using a typical drawArrays approach? That is, looping over the edges and storing the vertices pair-wise in a buffer (resulting in a lot of duplicated x, y, z coordinate data) and introducing an additional vertex attribute storing the edge value (the same value for both vertices)?
In OpenGL, I'd use a drawElements approach, store the edge values in a texture (or buffer texture), and then use gl_PrimitiveID in the fragment shader to look up the relevant edge value for the edge currently being processed. Unfortunately, WebGL doesn't know about gl_PrimitiveID, and I don't see a way to emulate it. I briefly thought about instanced rendering (using gl_InstanceID), but that would complicate things and probably end up not being much more efficient...
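For concreteness, the plain layout described above looks roughly like this (a sketch in desktop-GL C++ terms, since the WebGL buffer calls map one-to-one; attribute locations and names are illustrative):

    #include <GLES2/gl2.h>
    #include <vector>

    struct EdgeVertex { float x, y, z, value; };  // position + the edge's value

    // One buffer with each edge's two endpoints duplicated, both carrying
    // the same per-edge value; drawn afterwards with GL_LINES.
    GLuint buildEdgeBuffer(const std::vector<EdgeVertex>& edgeVerts)
    {
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER,
                     edgeVerts.size() * sizeof(EdgeVertex),
                     edgeVerts.data(), GL_STATIC_DRAW);

        // attribute 0: position; attribute 1: edge value (fed to the colour map)
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(EdgeVertex), (void*)0);
        glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, sizeof(EdgeVertex),
                              (void*)(3 * sizeof(float)));
        glEnableVertexAttribArray(0);
        glEnableVertexAttribArray(1);
        return vbo;
    }
    // ...then: glDrawArrays(GL_LINES, 0, edgeVerts.size());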
The Issue
I've set up a minimal SceneKit project with a scene that contains the default airplane with a transparent plane that acts as a shadow receiver. I've duplicated this setup so there are two airplanes and two transparent shadow planes.
There is a directional light that casts shadows and has its shadowMode property set to .deferred. When the two shadow planes overlap, the plane that is closer to the camera 'cuts out' the shadow on the plane that is further away from the camera.
I know this is due to the fact that the plane's material has its .writesToDepthBuffer property set to true. However, without this the deferred shadows don't work.
The Question
Is there a way to show shadows on multiple overlapping planes? I know I can use SCNFloor to show multiple shadows but I specifically want shadows on multiple planes with a different Y position. Think of a scenario in ARKit where multiple planes are detected.
The Code
I've set up a minimal project on GitHub here.
Moving the Y values of the two shadow planes close enough together will avoid the cutoff issue.
In SceneKit this is the regular behaviour of two different planes that receive shadow projections. To get robust shadows, use just one 3D object (a plane, or a custom-shape geometry if you need different floor levels) as a shadow catcher.
If you have several 3D objects with the Writes depth option turned on, use the Rendering order property for each object. Nodes with greater rendering orders are rendered last. The default rendering order is zero.
For instance:
geoNodeOne.renderingOrder = -1 /* Rendered first */
geoNodeTwo.renderingOrder = 50 /* Rendered last */
But in your case the Rendering order property is useless, because one shadow-projected plane blocks the other one.
To model a custom-shape geometry, use an Extrude tool in a 3D modelling app (like Maya or 3ds Max).
I am trying to write a little script to apply a texture to rectangular cuboids. To accomplish this, I walk the scene graph, and wherever I find an SoIndexedFaceSet node, I insert an SoTexture2 node before it and put my image file in that SoTexture2 node. The problem I am facing is that the texture is applied correctly to 2 of the faces (say face 1 and face 2) in the Y-Z plane, but on the other 4 faces it just stretches the texture from the boundaries of those two faces.
It looks something like this.
The front is how it should look, but as you can see, on the other two faces, it just extrapolates the corner values of the front face. Any ideas why this is happening and any way to avoid this?
Yep, assuming that you did not specify texture coordinates for your SoIndexedFaceSet, that is exactly the expected behavior.
If Open Inventor sees that you have applied a texture image to a geometry and did not specify texture coordinates, it will automatically compute some texture coordinates. Of course it's not possible to guess how you wanted the texture to be applied. So it computes the bounding box then computes texture coordinates that stretch the texture across the largest extent of the geometry (XY, YZ or XZ). If the geometry is a cuboid you can see the effect clearly as in your image. This behavior can be useful, especially as a quick approximation.
What you need to make this work the way you want is to explicitly assign texture coordinates to the geometry such that the texture is mapped separately onto each face. In Open Inventor you can actually still share the vertices between faces, because you are allowed to specify different vertex indices and texture coordinate indices (of course this is only a convenience for the application, since OpenGL doesn't support it and Open Inventor has to re-shuffle the data internally). If you applied the same texture to an SoCube node, you would see the texture mapped separately to each face as expected. That's because SoCube defines texture coordinates for each face.
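For illustration, a minimal Open Inventor sketch of two faces that share vertices but each get the full set of texture corners (the filename, coordinates and indices are made up; the point is the separate textureCoordIndex per face):

    #include <Inventor/nodes/SoSeparator.h>
    #include <Inventor/nodes/SoTexture2.h>
    #include <Inventor/nodes/SoTextureCoordinate2.h>
    #include <Inventor/nodes/SoCoordinate3.h>
    #include <Inventor/nodes/SoIndexedFaceSet.h>

    SoSeparator* makeTexturedQuads()
    {
        SoSeparator* root = new SoSeparator;

        SoTexture2* tex = new SoTexture2;
        tex->filename.setValue("brick.png");    // hypothetical image file
        root->addChild(tex);

        // The four corners of the texture, shared by every face.
        SoTextureCoordinate2* tc = new SoTextureCoordinate2;
        const SbVec2f corners[4] = { SbVec2f(0, 0), SbVec2f(1, 0),
                                     SbVec2f(1, 1), SbVec2f(0, 1) };
        tc->point.setValues(0, 4, corners);
        root->addChild(tc);

        SoCoordinate3* coords = new SoCoordinate3;  // shared cuboid vertices
        const SbVec3f pts[6] = { SbVec3f(0,0,0), SbVec3f(1,0,0), SbVec3f(1,1,0),
                                 SbVec3f(0,1,0), SbVec3f(1,0,-1), SbVec3f(1,1,-1) };
        coords->point.setValues(0, 6, pts);
        root->addChild(coords);

        SoIndexedFaceSet* faces = new SoIndexedFaceSet;
        // Two adjacent faces sharing the edge 1-2; the vertex indices differ...
        const int32_t vIdx[] = { 0, 1, 2, 3, -1,    // front face
                                 1, 4, 5, 2, -1 };  // side face, reuses verts 1, 2
        // ...but each face independently uses the full 0..3 texture corners.
        const int32_t tIdx[] = { 0, 1, 2, 3, -1,
                                 0, 1, 2, 3, -1 };
        faces->coordIndex.setValues(0, 10, vIdx);
        faces->textureCoordIndex.setValues(0, 10, tIdx);
        root->addChild(faces);
        return root;
    }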
I have a problem with texture coordinates. First I would like to describe what I want to do, then I will ask the question.
I have a mesh that uses several textures, and I want it to use only one big texture that merges all the textures the mesh is using. I have written a routine that merges the textures - that is no problem - but I still have to modify the texture coordinates so that the mesh, which now uses one texture instead of many, still has everything placed correctly.
See the picture:
In the upper left corner there is one of the textures (let's call it A) that I merged into the big texture on the right (B). A's top left is 0,0 and its bottom right is 1,1. For ease of use let's say that B.width = A.width * 2, and the same for the height. So on B, the mini texture (M, which is A originally) has its bottom right at 0.5,0.5.
I have no problem understanding this so far, and I hope I have got it right. But the problem is that the original A has texture coordinates that are:
above 1
negative
What should these become on M?
Let's say A has -0.1,0 - is that -0.05,0 on M inside B?
And what about the numbers outside the 0..1 range? Is -3.2,0 on A -1.6 or -3.1 on B? That is, do I keep only the fractional part and divide it by 2 (because, as stated above, the width is doubled), or do I divide the whole number by 2? As far as I understand, numbers outside this range make the texture repeat (wrap around). How do I handle this so that the output does not show the orange texture from B?
If my question is not clear enough (I am not very skilled in English), please ask and I will edit/answer - just help me clear up my confusion :)
Thanks in advance:
Péter
A single texture has coordinates in the [0-1, 0-1] range.
The new texture also has coordinates in the [0-1, 0-1] range.
In your new texture, composed of four single textures, your algorithm has to translate the texture coordinates this way:
- the blue single square texture gets new coordinates in the [0-0.5, 0-0.5] range
- the orange single square texture gets new coordinates in the [0.5-1, 0-0.5] range
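As a worked sketch of that translation (names are illustrative): the sub-texture's top-left inside the atlas is (offsetU, offsetV) and its size relative to the atlas is (scaleU, scaleV); for A in the question that is offset (0, 0) and scale (0.5, 0.5). A coordinate outside [0, 1] that relied on GL_REPEAT has to be wrapped into [0, 1] first, because the atlas region no longer wraps by itself:

    #include <cmath>

    struct UV { float u, v; };

    // Wrap a repeating coordinate into [0,1): e.g. -0.1 -> 0.9, -3.2 -> 0.8
    static float wrap(float t) { return t - std::floor(t); }

    // Remap a coordinate from sub-texture (A) space into atlas (B) space.
    UV remapToAtlas(UV a, float offsetU, float offsetV, float scaleU, float scaleV)
    {
        return { offsetU + wrap(a.u) * scaleU,
                 offsetV + wrap(a.v) * scaleV };
    }
    // Example: A's (-0.1, 0) wraps to (0.9, 0) and lands at (0.45, 0) in B,
    // not at (-0.05, 0).

One caveat: wrapping per vertex only works if no triangle spans a tiling seam; a triangle whose coordinates cross an integer boundary has to be split first.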
I have a concave polygon I need to draw in OpenGL.
The polygon is defined as a list of points which form its exterior ring, and a list of lists-of-points that define its interior rings (exclusion zones).
I can already deal with the exclusion zones, so a solution for how to draw a polygon without interior rings will be good too.
A solution with Boost.Geometry will be good, as I already use it heavily in my application.
I need this to work on the iPhone, namely OpenGL ES (the older version with fixed pipeline).
How can I do that?
Try OpenGL's tessellation facilities. You can use them to convert a complex polygon into a set of triangles, which you can render directly.
EDIT (in response to comment): OpenGL ES doesn't support tessellation functions. In this case, and if the polygon is static data, you could generate the tessellation offline using OpenGL on your desktop or notebook computer.
If the shape is dynamic, then you are out of luck with OpenGL ES. However, there are numerous libraries (e.g., CGAL) that will perform the same function.
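If you go the offline route, a rough sketch with the GLU tessellator (desktop OpenGL; run it once on your computer, capture the triangles, ship them as static data). The combine callback needed for self-intersecting input and all error handling are omitted, and the exact spelling of the callback cast varies by platform:

    #include <GL/glu.h>
    #include <array>
    #include <vector>

    static std::vector<std::array<GLdouble, 3>> triangles;  // captured output

    static void onVertex(void* data)
    {
        const GLdouble* p = static_cast<const GLdouble*>(data);
        triangles.push_back({p[0], p[1], p[2]});
    }
    static void onEdgeFlag(GLboolean) {}  // its mere presence forces GL_TRIANGLES

    void tessellate(std::vector<std::array<GLdouble, 3>>& ring)  // exterior ring
    {
        GLUtesselator* tess = gluNewTess();
        gluTessCallback(tess, GLU_TESS_VERTEX,    (void (*)()) onVertex);
        gluTessCallback(tess, GLU_TESS_EDGE_FLAG, (void (*)()) onEdgeFlag);

        gluTessBeginPolygon(tess, nullptr);
        gluTessBeginContour(tess);        // each interior ring (hole) would go
        for (auto& v : ring)              // in its own extra contour here
            gluTessVertex(tess, v.data(), v.data());
        gluTessEndContour(tess);
        gluTessEndPolygon(tess);
        gluDeleteTess(tess);
        // `triangles` now holds an independent triangle list, 3 vertices each.
    }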
It's a bit complicated and resource-costly, but any concave polygon can be drawn with the following steps (note this method surely works on flat polygons, and I assume you are drawing on a flat surface or in 2D orthogonal mode):
enable the stencil test, using glStencilFunc(GL_ALWAYS, 1, 0xFFFF)
disable the color mask to prevent unwanted draws: glColorMask(0,0,0,0)
I assume you have the vertices in an array of doubles or similar (strongly recommended, as this method draws the same polygon multiple times, though glList or glBegin/glEnd can be used as well)
set glStencilOp(GL_KEEP, GL_KEEP, GL_INCR)
draw the polygon as a GL_TRIANGLE_FAN
Now the stencil buffer has values >0 wherever triangles of the polygon were drawn. The trick is that all the valid polygon area ends up with values where value mod 2 = 1: the triangle fan sweeps along the polygon surface, and if a triangle has area outside the polygon, that area will be drawn again later when the valid areas are drawn. This can happen many times, but in every case pixels outside the polygon are drawn an even number of times and pixels inside an odd number of times.
Some exceptions can happen when the vertex order causes outside areas not to be drawn again. To filter out these cases, the polygon must also be drawn with the vertex array in reverse direction (all these cases work properly when the order is switched):
- set glStencilFunc(GL_EQUAL, 1, 1) to prevent these errors from happening in the reverse direction (it can only draw areas inside the polygon that were drawn the first time, so errors from the other direction won't appear; logically this produces the intersection of the two half-solutions)
- draw the polygon in reverse order, keeping glStencilOp(GL_KEEP, GL_KEEP, GL_INCR) so swept pixels are still incremented
Now we have a correct stencil buffer with pixel_value % 2 = 1 exactly where the pixel is truly inside the polygon. The last step is to draw the polygon itself:
- set glColorMask(1,1,1,1) to draw the visible polygon
- keep glStencilFunc(GL_EQUAL, 1, 1) so only the correct pixels are drawn
- draw the polygon in the same mode (vertex arrays etc.), or, if you draw without lighting/texturing, a single whole-screen rectangle can be drawn instead (faster than drawing all the vertices, and only the valid polygon pixels will be set)
If everything goes well, the polygon is drawn correctly. Make sure that after this you reset the stencil usage (disable the stencil test) and/or clear the stencil buffer if you also use it for another purpose.
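For reference, the same odd-even idea is often written with the GL_INVERT stencil op, which folds the mod-2 counting into a single forward pass with no reverse pass; a minimal sketch of that variant, assuming a 2D vertex array and an allocated stencil buffer:

    #include <GLES/gl.h>   // OpenGL ES 1.x; plain desktop GL 1.x is identical here

    // Odd-even stencil fill: inside pixels are covered an odd number of
    // times by the fan, so toggling the stencil bit leaves them set.
    void drawConcavePolygon(const GLfloat* verts, GLsizei vertexCount)
    {
        glClearStencil(0);
        glClear(GL_STENCIL_BUFFER_BIT);
        glEnable(GL_STENCIL_TEST);

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, verts);

        // Pass 1: toggle the stencil bit per covering triangle, no color writes.
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glStencilFunc(GL_ALWAYS, 1, 1);
        glStencilOp(GL_INVERT, GL_INVERT, GL_INVERT);
        glDrawArrays(GL_TRIANGLE_FAN, 0, vertexCount);

        // Pass 2: draw visibly where the bit survived (odd coverage).
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glStencilFunc(GL_EQUAL, 1, 1);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        glDrawArrays(GL_TRIANGLE_FAN, 0, vertexCount);

        glDisable(GL_STENCIL_TEST);
        glDisableClientState(GL_VERTEX_ARRAY);
    }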
Check out glues, which has tessellation functions that can handle concave polygons.
I wrote a Java class for a small graphics library that does exactly what you are looking for; you can check it here:
https://github.com/DzzD/TiGL/blob/main/android/src/fr/dzzd/tigl/PolygonTriangulate.java
It receives as input two float arrays (vertices & uvs) and returns the same vertices and uvs reordered, ready to be drawn as a list of triangles.
If you want to exclude a zone (or several), you can simply connect the two polygons (the main one + the hole) into one by joining them at a vertex; you end up with a single polygon that can be triangulated like any other with the same function (a sketch of the merge follows the pictures).
Like this :
Zoomed in, to understand it better, it looks like:
Finally it is just a single polygon.
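A rough sketch of that merge step (C++ for illustration; `i` and `j` are the chosen connection vertices, e.g. a mutually visible pair, and the hole must wind opposite to the outer ring):

    #include <vector>

    struct Point { float x, y; };

    // Merge an exterior ring and one hole into a single ring through a
    // zero-width bridge between outer[i] and hole[j].
    std::vector<Point> mergeHole(const std::vector<Point>& outer, size_t i,
                                 const std::vector<Point>& hole, size_t j)
    {
        std::vector<Point> out(outer.begin(), outer.begin() + i + 1);
        // Walk the whole hole once, starting and ending at hole[j]...
        for (size_t k = 0; k <= hole.size(); ++k)
            out.push_back(hole[(j + k) % hole.size()]);
        out.push_back(outer[i]);  // ...then bridge back to the outer ring
        out.insert(out.end(), outer.begin() + i + 1, outer.end());
        return out;
    }

The two bridge edges coincide, so the channel has zero width and the result behaves as one simple polygon for the triangulator.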