Texture getting stretched across faces of a cuboid in Open Inventor - texture-mapping

I am trying to write a little script to apply a texture to rectangular cuboids. To accomplish this, I run through the scene graph, and wherever I find an SoIndexedFaceSet node, I insert an SoTexture2 node before it and put my image file in that SoTexture2 node. The problem I am facing is that the texture is applied correctly to 2 of the faces (say face 1 and face 2) in the Y-Z plane, but for the other 4 faces it just stretches the texture at the boundaries of those two faces (1 and 2).
It looks something like this.
The front is how it should look, but as you can see, on the other two faces, it just extrapolates the corner values of the front face. Any ideas why this is happening and any way to avoid this?

Yep, assuming that you did not specify texture coordinates for your SoIndexedFaceSet, that is exactly the expected behavior.
If Open Inventor sees that you have applied a texture image to a geometry and did not specify texture coordinates, it will automatically compute some texture coordinates. Of course it's not possible to guess how you wanted the texture to be applied. So it computes the bounding box then computes texture coordinates that stretch the texture across the largest extent of the geometry (XY, YZ or XZ). If the geometry is a cuboid you can see the effect clearly as in your image. This behavior can be useful, especially as a quick approximation.
What you need to make this work the way you want is to explicitly assign texture coordinates to the geometry such that the texture is mapped separately to each face. In Open Inventor you can still share the vertices between faces, because you are allowed to specify different vertex indices and texture coordinate indices (this is purely a convenience for the application: OpenGL doesn't support separate index arrays, so Open Inventor has to re-shuffle the data internally). If you applied the same texture to an SoCube node you would see the texture mapped separately to each face, as expected, because SoCube defines texture coordinates for each face.
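As a rough sketch of what that could look like in C++ for a single face (hedged: the image file name brick.png and the exact node layout are placeholders; every face of the cuboid would repeat the same 0..1 quad of texture coordinate indices while reusing the shared vertex coordinates):

    #include <Inventor/nodes/SoSeparator.h>
    #include <Inventor/nodes/SoTexture2.h>
    #include <Inventor/nodes/SoTextureCoordinate2.h>
    #include <Inventor/nodes/SoCoordinate3.h>
    #include <Inventor/nodes/SoIndexedFaceSet.h>

    SoSeparator* buildTexturedFace()
    {
        SoSeparator* root = new SoSeparator;

        SoTexture2* tex = new SoTexture2;
        tex->filename = "brick.png";                 // placeholder image file
        root->addChild(tex);

        // One full 0..1 quad of texture coordinates, reused by every face.
        SoTextureCoordinate2* texCoords = new SoTextureCoordinate2;
        texCoords->point.set1Value(0, SbVec2f(0, 0));
        texCoords->point.set1Value(1, SbVec2f(1, 0));
        texCoords->point.set1Value(2, SbVec2f(1, 1));
        texCoords->point.set1Value(3, SbVec2f(0, 1));
        root->addChild(texCoords);

        SoCoordinate3* coords = new SoCoordinate3;   // the 8 cuboid vertices, filled elsewhere
        root->addChild(coords);

        SoIndexedFaceSet* faces = new SoIndexedFaceSet;
        // Vertex indices of one face; other faces reuse the same shared vertices (-1 ends a face).
        static const int32_t vertIdx[] = { 0, 1, 2, 3, -1 };
        // Texture coordinate indices restart at 0..3 for every face,
        // so the whole image is mapped onto each face separately.
        static const int32_t texIdx[]  = { 0, 1, 2, 3, -1 };
        faces->coordIndex.setValues(0, 5, vertIdx);
        faces->textureCoordIndex.setValues(0, 5, texIdx);
        root->addChild(faces);

        return root;
    }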

Related

OpenGL Image warping using lookup table

I am working on an Android application that slims or fattens faces after detecting them. Currently, I achieve that by using the thin-plate spline algorithm.
http://ipwithopencv.blogspot.com.tr/2010/01/thin-plate-spline-example.html
The problem is that the algorithm is not fast enough for me, so I decided to switch to OpenGL. After some research, I see that a lookup table texture is the best option for this. I have a set of control points for the source image and their new positions for the warp effect.
How should I create lookup table texture to get warp effect?
Are you really sure you need a lookup texture?
It seems it would be better if you had a textured rectangular mesh (or a non-rectangular mesh, of course, as the face-detection algorithm you have most likely returns a face-like mesh) and warped that according to the algorithm:
Not only would you be able to do that in a vertex shader, processing each mesh node in parallel, but there would also be fewer values to process compared to generating a lookup texture dynamically.
The most compatible way to achieve that is to give each mesh point a Y coordinate of 0 and an X coordinate that stores the mesh index, and then pass a texture (maybe even a buffer texture, if target devices support it) to the vertex shader, where the R and G channels at the needed index contain the desired X and Y coordinates.
Inside the vertex shader, the coordinates are then loaded from that texture.
This approach allows for dynamic warping without reloading geometry, if the target data texture is properly updated — for example, inside a pixel shader.
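As a rough sketch of that idea (OpenGL ES 2.0 with vertex texture fetch is assumed, i.e. MAX_VERTEX_TEXTURE_IMAGE_UNITS > 0; the attribute and uniform names are made up for illustration), the vertex shader could look like this, kept as a C++ string for glShaderSource:

    // Hypothetical GLSL ES 1.00 vertex shader: the position attribute carries
    // y = 0 and x = the mesh-node index, as described above.
    const char* kWarpVertexShader = R"(
        attribute vec2 a_position;     // x = node index, y = 0
        attribute vec2 a_texCoord;
        uniform sampler2D u_warpedPos; // data texture: R,G = warped x,y per node
        uniform float u_nodeCount;     // width of the data texture in texels
        varying vec2 v_texCoord;

        void main() {
            // Sample the center of the texel that belongs to this node.
            float u = (a_position.x + 0.5) / u_nodeCount;
            vec2 warped = texture2DLod(u_warpedPos, vec2(u, 0.5), 0.0).rg;
            gl_Position = vec4(warped, 0.0, 1.0);
            v_texCoord = a_texCoord;
        }
    )";

Updating u_warpedPos with glTexSubImage2D (or by rendering into it) then warps the mesh without touching the geometry, which is the dynamic-update point made above.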

Turn an entire SceneKit scene into an image suitable for a texture

I've written a little app using CoreMotion, AV and SceneKit to make a simple panorama. When you take a picture, it maps that onto a SK rectangle and places it in front of whatever CM direction the camera is facing. This is working fine, but...
I would like the user to be able to click a "done" button and turn the entire scene into a single image. I could then map that onto a sphere for future viewing rather than re-creating the entire set of objects. I don't need to stitch or anything like that, I want the individual images to remain separate rectangles, like photos glued to the inside of a ball.
I know about snapshot and tried using that with a really wide FOV, but that results in a fisheye view that does not map back properly (unless I'm doing it wrong). I assume there is some sort of transform I need to apply? Or perhaps there is an easier way to do this?
The key is "photos glued to the inside of a ball". You have a bunch of rectangles, suspended in space. Turning that into one image suitable for projection onto a sphere is a bit of work. You'll have to project each rectangle onto the sphere, and warp the image accordingly.
If you just want to reconstruct the scene for future viewing in SceneKit, use SCNScene's built-in serialization, write(to:options:delegate:progressHandler:) and SCNScene(named:).
To compute the mapping of images onto a sphere, you'll need some coordinate conversion. For each image, convert the coordinates of the corners into spherical coordinates, with the origin at your point of view. Change the radius of each corner's coordinate to the radius of your sphere, and you now have the projected corners' locations on the sphere.
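For example, a small helper for that forward step could look like this (a hedged sketch; the convention of a polar angle measured from +Z is an assumption):

    #include <cmath>

    struct Spherical { double radius, theta, phi; };  // theta = polar angle from +Z, phi = azimuth

    // Convert a corner position, given relative to the point of view,
    // into spherical coordinates and snap its radius to the projection sphere.
    Spherical cornerOnSphere(double x, double y, double z, double sphereRadius)
    {
        double len = std::sqrt(x * x + y * y + z * z);
        return { sphereRadius, std::acos(z / len), std::atan2(y, x) };
    }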
It's tempting to repeat this process for each pixel in the input rectangular image. But that will leave empty pixels in the spherical output image. So you'll work in reverse. For each pixel in the spherical output image (within the 4 corner points), compute the ray (trivially done, in spherical coordinates) from POV to that point. Convert that ray back to Cartesian coordinates, compute its intersection with the rectangular image's plane, and sample at that point in your input image. You'll want to do some pixel weighting, since your output image and input image will have different pixel dimensions.
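A hedged sketch of that reverse step, with the POV at the origin and the rectangle's plane described by a point p0 and a normal n (names are illustrative):

    #include <cmath>

    struct Vec3 { double x, y, z; };

    static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // For an output pixel in direction (theta, phi), build the Cartesian ray from the
    // POV (origin) and intersect it with the rectangle's plane. The caller then samples
    // the input image at 'hit' (after converting to the image's local 2D coordinates).
    bool sampleRayOnPlane(double theta, double phi, const Vec3& p0, const Vec3& n, Vec3& hit)
    {
        Vec3 dir { std::sin(theta) * std::cos(phi),
                   std::sin(theta) * std::sin(phi),
                   std::cos(theta) };
        double denom = dot(dir, n);
        if (std::fabs(denom) < 1e-9) return false;   // ray runs parallel to the plane
        double t = dot(p0, n) / denom;
        if (t <= 0.0) return false;                  // plane lies behind the POV
        hit = { dir.x * t, dir.y * t, dir.z * t };
        return true;
    }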

"warping" an image on iOS

I'm trying to find a way to do something similar to this on iOS:
Does anyone know a simple way to do it?
I don't know of a one-liner to do this, but you can use OpenGL to render a textured grid of quads whose texture coordinates are equally distributed.
Example of a 2x2 grid (3x3 vertices):
{0.0,1.0} {0.5,1.0} {1.0,1.0}
{0.0,0.5} {0.5,0.5} {1.0,0.5}
{0.0,0.0} {0.5,0.0} {1.0,0.0}
If you move the shared vertices of adjacent quads (like in your example) while the texture coordinates stay fixed, you get a warp effect. You need a trivial vertex and fragment shader when using OpenGL ES 2.0, especially if you want to smooth the warp effect, which in its simple form is linearly interpolated per quad/triangle.
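A minimal sketch of that grid (fixed-pipeline OpenGL ES 1.x client arrays are assumed here, so no shaders are required; with ES 2.0 you would add the trivial shaders mentioned above):

    #include <OpenGLES/ES1/gl.h>   // iOS OpenGL ES 1.x header (assumed target)
    #include <cstring>

    // 3x3 vertices for a 2x2 grid of quads; texture coordinates are evenly spaced.
    static const GLfloat texCoords[9 * 2] = {
        0.0f, 1.0f,   0.5f, 1.0f,   1.0f, 1.0f,
        0.0f, 0.5f,   0.5f, 0.5f,   1.0f, 0.5f,
        0.0f, 0.0f,   0.5f, 0.0f,   1.0f, 0.0f,
    };

    // Positions start identical to the texture coordinates (an unwarped unit quad).
    static GLfloat positions[9 * 2];

    // Two triangles per quad, eight triangles in total.
    static const GLushort indices[] = {
        0,3,1, 1,3,4,  1,4,2, 2,4,5,
        3,6,4, 4,6,7,  4,7,5, 5,7,8,
    };

    void drawWarpedGrid()
    {
        std::memcpy(positions, texCoords, sizeof(positions));
        positions[4 * 2 + 0] += 0.1f;    // nudge the shared center vertex: the warp
        positions[4 * 2 + 1] += 0.1f;

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, positions);
        glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
        glDrawElements(GL_TRIANGLES, 24, GL_UNSIGNED_SHORT, indices);
    }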

XNA texture coordinates on merged textures

I got a problem with texture coordinates. First I would like to describe what I want to do then I will ask the question.
I want a mesh that uses several textures to use only one big texture instead. The big texture merges all of the textures the mesh uses. I made a routine that merges the textures; that is no problem. But I still have to modify the texture coordinates so that the mesh, which now uses only one texture instead of many, still has everything placed correctly.
See the picture:
In the upper left corner is one of the textures (let's call it A) that I merged into the big texture on the right (B). A's top left is 0,0 and its bottom right is 1,1. For simplicity, let's say that B.width = A.width * 2, and the same for the height. So on B, the bottom right of the mini texture (M, which is A as placed inside B) should be 0.5,0.5.
I have no problem understanding this so far, and I hope I have it right. The problem is that on the original A there are texture coordinates that are:
above 1
negative
What should these become on M?
Let's say A has -0.1,0: is that -0.05,0 on M inside B?
What about values outside the 0..1 range? Is -3.2,0 on A -1.6 or -3.1 on B? Do I keep only the fractional part (the value mod 1) and divide that by 2 (because I stated above that the width is doubled), or should I divide the whole number by 2? As far as I understand, values outside this range are about repeating/mirroring the texture. How do I manage this so the output does not pick up the orange texture from B?
If my question is not clear enough (I am not very skilled in English), please ask and I will edit/answer; just help me clear up my confusion :)
Thanks in advance:
Péter
A single texture has coordinates in the [0-1, 0-1] range.
The new texture also has coordinates in the [0-1, 0-1] range.
In your new texture, composed of four single textures, your algorithm has to translate the texture coordinates this way:
The blue single square texture gets new coordinates in the [0-0.5, 0-0.5] range.
The orange single square texture gets new coordinates in the [0.5-1, 0-0.5] range.
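In code, the translation is just an offset plus a scale per sub-texture; a hedged sketch (types and names are made up for illustration):

    struct UV { float u, v; };

    // Map a UV from a source texture's own [0,1] x [0,1] space into the sub-rectangle
    // it occupies inside the atlas. Blue texture: offset (0, 0), scale (0.5, 0.5);
    // orange texture: offset (0.5, 0), scale (0.5, 0.5).
    UV remapToAtlas(UV uv, float offsetU, float offsetV, float scaleU, float scaleV)
    {
        return { offsetU + uv.u * scaleU, offsetV + uv.v * scaleV };
    }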

Drawing a concave polygon in OpenGL

I have a concave polygon I need to draw in OpenGL.
The polygon is defined as a list of points which form its exterior ring, and a list of lists-of-points that define its interior rings (exclusion zones).
I can already deal with the exclusion zones, so a solution for how to draw a polygon without interior rings will be good too.
A solution with Boost.Geometry will be good, as I already use it heavily in my application.
I need this to work on the iPhone, namely OpenGL ES (the older version with fixed pipeline).
How can I do that?
Try OpenGL's tessellation facilities. You can use them to convert a complex polygon into a set of triangles, which you can render directly.
EDIT (in response to comment): OpenGL ES doesn't support tessellation functions. In this case, and if the polygon is static data, you could generate the tessellation offline using OpenGL on your desktop or notebook computer.
If the shape is dynamic, then you are out of luck with OpenGL ES. However, there are numerous libraries (e.g., CGAL) that will perform the same function.
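If you take the offline route on the desktop, the GLU tessellator is the classic tool for this; a hedged sketch (error handling and combine callbacks omitted, and the callback casts may need platform-specific tweaking):

    #include <GL/glu.h>
    #include <vector>

    // Collected output: x,y,z triples forming the tessellated triangles.
    static std::vector<GLdouble> g_triangles;

    static void tessVertex(void* data)
    {
        const GLdouble* v = static_cast<const GLdouble*>(data);
        g_triangles.insert(g_triangles.end(), v, v + 3);
    }
    static void tessBegin(GLenum) {}
    static void tessEnd() {}
    static void tessEdgeFlag(GLboolean) {}   // registering this forces plain GL_TRIANGLES output

    // 'ring' is a flat list of x,y,z triples describing the exterior contour.
    void tessellatePolygon(const std::vector<GLdouble>& ring)
    {
        GLUtesselator* tess = gluNewTess();
        gluTessCallback(tess, GLU_TESS_BEGIN,     (void (*)()) tessBegin);
        gluTessCallback(tess, GLU_TESS_VERTEX,    (void (*)()) tessVertex);
        gluTessCallback(tess, GLU_TESS_END,       (void (*)()) tessEnd);
        gluTessCallback(tess, GLU_TESS_EDGE_FLAG, (void (*)()) tessEdgeFlag);

        gluTessBeginPolygon(tess, nullptr);
        gluTessBeginContour(tess);
        for (size_t i = 0; i + 2 < ring.size(); i += 3)
            gluTessVertex(tess, const_cast<GLdouble*>(&ring[i]),
                          const_cast<GLdouble*>(&ring[i]));
        gluTessEndContour(tess);
        // Each interior ring (hole) would be another BeginContour/EndContour pair here.
        gluTessEndPolygon(tess);
        gluDeleteTess(tess);
        // g_triangles now holds the triangle list, ready to be exported for the ES app.
    }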
It is a somewhat complicated and resource-costly method, but any concave polygon can be drawn with the following steps (note that this method is only guaranteed to work on flat polygons, and I also assume you are drawing on a flat surface or in 2D orthographic mode):
- enable the stencil test and use glStencilFunc(GL_ALWAYS, 1, 0xFFFF)
- disable color writes to prevent unwanted draws: glColorMask(0,0,0,0)
- keep the vertices in an array of doubles or a similar form (strongly recommended, since this method draws the same polygon several times; a display list or glBegin/glEnd works as well)
- set glStencilOp(GL_KEEP, GL_KEEP, GL_INCR)
- draw the polygon as a GL_TRIANGLE_FAN
Now the stencil buffer has values > 0 wherever triangles of the polygon were drawn. The trick is that the whole valid polygon area ends up with values where value mod 2 = 1. This is because the triangle fan sweeps along the polygon surface, and if a triangle has area outside the polygon, that area gets drawn again later when the valid areas are drawn. This can happen many times, but in every case pixels outside the polygon are drawn an even number of times and pixels inside an odd number of times.
Some exceptions can occur when the vertex order causes outside areas not to be drawn again. To filter out these cases, the vertex array must also be drawn in reverse order (all of these cases behave correctly when the order is switched):
- set glStencilFunc(GL_EQUAL, 1, 1) to prevent these errors from occurring in the reverse direction (it can only draw areas inside the polygon drawn the first time, so errors in the other direction won't appear; logically this produces the intersection of the two half-solutions)
- draw the polygon in reverse order, keeping glStencilOp set to increment the swept pixel values
Now we have a correct stencil buffer with pixel_value mod 2 = 1 exactly where the pixel is truly inside the polygon. The last step is to draw the polygon itself:
- set glColorMask(1,1,1,1) to make the polygon visible
- keep glStencilFunc(GL_EQUAL, 1, 1) so that only the correct pixels are drawn
- draw the polygon the same way (vertex arrays etc.), or, if you draw without lighting/texturing, a single full-screen rectangle can be drawn instead (faster than drawing all the vertices, and only the valid polygon pixels will be set)
If everything goes well, the polygon is drawn correctly. Make sure that after this you disable the stencil test and/or clear the stencil buffer if you also use it for another purpose.
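All of the above collected into one hedged sketch (fixed-pipeline OpenGL ES 1.x client arrays; the function and parameter names are illustrative, and vertsReversed is the same outline in reverse order as described):

    #include <OpenGLES/ES1/gl.h>   // iOS OpenGL ES 1.x header (assumed target)

    // 'verts' holds the polygon outline as x,y pairs, 'count' is the number of vertices.
    void drawConcavePolygon(const GLfloat* verts, const GLfloat* vertsReversed, int count)
    {
        glEnableClientState(GL_VERTEX_ARRAY);

        // Pass 1: count coverage into the stencil buffer, color writes off.
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_ALWAYS, 1, 0xFFFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glVertexPointer(2, GL_FLOAT, 0, verts);
        glDrawArrays(GL_TRIANGLE_FAN, 0, count);

        // Pass 2: repeat with the reversed outline, restricted to already-marked pixels.
        glStencilFunc(GL_EQUAL, 1, 1);
        glVertexPointer(2, GL_FLOAT, 0, vertsReversed);
        glDrawArrays(GL_TRIANGLE_FAN, 0, count);

        // Pass 3: draw the visible polygon only where the stencil parity says "inside".
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glStencilFunc(GL_EQUAL, 1, 1);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        glVertexPointer(2, GL_FLOAT, 0, verts);
        glDrawArrays(GL_TRIANGLE_FAN, 0, count);

        glDisable(GL_STENCIL_TEST);
    }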
Check out glues, which has tessellation functions that can handle concave polygons.
I wrote a Java class for a small graphics library that does exactly what you are looking for; you can check it here:
https://github.com/DzzD/TiGL/blob/main/android/src/fr/dzzd/tigl/PolygonTriangulate.java
It receives as input two float arrays (vertices & uvs) and returns the same vertices and uvs reordered and ready to be drawn as a list of triangles.
If you want to exclude a zone (or several), you can simply merge the two polygons (the main one + the hole) into one by connecting them through a vertex; you end up with a single polygon that can be triangulated like any other with the same function.
Like this :
Zoomed in, to make it easier to understand, it looks like this:
Finally it is just a single polygon.
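A hedged sketch of that merging step (choosing the bridge vertices is up to you, and the hole is assumed to be wound in the opposite direction to the outer ring):

    #include <vector>

    struct Vec2 { float x, y; };

    // Splice the hole's loop into the outer loop at a chosen pair of bridge vertices,
    // so a single polygon remains that can be fed to the triangulation function.
    std::vector<Vec2> mergeHole(const std::vector<Vec2>& outer,
                                const std::vector<Vec2>& hole,
                                size_t outerBridge, size_t holeBridge)
    {
        std::vector<Vec2> merged;
        for (size_t i = 0; i <= outerBridge; ++i)            // outer ring up to the bridge
            merged.push_back(outer[i]);
        for (size_t i = 0; i <= hole.size(); ++i)            // full hole loop, closing on its bridge vertex
            merged.push_back(hole[(holeBridge + i) % hole.size()]);
        for (size_t i = outerBridge; i < outer.size(); ++i)  // back along the outer ring
            merged.push_back(outer[i]);
        return merged;
    }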
