IDirect3DDevice9, setting how textures scale?

In Photoshop you can control how pictures are scaled up and down via 'image interpolation'; it offers different options such as 'Bicubic', 'Bilinear' and 'Nearest Neighbour'.
I was wondering if I could do something similar in DirectX? Basically, if I slap a texture on a quad and stretch the quad, how can I control how the texture on the quad is sampled?
Thanks for any help!

If you are using the fixed-function pipeline:
http://msdn.microsoft.com/en-us/library/ee421769(VS.85).aspx
Set the D3DSAMP_MAGFILTER, D3DSAMP_MINFILTER and D3DSAMP_MIPFILTER sampler states.
Otherwise, if you're using HLSL, set the FILTER option of the sampler object.
There are four types of filtering: NONE, POINT, LINEAR and ANISOTROPIC.
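For the fixed-function route, a minimal sketch (assuming a valid IDirect3DDevice9* and the texture bound to sampler stage 0; the function name is just illustrative):

#include <d3d9.h>

// Point filtering corresponds to Photoshop's 'Nearest Neighbour', linear to bilinear.
void SetQuadFiltering(IDirect3DDevice9* device)
{
    // Magnification filter: used where the quad is larger on screen than the texture.
    device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
    // Minification filter: used where the quad is smaller than the texture.
    device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
    // Filter between mipmap levels; D3DTEXF_NONE disables mipmapping.
    device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);
}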

Related

OpenGL Image warping using lookup table

I am working on an Android application that slims or fattens faces after detecting them. Currently, I have achieved that by using the thin-plate spline algorithm.
http://ipwithopencv.blogspot.com.tr/2010/01/thin-plate-spline-example.html
The problem is that the algorithm is not fast enough for me, so I decided to switch to OpenGL. After some research, I see that a lookup-table texture is the best option for this. I have a set of control points for the source image and their new positions for the warp effect.
How should I create the lookup-table texture to get the warp effect?
Are you really sure you need a lookup texture?
It seems it would be better if you had a textured rectangular mesh (or a non-rectangular mesh, of course, as the face-detection algorithm you have most likely returns a face-like mesh) and warped it according to the algorithm.
Not only would you be able to do that in a vertex shader, processing each mesh node in parallel, but there are also far fewer values to process compared to generating a texture dynamically.
The most compatible way to achieve that is to give each mesh point a Y coordinate of 0 and an X coordinate that stores the mesh node's index, and then pass a texture (maybe even a buffer texture, if the target devices support it) to the vertex shader, in which the R and G channels at that index contain the desired X and Y coordinates.
Inside the vertex shader, the coordinates are then loaded from the texture.
This approach allows for dynamic warping without reloading the geometry, provided the data texture is kept up to date (for example, by updating it in a pixel shader).
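A minimal sketch of the vertex-shader side of this idea, assuming OpenGL ES 2.0 with vertex texture fetch, warped positions already expressed in clip space, and a one-row floating-point data texture; all identifiers here are placeholders:

// GLSL ES vertex shader embedded as a C++ string. a_index selects the node's texel
// in the one-row data texture, whose R/G channels hold the warped x/y position.
static const char* kWarpVertexShader = R"(
    attribute float a_index;       // mesh node index (the 'X coordinate' above)
    attribute vec2  a_texCoord;    // undistorted texture coordinate of the node
    uniform sampler2D u_warpTex;   // 1 x N data texture with warped positions
    uniform float u_nodeCount;     // N, the number of mesh nodes

    varying vec2 v_texCoord;

    void main()
    {
        // Fetch this node's warped position from the lookup texture.
        vec2 warped = texture2DLod(u_warpTex,
                                   vec2((a_index + 0.5) / u_nodeCount, 0.5),
                                   0.0).rg;
        gl_Position = vec4(warped, 0.0, 1.0);
        v_texCoord  = a_texCoord;  // UVs stay fixed, so the image warps
    }
)";

Re-rendering the data texture each frame (for example in a pixel shader, as mentioned above) then animates the warp without touching the mesh.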

HLSL - how does vertex shader's output POSITION0 affect pixel shader's texture mapping uv?

It seems that the w component of POSITION/POSITION0 divides everything in the output struct, which is what lets the pixel shader do correct perspective mapping. It also can't be removed; otherwise the pixel shader won't output anything.
I didn't see any configuration for this in the program code. Is it a fixed default setting for all devices, or can I customize it?
You can disable the perspective correction in HLSL on any interpolator, as described here.
The modifier you want is noperspective.
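For illustration, a hedged sketch of such an output struct; this assumes Shader Model 4.0 or later, where interpolation modifiers exist, and the HLSL is embedded as a C++ string only so it sits next to host code:

// Declaring an interpolator with 'noperspective' (hypothetical struct name).
static const char* kNoPerspectiveExample = R"(
    struct VS_OUTPUT
    {
        float4 pos : SV_Position;             // clip-space position, still divided by w
        noperspective float2 uv : TEXCOORD0;  // interpolated linearly in screen space,
                                              // i.e. without the usual 1/w correction
    };
)";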

Texture getting stretched across faces of a cuboid in Open Inventor

I am trying to write a little script to apply a texture to rectangular cuboids. To accomplish this, I run through the scene graph, and wherever I find an SoIndexedFaceSet node, I insert an SoTexture2 node before it and put my image file in that SoTexture2 node. The problem I am facing is that the texture is applied correctly to two of the faces (say face 1 and face 2) in the Y-Z plane, but on the other four faces it just stretches the texture at the boundaries of those two faces.
It looks something like this.
The front is how it should look, but as you can see, on the other two faces, it just extrapolates the corner values of the front face. Any ideas why this is happening and any way to avoid this?
Yep, assuming that you did not specify texture coordinates for your SoIndexedFaceSet, that is exactly the expected behavior.
If Open Inventor sees that you have applied a texture image to a geometry but did not specify texture coordinates, it will automatically compute some. Of course it's not possible to guess how you wanted the texture to be applied, so it computes the bounding box and then computes texture coordinates that stretch the texture across the largest extent of the geometry (XY, YZ or XZ). If the geometry is a cuboid you can see the effect clearly, as in your image. This behavior can be useful, especially as a quick approximation.
What you need to do to make this work the way you want is to explicitly assign texture coordinates to the geometry, such that the texture is mapped separately to each face. In Open Inventor you can actually still share the vertices between faces, because you are allowed to specify different vertex indices and texture coordinate indices (of course this is only a convenience for the application, because OpenGL doesn't support it and Open Inventor has to re-shuffle the data internally). If you applied the same texture to an SoCube node you would see the texture mapped separately to each face, as expected; that's because SoCube defines texture coordinates for each face.
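A minimal sketch of that idea (assuming Open Inventor / Coin-style C++; the image filename and coordinates are purely illustrative). Two quads share the vertices along their common edge via coordIndex, while textureCoordIndex reuses the full 0..3 range for each face, so the image is mapped once per face instead of being stretched across both:

#include <Inventor/nodes/SoSeparator.h>
#include <Inventor/nodes/SoCoordinate3.h>
#include <Inventor/nodes/SoTextureCoordinate2.h>
#include <Inventor/nodes/SoTextureCoordinateBinding.h>
#include <Inventor/nodes/SoTexture2.h>
#include <Inventor/nodes/SoIndexedFaceSet.h>

SoSeparator* makeTexturedQuadPair()
{
    SoSeparator* root = new SoSeparator;

    SoTexture2* tex = new SoTexture2;
    tex->filename = "checker.png";                     // hypothetical image file
    root->addChild(tex);

    // One set of texture coordinates covering the whole image, reused by every face.
    SoTextureCoordinate2* texCoords = new SoTextureCoordinate2;
    texCoords->point.set1Value(0, SbVec2f(0.0f, 0.0f));
    texCoords->point.set1Value(1, SbVec2f(1.0f, 0.0f));
    texCoords->point.set1Value(2, SbVec2f(1.0f, 1.0f));
    texCoords->point.set1Value(3, SbVec2f(0.0f, 1.0f));
    root->addChild(texCoords);

    SoTextureCoordinateBinding* binding = new SoTextureCoordinateBinding;
    binding->value = SoTextureCoordinateBinding::PER_VERTEX_INDEXED;
    root->addChild(binding);

    // Six vertices: two unit quads side by side, sharing the edge between vertices 1 and 2.
    SoCoordinate3* coords = new SoCoordinate3;
    coords->point.set1Value(0, SbVec3f(0.0f, 0.0f, 0.0f));
    coords->point.set1Value(1, SbVec3f(1.0f, 0.0f, 0.0f));
    coords->point.set1Value(2, SbVec3f(1.0f, 1.0f, 0.0f));
    coords->point.set1Value(3, SbVec3f(0.0f, 1.0f, 0.0f));
    coords->point.set1Value(4, SbVec3f(2.0f, 0.0f, 0.0f));
    coords->point.set1Value(5, SbVec3f(2.0f, 1.0f, 0.0f));
    root->addChild(coords);

    SoIndexedFaceSet* faces = new SoIndexedFaceSet;
    // Vertex indices: the shared vertices 1 and 2 appear in both faces.
    const int32_t coordIdx[] = { 0, 1, 2, 3, SO_END_FACE_INDEX,
                                 1, 4, 5, 2, SO_END_FACE_INDEX };
    // Texture coordinate indices: each face uses the full 0..3 range on its own.
    const int32_t texIdx[]   = { 0, 1, 2, 3, SO_END_FACE_INDEX,
                                 0, 1, 2, 3, SO_END_FACE_INDEX };
    faces->coordIndex.setValues(0, 10, coordIdx);
    faces->textureCoordIndex.setValues(0, 10, texIdx);
    root->addChild(faces);

    return root;
}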

How do you add light with multiple passes in OpenGL?

I have two functions that I want to combine the results of:
drawAmbient
drawDirectional
They each work fine individually, drawing the scene with the ambient light only, or the directional light only. I want to show both the ambient and directional light but am having a bit of trouble. I try this:
[self drawAmbient];
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
[self drawDirectional];
glDisable(GL_BLEND);
but I only see the results of the first draw. I calculate the depth in the same way for both sets of draw calls. I could always just render to textures and blend them, but that seems redundant. Is there a way that I can add the lighting together when rendering to the default framebuffer?
You say you calculate the depth the same way in both passes. This is of course correct, but as the default depth comparison function is GL_LESS, nothing will actually be rendered in the second pass, since the depth is never less than what is currently in the depth buffer.
So for the second pass just change the depth test to
glDepthFunc(GL_EQUAL);
and then back to
glDepthFunc(GL_LESS);
Or you may also set it to GL_LEQUAL for the whole runtime to cover both cases.
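Put together with your original snippet, the second pass then becomes:

[self drawAmbient];
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);   // accumulate the directional pass additively
glDepthFunc(GL_EQUAL);         // only shade fragments laid down by the first pass
[self drawDirectional];
glDepthFunc(GL_LESS);          // restore the default depth test
glDisable(GL_BLEND);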
As far as I know, you should render the lighting to separate render targets and then combine them. So you will have rendered the scene into these targets:
textured, without lighting
accumulated diffuse lighting (fill with the ambient color and additively render all light sources)
accumulated specular lighting (if you use a specular component)
Then combine the textures, so that final_color = textured * diffuse + specular.

"warping" an image on iOS

I'm trying to find a way to do something similar to this on iOS:
Does anyone know a simple way to do it?
I don't know of a one-liner to do this, but you can use OpenGL to render a textured grid of quads whose texture coordinates are equally distributed.
Example of a 2x2 grid (3x3 vertices):
{0.0,1.0} {0.5,1.0} {1.0,1.0}
{0.0,0.5} {0.5,0.5} {1.0,0.5}
{0.0,0.0} {0.5,0.0} {1.0,0.0}
If you move shared vertices of adjacent quads (like in your example) while the texture coords stay fixed, you get a warp effect. You need a trivial vertex and fragment shader when using OpenGL ES, especially if you want to smooth the warp effect, which in its simple form is linearly interpolated per quad/triangle.
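A minimal sketch of building such a grid (plain C; the grid size and the displaced vertex are just an example):

#include <stddef.h>

#define GRID_N 2                        /* 2x2 quads -> 3x3 vertices             */
#define VERT_COUNT ((GRID_N + 1) * (GRID_N + 1))

static float positions[VERT_COUNT][2];  /* vertex positions, these get displaced */
static float texcoords[VERT_COUNT][2];  /* texture coordinates, evenly spaced    */

static void buildWarpGrid(void)
{
    for (int y = 0; y <= GRID_N; ++y) {
        for (int x = 0; x <= GRID_N; ++x) {
            size_t i = (size_t)y * (GRID_N + 1) + (size_t)x;
            float u = (float)x / GRID_N;     /* 0.0, 0.5, 1.0 for GRID_N = 2 */
            float v = (float)y / GRID_N;
            texcoords[i][0] = u;  texcoords[i][1] = v;
            positions[i][0] = u;  positions[i][1] = v;  /* identity = no warp */
        }
    }
    /* Displace the shared centre vertex; its texture coordinate stays at
       (0.5, 0.5), so the image bulges toward the new position. */
    positions[4][0] += 0.1f;
    positions[4][1] += 0.1f;
}

Render the quads (or two triangles per quad) with these two arrays, and the warp appears wherever positions and texcoords disagree.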
