DirectX: "see through" polygons - directx

I've created a simple DirectX app that renders a field of vertices. The vertices are rendered like this (viewed from the top):
|\|\|\|\|
|\|\|\|\|
Each triangle is rendered like this:
1
|\
2 3
This should mean the polygons are counterclockwise and should not be rendered, yet they are. In any case, when viewed from the top the plane looks perfect.
However, when viewed from another angle some polygons are sort of transparent and you can see geometry behind them. I've highlighted some of the places where this is happening.
I suspect this is one of those basic beginner problems. What am I missing? My rasterizer description is as follows:
new RasterizerStateDescription
{
    CullMode = CullMode.Front,
    IsAntialiasedLineEnabled = true,
    IsMultisampleEnabled = true,
    IsDepthClipEnabled = true,
    IsFrontCounterclockwise = false,
    IsScissorEnabled = true,
    DepthBias = 1,
    DepthBiasClamp = 1000.0f,
    FillMode = FillMode.Wireframe,
    SlopeScaledDepthBias = 1.0f
};

This is by design: FillMode.Wireframe only draws the edges of each triangle as lines. That's all.
Do a first pass with solid fill mode, depth writes on, and color writes masked out (RenderTargetWriteMask in D3D11 terminology), then a second pass with depth testing on (but depth writes off) and wireframe mode on. You will probably need a depth bias too, since lines and triangles are not rasterized the same way (their z can differ at the same fragment position).
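A minimal sketch of those two passes in SharpDX terms, matching the question's snippet (the exact values, and how your engine wraps these states, are assumptions):

// Pass 1: solid fill, depth writes on, color writes masked out
var solidRaster = new RasterizerStateDescription
{
    CullMode = CullMode.Front,
    FillMode = FillMode.Solid,
    IsDepthClipEnabled = true
};
var depthOnlyBlend = new BlendStateDescription();
depthOnlyBlend.RenderTarget[0].RenderTargetWriteMask = 0; // no color output

// Pass 2: wireframe, depth test on, depth writes off; the negative bias
// pulls the lines slightly toward the camera so they win the depth test
// against the triangles they outline
var wireRaster = new RasterizerStateDescription
{
    CullMode = CullMode.Front,
    FillMode = FillMode.Wireframe,
    IsDepthClipEnabled = true,
    DepthBias = -1,
    SlopeScaledDepthBias = -1.0f
};
var wireDepth = new DepthStencilStateDescription
{
    IsDepthEnabled = true,
    DepthWriteMask = DepthWriteMask.Zero,
    DepthComparison = Comparison.LessEqual
};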
BTW, this technique is known as hidden line removal. You can check this presentation for more details.

Turned out I just had no depth-stencil buffer set up. Oh well.
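For completeness, a minimal sketch of the missing setup in SharpDX terms (sizes and names such as renderTargetView are assumptions):

var depthBuffer = new Texture2D(device, new Texture2DDescription
{
    Format = Format.D24_UNorm_S8_UInt,
    Width = backBufferWidth,   // match the render target
    Height = backBufferHeight,
    ArraySize = 1,
    MipLevels = 1,
    SampleDescription = new SampleDescription(1, 0), // match the swap chain
    BindFlags = BindFlags.DepthStencil,
    Usage = ResourceUsage.Default
});
var depthView = new DepthStencilView(device, depthBuffer);

// Bind it together with the render target, and clear it every frame;
// without a bound depth buffer, triangles are drawn in submission order
// and farther geometry shows through nearer geometry.
context.OutputMerger.SetTargets(depthView, renderTargetView);
context.ClearDepthStencilView(depthView, DepthStencilClearFlags.Depth, 1.0f, 0);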

Related

Trouble implementing shadows in WebGL

I am trying to implement shadows into my WEBGL 2.0 Project using this tutorial
https://webgl2fundamentals.org/webgl/lessons/webgl-shadows.html
Currently I am getting really bad results like this:
Basically, a ton of the terrain is drawn in shadow that shouldn't be. The light projection is from the camera position toward the direction you are looking, so hypothetically you shouldn't be able to see any shadows, because the light projection is the same as your camera (I am just doing this for testing until I can get this working properly).
I have everything the same as the tutorial, I believe, except that I am using glMatrix instead of their matrix math library (which shouldn't matter, I would assume). Here's the thing though: I don't use a model-view matrix for anything I am rendering, so none of my points are in a -1 to 1 range. They can go out as far as -3200, etc. It's all just one big terrain mesh, chunked out.
I think the issue lies with how I am creating the texture matrix:
// Build the matrix that maps world space into the light's clip space,
// and from there into [0, 1] texture space.
textureMatrix = glMatrix.mat4.create();
glMatrix.mat4.translate(textureMatrix, textureMatrix, [0.5, 0.5, 0.5]);
glMatrix.mat4.scale(textureMatrix, textureMatrix, [0.5, 0.5, 0.5]);
glMatrix.mat4.multiply(textureMatrix, textureMatrix, projectionMatrix);
glMatrix.mat4.invert(lightMatrix, lightMatrix); // lightMatrix holds the light's world transform
glMatrix.mat4.multiply(textureMatrix, textureMatrix, lightMatrix);
I am using the same matrix for the light projection as for my normal projection. Is that an issue? If anyone could help, it would be greatly appreciated.
That's probably because the Y position of your light (which in your example is really the distance between the eye and the scene) is too big for the Z size of your shadow volume (the extent of the volume along the view direction). Here, posY is inside the wireframe box:
But if you increase posY too much (i.e. your shapes get out of the shadow volume), they disappear.
So you should increase the size of your shadow volume (or shrink your scene, either way). You cannot simulate that with the sliders, because they only give you control over the X and Y dimensions: projWidth and projHeight.
For example, in the last code block on your tutorial page, change the last parameter ("far") from 10 to 100:
const lightProjectionMatrix = settings.perspective
    ? m4.perspective(
        degToRad(settings.fieldOfView),
        settings.projWidth / settings.projHeight,
        0.5,  // near
        10)   // far
    : m4.orthographic(
        -settings.projWidth / 2,   // left
        settings.projWidth / 2,    // right
        -settings.projHeight / 2,  // bottom
        settings.projHeight / 2,   // top
        0.5,  // near
        100); // far
Then you can increase posY much further:
Without your full code, it is hard to reproduce and help. Could you try injecting your scene into the tutorial code? You can bind the viewpoint to the position and orientation of the light by using the same inputs (just add 0.5 to X to see a bit of shadow and make sure it is properly computed):
/*const cameraPosition = [settings.cameraX, settings.cameraY, 15];*/
const cameraPosition = [settings.posX+0.5, settings.posY, settings.posZ];
/*const target = [0, 0, 0]; */
const target = [settings.targetX, settings.targetY, settings.targetZ];

Wrong result using function fillPoly in opencv for very large images

I am having a hard time solving an issue with mask creation. My image is large, 40959px × 24575px, and I'm trying to create a mask for it.
I noticed that I don't have a problem for images up to a certain size (I tested around 33000px × 22000px), but for dimensions larger than that I get an error inside my mask: it turns black in the middle of the polygon and the white region extends itself to the left edge. The result should have no black area inside the polygon and no white area extending to the left edge of the image.
So my code looks like this:
import cv2
import numpy as np

pixel_points_list = latLonToPixel(dataSet, lat_lon_pairs)
print pixel_points_list
# This is the list I'm getting:
# [[213, 6259], [22301, 23608], [25363, 22223], [27477, 23608], [35058, 18433], [12168, 282], [213, 6259]]

image = cv2.imread(in_tmpImgFilePath, -1)
print image.shape
# Value of image.shape: (24575, 40959, 4)

mask = np.zeros(image.shape, dtype=np.uint8)
roi_corners = np.array([pixel_points_list], dtype=np.int32)
print roi_corners
# contents of roi_corners:
"""
[[[  213  6259]
  [22301 23608]
  [25363 22223]
  [27477 23608]
  [35058 18433]
  [12168   282]
  [  213  6259]]]
"""

channel_count = image.shape[2]
ignore_mask_color = (255,) * channel_count
cv2.fillPoly(mask, roi_corners, ignore_mask_color)
cv2.imwrite("mask.tif", mask)
And this is the mask I'm getting with those coordinates (minified mask):
You can see that in the middle the mask is mirrored. I took those points from pixel_points_list, drew them on a coordinate system, and I get a valid polygon, but when using fillPoly I get wrong results.
Here is an even simpler example where I have only 4 (5) points:
roi_corners = array([[  213,  6259],
                     [22301, 23608],
                     [35058, 18433],
                     [12168,   282],
                     [  213,  6259]])
And I get:
Does anyone have a clue why this happens?
Thanks!
The issue is in the function CollectPolyEdges, called by fillPoly (and drawContours, fillConvexPoly, etc.).
Internally, it's assumed that the point coordinates (of integer type int32) have meaningful values only in the 16 lowest bits. In practice, you can draw correctly only if your points have coordinates up to 32768 (which is exactly the maximum x coordinate you can draw in your image).
This can't really be considered a bug, since your images are extremely large.
As a workaround, you can scale your mask and your points down by a given factor, fill the poly on the smaller mask, and then re-scale the mask back to the original size.
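A minimal sketch of that workaround, reusing image, roi_corners, and ignore_mask_color from the question (the factor of 4 is an arbitrary choice that keeps every coordinate well below 32768):

import cv2
import numpy as np

factor = 4  # 40959 / 4 ~ 10240, comfortably below the 32768 limit

small_shape = (image.shape[0] // factor, image.shape[1] // factor, image.shape[2])
small_mask = np.zeros(small_shape, dtype=np.uint8)
small_corners = roi_corners // factor  # integer division keeps int32

cv2.fillPoly(small_mask, small_corners, ignore_mask_color)

# INTER_NEAREST keeps the mask binary instead of blurring its edges
mask = cv2.resize(small_mask, (image.shape[1], image.shape[0]),
                  interpolation=cv2.INTER_NEAREST)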
As @DanMašek pointed out in the comments, this is in fact a bug, and not fixed yet.
In the bug discussion another workaround is mentioned. It consists of drawing into multiple ROIs with size less than 32768, correcting the coordinates for each ROI using the offset parameter of fillPoly.

Edge detection on pool table

I am currently working on an algorithm to detect the playing area of a pool table. For this purpose, I captured an image, transformed it to grayscale, and used a Sobel operator on it. Now I want to detect the playing area as a box with 4 corners located in the 4 corners of the table.
Detecting the edges of the table is quite straightforward; however, it turns out that detecting the 4 corners is not so easy, as there are pockets in the pool table. Now I just want to fit a line to each of the side edges, and from those lines I can compute the intersections, which are the corners of my table.
I am stuck here, because I have not yet come up with a good way to find these lines in my image. I can see them very easily when I use the Sobel operator. But what would be a good way of detecting them and computing the positions of the corners?
EDIT: I added some sample Images
Basic Image:
Grayscale Image
Sobel Filter (horizontal only)
For a general solution, there will be many sources of noise: problems with cloth around the rails, wood texture (or no texture) on the rails, varying lighting, shadows, stains on the cloth, chalk on the rails, and so on.
When color and lighting aren't dependable, and when you want to find the edges of geometric objects, then it's best to think in terms of edge pixels rather than gray/color pixels.
A while back I was thinking of making a phone-based app to save ball positions for later review, including online, so I've thought a bit about this problem. Although I can provide some guidance for your current question, it occurs to me that you'll run into new problems at each step of the way, so I'll try to provide a more complete answer.
Convert the image to grayscale. If we can't get an algorithm to work in grayscale, we'll inevitably run into problems with color. (See below)
[TBD] Do some preprocessing to reduce noise.
Find edge points using Sobel or (if you must) Canny.
Run Hough line detection, but with a few caveats and parameterizations as described below (a rough sketch follows this list of steps).
Find the lines that describe a keystone-shaped quadrilateral. (This will likely be the inner of two quadrilaterals: one inside the rail on the bed, and a slightly larger one at the cloth/wood rail edge at the top.)
(Optional) Use the side pockets to help determine the orientation of the quadrilateral.
Use a perspective transform to map the perspective-distorted table bed to a rectangle of [thankfully] known relative dimensions. We know the bed sizes in advance, so you can remap the distorted quadrilateral to a proper rectangle. (We'll ignore some optical effects for now.)
Remap the color image to the perspective-corrected rectangle. You'll probably need to tweak the positions of some balls.
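As mentioned above, a rough sketch of the grayscale/denoise/edge/Hough steps in OpenCV Python (the file name and every threshold here are assumptions to tune, not values from this thread):

import cv2
import numpy as np

img = cv2.imread('table.jpg')                 # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)      # mild noise reduction
edges = cv2.Canny(gray, 50, 150)              # edge points (or Sobel + threshold)
# a long minLineLength and a generous maxLineGap favor the straight rail
# edges over pockets, balls, and cloth texture
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100,
                        minLineLength=200, maxLineGap=30)
for x1, y1, x2, y2 in lines[:, 0]:
    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)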
General notes:
Filtering by color in the general sense can be difficult. It's tempting to think of the cloth as being simply green, blue, or red (or some other color), but when you look at the actual RGB values and try to separate colors you'll begin to appreciate what a nightmare working in color can be.
Optical distortion might throw off some edges.
The far short rail may be difficult to detect, BUT you can do this: find the inside lines of the two long rails, then search vertically between those two rails for the first strong horizontal-ish edge at the far side of the image. That will be the far short rail.
Although you probably want to use your phone camera for convenience, using a Kinect camera or similar (preferably smaller) device would make the problem easier. Not only would you have both color data and 3D data, but you would eliminate some problems with lighting since the depth data wouldn't depend on visible lighting.
For your app, consider limiting the search region for rail edges to a perspective-distorted rectangle. The user might be able to adjust the search region. This could greatly simplify the processing, and could help you work around problems if the table isn't lit well (as can be the case).
If color segmentation (as suggested by @Dima) works, get the outline of the blob using contour following. Then simplify the outline to a quadrilateral (or a polygon with few sides) using the Douglas-Peucker algorithm. You should find the four table edges this way.
For more accuracy, you can refine the edge location by local search of transitions across it and perform line fitting. Then intersect the lines to get the corners.
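A short sketch of that contour-plus-Douglas-Peucker idea in OpenCV Python (cloth_mask is assumed to be the binary output of the color segmentation step):

import cv2

contours, _ = cv2.findContours(cloth_mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
table = max(contours, key=cv2.contourArea)          # largest blob = table bed
peri = cv2.arcLength(table, True)                   # closed-contour perimeter
quad = cv2.approxPolyDP(table, 0.02 * peri, True)   # simplified to ~4 corners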
The following answer assumes you have already found the positions of the lines in the image. This can be done "easily" by looking directly at the pixels and checking whether they form a "line". It is usually easier to detect if the image has been deskewed first, i.e. rotated so the rectangle (pool table) looks more like [] than /=/. Then it is just a case of scanning the pixels and, where pixels of similar colour lie alongside each other, assuming a line runs between them.
The code works by looping over the lines found in the image. Whenever the end points of a horizontal and a vertical line fall within a tolerance of each other in both the x and y coordinates, the pair is marked as a corner. Once a corner is found, I take the average of the two end points to determine where the corner lies. For example:
A horizontal line ending at (10, 10) and a vertical line starting at (12, 12) will be detected as a corner if the tolerance is 2 or more. The corner found will be at (11, 11).
NOTE: This only finds top-left corners, but it can easily be adapted to find all of them. The reason it has been done like this is that, in the application where I use it, it is faster to first sort each array into an order where the relevant values are found first; see: Why is processing a sorted array faster than an unsorted array?.
Also note that my code finds only the first corner for each line, which might not be applicable for you; this is mainly for performance reasons. However, the code can easily be adapted to find all corners across all lines and then either select the "most likely" corner or average over them.
Also note my answer is written in C#.
private IEnumerable<Point> FindTopLeftCorners(IEnumerable<Line> horizontalLines, IEnumerable<Line> verticalLines)
{
    List<Point> TopLeftCorners = new List<Point>();
    Line[] laHorizontalLines = horizontalLines.OrderBy(l => l.StartPoint.X).ThenBy(l => l.StartPoint.Y).ToArray();
    Line[] laVerticalLines = verticalLines.OrderBy(l => l.StartPoint.X).ThenBy(l => l.StartPoint.Y).ToArray();
    foreach (Line verticalLine in laVerticalLines)
    {
        foreach (Line horizontalLine in laHorizontalLines)
        {
            if (verticalLine.StartPoint.X <= (horizontalLine.StartPoint.X + _nCornerTolerance) && verticalLine.StartPoint.X >= (horizontalLine.StartPoint.X - _nCornerTolerance))
            {
                if (horizontalLine.StartPoint.Y <= (verticalLine.StartPoint.Y + _nCornerTolerance) && horizontalLine.StartPoint.Y >= (verticalLine.StartPoint.Y - _nCornerTolerance))
                {
                    int nX = (verticalLine.StartPoint.X + horizontalLine.StartPoint.X) / 2;
                    int nY = (verticalLine.StartPoint.Y + horizontalLine.StartPoint.Y) / 2;
                    TopLeftCorners.Add(new Point(nX, nY));
                    break;
                }
            }
        }
    }
    return TopLeftCorners;
}
Where Line is the following class:
public class Line
{
    public Point StartPoint { get; private set; }
    public Point EndPoint { get; private set; }

    public Line(Point startPoint, Point endPoint)
    {
        this.StartPoint = startPoint;
        this.EndPoint = endPoint;
    }
}
And _nCornerTolerance is an int of a configurable amount.
The playing area of a pool table typically has a distinctive color, like green or blue, so I would try a color-based segmentation approach first. The Color Thresholder app in MATLAB gives you an easy way to try different color spaces and thresholds.
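If you end up in OpenCV rather than MATLAB, the same idea is a simple in-range test in HSV space (a sketch; the bounds are placeholders to tune for your cloth color):

import cv2

img = cv2.imread('table.jpg')  # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
cloth_mask = cv2.inRange(hsv, (35, 60, 40), (85, 255, 255))  # greenish hue band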

How to draw an effect (Gloss with metallic lustre) on concentric circle such as following?

I referred to the following article. What I actually need to draw is concentric circles with an effect as shown in the image below.
I am finding it difficult to a) draw the white streaks radially, and b) find key terms to search for related articles to proceed further on this.
Any hint or link to read about this will be of great help.
Try these
Metallic Knob
Metallic Knob 2
http://maniacdev.com/2012/06/ios-source-code-example-making-reflective-metallic-buttons-like-the-music-app
This is a tutorial on making reflective metal buttons. You can apply the techniques from the source code to whatever object you're trying to make. The source code is found here on GitHub. I just googled "ios objective c metal effect", because that's what you're trying to do, right? The metal effect appears in concentric circles and changes as you tilt your phone, just as the iOS 6 music slider does.
I don't have any code for you, but the idea is actually quite simple. You're drawing a number of lines radiating from a single central point (say 50,50) to four different sets of points: the first set is x = 0 to 100 with y = 0; the second set is y = 0 to 100 with x = 0; the third set is x = 0 to 100 with y = 100; the fourth set is y = 0 to 100 with x = 100. For each step you either change the colour from white to black (or white to grey) in increments, or use a lookup table with your colour values in it.
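A quick sketch of that idea in Python with PIL, since no platform was settled on (the size and the brightness function are arbitrary choices for illustration):

from PIL import Image, ImageDraw
import math

size = 101
img = Image.new('RGB', (size, size))
draw = ImageDraw.Draw(img)
cx = cy = size // 2

# walk the square's perimeter, drawing a line from the centre to each
# perimeter point with a brightness that oscillates around the circle
perimeter = [(x, 0) for x in range(size)] + \
            [(size - 1, y) for y in range(size)] + \
            [(x, size - 1) for x in range(size - 1, -1, -1)] + \
            [(0, y) for y in range(size - 1, -1, -1)]
for i, (x, y) in enumerate(perimeter):
    angle = 2 * math.pi * i / len(perimeter)
    v = int(128 + 127 * math.sin(6 * angle))  # six light/dark streak cycles
    draw.line((cx, cy, x, y), fill=(v, v, v))
img.save('lustre.png')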

HLSL - How can I set sampler Min/Mag/Mip filters to disable all filtering/anti-aliasing?

I have a tex2D sampler that I want to return only exactly those colours present in my texture. I am using Shader Model 3, so I cannot use Load.
In the event of a texel overlapping multiple colours, I want it to pick one and have the whole texel be that colour.
I think to do this I want to disable mipmapping, or at least trilinear filtering of the mips.
sampler2D gColourmapSampler : register(s0) = sampler_state {
    Texture = <gColourmapTexture>; // defined above
    MinFilter = None; // controls sampling: None, Linear, or Point
    MagFilter = None; // controls sampling: None, Linear, or Point
    MipFilter = None; // controls how the mips are generated: None, Linear, or Point
    //...
};
My problem is that I don't really understand Min/Mag/Mip filtering, so I am not sure what combination I need to set these to, or if this is even what I am after.
Here is what a portion of my source texture looks like:
Screenshot of what the relevant area looks like after the full texture is mapped to my sphere:
The anti-aliasing/blending/filtering artefacts are clearly visible; I don't want these.
MSDN has this to say:
D3DSAMP_MAGFILTER: Magnification filter of type D3DTEXTUREFILTERTYPE
D3DSAMP_MINFILTER: Minification filter of type D3DTEXTUREFILTERTYPE.
D3DSAMP_MIPFILTER: Mipmap filter to use during minification. See D3DTEXTUREFILTERTYPE.
D3DTEXF_NONE: When used with D3DSAMP_MIPFILTER, disables mipmapping.
Another good link on understanding hlsl intrinsics.
RESOLVED
Not an HLSL issue at all! Sorry all; I seem to ask a lot of questions that are impossible to answer. Ogre was overriding the above settings. This was fixed with:
Ogre::MaterialManager::getSingleton().setDefaultTextureFiltering(Ogre::FO_NONE, Ogre::FO_NONE, Ogre::FO_NONE);
It looks to me like you're getting the values from a lower (unfiltered) mip level than the highest-detail one you're showing.
MipFilter = None
should prevent that, unless something in the code overrides it. So look for calls to SetSamplerState.
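For reference, a forced override through the D3D9 device would look something like this (a sketch; device is assumed to be your IDirect3DDevice9 pointer and 0 the sampler stage in question):

device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_POINT);
device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_POINT);
device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_NONE); // disables mipmapping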
What you have done should turn off filtering. There are two potential issues that I can think of, though:
1) The driver just ignores you and filters anyway (if this is happening, there is nothing you can do).
2) You have some form of edge anti-aliasing enabled.
Looking at your resulting image, it doesn't look much like bilinear filtering to me, so I'd think you are suffering from having anti-aliasing turned on somewhere. Have you set the anti-aliasing flag when you create the device/render-texture?
If you really want just one texel, use Load instead of Sample. Load takes (as far as I know) an int2 as an argument, which specifies the actual array coordinates in the texture. Load then looks up the entry in your texture at the given array coordinates.
So just scale your float2, e.g. by using floor(float2(texCoord.x * textureWidth, texCoord.y * textureHeight)).
MSDN for Load: http://msdn.microsoft.com/en-us/library/bb509694(VS.85).aspx
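For Shader Model 4 and up, a sketch of that Load-based lookup could look like this (the function and parameter names are made up for illustration):

// returns the unfiltered colour of the texel under uv
float4 SampleExact(Texture2D tex, float2 uv, float2 texSize)
{
    int2 texel = int2(uv * texSize); // truncation floors positive coordinates
    return tex.Load(int3(texel, 0)); // third component selects mip level 0
}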
When using just Shader Model 3, you could use a little hack to achieve this. Again, let's assume that you know textureWidth and textureHeight.
// compute the floating point stride of one texel
float step_x = 1.0 / textureWidth;
float step_y = 1.0 / textureHeight;
// compute integer texel coordinates (the int cast truncates, which
// floors positive values)
int target_x = texCoord.x * textureWidth;
int target_y = texCoord.y * textureHeight;
// snap to the centre of that texel (the + 0.5), so the sample cannot
// land on a texel boundary and pick up a neighbour
float2 texCoordNew;
texCoordNew.x = (target_x + 0.5) * step_x;
texCoordNew.y = (target_y + 0.5) * step_y;
I did not test it, but I think it could work.
