Missing depth info after first mesh - DirectX

I'm using SlimDX for a Direct3D 10 app. In the app I've loaded two or more meshes, with images loaded as textures and an .fx file for the shaders. The code was modified from SlimDX's sample "SimpleModel10".
I moved the draw call and shader setup code into a class that manages one mesh, its shader (effect), and its draw call. Then I instantiate two copies of this class and call their draw functions one after another.
In the output, no matter how I change the Z positions of the meshes, the one drawn later always stays on top. When I used PIX to debug the draw calls, I found that the second mesh has no depth information while the first one does. I've tried with three meshes; the second and third ones have no depth either. The odd thing is that all of them are instantiated from the same class and use the same draw call.
What could have caused such a problem?
Below is part of the draw function of the class. I've omitted the rest, as it's lengthy and involves a few other classes. I kept the sample's existing OnRenderBegin() and OnRenderEnd():
PanelEffect.GetVariableByName("world").AsMatrix().SetMatrix(world);
lock (this)
{
    device.InputAssembler.SetInputLayout(layout);
    device.InputAssembler.SetPrimitiveTopology(PrimitiveTopology.TriangleList);
    device.InputAssembler.SetIndexBuffer(indices, Format.R32_UInt, 0);
    device.InputAssembler.SetVertexBuffers(0, binding);
    PanelEffect.GetTechniqueByIndex(0).GetPassByIndex(0).Apply();
    device.DrawIndexed(indexCount, 0, 0);
    // Unbind buffers so the next object starts from a clean input-assembler state.
    device.InputAssembler.SetIndexBuffer(null, Format.Unknown, 0);
    device.InputAssembler.SetVertexBuffers(0, nullBinding);
}
Edit: After much debugging and code isolation, I found that the culprit is Font.Draw() in my DrawString() function:
internal void DrawString(string text)
{
    sprite.Begin(SpriteFlags.None);
    string[] texts = text.Split(new string[] { "\r\n" }, StringSplitOptions.None);
    int y = PanelY;
    foreach (string t in texts)
    {
        font.Draw(sprite, t, new System.Drawing.Rectangle(PanelX, y, PanelSize.Width, PanelSize.Height), FontDrawFlags.SingleLine, new Color4(Color.Red));
        y += font.Description.Height;
    }
    sprite.End();
}
Commenting out Font.Draw() solves the problem. Maybe it automatically sets some render state that causes the next mesh draw to discard depth. Looking into SlimDX's source code now.

After much debugging in PIX, this is the conclusion:
Calling Font.Draw() automatically sets DepthEnable to false and DepthFunction to D3D10_COMPARISON_NEVER. I confirmed this by comparing PIX's OutputMerger details from before and after the Font.Draw() call.
Solution
Context10_1.Device.OutputMerger.DepthStencilState = depthStencilState;
Putting that before the next mesh's draw call fixed the problem. Previously I had only set the DepthStencilState in OnRenderBegin().
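For completeness, here is a minimal sketch of how such a depth-enabled state can be created and restored. The depthStencilState, Context10_1, and DrawString names come from the code above; the description values are typical defaults I'm assuming, not values from the original project:

    // Build a depth-enabled state once at startup (assumed settings).
    var desc = new DepthStencilStateDescription
    {
        IsDepthEnabled = true,
        DepthWriteMask = DepthWriteMask.All,
        DepthComparison = Comparison.Less
    };
    depthStencilState = DepthStencilState.FromDescription(Context10_1.Device, desc);

    // Font.Draw() silently disables depth testing, so restore the state
    // before any mesh draw that follows text rendering.
    DrawString("some text");
    Context10_1.Device.OutputMerger.DepthStencilState = depthStencilState;
    // ...next mesh draw call here...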

Related

How to determine if an object stopped moving or left the frame

Every tutorial, sample, or blog I have read shows various ways to track a moving object in a frame, as long as it is moving. This has become ubiquitous.
What I have been trying to figure out is how to determine whether the object stopped moving or actually left the frame. When using background separation, an object that stops moving gets absorbed into the background model and as such "disappears". It "reappears" when it moves again. As far as I can tell, the same behavior occurs when an object leaves the frame: it just "disappears". For example, the following code fragment demonstrates this:
BackgroundSubtractorMOG2 _fgDetector = new BackgroundSubtractorMOG2();
CvBlobDetector _blobDetector = new CvBlobDetector();
CvTracks _tracker = new CvTracks();
CvBlobs _blobs = new CvBlobs();
private int FindAndTrack()
{
    CvInvoke.GaussianBlur(_analyzeMat, _smoothedFrame, new System.Drawing.Size(3, 3), 1);

    #region use the BG/FG detector to find the foreground mask
    _fgDetector.Apply(_smoothedFrame, _foregroundMask);
    #endregion use the BG/FG detector to find the foreground mask

    _blobDetector.Detect(_foregroundMask.ToImage<Gray, byte>(), _blobs);
    _blobs.FilterByArea(_minimumAreaValue, int.MaxValue);
    _tracker.Update(_blobs, 0.01 * _scaleValue, 1, 5);
    return _tracker.Count;
}
I am no longer sure that background separation is the answer.
What would give a definitive indication of an object leaving the frame?
Thanks,
Doug
Use the result of _tracker.Update() as the condition of an if statement; if the condition fails, your object of interest has left the frame.
If you want to detect whether the object has moved, compare the x and y values of its bounding box with the previous frame's x and y values. If the values are the same, the object has stopped moving; otherwise it has moved (see the sketch below).
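A minimal sketch of that bookkeeping, building on the Emgu CV types from the question. The _previousBoxes dictionary is my own assumed state, and I'm assuming CvTracks enumerates as id/track pairs with a BoundingBox on each track, which matches Emgu CV's blob-tracking types as far as I know:

    // Maps track id -> bounding box seen in the previous frame (assumed bookkeeping).
    Dictionary<uint, Rectangle> _previousBoxes = new Dictionary<uint, Rectangle>();

    private void CheckTracks(CvTracks tracker)
    {
        foreach (var pair in tracker)
        {
            Rectangle box = pair.Value.BoundingBox;
            Rectangle previous;
            if (_previousBoxes.TryGetValue(pair.Key, out previous) && previous == box)
            {
                // Same bounding box as last frame: the object has stopped moving.
            }
            _previousBoxes[pair.Key] = box;
        }
        // Any id still in _previousBoxes but absent from the tracker has left the
        // frame (or stopped long enough to be absorbed into the background model).
    }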

Endless scrolling over a 3D map

I have some experience with Metal and quite a bit with Unity, and I am familiar with setting up meshes, buffers, and the backing data for drawing, but not so much with the math/shader side. What I'm struggling with is how to get an endlessly scrolling world, so that if I pan far to the right side I can see the left side and keep going.
The application of this would be a seamless terrain that a player could scroll in any direction forever and have it just wrap.
I don't want to duplicate everything on draw and offset it, that seems horrendously inefficient. I am hoping for a way to either use some magic matrix math or some sort of shader to get things wrapping/drawing where they should when panning the map. I've searched all over for some sort of guide or explanation of how to get this working but haven't come up with anything.
I know a lot of old (DOS) games did this somehow; is it still possible? Is there a reason the industry seems to have migrated away from this type of scrolling (bounding to edges vs. wrapping)?
I have created a simple example demonstrating what you're looking for (I think).
The basic idea is that you draw the map in a repeating grid, using the drawPrimitives(type:vertexStart:vertexCount:instanceCount:) method on MTLRenderCommandEncoder. For the instance count, pass the number of identical copies of the map you want to draw, extending the grid far enough that you never see where it ends. In my example I used a simple 5x5 grid.
So that the user never sees the edge of the map, we calculate their position modulo 1 (or whatever the size of your map is):
func didDrag(dx: CGFloat, dy: CGFloat) {
    // Move user position on drag, adding 1 to not get below 0
    x += Float(dx) * draggingSpeed + 1
    z += Float(dy) * draggingSpeed + 1
    x.formTruncatingRemainder(dividingBy: 1)
    z.formTruncatingRemainder(dividingBy: 1)
}
This is how it looks (screenshot omitted).
Just a follow-up on what I actually implemented. First, I essentially have an array of x,y points with altitude, terrain type, and all that jazz. Using some simple % operations and additions/subtractions, it is trivial to get the nodes around a point to generate triangles (a sketch of the wrapped lookup follows below).
On a draw I calculate the first and last visible points and work out the groups of triangles shown between them. The first/last points take wrapping into account, so it is then pretty trivial to have an endlessly wrapping world. For each group, a translation offset is passed via a uniform matrix, which positions that section where it belongs.
I set it via renderEncoder.setVertexBytes(&uniform, length:..., offset:...)
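The wrap-around lookup itself is language-agnostic. Here is a minimal sketch in C# (the same modulo trick works in Swift); the Node type, grid size n, and row-major nodes array are illustrative assumptions, not the poster's actual data structures:

    // Returns the map node at (x, z), wrapping both indices around an n-by-n grid.
    Node GetWrappedNode(int x, int z, int n, Node[] nodes)
    {
        // ((v % n) + n) % n keeps an index in [0, n) even when v is negative.
        int wx = ((x % n) + n) % n;
        int wz = ((z % n) + n) % n;
        return nodes[wz * n + wx];
    }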

Edge detection on pool table

I am currently working on an algorithm to detect the playing area of a pool table. For this purpose, I captured an image, transformed it to grayscale, and used a Sobel operator on it. Now I want to detect the playing area as a box with 4 corners located in the 4 corners of the table.
Detecting the edges of the table is quite straightforward; however, it turns out that detecting the 4 corners is not so easy, as there are pockets in the pool table. Now I just want to fit a line to each of the side edges, and from those lines I can compute the intersections, which are the corners of my table.
I am stuck here because I have not yet come up with a good solution to find these lines in my image. I can see them very easily once I use the Sobel operator. But what would be a good way of detecting them and computing the positions of the corners?
EDIT: I added some sample images (not reproduced here): the basic image, the grayscale image, and the output of a horizontal-only Sobel filter.
For a general solution, there will be many sources of noise: problems with cloth around the rails, wood texture (or no texture) on the rails, varying lighting, shadows, stains on the cloth, chalk on the rails, and so on.
When color and lighting aren't dependable, and when you want to find the edges of geometric objects, then it's best to think in terms of edge pixels rather than gray/color pixels.
A while back I was thinking of making a phone-based app to save ball positions for later review, including online, so I've thought a bit about this problem. Although I can provide some guidance for your current question, it occurs to me you'll run into new problems at each step of the way, so I'll try to provide a more complete answer.
1. Convert the image to grayscale. If we can't get an algorithm to work in grayscale, we'll inevitably run into problems with color. (See below.)
2. [TBD] Do some preprocessing to reduce noise.
3. Find edge points using Sobel or (if you must) Canny.
4. Run Hough line detection, but with a few caveats and parameterizations as described below (a sketch of steps 1-4 follows this list).
5. Find the lines that describe a keystone-shaped quadrilateral. (This will likely be the inner of two quadrilaterals: one inside the rail on the bed, and a slightly larger one at the cloth/wood rail edge on top.)
6. (Optional) Use the side pockets to help determine the orientation of the quadrilateral.
7. Use a perspective transform to map the distorted table bed to a rectangle of [thankfully] known relative dimensions. We know the bed sizes in advance, so you can remap the distorted quadrilateral to a proper rectangle. (We'll ignore some optical effects for now.)
8. Remap the color image to the perspective-corrected rectangle. You'll probably need to tweak the positions of some balls.
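A minimal sketch of steps 1-4 using Emgu CV (the same OpenCV wrapper used in the tracking question above), since the original answer names no library; the file name and all thresholds are placeholder assumptions to tune:

    // Grayscale -> blur -> edges -> probabilistic Hough lines (steps 1-4, sketched).
    Mat color = CvInvoke.Imread("table.jpg", ImreadModes.Color); // hypothetical input
    Mat gray = new Mat();
    CvInvoke.CvtColor(color, gray, ColorConversion.Bgr2Gray);
    CvInvoke.GaussianBlur(gray, gray, new System.Drawing.Size(5, 5), 1.5); // noise reduction
    Mat edges = new Mat();
    CvInvoke.Canny(gray, edges, 50, 150); // thresholds need tuning per scene
    LineSegment2D[] lines = CvInvoke.HoughLinesP(
        edges,
        1,             // rho: distance resolution in pixels
        Math.PI / 180, // theta: angle resolution in radians
        80,            // threshold: minimum votes; tune for your images
        100,           // minLineLength: skip short fragments from pockets and balls
        10);           // maxGap: bridge small breaks along a rail edge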
General notes:
Filtering by color in the general sense can be difficult. It's tempting to think of the cloth as being simply green, blue, or red (or some other color), but when you look at the actual RGB values and try to separate colors you'll begin to appreciate what a nightmare working in color can be.
Optical distortion might throw off some edges.
The far short rail may be difficult to detect, BUT you can do this: find the inside lines of the two long rails, then search vertically between them for the first strong horizontal-ish edge at the far side of the image. That will be the far short rail.
Although you probably want to use your phone camera for convenience, using a Kinect camera or similar (preferably smaller) device would make the problem easier. Not only would you have both color data and 3D data, but you would eliminate some problems with lighting since the depth data wouldn't depend on visible lighting.
For your app, consider limiting the search region for rail edges to a perspective-distorted rectangle. The user might be able to adjust the search region. This could greatly simplify the processing, and could help you work around problems if the table isn't lit well (as can be the case).
If color segmentation (as suggested by @Dima) works, get the outline of the blob using contour following. Then simplify the outline to a quadrilateral (or a polygon of few sides) with the Douglas-Peucker algorithm. You should find the four table edges this way.
For more accuracy, you can refine the edge locations by locally searching for transitions across them and performing line fitting. Then intersect the lines to get the corners (see the sketch below).
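A minimal sketch of that last step, intersecting two fitted lines each given in point-plus-direction form (as produced by a typical line fit). The math is standard; none of the names come from the thread:

    // Intersects two 2D lines P + t*D and Q + s*E; returns false if (nearly) parallel.
    static bool IntersectLines(
        double px, double py, double dx, double dy, // line 1: point P, direction D
        double qx, double qy, double ex, double ey, // line 2: point Q, direction E
        out double ix, out double iy)
    {
        double cross = dx * ey - dy * ex; // 2D cross product D x E
        ix = iy = 0;
        if (Math.Abs(cross) < 1e-9) return false; // parallel: no corner here
        // Solve P + t*D = Q + s*E for t by crossing both sides with E.
        double t = ((qx - px) * ey - (qy - py) * ex) / cross;
        ix = px + t * dx;
        iy = py + t * dy;
        return true;
    }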
The following answer assumes you have already found the positions of the lines in the image. This, however, can be done "easily" by looking directly at the pixels and seeing whether they form a "line". It is usually easier if the image has been deskewed first, i.e. rotated so the rectangle (pool table) looks more like [] than /=/. Then it is just a case of scanning the pixels and, where there are pixels of similar colour alongside one another, assuming a line runs between them.
The code works by looping over the lines found in the image. Whenever the end points of a horizontal and a vertical line fall within a tolerance of each other in both the x and y coordinates, the pair is marked as a corner. Once a corner is found, I take the average of the two end points to decide where the corner lies. For example:
A horizontal line ending at 10, 10 and a vertical line starting at 12, 12 will be found to be a corner if there is a tolerance of 2 or more. The corner found will be at: 11, 11.
NOTE: This only finds top-left corners, but it can easily be adapted to find all of them. It is done this way because, in the application where I use it, it is faster to sort each array first into an order where relevant values will be found first; see: Why is processing a sorted array faster than an unsorted array?.
Also note that my code finds only the first corner for each line, which might not be applicable for you; this is mainly for performance reasons. However, the code can easily be adapted to find all the corners for all the lines and then either select the "most likely" corner or average over them.
Also note my answer is written in C#.
private IEnumerable<Point> FindTopLeftCorners(IEnumerable<Line> horizontalLines, IEnumerable<Line> verticalLines)
{
    List<Point> TopLeftCorners = new List<Point>();
    Line[] laHorizontalLines = horizontalLines.OrderBy(l => l.StartPoint.X).ThenBy(l => l.StartPoint.Y).ToArray();
    Line[] laVerticalLines = verticalLines.OrderBy(l => l.StartPoint.X).ThenBy(l => l.StartPoint.Y).ToArray();
    foreach (Line verticalLine in laVerticalLines)
    {
        foreach (Line horizontalLine in laHorizontalLines)
        {
            if (verticalLine.StartPoint.X <= (horizontalLine.StartPoint.X + _nCornerTolerance) && verticalLine.StartPoint.X >= (horizontalLine.StartPoint.X - _nCornerTolerance))
            {
                if (horizontalLine.StartPoint.Y <= (verticalLine.StartPoint.Y + _nCornerTolerance) && horizontalLine.StartPoint.Y >= (verticalLine.StartPoint.Y - _nCornerTolerance))
                {
                    // Both start points agree within the tolerance: average them to get the corner.
                    int nX = (verticalLine.StartPoint.X + horizontalLine.StartPoint.X) / 2;
                    int nY = (verticalLine.StartPoint.Y + horizontalLine.StartPoint.Y) / 2;
                    TopLeftCorners.Add(new Point(nX, nY));
                    break;
                }
            }
        }
    }
    return TopLeftCorners;
}
Where Line is the following class:
public class Line
{
    public Point StartPoint { get; private set; }
    public Point EndPoint { get; private set; }

    public Line(Point startPoint, Point endPoint)
    {
        this.StartPoint = startPoint;
        this.EndPoint = endPoint;
    }
}
And _nCornerTolerance is a configurable int tolerance.
A playing area of a pool table typically has a distinctive color, like green or blue. I would try a color-based segmentation approach first. The Color Thresholder app in MATLAB gives you an easy way to try different color spaces and thresholds.
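If you'd rather prototype the same idea in code, a minimal color-threshold sketch with Emgu CV (not MATLAB) might look like the following; the HSV bounds are placeholder guesses for a green cloth and would need tuning:

    // Segment the cloth by color in HSV space (threshold values are guesses).
    Mat bgr = CvInvoke.Imread("table.jpg", ImreadModes.Color); // hypothetical input
    Mat hsv = new Mat();
    CvInvoke.CvtColor(bgr, hsv, ColorConversion.Bgr2Hsv);
    Mat mask = new Mat();
    CvInvoke.InRange(hsv,
        new ScalarArray(new MCvScalar(40, 60, 60)),   // lower hue/saturation/value bound
        new ScalarArray(new MCvScalar(85, 255, 255)), // upper hue/saturation/value bound
        mask);
    // The largest connected blob in 'mask' should be the playing area.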

Locating a change between two images

I have two images that are similar, but one has an additional change on it. What I need to be able to do is locate the change between the two images. Both images have white backgrounds, and the change is a line being drawn. I don't need anything as complex as OpenCV; I'm looking for a "simple" solution in C or C++.
If you just want to show the differences, you can use the code below.
FastBitmap original = new FastBitmap(bitmap);
FastBitmap overlay = new FastBitmap(processedBitmap);
// Subtract the overlay from the original to see just the differences.
Subtract sub = new Subtract(overlay);
sub.applyInPlace(original);
// Show the results
JOptionPane.showMessageDialog(null, original.toIcon());
To compare two images, you can use the ObjectiveFidelity class in the Catalano Framework.
The Catalano Framework is in Java, so you can port this class to another LGPL project.
https://code.google.com/p/catalano-framework/
FastBitmap original = new FastBitmap(bitmap);
FastBitmap reconstructed = new FastBitmap(processedBitmap);
ObjectiveFidelity of = new ObjectiveFidelity(original, reconstructed);
int error = of.getTotalError();
double errorRMS = of.getErrorRMS();
double snr = of.getSignalToNoiseRatioRMS();
//Show the results
Disclaimer: I am the author of this framework, but I thought this would help.
Your description leaves me with a few unanswered questions. It would be good to see some example before/after images.
However, on the face of it, assuming you just want to find the parameters of the added line, it may be enough to convert the frames to grey-scale, subtract them from one another, segment the result to black & white, and then perform line segment detection.
If the resulting image only contains one straight line segment, then it might be enough to find the bounding box around the remaining pixels, with a simple check to determine which of the two possible line segments you have.
However it would probably be simpler to use one of the Hough Transform methods provided by OpenCV.
You can use memcmp(), the ANSI C function for comparing two memory blocks, much like strcmp(). Just run it on the arrays of pixels; it returns whether they are identical or not.
With a little tweak, you can instead get a pointer to the place in the memory block where the first change occurred. That gives you a pointer to the first differing pixel; you can then walk along its neighbors to find all the non-white pixels (representing your line).
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

bool AreImagesDifferent(const char* Im1, const char* Im2, const int size){
    // memcmp returns non-zero when the blocks differ.
    return memcmp(Im1, Im2, size) != 0;
}

const char* getFirstDifferentPixel(const char* Im1, const char* Im2, const int size){
    const char* Im1end = Im1 + size;
    for (; Im1 < Im1end; Im1++, Im2++){
        if ((*Im1) != (*Im2))
            return Im1;
    }
    return NULL; // no difference found
}

HLSL - How can I set sampler Min/Mag/Mip filters to disable all filtering/anti-aliasing?

I have a tex2D sampler that I want to return only precisely those colours that are present on my texture. I am using Shader Model 3, so I cannot use load.
In the event of a texel overlapping multiple colours, I want it to pick one and have the whole texel be that colour.
I think to do this I want to disable mipmapping, or at least trilinear filtering of mips.
sampler2D gColourmapSampler : register(s0) = sampler_state {
    Texture = <gColourmapTexture>; // Defined above
    MinFilter = None; // Controls sampling. None, Linear, or Point.
    MagFilter = None; // Controls sampling. None, Linear, or Point.
    MipFilter = None; // Controls how the mips are generated. None, Linear, or Point.
    //...
};
My problem is that I don't really understand Min/Mag/Mip filtering, so I am not sure what combination I need to set these to, or if this is even what I am after.
What a portion of my source texture looks like (image omitted);
Screenshot of what the relevant area looks like after the full texture is mapped to my sphere (image omitted);
The anti-aliasing/blending/filtering artefacts are clearly visible; I don't want these.
MSDN has this to say:
D3DSAMP_MAGFILTER: Magnification filter of type D3DTEXTUREFILTERTYPE
D3DSAMP_MINFILTER: Minification filter of type D3DTEXTUREFILTERTYPE.
D3DSAMP_MIPFILTER: Mipmap filter to use during minification. See D3DTEXTUREFILTERTYPE.
D3DTEXF_NONE: When used with D3DSAMP_MIPFILTER, disables mipmapping.
Another good link on understanding hlsl intrinsics.
RESOLVED
Not an HLSL issue at all! Sorry, all. I seem to ask a lot of questions that are impossible to answer. Ogre was overriding the above settings. This was fixed with:
Ogre::MaterialManager::getSingleton().setDefaultTextureFiltering(Ogre::FO_NONE , Ogre::FO_NONE, Ogre::FO_NONE);
What it looks like to me is that you're getting the values from a lower-level (unfiltered) mip map than the highest-detail one you're showing.
MipFilter = None
should prevent that, unless something in the code overrides it. So look for calls to SetSamplerState.
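For reference, this is roughly what such an override looks like through the D3D9 sampler-state API; shown here via SlimDX in C# (the wrapper used in the first question above) and assuming a SlimDX.Direct3D9.Device named device, though the same states exist in native D3D9 and in Ogre:

    // Force point sampling and disable mip filtering on sampler 0.
    // A later call like this would silently override the effect's sampler_state.
    device.SetSamplerState(0, SamplerState.MinFilter, TextureFilter.Point);
    device.SetSamplerState(0, SamplerState.MagFilter, TextureFilter.Point);
    device.SetSamplerState(0, SamplerState.MipFilter, TextureFilter.None);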
What you have done should turn off filtering. There are two potential issues I can think of, though:
1) The driver just ignores you and filters anyway (if this is happening, there is nothing you can do).
2) You have some form of edge anti-aliasing enabled.
Looking at your resulting image, that doesn't look much like bilinear filtering to me, so I'd think you are suffering from having anti-aliasing turned on somewhere. Have you set the anti-aliasing flag when you create the device/render-texture?
If you really want just one texel, use load instead of sample. load takes (as far as I know) an int2 as an argument, which specifies the actual array coordinates in the texture. load then looks up the entry in your texture at the given array coordinates.
So, just scale your float2, e.g. by using ceil(float2(texCoord.x * textureWidth, texCoord.y * textureHeight)).
MSDN for load: http://msdn.microsoft.com/en-us/library/bb509694(VS.85).aspx
When using just Shader Model 3, you could use a little hack to achieve this. Again, let's assume that you know textureWidth and textureHeight.
// compute floating point stride for texture
float step_x = 1. / textureWidth;
float step_y = 1. / textureHeight;
// compute texel array coordinates (truncation gives the texel index)
int target_x = texCoord.x * textureWidth;
int target_y = texCoord.y * textureHeight;
// snap to the centre of that texel so a bilinear lookup cannot blend in neighbours
float2 texCoordNew;
texCoordNew.x = (target_x + 0.5) * step_x;
texCoordNew.y = (target_y + 0.5) * step_y;
I did not test it, but I think it could work.
