Shadow map artifacts - XNA

We are working with XNA (MonoGame) and are currently trying to implement shadow maps. I realize that shadow map issues have already been covered extensively on the Internet, but we can't quite match our particular problem to the existing solutions.
Below is a screenshot of the game:
[screenshot: severe aliasing]
On the left is the depth texture; on the right, our shadow maps applied.
As you can see, there are severe aliasing problems. We considered using cascaded shadow maps at first, but then realized they wouldn't help, since the camera (eye) is at roughly the same Z distance from all the objects in the environment, so we cannot split the view frustum into multiple subfrusta.
Below is a setup that yields no substantial aliasing:
[screenshot: moderate aliasing]
This makes sense: the farther the object is from the light source, the more screen pixels get mapped to the same shadow-map texel. In the latter case the object is closer to the light source, which reduces the aliasing.
Increasing the resolution of the shadow map gives better results, but we were wondering whether there is another way to mitigate this problem.
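One standard mitigation worth mentioning here, independent of shadow-map resolution, is percentage-closer filtering (PCF): instead of a single hard depth comparison, several comparisons around the projected texel are averaged, which softens the stair-stepping. Below is a minimal CPU-side NumPy sketch of the idea only; in XNA/MonoGame this would live in the HLSL pixel shader, and the depth map and values here are made up purely for illustration.

import numpy as np

def pcf_shadow(shadow_map, u, v, receiver_depth, bias=0.002, kernel=1):
    """Percentage-closer filtering: compare the receiver depth against a small
    neighbourhood of shadow-map texels and return the average visibility in
    [0, 1], instead of a hard 0/1 result from a single texel."""
    h, w = shadow_map.shape
    x = int(u * (w - 1))
    y = int(v * (h - 1))
    visible = []
    for dy in range(-kernel, kernel + 1):
        for dx in range(-kernel, kernel + 1):
            sx = min(max(x + dx, 0), w - 1)
            sy = min(max(y + dy, 0), h - 1)
            # 1.0 if this texel says the receiver is lit, 0.0 if occluded.
            visible.append(1.0 if receiver_depth - bias <= shadow_map[sy, sx] else 0.0)
    return sum(visible) / len(visible)

# Made-up 4x4 depth map and receiver values, purely for illustration.
shadow_map = np.array([[0.2, 0.2, 0.9, 0.9],
                       [0.2, 0.2, 0.9, 0.9],
                       [0.2, 0.2, 0.9, 0.9],
                       [0.2, 0.2, 0.9, 0.9]])
print(pcf_shadow(shadow_map, u=0.5, v=0.5, receiver_depth=0.5))  # soft value between 0 and 1

Instead of a hard shadow edge jumping from one texel to the next, the result fades over the kernel, which hides much of the stair-stepping without touching the shadow map size.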
We would greatly appreciate your help!
Cheers

Related

how to use mat3.normalFromMat4

I'm trying to implement a moving light source in WebGL.
I understand that the normalMatrix is the key to managing the lighting equation in the fragment shader, but I am not winning the battle to set it up correctly. The only tutorial I can find is the excellent "Introduction to Computer Graphics", and its author says:
It turns out that you need to drop the fourth row and the fourth column and then take something called the "inverse transpose" of the resulting 3-by-3 matrix. You don't need to know what that means or why it works.
I think I do need to understand this to really master this baby.
So I'm looking for guidance on how and why to use mat3.normalFromMat4.
(PS. I have achieved the moving light source using Three.js, but its handling of texture maps degrades the images too much for my application. In WebGL I can achieve the desired resolution.)
For me, from reading this discussion, the answer appears to be simple: mat3.normalFromMat4 is only required if the model transform scales the object non-uniformly (i.e. more in one dimension than the others) or shears it after the normals have been computed. For pure rotations, translations and uniform scaling, the plain upper-left 3x3 of the model-view matrix works (followed by renormalization).
Since I'm not doing that, it's a non-issue.
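To see why the inverse transpose matters at all, here is a minimal NumPy sketch; the model matrix, tangent and normal below are made up for illustration. It shows that transforming a normal with the plain upper-left 3x3 of a non-uniformly scaled matrix breaks its perpendicularity to the surface, while the inverse transpose (which is what mat3.normalFromMat4 computes) preserves it.

import numpy as np

# Hypothetical model matrix: non-uniform scale (2, 1, 1) -- illustration only.
model = np.diag([2.0, 1.0, 1.0, 1.0])

upper3x3 = model[:3, :3]
normal_matrix = np.linalg.inv(upper3x3).T   # what mat3.normalFromMat4 computes

# A surface whose tangent is (1, -1, 0) has normal (1, 1, 0) (normalized).
tangent = np.array([1.0, -1.0, 0.0])
normal = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)

transformed_tangent = upper3x3 @ tangent

naive = upper3x3 @ normal            # wrong under non-uniform scale
correct = normal_matrix @ normal     # stays perpendicular to the surface

print(np.dot(transformed_tangent, naive))    # != 0 -> no longer a valid normal
print(np.dot(transformed_tangent, correct))  # == 0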

OpenCV - Detect rough, hand-drawn circles with obstructions

I've been trying to extract hand-drawn circles from a document for a while now, but none of my attempts has the level of consistency I need.
[Process album]
The problem I keep running into is that when two "circles" are too close together they merge into a single contour, which ruins my attempt to detect whether a contour is curved. I'm sure there must be a better way to extract these circles, but their imperfection and inconsistency are really stumping me.
I've tried many other ways to single out the curves, the most accurate of which is:
Rather than using dilation to bridge the gap between the segmented contours, find the endpoints and attempt to continue each curve until it hits another contour.
Problem: I can't effectively find the turning points of the contours; otherwise this would be my preferred method.
I apologize if this question is deemed "too specific", but I feel like Computer Vision stuff like this can always be applied elsewhere.
Thanks ahead of time for any and all help, I'm about at the end of my rope here.
EDIT: I've just realized the album wasn't working correctly, I think it should be fixed now though.
This looks like a very challenging problem, so it is quite likely that the suggestions below won't work very well in practice.
To make the problem easier, I would first try to remove as much of the other content from the image as possible.
If the document template is always the same, it might be worth removing the horizontal and vertical lines along with the grayed areas. For example, given an empty template, subtract it from the document you are processing. It might also be possible to get rid of the text. This would leave an image containing only parts of the hand-drawn circles.
On such an image, detecting circles or ellipses with the Hough transform might give some results (although the shapes may be far from perfect circles or ellipses).
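A minimal OpenCV (Python) sketch of that idea, assuming the empty template is available as a separate, aligned scan; the file names, threshold and Hough parameters are made up and would need tuning for the real documents.

import cv2
import numpy as np

# Hypothetical file names -- both scans must be aligned to the same template.
doc = cv2.imread("filled_document.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("empty_template.png", cv2.IMREAD_GRAYSCALE)

# Remove the printed template (lines, grayed areas) by differencing.
diff = cv2.absdiff(doc, template)

# Keep only strong differences, i.e. the hand-drawn strokes.
_, strokes = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
strokes = cv2.medianBlur(strokes, 5)

# Hough transform for circles; the drawn shapes are far from perfect circles,
# so the accumulator threshold (param2) has to be fairly permissive.
circles = cv2.HoughCircles(
    strokes, cv2.HOUGH_GRADIENT, dp=1.5, minDist=40,
    param1=100, param2=25, minRadius=10, maxRadius=80)

if circles is not None:
    for x, y, r in np.around(circles[0]).astype(int):
        cv2.circle(doc, (int(x), int(y)), int(r), 0, 2)  # mark detections on the scan
cv2.imwrite("detected.png", doc)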

Normal mapping in the SceneKit editor

I'm trying to assign a normal map to my geometry object in the SceneKit editor. I picked a random rainbow image (this is my first time doing normal maps) and assigned it to the normal property (as an image). This is what I got.
And this is my Xcode setup:
This really isn't the effect I was hoping for. I imagined some parts of my cube would stick out and some would be indented. There should also be visible shadows, since I checked the casts shadows property of my spotlights.
EDIT: I found this webpage for making normal maps (link), but the results are still disappointing. If you look at it from an appropriate angle you can see that there is no indentation and nothing sticks out of the cube. Not sure if my expectations are too high, though...
A normal map encodes the direction a surface points, not how far it is displaced from a reference surface. It only adds surface detail when lighting is involved (including techniques like cubemap reflections).
What you're describing is displacement mapping, which actually moves the surface using a height map. Parallax mapping, which combines a normal map with a height map, is a good middle ground between normal mapping and displacement mapping on today's hardware.
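As an aside, a usable normal map is usually generated from a grayscale height map rather than an arbitrary image (which is what the rainbow picture was). Below is a minimal NumPy/OpenCV sketch of that conversion, in case it helps when authoring maps; the file names and the strength factor are placeholders.

import cv2
import numpy as np

# Hypothetical input: a grayscale height map (white = high, black = low).
height = cv2.imread("height_map.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

strength = 2.0  # exaggerates the slopes; tune to taste

# Slopes of the height field in x and y.
dx = cv2.Sobel(height, cv2.CV_32F, 1, 0, ksize=3)
dy = cv2.Sobel(height, cv2.CV_32F, 0, 1, ksize=3)

# Per-pixel tangent-space normal proportional to (-dx, -dy, 1/strength), normalized.
nz = np.ones_like(height) / strength
length = np.sqrt(dx * dx + dy * dy + nz * nz)
normal = np.dstack((-dx, -dy, nz)) / length[..., None]

# Pack [-1, 1] into [0, 255]; OpenCV writes BGR, normal maps are RGB, so flip channels.
rgb = ((normal * 0.5 + 0.5) * 255).astype(np.uint8)
cv2.imwrite("normal_map.png", rgb[..., ::-1])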

Identifying Polygons Checked in Depth Test (DX11)

I'm a fairly extreme newbie to the world of graphics programming, so forgive me if this question has an obvious answer, but I haven't managed to track it down.
When the depth test is being performed in DirectX 11, how do you identify which triangles are being tested at any given time? I'm trying to manually tell the depth test to fail unless specific polygons are being viewed through specific polygons (i.e., you can see one set of geometry through only one surface, and another set only through another surface).
I guess the real question is, when a pixel is being tested, how does the system reference the data for that pixel (position, color, etc.)? Is there just something I'm completely overlooking?
For the time being, I'm not interested in alternate solutions to the greater problem, just an answer to this particular question. Thanks for any help!

Image interpolation - creating an intermediate image?

We are having a debate/punch-up about image processing and were wondering if anyone could help.
Is it possible to take a picture of an object at 0 degrees (front on) and another at 45 degrees, and interpolate the two images to create an intermediate image of the subject at 22.5 degrees?
Has anything like this been done before? I'm pretty sure it can be done; my colleague says not.
thanks,
kay
It has been done for rendering purposes, in order to predict parts of the image instead of rendering the full image. I am only aware of academic implementations, none in a particular product.
There is an inherent problem with this approach, however: holes, or incomplete information.
Unless the displayed object is as simple as a sphere, there will probably be parts of it that are visible at 22.5 degrees but not at 0 or 45 degrees. You cannot interpolate those parts, because they are obscured by other geometry in both source views, so you are simply missing that visual information.
Just rotate a nontrivial object, like a teacup or a donut, and you should see what I mean.
The issue gets worse when dealing with complex scenes with multiple objects.
There are also several approaches to mitigating the issue, such as using multiple shifted cameras, or detecting the holes visually and adjusting the number and position of the cameras accordingly, but none of them guarantees a complete absence of artifacts.
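For the simple two-view case, a naive 2D morph can be sketched with OpenCV's dense optical flow. This ignores the occlusion problem described above (holes just get smeared over), and the file names and Farneback parameters are placeholders, but it shows the basic mechanics of producing an in-between image.

import cv2
import numpy as np

# Hypothetical inputs: photos of the same object at 0 and 45 degrees.
img0 = cv2.imread("view_0deg.png")
img45 = cv2.imread("view_45deg.png")

gray0 = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
gray45 = cv2.cvtColor(img45, cv2.COLOR_BGR2GRAY)

# Dense optical flow from the 0-degree view to the 45-degree view.
flow = cv2.calcOpticalFlowFarneback(gray0, gray45, None,
                                    0.5, 3, 25, 3, 5, 1.2, 0)

t = 0.5  # halfway between the views -> roughly 22.5 degrees
h, w = gray0.shape
grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))

# Warp each source image part of the way along the flow, then blend.
warp0 = cv2.remap(img0,
                  grid_x - t * flow[..., 0],
                  grid_y - t * flow[..., 1],
                  cv2.INTER_LINEAR)
warp45 = cv2.remap(img45,
                   grid_x + (1 - t) * flow[..., 0],
                   grid_y + (1 - t) * flow[..., 1],
                   cv2.INTER_LINEAR)

mid = cv2.addWeighted(warp0, 1 - t, warp45, t, 0)
cv2.imwrite("view_22_5deg.png", mid)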
