360 FOV depth buffer by topology and 2D shadows with multiple lights - XNA

I just implemented the idea with multiple lights (the single-light version is here: 360 FOV depth buffer by topology and 2D shadows), but I'm not sure whether it is rendered correctly: http://www.youtube.com/watch?v=bFhDiZIHlYQ . I render the scene once per light to the screen with GraphicsDevice.BlendState = BlendState.Additive; so the per-light results are simply added together.
So the question is: does this look correct or not?

To answer the question: sorry, no, but let me explain.
The human eye is logarithmic: to perceive something as twice as bright, we need to square the amount of light coming into our eyes. The same goes for sound.
Yet the RGB values of the screen are linear: RGB 128,128,128 is twice as bright as 64,64,64. It has to be linear, or our shading algorithms would not work; the falloff would be too quick.
Well, no, our calculations are wrong, but the people who manufacture our screens know this and correct for it as best they can.
Rather than me explaining further, watch this on YouTube: Computer Color is Broken.
So to get the correct result you need to create the correct mix between the two renders. There is no blend state that solves this, so you will have to write a custom pixel shader to do the job: output.color = sqrt( pow( input1.color, 2 ) + pow( input2.color, 2 ) );
It is very subtle, but change the two light sources' colours and then switch between linear and logarithmic blending, and you will wonder how you ever put up with the broken rendering output in the first place.
I do all rendering and lighting calculations as photon counts, squaring input colours and square rooting output colours.
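For illustration only, here is a minimal CPU-side C# sketch of that photon-count blend for a single 8-bit channel, assuming the square / square-root approximation of gamma described above (the method name is made up for the example):
// Sketch: add two additively-lit renders in linear light instead of in
// gamma-encoded values. Assumes 8-bit channels and gamma ~2 (an approximation).
static byte BlendLinearLight(byte a, byte b)
{
    double linear = Math.Pow(a / 255.0, 2) + Math.Pow(b / 255.0, 2); // decode and add "photons"
    double encoded = Math.Sqrt(Math.Min(linear, 1.0));               // clamp at white, re-encode
    return (byte)Math.Round(encoded * 255.0);
}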
What about alpha? Yeah, I am not sure about that.

Related

Perlin noise, how to detect bright/dark areas?

I need some help with Perlin noise.
I want to create a random terrain generator. I am using Perlin noise to determine where the mountains and the sea should go. From the random noise, I get something like this:
http://prntscr.com/df0rqp
Now, how can I actually detect where the brighter and darker areas are?
I tried using display.colorSample which returns the RGB color and alpha of the pixel, but this doesn't really help me much.
If it were only white and red, I could easily detect where the bright areas are (white would be a large value, while red would be a small one) and vice versa.
However, since I have red, green and blue, this makes it a harder job.
To sum up, how can I detect where the white and where the red areas are?
You have a fundamental misunderstanding here. The Perlin noise function really only goes from (x, y) -> p (it also works in higher dimensions). What you are seeing is just your library being nice: the noise function maps two reals to one, and the library is being helpful by mapping the single result value p to a colour gradient. But that is only for visualization; p is not a colour, just another number. Use it directly! If p < 0 you might place water.
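For example, a hypothetical sketch in C# (the original question uses Corona/Lua, but the idea is the same in any language); Noise2D, frequency, terrain and the Tile values are stand-ins for whatever your library and game use, and the noise value is assumed to lie roughly in [-1, 1]:
// Classify terrain straight from the noise value p rather than from its colour.
for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        double p = Noise2D(x * frequency, y * frequency); // p is just a number, not a colour
        if (p < 0.0)      terrain[x, y] = Tile.Water;
        else if (p < 0.5) terrain[x, y] = Tile.Plains;
        else              terrain[x, y] = Tile.Mountain;
    }
}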
I would suggest this:
1. Shift the hue of the image toward red, like here.
2. Use the red channel to retrieve a mask.
3. Optional: scale the min/max brightness into the 0-255 range (see the sketch below).
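A minimal C# sketch of steps 2-3, assuming redChannel and mask are byte arrays of the same length (the names are illustrative):
// Stretch the red channel to the full 0-255 range so it becomes a usable mask.
byte min = 255, max = 0;
foreach (byte r in redChannel)
{
    if (r < min) min = r;
    if (r > max) max = r;
}
int range = Math.Max(1, max - min);
for (int i = 0; i < redChannel.Length; i++)
    mask[i] = (byte)((redChannel[i] - min) * 255 / range);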

Track a person by his clothes color using the Kinect sensor

I'm new to image processing, and I'm working on a simple project to recognize people by the color of their clothes. I'm not sure what the best way to do that is. Since I'm using the Kinect (with the Kinect SDK), it is easy to detect people using the depth stream, and by mapping the depth data to the color data I can get the color pixels of the people. I tried to build a color histogram for each person to recognize that person's color. I'm not sure if this is right or not!
What I'm doing is:
1- Get the depth data from the Kinect device.
2- Check whether a pixel is a player pixel by using the Player Index.
3- Map player pixels to color pixels.
4- Build a color histogram for the player.
I have a problem dealing with step 4. This is how I'm trying to build the histogram (32 bins):
// ColorPixelData is in BGRA order, so offsets 0, 1 and 2 are blue, green and red.
// Dividing an 8-bit value by 8 drops it into one of the 32 bins.
byte color = ColorPixelData[colorPixelIndex];      // blue
B_Values[color / 8]++;
color = ColorPixelData[colorPixelIndex + 1];       // green
G_Values[color / 8]++;
color = ColorPixelData[colorPixelIndex + 2];       // red
R_Values[color / 8]++;
I think I'm doing it the wrong way: the color values look very different every time I run the program on the same scene.
Could anyone give me some pointers?
Any help will be appreciated.
A color histogram won't help you. Back when I was working on a face-recognition tool, color histograms would give different values for pictures that look almost alike, so that's not the way to go. Instead of building a color histogram, you could measure how much of a specific color, for example red, is present in the scene, if one of your subjects is wearing a red jacket.
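A hedged C# sketch of that suggestion, assuming playerPixels is a BGRA byte buffer holding only the mapped player pixels; the thresholds are arbitrary examples, not calibrated values:
// Count how many player pixels are "red-dominant" instead of histogramming everything.
int redCount = 0;
int totalPixels = playerPixels.Length / 4;      // 4 bytes per pixel (B, G, R, A)
for (int i = 0; i < playerPixels.Length; i += 4)
{
    byte b = playerPixels[i];
    byte g = playerPixels[i + 1];
    byte r = playerPixels[i + 2];
    if (r > 100 && r > g + 30 && r > b + 30)    // crude "this pixel is red" test
        redCount++;
}
double redFraction = totalPixels > 0 ? (double)redCount / totalPixels : 0.0;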

Does Kinect Infrared View Have an offset with the Kinect Depth View

I am working on a Kinect project using the infrared view and the depth view. In the infrared view, using the CVBlob library, I am able to extract some 2D points of interest. I want to find the depth of these 2D points, so I thought that I could use the depth view directly, something like this:
coordinates3D[0] = coordinates2D[0];
coordinates3D[1] = coordinates2D[1];
coordinates3D[2] = ((USHORT*)LockedRect.pBits)
    [(int)coordinates2D[1] * Width + (int)coordinates2D[0]] >> 3;
I don't think this is the right formula to get the depth.
I am able to visualize the 2D points of interest in the depth view. If I get a point (x, y) in the infrared view, then I draw it as a red point in the depth view at (x, y).
I noticed that the red points are not where I expect them to be (on an object). There is a systematic error in their locations.
I was of the opinion that the depth view and infrared views have one-to-one correspondence unlike the correspondence between the color view and depth view.
Is this indeed true or is there an offset between the IR and depth views? If there is an offset, can I somehow get the right depth value?
The depth and color streams are not taken from the same point, so they do not correspond to each other perfectly. Also, their FOV (field of view) is different.
Cameras:
IR/Depth FOV: 58.5° x 45.6°
Color FOV: 62.0° x 48.6°
Distance between cameras: 25 mm
My corrections for 640x480 resolution (both streams):
if (depth_is_valid)
{
    ax = (((x + 10 - xs2) * 241) >> 8) + xs2;   // shift ~10 px, scale by 241/256
    ay = (((y + 30 - ys2) * 240) >> 8) + ys2;   // shift ~30 px, scale by 240/256
}
x, y are the input coordinates in the depth image
ax, ay are the output coordinates in the color image
xs, ys = 640, 480
xs2, ys2 = 320, 240
As you can see, my Kinect also has a y-offset, which is weird (it is even bigger than the x-offset). My conversion works well at ranges up to 2 m; I did not measure it further, but it should hold beyond that as well.
Do not forget to compute the space coordinates from the depth value and the depth image coordinates:
pz = 0.8 + (float(rawdepth - 6576) * 0.00012115165336374002280501710376283); // raw depth -> meters
px = -sin(58.5 * deg * float(x - xs2) / float(xs)) * pz;                      // horizontal FOV 58.5°
py = +sin(45.6 * deg * float(y - ys2) / float(ys)) * pz;                      // vertical FOV 45.6°
pz = -pz;                                                                     // flip Z for my camera coordinate system
where px, py, pz are the point coordinates in metres, in space relative to the Kinect.
I use a coordinate system for the camera with the opposite Z direction, hence the sign negation.
PS: I have the old model 1414, so newer models probably have different calibration parameters.
There is no offset between the "IR View" and "Depth View". Primarily because they are the same thing.
The Kinect has 2 cameras: an RGB color camera and a depth camera, which uses an IR blaster to generate a light field that is used when processing the data. These give you a color video stream and a depth data stream; there is no "IR view" separate from the depth data.
UPDATE:
They are actually the same thing. What you are referring to as a "depth view" is simply a colorized version of the "IR view"; the black-and-white image is the "raw" data, while the color image is a processed version of the same data.
In the Kinect for Windows Toolkit, have a look in the KinectWpfViewers project (if you installed the KinectExplorer-WPF example, it should be there). In there are the KinectDepthViewer and DepthColorizer classes. They will demonstrate how the colorized "depth view" is created.
UPDATE 2:
Per the comments below, what I've said above is almost entirely junk. I'll likely edit it out or just delete my answer in full in the near future; until then it shall stand as a testament to my once-invalid beliefs about what was coming from where.
Anyways... have a look at the CoordinateMapper class as another possible solution. The link will take you to the managed code docs (which is what I'm familiar with); I'm looking around the C++ docs to see if I can find the equivalent.
I've used this to map the standard color and depth views. It may also map the IR view just as well (I wouldn't see why not), but I'm not 100% sure of that.
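For what it's worth, a rough sketch of what that might look like in the managed SDK (this is the Kinect SDK 1.x-era API as I remember it; verify the exact method names and signatures against your SDK version):
// Map a single depth pixel (x, y, depth in mm) to its position in the color image.
DepthImagePoint depthPoint = new DepthImagePoint
{
    X = x,
    Y = y,
    Depth = depthInMillimeters
};
ColorImagePoint colorPoint = sensor.CoordinateMapper.MapDepthPointToColorPoint(
    DepthImageFormat.Resolution640x480Fps30,
    depthPoint,
    ColorImageFormat.RgbResolution640x480Fps30);
// colorPoint.X / colorPoint.Y are the matching coordinates in the color image.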
I created a blog showing the IR and Depth views:
http://aparajithsairamkinect.blogspot.com/2013/06/kinect-infrared-and-depth-views_6.html
This code works for many positions of the trackers from the Kinect:
coordinates3D[0] = coordinates2D[0];
coordinates3D[1] = coordinates2D[1];
// The +23 row offset compensates for the vertical shift between the IR and depth views.
coordinates3D[2] = ((USHORT*)LockedRect.pBits)
    [(int)(coordinates2D[1] + 23) * Width + (int)coordinates2D[0]] >> 3;

What is the theory behind the Light Glow effect of "After Effects"?

What is the theory behind the Light Glow effect of "After Effects"?
I want to use GLSL to make it happen, but if I can at least get closer to the theory behind it, I should be able to replicate it.
I've recently been implementing something similar. My render pipeline looks something like this:
1. Render the scene to a texture (full screen).
2. Filter the scene ("bright pass") to isolate the high-luminance, shiny bits.
3. Down-sample (2) to a smaller texture (for performance) and do a horizontal Gaussian blur.
4. Perform a vertical Gaussian blur on (3).
5. Blend the output from (4) with the output from (1).
6. Display to screen.
With some parameter tweaking, you can get it looking pretty nice. Google terms like "bright pass", Gaussian blur, FBO (frame buffer object) and so on. Effects like "bloom" and "HDR" also have a wealth of information about different ways of doing each of these steps. I tried about four different ways of doing the Gaussian blur before settling on my current one; a sketch of computing the blur weights follows below.
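As a small illustration of the blur steps, a hedged C# sketch that computes normalized 1D Gaussian weights for a separable (horizontal then vertical) blur; the radius and sigma are arbitrary example parameters:
// Illustrative only: 1D Gaussian weights for one pass of a separable blur.
static float[] GaussianWeights(int radius, float sigma)
{
    var weights = new float[2 * radius + 1];
    float sum = 0f;
    for (int i = -radius; i <= radius; i++)
    {
        float w = (float)Math.Exp(-(i * i) / (2f * sigma * sigma));
        weights[i + radius] = w;
        sum += w;
    }
    for (int i = 0; i < weights.Length; i++)
        weights[i] /= sum; // normalize so the blur does not change overall brightness
    return weights;
}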
Look at how to make shadow volumes, and instead of stencilling out a shadow, you could run a multi-pass blur on the volume, set its material to a very emissive, additively blended shader, and I imagine you'll get a similar effect.
Alternatively, you could do the GPU Gems implementation:
I will answer my own question in case someone else gets to the same point. With more precision (actually 100% precision) I reproduced the exact After Effects glow. The way it works is:
1. Apply a Gaussian blur to the original image.
2. Extract the luma of this blurred image.
3. Like in After Effects, you have two colors (A and B). The secret is to make a gradient map between these colors, according to the desired "Color Looping". If you don't know, a gradient map is an interpolation between colors (A and B in this case). Following After Effects' vocabulary, you need to loop X times over the "Color Looping" you chose; that means, if you are using a Color Looping like A->B->A, it is considered one loop over your image (you can try this in Photoshop).
4. Take the luma you extracted in step 2 and use it as the parameter of your gradient map; in other words, luma = (0%, 50%, 100%) maps to colors (A, B, A) respectively, and the points in between are interpolated.
5. Blend this image with the original image according to the desired "Glow Operation" (Add, Multiply, etc.).
This procedure works like After Effects for every single pixel. The other details of the Glow can easily be layered on top of this basic procedure; things like "Glow Intensity", "Glow Threshold" and so on need to be calibrated in order to get the same results with the same parameters.
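A hedged C# sketch of steps 2-4 (luma extraction plus an A->B->A gradient map), using System.Drawing.Color purely for convenience; the Rec.601 luma weights and the Lerp helper are assumptions for the example, and a single colour loop is assumed:
// Map a blurred pixel's luma through an A->B->A gradient map.
static Color GradientMap(Color blurred, Color a, Color b)
{
    // Step 2: luma of the blurred pixel, scaled into [0, 1].
    float luma = (0.299f * blurred.R + 0.587f * blurred.G + 0.114f * blurred.B) / 255f;

    // Steps 3-4: one A->B->A loop, so luma 0 -> A, 0.5 -> B, 1 -> A.
    float t = luma <= 0.5f ? luma * 2f : (1f - luma) * 2f;
    return Lerp(a, b, t);
}

static Color Lerp(Color x, Color y, float t) => Color.FromArgb(
    (int)(x.R + (y.R - x.R) * t),
    (int)(x.G + (y.G - x.G) * t),
    (int)(x.B + (y.B - x.B) * t));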

Given normal map in world space what is a suitable algorithm to find edges?

If I have the vertex normals of a normal scene showing up as colours in a world-space texture, is there a way to calculate edges efficiently, or is it mathematically impossible? I know it's possible to calculate edges if you have the normals in view space, but I'm not sure whether it is possible if you have the normals in world space (I've been trying to figure out a way for the past hour...).
I'm using DirectX with HLSL.
if ( dot(normalA, normalB) < cos(maxAngleDiff) )
then you have an edge: the angle between neighbouring normals exceeds maxAngleDiff. It won't be perfect, but it will definitely find edges that other methods won't.
Or am I misunderstanding the problem?
Edit: how about simply high-pass filtering the image?
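A CPU-side C# illustration of that test (not the actual HLSL), assuming a Vector3[width, height] array of unit world-space normals decoded from the texture, using System.Numerics:
// Mark a pixel as an edge when its normal differs from a neighbour's
// by more than maxAngleDiff (compare against the right and lower neighbours).
float cosThreshold = (float)Math.Cos(maxAngleDiff);
bool IsEdge(int x, int y)
{
    Vector3 n     = normals[x, y];
    Vector3 right = normals[Math.Min(x + 1, width - 1), y];
    Vector3 down  = normals[x, Math.Min(y + 1, height - 1)];
    return Vector3.Dot(n, right) < cosThreshold
        || Vector3.Dot(n, down)  < cosThreshold;
}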
I assume you are trying to make cartoon-style edges for a cel shader?
If so, simply take the dot product of the world-space normal with the world-space pixel position minus the camera position. As long as your operands are all in the same space you should be OK.
float edgy = dot(world_space_normal, pixel_world_pos - camera_world_pos);
If edgy is near 0, it's an edge.
If you want screen-space-sized edges, you will need to render additional object-ID information to another surface and post-process the differences against the color surface.
It will depend on how many colors your image contains, and how they merge: sharp edges, dithered, blended, ...
Since you say you have the vertex normals I am assuming that you can access the color-information on a single plane.
I have used two techniques with varying success:
I searched the image for local areas of the same color (RGB) and then used the highest of R, G or B to find the 'edge', that is, where the selected R, G or B is no longer the highest value;
the second method I used was to reduce the image to 16 colors internally, and in that case it is easy to find the outlines.
To construct vectors would then depend on how fine you want the granularity of your 'wireframe'-image to be.
