Rendering a point light using 6 spot lights? - shadow mapping

I'm trying to render 6 spot lights to create a point light for a shadow mapping algorithm.
I'm not sure if I'm doing this right; I've more or less followed the instructions here when setting up my view and projection matrices, but the end result looks like this:
White areas are parts covered by one of the 6 shadow maps; the darker areas are ones not covered by any of them. Obviously I don't have a problem with the teapots and boxes having their shadows projected onto the scene; however, as you can see, the 6 shadow maps have blind spots. Is this how a cube shadow map is supposed to look? It doesn't look like a shadow map of a point light source...

Actually, you can adjust your six spots to have cones that perfectly fill each face of your cubemap. You achieve this by setting each cone's aperture so that its circle circumscribes the cubemap face. In this case you don't have to worry about overlapping, since the would-be overlapping parts fall outside the faces' area.
In other terms: adjust each light's projection-matrix FOV so that it isn't the view frustum that includes the light cone, but the cone that includes the view frustum.
For a whole implementation, see this paper.
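To make the aperture concrete (a sketch of the math, not code from the linked paper): a cubemap face spans a 90-degree frustum, and the circumscribing cone must reach the face's corners along (1, 1, 1), so its half-angle is atan(sqrt(2)), roughly 54.74 degrees, for a full aperture of about 109.47 degrees.

```swift
import Foundation
import simd

// Aperture of a spot cone that circumscribes one cubemap face:
// the corner of a 90-degree face lies along (1, 1, 1), so the cone's
// half-angle is atan(sqrt(2)) ~= 54.7356 degrees (full cone ~= 109.47).
let halfAngle = atanf(Float(2).squareRoot())   // radians
let fullAperture = 2 * halfAngle               // ~1.9106 rad

// GL-style perspective projection for one of the six lights
// (square aspect; FOV widened from 90 degrees to the circumscribing cone).
func lightProjection(near: Float, far: Float) -> simd_float4x4 {
    let f = 1 / tanf(halfAngle)                // cot(fovy / 2)
    return simd_float4x4(columns: (
        simd_float4(f, 0, 0, 0),
        simd_float4(0, f, 0, 0),
        simd_float4(0, 0, (far + near) / (near - far), -1),
        simd_float4(0, 0, (2 * far * near) / (near - far), 0)
    ))
}
```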

What you're seeing here are a circle and two hyperbolas -- conic sections -- exactly the result you might expect if you took a double-ended cone and intersected it with a plane.
This math may seem removed from the situation but it explains your problem. A spotlight creates a cone of light, and you can't entirely fill a solid space with a bunch of cones coming from the same point. (I'd suggest rolling up a bunch of pieces of paper and taping them together at the points to try it out.)
However, as you get far from the origin of your simulated point source, the cones converge to their asymptotes, and the gap in the light becomes infinitesimally narrow.
One option to solve this is to change the focus of the cones so that they overlap slightly -- this will create areas that are overexposed, but the overexposure will only become obvious as you get farther away. So long as all of your objects are near the point light source, this might not be much of an issue.
Another option is to move the focus of all of the lights much closer to their sources. This way, they'd converge to their asymptotes more quickly.

Proper way to illuminate a 2D surface in 3D space?

EDIT: I've solved the issue below the tilde line (the missing chunks) by fixing an elementary error in the for-loop that calculates face normals. I now have a new problem, though: strange, unwanted shadows on the surface itself. Some areas appear darker than others... See the next picture for the current issue.
I have an omni light added to my scene's root node as well as a directional light added in the same manner. For some reason I can't seem to light the underbelly of the surface otherwise. Notice the strange shadow on the inside of the concave surface (it's more pronounced when I remove the subdivision effect as I have done here) -->
Here is the surface from above -- notice how some areas seem strangely darker.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is a concave surface.
I have these smooth, curved planes in 3D space. Right now, they look rather cartoonish -- I would like to utilize some form of lighting to make them look more "3D-ish."
I have tried various combinations of ambient lighting, omni lighting, and default lighting, but nothing seems to work right. I get something quite strange when I apply something like a basic omni light --
Here is another look from a better angle using omni lighting. Looks like someone took a bite out of it --
Am I overlooking a specific type of light or lighting strategy?
I'd like to avoid using baked lighting, because the scene is rather simple. Thanks.
I'll outline my steps for the bold.
1: I specify the vertices for each of the four faces of a pyramid-like shape. Like this (apologies for my lack of artistic ability) -->
2: I specify the indices for the face, i.e., [0,1,2, 0,2,3, etc.]
3: I create a dictionary mapping each vertex to the sum of that vertex's adjacent, normalized face normals (see the sketch after these steps).
4: I append each of these summed, normalized per-vertex normals to a vector.
5: I combine the vertices, indices, and vector of normals to create an SCNGeometry.
6: To get the rounded look, I increase the subdivision count.
7: Pray that it works.
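For illustration, here is a minimal Swift sketch of steps 3 through 5 (my own sketch, assuming iOS, where SCNVector3 components are Float; buildGeometry is a hypothetical name):

```swift
import SceneKit
import simd

// Hypothetical sketch of steps 3-5: sum each face's unit normal into its
// three vertices, normalize the sums, then build the geometry.
func buildGeometry(vertices: [SCNVector3], indices: [Int32]) -> SCNGeometry {
    let positions = vertices.map { simd_float3($0.x, $0.y, $0.z) }
    var sums = [simd_float3](repeating: .zero, count: positions.count)

    // One triangle per triple of indices; consistent winding order matters,
    // since a flipped triangle subtracts from the sums instead of adding.
    for i in stride(from: 0, to: indices.count, by: 3) {
        let (a, b, c) = (Int(indices[i]), Int(indices[i + 1]), Int(indices[i + 2]))
        let n = simd_normalize(simd_cross(positions[b] - positions[a],
                                          positions[c] - positions[a]))
        sums[a] += n; sums[b] += n; sums[c] += n
    }
    let normals = sums.map { v -> SCNVector3 in
        let n = simd_normalize(v)              // per-vertex normal
        return SCNVector3(n.x, n.y, n.z)
    }

    let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)
    return SCNGeometry(sources: [SCNGeometrySource(vertices: vertices),
                                 SCNGeometrySource(normals: normals)],
                       elements: [element])
}
```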
I'm new to the 3D world, so I could be way out in left field and not even know it.
This should give you a reasonable result with minimal effort and the least possible need to understand 3D lighting.
Open the Fox game example/sample from Apple:
https://developer.apple.com/library/prerelease/ios/samplecode/Fox/Introduction/Intro.html
Delete everything from the level.scn Scene Graph other than Lights, Camera and the Mountain.
And then add your geometry object to a node at the bottom, where I have the sphere highlighted at the bottom of the Scene Graph....
Now the material needs a bit of work, to make it useful.
Select the Mountain by clicking on it in the View, go to the material editor, and make it look like this; keep checking against this image until yours matches the few (weird) changes I've made. And trust me, this will work out fine:
When you want to get that lovely red you have, you simply change the DIFFUSE property: it's right at the top of the Material settings.
Now you have a material and lighting setup that gives a reasonable approximation of curvature in a 3D space.
Applying this material to your object is a little weird and unintuitive: go here, click on the add button, and pick the material with the same name as the one in the above image, the one that's on the mountain.
You can improve this by adding two more lights in what's known as a "3 point lighting setup"; google this phrase to see it explained.
Further, you can add off-screen (out of camera) placards, usually white, to manage key reflections and further help users get a feel for what's being presented.
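If you'd rather set those extra lights up in code than in the editor, a rough sketch might look like this (positions and intensities are illustrative guesses, not values from the Fox sample):

```swift
import SceneKit

// Rough sketch of a three-point lighting setup in code.
func addThreePointLighting(to scene: SCNScene) {
    func makeLight(_ type: SCNLight.LightType, _ intensity: CGFloat,
                   _ position: SCNVector3) -> SCNNode {
        let node = SCNNode()
        node.light = SCNLight()
        node.light!.type = type
        node.light!.intensity = intensity      // lumen-style units
        node.position = position
        node.look(at: SCNVector3(0, 0, 0))     // aim at the subject
        return node
    }
    // Key light: the main, brightest light, off to one side.
    scene.rootNode.addChildNode(makeLight(.directional, 1000, SCNVector3(5, 5, 5)))
    // Fill light: softer, opposite the key, to lift the shadows.
    scene.rootNode.addChildNode(makeLight(.omni, 400, SCNVector3(-5, 2, 5)))
    // Rim/back light: separates the subject from the background.
    scene.rootNode.addChildNode(makeLight(.directional, 600, SCNVector3(0, 4, -6)))
}
```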

How to create a sprite surface like in "Cham Cham"

My question may be a bit too broad, but I am going for the concept. How can I create a surface like they did in the "Cham Cham" app:
https://itunes.apple.com/il/app/cham-cham/id760567889?mt=8.
I got most of the stuff done in the app, but the surface that changes with user touch is quite different. You can change its altitude, and it grows and shrinks. How can this be done using SpriteKit? What is the concept behind it? Can anyone explain it a bit?
Thanks
Here comes the answer from Cham Cham developers :)
Let me split the explanation into different parts:
Note: As the project started quite a while ago, it is implemented using pure OpenGL. The SpriteKit implementation might differ, but you just need to map the idea over to it.
Defining the ground
The ground is represented by a set of points, which are interpolated using a Hermite spline. Basically, the game uses a bunch of control points defining the surface and a set of interpolated points between each pair of them, like below:
The red dots are control points, and everything in between is computed using the mentioned Hermite interpolation. The green points in the middle have nothing to do with it, but make the whole thing look like boobs :)
You can choose an arbitrary number of steps to make your boobs look as smooth as possible, but this is more a matter of performance.
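For reference, a minimal sketch of cubic Hermite interpolation (this is not the game's actual code; Catmull-Rom tangents are one common choice):

```swift
import CoreGraphics

// Cubic Hermite between p1 and p2 with tangents m1, m2, at t in [0, 1].
func hermite(_ p1: CGPoint, _ p2: CGPoint, _ m1: CGPoint, _ m2: CGPoint,
             t: CGFloat) -> CGPoint {
    let t2 = t * t, t3 = t2 * t
    let h00 = 2*t3 - 3*t2 + 1, h10 = t3 - 2*t2 + t
    let h01 = -2*t3 + 3*t2,    h11 = t3 - t2
    return CGPoint(x: h00*p1.x + h10*m1.x + h01*p2.x + h11*m2.x,
                   y: h00*p1.y + h10*m1.y + h01*p2.y + h11*m2.y)
}

// `steps` interpolated points between each pair of control points.
func interpolate(controls: [CGPoint], steps: Int) -> [CGPoint] {
    var result: [CGPoint] = []
    for i in 0 ..< controls.count - 1 {
        // Catmull-Rom tangents from the neighbours, clamped at the ends.
        let prev = controls[max(i - 1, 0)]
        let next = controls[min(i + 2, controls.count - 1)]
        let m1 = CGPoint(x: (controls[i + 1].x - prev.x) / 2,
                         y: (controls[i + 1].y - prev.y) / 2)
        let m2 = CGPoint(x: (next.x - controls[i].x) / 2,
                         y: (next.y - controls[i].y) / 2)
        for s in 0 ..< steps {
            let t = CGFloat(s) / CGFloat(steps)
            result.append(hermite(controls[i], controls[i + 1], m1, m2, t: t))
        }
    }
    result.append(controls[controls.count - 1])
    return result
}
```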
Controlling the shape
All you need to do is allow the user to move the control points (or some of them, like in Cham Cham; you can define the range each point may move in, etc.). Recomputing the interpolated values will yield a changed shape, which remains smooth at all times (given that you have picked enough intermediate points).
Texturing the thing
Again, it is up to you how you apply the texture. In Cham Cham, we use one big texture to hold the background image and recompute the texture coordinates at every shape change. You could try a more sophisticated algorithm, like squeezing the texture or whatever you find appropriate.
As for the surface texture (the one that covers the ground – grass, ice, sand etc) – you can just use the thing called Triangle Strips, with "bottom" vertices sitting at every interpolated point of the surface and "top" vertices raised over (by offsetting them against "bottom" ones in the direction of the normal to that point).
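A tiny sketch of that strip construction (illustrative only; it assumes unit normals per interpolated point are already available):

```swift
import CoreGraphics

// For each interpolated surface point: one "bottom" vertex on the curve and
// one "top" vertex offset along that point's unit normal. Rendered as a
// triangle strip, alternating bottom/top gives the grass/ice/sand skin.
func stripVertices(points: [CGPoint], normals: [CGPoint],
                   thickness: CGFloat) -> [CGPoint] {
    var strip: [CGPoint] = []
    for (p, n) in zip(points, normals) {
        strip.append(p)                                   // bottom
        strip.append(CGPoint(x: p.x + n.x * thickness,    // top
                             y: p.y + n.y * thickness))
    }
    return strip
}
```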
Rendering it
The easiest way is to utilize some tessellation library, like libtess. What it will do is convert your boundary line (composed of interpolated points) into a set of triangles. It will preserve texture coordinates, so you can just feed these triangles to the renderer.
SpriteKit note
Unfortunately, I am not really familiar with SpriteKit engine, so cannot guarantee you will be able to copy the idea over one-to-one, but please feel free to comment on the challenging aspects of the implementation and I will try to help.

Get rectangle out of array of points

Using GPUImage, I am able to detect corners of a book/page in an image. But sometimes, it will pass more than 4 points, in which case I will need to process and figure out the best rectangle out of these points. Here's an example:
What's the most efficient way to figure out the best rectangle in this case?
Thanks
If you're using a corner detection algorithm, then you can filter results based on the relative strength of the detected corner. The contrast at the book corners relative to your current background appears to be much stronger than the contrast at the point found in the wood grain. Are there relative magnitudes associated with each point, or do you just get the points? Setting thresholds for edge strengths can mean a lot of fiddling unless the intensities of the foreground and background are relatively constant.
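As a toy illustration only (the Corner type and its strength field are hypothetical; this is not GPUImage API, which may only hand you points):

```swift
import CoreGraphics

// Hypothetical corner record: position plus detector response strength.
struct Corner {
    let point: CGPoint
    let strength: Float
}

// Keep only the n strongest corners (e.g. n = 4 for a book cover).
func strongest(_ corners: [Corner], keep n: Int) -> [CGPoint] {
    return corners.sorted { $0.strength > $1.strength }
                  .prefix(n)
                  .map { $0.point }
}
```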
Your sample image could be blurred or morphed. For example, the right morphological "close" on light pixels could eliminate the texture in the wood grain without having an effect on the size and shape of the book. (http://en.wikipedia.org/wiki/Mathematical_morphology)
Another possibility is to shrink the image to a much smaller size and then perform detection on that. Resizing the image will tend to wipe out tiny details such as whatever wood grain pattern is currently being detected.
Picking the right lens and lighting can make the image easier to process. Try to simplify the image as much as possible before processing it. As mentioned above, "dark field" lighting that would illuminate just the book edges would present a much simpler image for processing. Writing down the constraints can make it more obvious which solution will be most robust and simplest to implement. Finding any rectangle anywhere in an image is very difficult; it's much easier to find a light rectangle on a dark background if the rectangle is at least 100 x 100 pixels in size, rotated no more than 15 degrees from square to the image edges, etc.
More involved solutions can be split into two approaches:
Solving the problem given only 4 or more (x,y) points.
Using a different image processing technique altogether for the sample image.
1. Solving the problem given only the points
If you generally only have 5 or 6 points, and if you are confident that 4 of those points will belong to the corners of the rectangle that you want, then you can try this:
1. Find the convex hull of all points. The convex hull is the N-gon that completely encompasses all points. If the points were pegs sticking up, and if you stretched a rubber band around them and let it snap into place, then the final shape of the rubber band is a convex hull. Algorithms that find convex hulls typically return a list of points ordered counterclockwise from the bottom leftmost point.
2. Make a copy of your point list and remove points from the copy until only four points remain. These four remaining points will still be ordered counterclockwise.
3. Calculate the angle formed by each set of three successive points: points 1, 2, 3, then 2, 3, 4, then 3, 4, 1, and so on.
4. If an angle is outside a reasonable tolerance--less than 70 degrees or greater than 110 degrees--skip back to step 2 and remove the next point (or set of points).
5. Store the min and max angles for each set of 4 points.
6. Repeat steps 2 through 5, removing a different point (or points) each time.
7. Track the set of points for which the min and max angles are closest to 90 degrees. (A code sketch follows after the link below.)
http://en.wikipedia.org/wiki/Convex_hull
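A rough Swift sketch of that search (assuming the hull is already computed, ordered counterclockwise, and small, say 5 to 8 points):

```swift
import CoreGraphics

// Interior angle (degrees) at b, formed by neighbours a and c.
func angle(at b: CGPoint, from a: CGPoint, to c: CGPoint) -> CGFloat {
    let v1 = CGVector(dx: a.x - b.x, dy: a.y - b.y)
    let v2 = CGVector(dx: c.x - b.x, dy: c.y - b.y)
    let dot = v1.dx * v2.dx + v1.dy * v2.dy
    let mags = hypot(v1.dx, v1.dy) * hypot(v2.dx, v2.dy)
    return acos(dot / mags) * 180 / .pi
}

// Try every ordered 4-point subset of the hull; reject any whose angles fall
// outside 70-110 degrees; keep the most rectangle-like remaining candidate.
func bestRectangle(hull: [CGPoint]) -> [CGPoint]? {
    var best: [CGPoint]?
    var bestScore = CGFloat.greatestFiniteMagnitude
    let n = hull.count
    guard n >= 4 else { return nil }
    for i in 0 ..< n - 3 {
      for j in i + 1 ..< n - 2 {
        for k in j + 1 ..< n - 1 {
          for l in k + 1 ..< n {
            let quad = [hull[i], hull[j], hull[k], hull[l]]
            let angles = (0 ..< 4).map {
                angle(at: quad[$0], from: quad[($0 + 3) % 4], to: quad[($0 + 1) % 4])
            }
            guard angles.allSatisfy({ $0 > 70 && $0 < 110 }) else { continue }
            let score = angles.map { abs($0 - 90) }.max()!   // worst deviation
            if score < bestScore { bestScore = score; best = quad }
          }
        }
      }
    }
    return best
}
```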
There are a number of other checks and constraints that could be introduced. For example, if the point-to-point distances for 3 successive points in the convex hull (pts N to N+1, and N+1 to N+2) are close to the expected width and height of the book, then you might mark these as known good points and only test the remaining points to see which is the fourth point.
The technique above can get unwieldy if you get quite a few points, but it may work if two or three of the book corner points are expected to be found on the convex hull.
For any geometric problem, I always recommend checking out GeometricTools.com, which has a lot of great, optimized source code for all sorts of problems. It's very handy to have the book as well, especially if you can find a cheap copy using AddAll.com.
http://www.geometrictools.com/
2. Other image processing techniques for your sample image
Although I could be wrong, it appears that GPUImage doesn't have many general-purpose image processing algorithms. Some other image processing algorithms could make this problem much simpler to solve.
Though there isn't space to go into it here, one of the keys to successful image processing is appropriate lighting. Make sure your lighting is consistent. A diffuse light that evenly illuminates the book and the background would work well. You can simplify the problem using funkier lighting: if you have four lights (or a special ring light), you can provide horizontal illumination from the top, bottom, left, and right that will cause the edges of the book to appear bright and other surfaces to appear dark.
http://www.benderassoc.com/mic/lighting/nerlite/Darkfield.htm
If you can use some other GPU libraries to do image processing, then one of the following techniques could work nicely:
Connected component labeling (a.k.a. finding blobs). It shouldn't be too hard to use either binary thresholding or a watershed algorithm to separate the white blob that is the book from the rest of the background. Once the blob for the book is identified, finding the corners is easier. (http://en.wikipedia.org/wiki/Connected-component_labeling) In OpenCV you can find the "contours."
Generate a list of edge points, then have four separate line-fitting tools search from top to bottom, right to left, bottom to top, and left to right to find the four strong (and mostly straight) edges associated with the book. In your sample image, though, either the book cover is slightly warped or the camera lens has introduced barrel distortion.
Use a corner detector designed to find light corners on a dark background. If you will always be looking for a white book on a wood grain background, you can create a detector to find white corners on a brown background.
Use a Hough technique to find the four strongest lines in the image. (http://en.wikipedia.org/wiki/Hough_transform)
The algorithmic technique that works best will depend on your constraints: are you looking for rectangles only of a certain size? is the contrast between foreground and background consistent? can you introduce lighting to simplify the appearance of the image? and so on.

Sprite pixel parsing to determine Vector

Given an image that can contain any variety of solid color images, what is the best method for parsing the image at a given point and then determining the slope (or Vector if you prefer) of that area?
Being new to XNA development, I feel there must be an established method for doing this sort of thing, but I have been Googling this issue for a while now.
By way of example, I have mocked up a quick image to demonstrate what I am trying to do. The white portion of the image (where the labels are shown) would be transparent pixels. The "ground" would be a RenderTarget2D or Texture2D object that will provide the Color array of pixels.
Example
What you are looking for is the tangent, which is 90 degrees to the normal (which is more commonly used). These two terms should assist you in your searching.
This is trivial if you've got the polygon outline data. If all you have is an image, then you have to come up with a way to convert it into a polygon.
It may not be entirely suitable for your problem, but the first place I would go is the Farseer Physics Engine, which has a "texture to polygon" feature you could possibly reuse.
If you are using the terrain as some kind of "ground", you can possibly cheat a bit by looking at the adjacent column of pixels and using that to determine the ground slope at that exact point. Kind of like what Lemmings and Worms do.
If you make that determination at the boundary between each pixel, you can get gradients of rise:run between two pixels horizontally. Usually you just break it into categories: flat (1:1), 45 degrees (2:1), or too steep (>3:1). With a more complicated algorithm that looks outward to more columns, you can get better resolution.
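To make that concrete, a small sketch of the column scan (written in Swift for consistency with the other sketches here; in XNA you'd fill the hypothetical isSolid predicate from the Color array returned by Texture2D.GetData and test alpha):

```swift
// `isSolid(x, y)` is a hypothetical predicate over the texture's pixel data.
func groundHeight(column x: Int, height: Int,
                  isSolid: (Int, Int) -> Bool) -> Int? {
    for y in 0 ..< height where isSolid(x, y) {
        return y                       // first solid pixel from the top
    }
    return nil                         // no ground in this column
}

// Rise over a 1-pixel run between two adjacent columns; bucket the result
// into flat / 45 degrees / too steep as described above.
func slope(column x: Int, width: Int, height: Int,
           isSolid: (Int, Int) -> Bool) -> Int? {
    guard x + 1 < width,
          let y0 = groundHeight(column: x, height: height, isSolid: isSolid),
          let y1 = groundHeight(column: x + 1, height: height, isSolid: isSolid)
    else { return nil }
    return y0 - y1                     // positive = ground rises to the right
}
```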

Rendering a Photoshop-style brush in OpenGL

I have lines that are programmatically defined by my program. What I want to do is render a brush stroke along them.
The way I think the type of brush I want works is: it simply has a mostly transparent texture, and you render this texture centered on EVERY PIXEL in the path, so the stamps blend together to create the stroke.
Now, assuming this even works, I'm going to bet that it will be WAY too expensive (I'm targeting the iPad and other mobile chips, which HATE fill rate and alpha blending).
So, what other options are there?
If it could be done in real time (that is, with the path spline updating every frame) that would be ideal. But if not, within a fraction of a second on the iPad would be good too (the splines connect nodes, and the user can drag nodes around, thus transforming the spline; it would be acceptable to revert to a simpler fill for the spline while it is being moved, then recalculate the brush once it is released).
For those wondering, I'm trying to make the thick lines look like they have been drawn with a pencil. It should look as true to life as possible.
I considered just rendering the brushed spline to a texture, but as the spline can be any length, in any direction, dedicating a WHOLE rectangular texture to encompass the whole spline would be way too costly...
The spline is inevitably broken up into quads for rendering, so I thought of initially rendering the brush to a texture, then generating an optimized texture with each of the quads separated and packed as neatly as possible into it.
But two renders to texture, an algorithm to create the optimized texture, and making it so the quads still seamlessly blend with each other... it sounds like a nightmare, and that's not even making it real time.
So yeah, any ideas on how to draw thick, pencil-like lines that follow a spline in real time on the iPad in OpenGL?
From my point of view, what you want is to render a line that:
is textured
has the edges fade off (i.e. no sharp edge to it)
follows a spline
To achieve these goals I would first of all break the spline up into a series of line segments that closely approximate the curve (you can make it more or less fine-grained depending on how accurate you want it to be versus how fast you want it to render).
Once you have these, you will need to make each segment into 3 quads: one over the middle of the line segment, serving as the fully opaque part of the line, and one on each edge of the line that fades out to totally transparent.
You will need a little bit of math to make sure that you extrude the quads along a vector that bisects the 2 segments equally (i.e. so that the angle between each segment and the extrusion vector is equal). This will ensure that you don't have gaps in the obtuse part of the join and overlaps in the acute parts.
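That bisector ("miter") calculation might look something like this in 2D (a sketch only; simd types assumed, and near-180-degree bends would need the miter length clamped):

```swift
import simd

// Offset for the join between two segments (prev->joint, joint->next):
// extrude along the normalized sum of the two segment normals, scaled so
// the stroke keeps a constant width around the bend.
func miterOffset(prev: simd_float2, joint: simd_float2, next: simd_float2,
                 halfWidth: Float) -> simd_float2 {
    func normal(_ a: simd_float2, _ b: simd_float2) -> simd_float2 {
        let d = simd_normalize(b - a)
        return simd_float2(-d.y, d.x)          // perpendicular to the segment
    }
    let n1 = normal(prev, joint)
    let n2 = normal(joint, next)
    let miter = simd_normalize(n1 + n2)        // bisects the two normals
    // Length correction; blows up as the bend approaches 180 degrees.
    return miter * (halfWidth / simd_dot(miter, n1))
}
```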
After all of this, you just need to use the vertex positions as the UV co-ordinates (probably scaled though) and allow the texture to wrap around.
Using this method, you should end up with a mesh that has a solid thick line running through the middle of your spline, with "fins" that taper off into complete transparency. This should approximate the effect you want quite closely while only rendering the relevant pixels (i.e. no giant areas of completely transparent pixels) and with very little memory overhead.
I've been a little vague here, as it's kind of hard to explain with text alone and without writing an in-depth tutorial. If you need more info, just comment on what you're stuck on and I'll elaborate further.
