Per-object runtime lightmap - XNA

I am building a game in XNA, but this question could apply to most 3D frameworks.
My scene contains several point lights. Each light has a position, intensity, color and radius. It also contains several objects that can be lit by the point lights.
I have successfully used a simple forward renderer to light these objects with up to 8 point lights at a time.
However, I also want to try some other effects, such as lit objects where the light slowly fades away each frame.
I want to use a multipass approach:
Each lit object has a texture and a lightmap (a rendertexture)
Then, each frame and for each object:
Clear the lightmap (or fade it)
Draw each light to the lightmap, with falloff etc.
Render the object using texture * lightmap in the pixel shader
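In code, the per-frame loop I have in mind looks roughly like this (LitObject, PointLight, lightMapEffect and litEffect are just placeholder names, and I'm using XNA 4.0-style render target calls):
// Sketch of the multipass idea; the open question is how lightMapEffect's
// vertex shader should map lightmap texels back to world positions.
// Pass 1: fill each object's lightmap.
foreach (LitObject obj in litObjects)
{
    graphicsDevice.SetRenderTarget(obj.LightMap);
    graphicsDevice.Clear(Color.Black); // or draw a translucent black quad instead, to fade the old contents
    foreach (PointLight light in pointLights)
    {
        lightMapEffect.Parameters["LightPosition"].SetValue(light.Position);
        lightMapEffect.Parameters["LightColor"].SetValue(light.Color.ToVector3());
        lightMapEffect.Parameters["LightRadius"].SetValue(light.Radius);
        obj.DrawGeometry(lightMapEffect); // additive blending so lights accumulate, falloff computed in the shader
    }
}
// Pass 2: draw the scene, sampling texture * lightmap in the pixel shader.
graphicsDevice.SetRenderTarget(null);
foreach (LitObject obj in litObjects)
{
    litEffect.Parameters["Texture"].SetValue(obj.Texture);
    litEffect.Parameters["LightMap"].SetValue(obj.LightMap);
    obj.DrawGeometry(litEffect);
}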
My problem is related to texture coordinates. The lightmap should use the same UV mapping as the texture map, so that when I draw the objects, the color is:
tex2D(TextureSampler, uv) * tex2D(LightMapSampler, uv)
Or, put another way: in the lightmap pixel shader I need to compute the world position of each lightmap texel so I can compute its distance to each light.
What coordinate transforms are required to achieve this?

Related

OpenGL Image warping using lookup table

I am working on an Android application that slims or fattens faces by detecting them. Currently, I have achieved that by using the thin-plate spline algorithm.
http://ipwithopencv.blogspot.com.tr/2010/01/thin-plate-spline-example.html
The problem is that the algorithm is not fast enough for me, so I decided to switch to OpenGL. After some research, I see that a lookup table texture is the best option for this. I have a set of control points for the source image and their new positions for the warp effect.
How should I create lookup table texture to get warp effect?
Are you really sure you need a lookup texture?
It seems it'd be better if you had a textured rectangular mesh (or a non-rectangular mesh, of course, since the face detection algorithm you have most likely returns a face-like mesh) and warped it according to the algorithm:
Not only would you be able to do that in a vertex shader, thus processing each mesh node in parallel, but there would also be fewer values to process compared to generating a texture dynamically.
The most compatible method is to give each mesh point a Y coordinate of 0 and an X coordinate that stores the mesh index, and then pass a texture (maybe even a buffer texture, if the target devices support it) to the vertex shader, where the R and G channels at that index contain the desired X and Y coordinates.
Inside the vertex shader, the coordinates are to be loaded from the texture.
This approach allows for dynamic warping without reloading geometry, if the target data texture is properly updated — for example, inside a pixel shader.
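The question targets Android/OpenGL ES, but the data layout described above is easy to sketch in C#/XNA terms; gridWidth, gridHeight, controlPoints, targetPoints and ComputeWarpedGrid are assumptions used only for illustration:
// Each vertex stores only its index; the warped positions live in a
// floating-point texture that can be rewritten every frame.
int vertexCount = gridWidth * gridHeight;
VertexPosition[] vertices = new VertexPosition[vertexCount];
for (int i = 0; i < vertexCount; i++)
    vertices[i] = new VertexPosition(new Vector3(i, 0, 0)); // X = mesh index, Y = 0
// R and G channels hold the warped X and Y for each vertex index.
Texture2D positionTexture = new Texture2D(graphicsDevice, vertexCount, 1, false, SurfaceFormat.Vector2);
Vector2[] warpedPositions = ComputeWarpedGrid(controlPoints, targetPoints); // hypothetical: thin-plate spline evaluated on the grid
positionTexture.SetData(warpedPositions);
// In the vertex shader, the position is then fetched from positionTexture
// at texture coordinate ((index + 0.5) / vertexCount, 0.5).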

Drawing Curves using XNA

I've been making progress in a fan-replicated game I'm coding, but I'm stuck with this problem.
Right now I'm drawing a texture pixel by pixel along the curve path, but this cuts the frame rate from 4000 to 50 on long curves.
I need to store per-pixel Vector2 + length data anyway, so that I can produce constant-speed movement along the curve, and I loop through it to draw the curve as well.
The curves I need to be able to draw are Bézier, circular and Catmull-Rom.
Any ideas of how to make it more efficient?
Maybe I have misunderstood the question, but I did this once:
Create the curve and sample x points on it. (Red dots)
Create a mesh from it by calculating the cross vector of each point. (Green lines)
Build a quad between all of these. So basically 5 of them in my picture.
Set the U coordinate to be on the perpendicular plane and the V coordinate to follow the curve length, so 0 at the start and 1 at the end of it.
You can of course scale V if you want your texture to repeat.
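A sketch of that construction in C#/XNA; samples are the points sampled along the curve (the red dots), and width and textureRepeat are assumptions:
// Build a textured ribbon (triangle strip) along the sampled curve points.
VertexPositionTexture[] BuildCurveRibbon(Vector2[] samples, float width, float textureRepeat)
{
    var vertices = new VertexPositionTexture[samples.Length * 2];
    for (int i = 0; i < samples.Length; i++)
    {
        // Tangent along the curve (forward/backward difference at the ends).
        Vector2 dir = i < samples.Length - 1 ? samples[i + 1] - samples[i]
                                             : samples[i] - samples[i - 1];
        dir.Normalize();
        // Perpendicular ("cross") vector used to push out the two sides of each quad.
        Vector2 normal = new Vector2(-dir.Y, dir.X);
        // V follows the curve (0 at the start, 1 at the end, scaled for repeats);
        // use accumulated arc length instead of the index for even texel spacing.
        float v = (float)i / (samples.Length - 1) * textureRepeat;
        vertices[i * 2 + 0] = new VertexPositionTexture(new Vector3(samples[i] + normal * width, 0), new Vector2(0, v));
        vertices[i * 2 + 1] = new VertexPositionTexture(new Vector3(samples[i] - normal * width, 0), new Vector2(1, v));
    }
    // Draw with: DrawUserPrimitives(PrimitiveType.TriangleStrip, vertices, 0, vertices.Length - 2);
    return vertices;
}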
Any ideas of how to make it more efficient?
Assuming the texture needs to be dynamic, draw the texture on the GPU-side using a shader. Drawing it on the CPU-side is not only slow, but bogs down both the CPU and GPU when you need to send it back to the GPU every frame. Much better to draw it GPU-side.
I need to store pixel by pixel Vector2 + length data anyway
The shader can store additional information in the texture; for example, even though you may allocate an RGBA texture, it doesn't have to store color information, since it is your shaders that will interpret the data.
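For instance, a floating-point render target can carry whatever your shaders agree on; a sketch with an assumed channel layout (curveResolution is an assumption):
// Assumed layout: R, G = point on the curve (x, y), B = accumulated length, A = unused.
// The writing and reading shaders just have to agree on this interpretation.
RenderTarget2D curveData = new RenderTarget2D(
    graphicsDevice, curveResolution, 1, false, SurfaceFormat.Vector4, DepthFormat.None);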

What are the correct cube map coordinates for openGL ES 2.0 on iOS after loading the cube map with GLKit?

In order to debug my shader, I am trying to display just the front face of the cube map.
The cube map is a 125x750 image with the 6 faces on top of each other:
First, I load the cube map with GLKit:
_cubeTexture = [GLKTextureLoader cubeMapWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"uffizi_cube_map_ios" ofType:@"png"] options:kNilOptions error:&error];
Then I load it into the shader:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, self.cubeTexture.name);
glUniform1i( glGetUniformLocation( self.shaderProgram, "cube"), 0);
Then in the fragment shader:
gl_FragColor = textureCube(cube, vec3(-1.0+2.0*(gl_FragCoord.x/resolution.x),-1.0+2.0*(gl_FragCoord.y/resolution.y),1.0));
This displays a distorted image which seems to be a portion of the top of the cube map:
It shouldn't be distorted, and it should show the right face, not the top face.
I can't find any documentation that describes how the coordinates map to the cube, so what am I doing wrong?
It seems that there is a problem with cubeMapWithContentsOfFile. The cubeMapWithContentsOfFiles method (the one that takes an array of 6 images) works perfectly on the simulator. (There is a different issue with both methods on device).
To visualize how texture coordinates work for cube maps, picture a cube centered at the origin, with the faces at distance 1 from the origin, and with the specified cube map image on each face.
The texture coordinates can then be seen as direction vectors. Starting at the origin, the 3 components define a vector that can point in any direction. The ray defined by the vector will then intersect one of the 6 cube faces at a given point. This is the point where the corresponding cube map image is sampled during texturing.
For example, take a vector that points in a direction that is closest to the positive z axis. The ray defined by this vector intersects the top face of the cube. Therefore, the top (POSITIVE_Z) image of the cube map is sampled, at the point where the ray intersects the face.
Equivalent rules apply to all other directions. The vector component with the largest absolute value determines which face is sampled, and the intersection point determines the position within that face's image.
The exact rules and formula can be found in the spec document. For example in the latest spec (OpenGL 4.5), see Section 8.13 "Cube Map Texture Selection", with the matching table 8.19. But as long as you understand that the texture coordinates define a direction vector, you have the main aspect covered.
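As a plain-code illustration of the selection rule (not GPU code), the face pick could look like this in C#:
// The component with the largest absolute value picks the face; its sign picks
// positive vs. negative. Ties are resolved arbitrarily here.
CubeMapFace SelectFace(Vector3 r)
{
    float ax = Math.Abs(r.X), ay = Math.Abs(r.Y), az = Math.Abs(r.Z);
    if (ax >= ay && ax >= az)
        return r.X >= 0 ? CubeMapFace.PositiveX : CubeMapFace.NegativeX;
    if (ay >= az)
        return r.Y >= 0 ? CubeMapFace.PositiveY : CubeMapFace.NegativeY;
    return r.Z >= 0 ? CubeMapFace.PositiveZ : CubeMapFace.NegativeZ;
}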
How you determine the texture coordinates really depends on what you want to achieve. Common cases include:
Using the normal vector as the cube map texture coordinate. This can for example be used for pre-computed lighting effects, where the content of the cube map image contains pre-computed lighting results for each possible normal direction.
Using the reflection vector as the cube map texture coordinate. This supports the implementation of environment mapping. The content of the cube map is a picture of the environment.
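For the environment-mapping case, the lookup direction is just the view direction reflected about the surface normal; in C#/XNA terms (cameraPosition, surfacePosition and surfaceNormal are assumptions):
// Reflect the view direction about the surface normal and use the result
// directly as the cube map texture coordinate (a direction vector).
Vector3 incident = Vector3.Normalize(surfacePosition - cameraPosition);
Vector3 reflection = Vector3.Reflect(incident, Vector3.Normalize(surfaceNormal));
// In GLSL this corresponds to textureCube(cube, reflection).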

Water surface sample for iOS

I'm looking for a water surface effect sample like Pocket Pond HD. I have found some tutorials:
iPhone OpenGL demo water waves
Waves effect
However, it's sketchy.
It is very simple.
You just have to make a 2D heightmap (a 2D array of the water height at each point). From the heightmap, you can calculate (approximate, interpolate) a normal at each point from the neighbouring height values.
Then you perform a "simple raytracing". You "refract each ray" depending on the normal, intersect it with the bottom plane, and fetch a color from the texture at that point.
Practically: you make a triangle mesh from the heightmap and render those triangles. You can send normals in the vertex buffer or compute them in the vertex shader. The raytracing is done in the fragment shader. The direction of each ray can be (0, 0, 1). You refract it by the current normal and scale the result so the Z coordinate equals the water depth. The new X and Y coordinates are the texture coordinates.
To make an animation, just update the heightmap in time.
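For the normal approximation step, a minimal sketch using central differences on the heightmap (height is the 2D array and cellSize the grid spacing; both names are assumptions):
// Approximate the water surface normal at grid cell (x, y) from neighbouring heights.
Vector3 ApproximateNormal(float[,] height, int x, int y, float cellSize)
{
    int w = height.GetLength(0), h = height.GetLength(1);
    // Central differences, clamped at the borders.
    float dx = height[Math.Min(x + 1, w - 1), y] - height[Math.Max(x - 1, 0), y];
    float dy = height[x, Math.Min(y + 1, h - 1)] - height[x, Math.Max(y - 1, 0)];
    // The surface z = height(x, y) has the (un-normalized) normal (-dh/dx, -dh/dy, 1).
    Vector3 normal = new Vector3(-dx / (2 * cellSize), -dy / (2 * cellSize), 1f);
    normal.Normalize();
    return normal;
}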

How to multiply two sprites in SpriteBatch Draw XNA (2D)

I am writing a simple hex engine for an action RPG in XNA 3.1. I want to light the ground near the hero and the torches just as it was lit in Diablo II. I thought the best way to do so was to calculate the field of view, hide any tiles (and their contents) that the player can't see, and draw a special "light" texture on top of any light source: a texture that is black with a white, blurred circle in its center.
I wanted to multiply this texture with the background (as in blending mode: multiply), but, unfortunately, I do not see an option for doing that in SpriteBatch. Could someone point me in the right direction?
Or perhaps there is another, better way to achieve a lighting model like the one in Diablo II?
If you were to multiply your light texture with the scene, you would darken the area, not brighten it.
You could try rendering with additive blending; this won't quite look right, but it is easy and may be acceptable. You will have to draw your light with a fairly low alpha so the light texture doesn't just oversaturate that part of the image.
Another, more complicated, way of doing lighting is to draw all of your light textures (for all the lights in the scene) additively onto a second render target, and then multiply this texture with your scene. This should give much more realistic lighting, but has a larger performance overhead and is more complex.
Initialisation:
RenderTarget2D lightBuffer = new RenderTarget2D(graphicsDevice, screenWidth, screenHeight, 1, SurfaceFormat.Color);
Color ambientLight = new Color(0.3f, 0.3f, 0.3f, 1.0f);
Draw:
// set the render target and clear it to the ambient lighting
graphicsDevice.SetRenderTarget(0, lightBuffer);
graphicsDevice.Clear(ambientLight);
// additively draw all of the lights onto this texture. The lights can be coloured etc.
spriteBatch.Begin(SpriteBlendMode.Additive);
foreach (var light in lights)
{
    spriteBatch.Draw(lightFadeOffTexture, light.Area, light.Color);
}
spriteBatch.End();
// change render target back to the back buffer, so we are back to drawing onto the screen
graphicsDevice.SetRenderTarget(0, null);
// draw the old, non-lit, scene
DrawScene();
// multiply the light buffer texture with the scene
spriteBatch.Begin(SpriteBlendMode.Additive, SpriteSortMode.Immediate, SaveStateMode.None);
graphicsDevice.RenderState.SourceBlend = Blend.Zero;
graphicsDevice.RenderState.DestinationBlend = Blend.SourceColor;
spriteBatch.Draw(lightBuffer.GetTexture(), new Rectangle(0, 0, screenWidth, screenHeight), Color.White);
spriteBatch.End();
As far as I know there is no way to do this without using your own custom shaders.
A custom shader for this would work like so:
Render your scene to a texture
Render your lights to another texture
As a post-process on a full-screen quad, sample the two textures; the result is Scene Texture * Light Texture.
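In XNA 3.1 the final combine pass might look roughly like this; combineEffect is a hypothetical Effect whose pixel shader samples both textures and returns their product:
// Draw a full-screen sprite through the custom combine effect.
spriteBatch.Begin(SpriteBlendMode.None, SpriteSortMode.Immediate, SaveStateMode.None);
combineEffect.Parameters["LightTexture"].SetValue(lightTexture); // parameter name is an assumption
combineEffect.Begin();
combineEffect.CurrentTechnique.Passes[0].Begin();
// The scene texture is bound as the sprite texture; the pixel shader multiplies
// it with LightTexture sampled at the same screen coordinates.
spriteBatch.Draw(sceneTexture, new Rectangle(0, 0, screenWidth, screenHeight), Color.White);
combineEffect.CurrentTechnique.Passes[0].End();
combineEffect.End();
spriteBatch.End();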
This will output a lit scene, but it won't do any shadows. If you want shadows, I'd suggest following this excellent sample from Catalin Zima.
Perhaps using the same technique as in the BloomEffect component could be an idea.
Basically, what the effect does is grab the rendered scene, calculate a bloom image from the brightest areas in the scene, then blur and combine the two. The result is that areas are highlighted depending on their color.
The same approach could be used here. It will be simpler since you won't have to calculate the bloom image based on the background, only based on the position of the character.
You could even reuse this further to provide highlighting for other light sources as well, such as torches, magic effects and whatnot.
