I am writing a simple hex engine for an action RPG in XNA 3.1. I want to light the ground near the hero and torches just as they were lit in Diablo II. I thought the best way to do this was to calculate the field of view, hide any tiles and their contents that the player can't see, and draw a special "light" texture on top of any light source: a texture that is black with a white, blurred circle in its center.
I wanted to multiply this texture with the background (as in the multiply blending mode), but unfortunately I do not see an option for doing that in SpriteBatch. Could someone point me in the right direction?
Or perhaps there is another, better way to achieve a lighting model like the one in Diablo II?
If you were to multiply your light texture with the scene, you would darken the area, not brighten it.
You could try rendering with additive blending; this won't quite look right, but it is easy and may be acceptable. You will have to draw your light with a fairly low alpha so the light texture does not simply oversaturate that part of the image.
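A rough sketch of this additive approach, reusing the lightFadeOffTexture and lights names from the snippet further down (the alpha value is just a starting point to tune):
// Additive blending simply adds the light texture on top of the scene;
// a low-alpha tint keeps each light from blowing out that part of the image.
spriteBatch.Begin(SpriteBlendMode.Additive);
foreach (var light in lights)
    spriteBatch.Draw(lightFadeOffTexture, light.Area, new Color(255, 255, 255, 64));
spriteBatch.End();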
Another, more complicated, way of doing lighting is to draw all of your light textures (for all the lights in the scene) additively onto a second render target, and then multiply this texture with your scene. This should give much more realistic lighting, but has a larger performance overhead and is more complex.
Initialisation:
RenderTarget2D lightBuffer = new RenderTarget2D(graphicsDevice, screenWidth, screenHeight, 1, SurfaceFormat.Color);
Color ambientLight = new Color(0.3f, 0.3f, 0.3f, 1.0f);
Draw:
// set the render target and clear it to the ambient lighting
graphicsDevice.SetRenderTarget(0, lightBuffer);
graphicsDevice.Clear(ambientLight);
// additively draw all of the lights onto this texture. The lights can be coloured etc.
spriteBatch.Begin(SpriteBlendMode.Additive);
foreach (var light in lights)
spriteBatch.Draw(lightFadeOffTexture, light.Area, light.Color);
spriteBatch.End();
// change render target back to the back buffer, so we are back to drawing onto the screen
graphicsDevice.SetRenderTarget(0, null);
// draw the old, non-lit, scene
DrawScene();
// multiply the light buffer texture with the scene
spriteBatch.Begin(SpriteBlendMode.Additive, SpriteSortMode.Immediate, SaveStateMode.None);
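// Begin's blend mode is immediately overridden below to get a multiply blend:
// result = destination (scene) * source (light buffer)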
graphicsDevice.RenderState.SourceBlend = Blend.Zero;
graphicsDevice.RenderState.DestinationBlend = Blend.SourceColor;
spriteBatch.Draw(lightBuffer.GetTexture(), new Rectangle(0, 0, screenWidth, screenHeight), Color.White);
spriteBatch.End();
As far as I know there is no way to do this without using your own custom shaders.
A custom shader for this would work like so:
Render your scene to a texture
Render your lights to another texture
As a post process on a blank quad, sample the two textures and the result is Scene Texture * Light Texture.
This will output a lit scene, but it won't do any shadows. If you want shadows, I'd suggest following this excellent sample from Catalin Zima.
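As a rough sketch of that final combine step in XNA 3.1 (sceneTarget, lightTarget, multiplyEffect and the LightTexture parameter are all hypothetical names for this example):
// Assumed: sceneTarget and lightTarget already contain the rendered scene and the
// accumulated lights; multiplyEffect is a custom pixel shader returning scene * light.
graphicsDevice.SetRenderTarget(0, null);
multiplyEffect.Parameters["LightTexture"].SetValue(lightTarget.GetTexture());
spriteBatch.Begin(SpriteBlendMode.None, SpriteSortMode.Immediate, SaveStateMode.None);
multiplyEffect.Begin();
multiplyEffect.CurrentTechnique.Passes[0].Begin();
// The scene texture goes through the normal sprite pipeline (sampler 0);
// the shader multiplies it by LightTexture sampled at the same coordinates.
spriteBatch.Draw(sceneTarget.GetTexture(), new Rectangle(0, 0, screenWidth, screenHeight), Color.White);
spriteBatch.End();
multiplyEffect.CurrentTechnique.Passes[0].End();
multiplyEffect.End();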
Perhaps using the same technique as in the BloomEffect component could be an idea.
Basically, the effect grabs the rendered scene, calculates a bloom image from the brightest areas in the scene, then blurs and combines the two. The result is that areas are highlighted depending on their color.
The same approach could be used here. It will be simpler since you won't have to calculate the bloom image based on the background, only based on the position of the character.
You could even reuse this further to provide highlighting for other light sources as well, such as torches, magic effects and whatnot.
I am rendering a simple box:
MDLMesh(boxWithExtent: ...)
In my draw loop when I turn off back-face culling:
renderCommandEncoder.setCullMode(.none)
All depth comparison is disabled and the sides of the box are drawn completely wrong, with back-facing quads in front of front-facing ones.
Huh?
My intent is to include back-facing surfaces in the depth comparison, not ignore them. This is important for when I have, for example, a shape with semi-transparent textures that reveal the shape's internals, which have a different shading style. How do I force depth comparison?
UPDATE
So Warren's suggestion is an improvement but it is still not correct.
My depthStencilDescriptor:
let depthStencilDescriptor = MTLDepthStencilDescriptor()
depthStencilDescriptor.depthCompareFunction = .less
depthStencilDescriptor.isDepthWriteEnabled = true
depthStencilState = device.makeDepthStencilState(descriptor: depthStencilDescriptor)
Within my draw loop I set depth stencil state:
renderCommandEncoder.setDepthStencilState(depthStencilState)
The resultant rendering:
Description: this is a box mesh. Each box face uses a shader that paints a disk texture. The texture is transparent outside the body of the disk. The shader paints a red/white spiral texture on front-facing quads and a blue/black spiral texture on back-facing quads. The box sits in front of a camera-aligned quad textured with a mobil image.
Notice how one of the textures paints over the rear back-facing quad with the background texture color. Notice also that the rear-most back-facing quad is not drawn at all.
Actually, it is not possible to achieve the effect I am after. I basically want to do a simple Porter/Duff composite here, but that is order dependent. Order cannot be guaranteed here, so I am basically hosed.
I am building a game in XNA, but this question could apply to most 3D frameworks.
My scene contains several point lights. Each light has a position, intensity, color and radius. It also contains several objects that can be lit by the point lights.
I have successfully used a simple forward renderer to light these objects with up to 8 point lights at a time.
However, I also want to try some other effects, such as lit objects where the light slowly fades away each frame.
I want to use a multipass approach:
Each lit object has a texture and a lightmap (a rendertexture)
Then, each frame and for each object (a rough sketch of this loop follows the list):
Clear the lightmap (or fade it)
Draw each light to the lightmap, with falloff etc.
Render the object using texture * lightmap in the pixel shader
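A rough sketch of that per-frame loop, in XNA 4.0-style code (LitObject, PointLight, litObjects and DrawLightIntoLightmap are hypothetical placeholder names; the per-light pass itself is exactly the part this question is about):
foreach (LitObject obj in litObjects)
{
    // Clear (or fade) this object's lightmap render target.
    graphicsDevice.SetRenderTarget(obj.LightMap);
    graphicsDevice.Clear(Color.Black);

    // Accumulate each light into the lightmap, with falloff, using additive blending.
    foreach (PointLight light in lights)
        DrawLightIntoLightmap(obj, light);   // hypothetical per-light pass

    graphicsDevice.SetRenderTarget(null);
}
// Each object is then rendered normally, with its pixel shader returning
// tex2D(TextureSampler, uv) * tex2D(LightMapSampler, uv).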
My problem is related to texture coordinates. The lightmap should use the same UV mapping as the texture map, so that when I draw the objects, the color is:
tex2D(TextureSampler, uv) * tex2D(LightMapSampler, uv)
Or, put another way: in the lightmap pixel shader I need to compute the world position of each lightmap pixel, so that I can compute its distance to each light.
What coordinate transforms are required to achieve this?
I have two functions that I want to combine the results of:
drawAmbient
drawDirectional
They each work fine individually, drawing the scene with the ambient light only, or the directional light only. I want to show both the ambient and directional light but am having a bit of trouble. I try this:
[self drawAmbient];
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
[self drawDirectional];
glDisable(GL_BLEND);
but I only see the results from the first draw. I calculate the depth in the same way for both sets of draw calls. I could always just render to texture and blend the textures, but that seems redundant. Is there a way that I can add the lighting together when rendering to the default framebuffer?
You say you calculate the depth the same way in both passes. This is of course correct, but as the default depth comparison function is GL_LESS, nothing will actually be rendered in the second pass, since the depth is never less than what is currently in the depth buffer.
So for the second pass just change the depth test to
glDepthFunc(GL_EQUAL);
and then back to
glDepthFunc(GL_LESS);
Or you may also set it to GL_LEQUAL for the whole runtime to cover both cases.
As far as I know, you should render the lighting to separate render targets and then combine them. You will end up with the scene rendered into these targets:
textured without lighting
accumulated diffuse lighting (fill with the ambient color and additively render all light sources)
accumulated specular lighting (if you use a specular component)
Then combine the textures, so that final_color = textured * diffuse + specular.
I am rendering point sprites (using OpenGL ES 2.0 on iOS) as a user's drawing strokes. I am storing these points in vertex buffer objects such that I need to perform depth testing in order for the sprites to appear in the correct order when they're submitted for drawing.
I'm seeing an odd effect when rendering these drawing strokes, as shown by the following screenshot:
Note the background-coloured 'border' around the edge of the blue stroke where it is drawn over the green. The user drew the blue stroke after the green stroke, but when the VBOs are redrawn the blue stroke gets drawn first. When it comes time to draw the green stroke, depth testing kicks in, sees that it should be behind the blue stroke, and handles this with some success. It appears to me to be some kind of blending issue, or something to do with incorrectly calculating the colour in the fragment shader. The edges of all strokes should be transparent; however, it appears that the fragment shader combines them with the background texture when processing those fragments.
In my app I have created a depth renderbuffer, called glEnable(GL_DEPTH_TEST), and set glDepthFunc(GL_LEQUAL). I have experimented with glDepthMask() to no avail. Blending is set to glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA), and the point sprite colour uses premultiplied alpha values. The drawing routine is very simple:
Bind render-to-texture FBO.
Draw background texture.
Draw point sprites (from a number of VBOs).
Draw this FBO's texture to the main framebuffer.
Present the main framebuffer.
EDIT
Here is some code from the drawing routine.
Setup state prior to drawing:
glDisable(GL_DITHER);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
Drawing routine:
[drawingView setFramebuffer:drawingView.scratchFramebuffer andClear:YES];
glUseProgram(programs[PROGRAM_TEXTURE]);
[self drawTexture:[self textureForBackgroundType:self.backgroundType]];
glUseProgram(programs[PROGRAM_POINT_SPRITE]);
// ...
// Draw all VBOs containing point sprite data
// ...
[drawingView setFramebuffer:drawingView.defaultFramebuffer andClear:YES];
glUseProgram(programs[PROGRAM_TEXTURE]);
[self drawTexture:drawingView.scratchTexture];
[drawingView presentFramebuffer:drawingView.defaultFramebuffer];
Thanks for any help.
If you want to draw non-opaque geometry you have to z-sort it from back to front. This has been the only way to get proper blending for many years. These days there are some algorithms for order-independent transparency, like dual depth peeling, but they are not applicable to iOS.
I have a game object that manages several sprite objects. Each of the sprites overlaps the others a bit, and drawing them looks just fine when they are at 100% opacity. If I set their opacity to, say, 50%, that is when it all goes to pot, because any overlapping area is not 50% opaque due to the multiple layers.
EDIT: Ooops! For some reason I thought that I couldn't upload images. Anyway....
http://postimage.org/image/2fhcmn6s/ --> Here it is. Guess I need more rep for proper inclusion.
From left to right:
1. Multiple sprites, 100% opacity. Great!
2. Both are 50%, but notice how the overlap region distinguishes them as two sprites.
3. This is the desired behavior. They are 50% opaque, but in terms of the composite image.
What is the best way to mitigate this problem? Is a render target a good idea? What if I have hundreds of these 'multi-sprites'?
Hope this makes sense. Thanks!
Method 1:
If you care about the individual opacity of each sprite, then render the image on the background to a rendertarget texture of the same size using 50% or whatever opacity you want the sprite to have against the background. Then draw this rendertarget with 100% opacity.
In this way, all sprites will be blended against the background only, and other sprites will be ignored.
Method 2:
If you don't care about setting the individual opacity of each sprite, then you can just draw all sprites with 100% opacity to a rendertarget. Then draw that render target over your background at 50% opacity.
Performance concerns:
I mentioned two examples of drawing to rendertargets, each for a different effect.
Method 1:
You want to be able to specify a different opacity for each sprite.
If so, you need to render every sprite to a rendertarget and then draw that rendertarget texture to the final texture. Effectively, this is the same cost as drawing twice as many sprites as you need. In this case, that's 400 draw calls, which can be very expensive.
If you batch the calls though, and use a single large rendertarget for all of the sprites, you might get away with just 2 draw calls (depending on how big your sprites are, and the max size of a texture).
Method 2:
You don't need different opacity per each sprite.
In this case you can almost certainly get away with just 2 draw calls, regardless of sprite size.
Just batch all draw calls of the sprites (with 100% opacity) to draw to a rendertarget. That's one draw call.
Now draw that rendertarget on top of your background image with the desired opacity (e.g. 50% opacity), and all sprites will have this opacity.
This case is easier to implement.
The first thing your example images reminded me of is the "depth-buffer and translucent surfaces" problem.
In a 3D game you must sort your translucent surfaces from back-to-front and draw them only after you have rendered the rest of your scene - all with depth reading and writing turned on. If you don't do this you end up with your 3rd image, when you normally want your 2nd image with the glass being translucent over the top of what is behind it.
But you want the 3rd image - with some transparent surfaces obscuring other ones - so you could just deliberately cause this depth problem!
To do this you need to turn on depth reads and writes and set your depth function so that a second sprite drawn at the same depth as a previously drawn sprite does not render.
To achieve this in XNA 4.0 you need to pass, to SpriteBatch.Begin, a DepthStencilState with its DepthBufferFunction set to CompareFunction.Less (by default it is less-than-or-equal-to) and DepthBufferEnable and DepthBufferWriteEnable set to true.
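A minimal sketch of that state setup (spriteBatch and the non-depth states are just the usual defaults; only the depth settings matter here):
// Fail the depth test for pixels at the same depth as one already written,
// so a sprite drawn later at the same layer depth cannot overwrite an earlier one.
DepthStencilState depthState = new DepthStencilState
{
    DepthBufferEnable = true,
    DepthBufferWriteEnable = true,
    DepthBufferFunction = CompareFunction.Less
};

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                  SamplerState.LinearClamp, depthState, RasterizerState.CullCounterClockwise);
// Draw the unobscured sprite first, then the ones it should hide, all at the same layerDepth.
spriteBatch.End();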
There may be interactions with the sprite's layerDepth parameter (I cannot remember how it maps to depth by default).
You may also need to use BasicEffect as your shader for your sprite batch, specifically so you can set a projection matrix with appropriate near and far planes. This article explains how to do that. And you may also need to clear your depth buffer beforehand.
Finally, you need to draw your sprites in the correct order - with the unobscured sprite first.
I am not entirely sure whether this will work, or whether it will work reliably (perhaps you will get some kind of depth-fighting issue). But I think it's worth a try, given that you can leave your rendering code essentially normal and just adjust your render state.
You should try the stuff in Andrew's answer first, but if that doesn't work, you could still render all of the sprites (assuming they all have the same opacity) onto a RenderTarget(2D) with 100% opacity, and then render that RenderTarget to the screen with 50%.
Something like this in XNA 4.0:
RenderTarget2D rt = new RenderTarget2D(graphicsDevice,
graphicsDevice.PresentationParameters.BackBufferWidth,
graphicsDevice.PresentationParameters.BackBufferHeight);
graphicsDevice.SetRenderTarget(rt);
//Draw sprites
graphicsDevice.SetRenderTarget(null);
//Then draw rt (also a Texture2D) with 50% opacity. For example:
spriteBatch.Begin();
spriteBatch.Draw(rt, Vector2.Zero, Color.White * 0.5f);
spriteBatch.End();