Adjust brightness/contrast/gamma of scene in DirectX?

In a Direct3D application filled with Sprite objects from D3DX, I would like to be able to globally adjust the brightness and contrast. The contrast is important, if at all possible.
I've seen this question here about OpenGL: Tweak the brightness/gamma of the whole scene in OpenGL
But that doesn't give me what I would need in a DirectX context. I know this is something I could probably do with a pixel shader, too, but that seems like shooting a fly with a bazooka, and I worry about backwards compatibility with older GPUs that would have to run any shaders in software. It seems like this should be possible; I remember even much older games like the original Half-Life having settings like this, well before the days of shaders.
EDIT: Also, please note that this is not a fullscreen application, so this would need to be something that would just affect the one Direct3D device and that is not a global setting for the monitor overall.

A lot of games increase brightness by literally applying a full-screen poly over the screen with additive blending, i.e.
SRCBLEND = ONE
DESTBLEND = ONE
and then applying a texture with a colour of (1, 1, 1, 1) will increase the brightness(ish) of every pixel by 1.
To adjust contrast, you need to do something similar but MULTIPLY by a constant factor. This requires the following blend settings:
SRCBLEND = DESTCOLOR
DESTBLEND = ZERO
This way, if you blend a pixel with a value of (2, 2, 2, 2), you will change the contrast.
Gamma is a far more complicated beast, and I'm not aware of a way to fake it like the above.
Neither of these solutions is entirely accurate, but they will give you a result that looks "slightly" correct. It's much more complicated to do either properly, but by using those methods you'll see effects that look INCREDIBLY similar to various games you've played, and that's because that's EXACTLY what they are doing ;)
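For reference, here is a minimal Direct3D 9 sketch of those two passes (untested, assuming a valid LPDIRECT3DDEVICE9 named device; the helper name and colour values are only illustrative):
struct ScreenVertex { float x, y, z, rhw; DWORD color; };
const DWORD SCREEN_FVF = D3DFVF_XYZRHW | D3DFVF_DIFFUSE;

void ApplyBrightnessContrast(LPDIRECT3DDEVICE9 device, float w, float h,
                             DWORD addColor, DWORD mulColor)
{
    // One pre-transformed quad covering the whole viewport.
    ScreenVertex quad[4] = {
        { 0.0f, 0.0f, 0.0f, 1.0f, 0 },
        { w,    0.0f, 0.0f, 1.0f, 0 },
        { 0.0f, h,    0.0f, 1.0f, 0 },
        { w,    h,    0.0f, 1.0f, 0 },
    };

    // Use the vertex diffuse colour directly; no texture bound, no Z test.
    device->SetTexture(0, NULL);
    device->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_SELECTARG2);
    device->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
    device->SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_SELECTARG2);
    device->SetTextureStageState(0, D3DTSS_ALPHAARG2, D3DTA_DIFFUSE);
    device->SetRenderState(D3DRS_ZENABLE, FALSE);
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    device->SetFVF(SCREEN_FVF);

    // Brightness pass: frame buffer + addColor (SRCBLEND = ONE, DESTBLEND = ONE).
    for (int i = 0; i < 4; ++i) quad[i].color = addColor;
    device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ONE);
    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
    device->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(ScreenVertex));

    // Contrast pass: frame buffer * mulColor (SRCBLEND = DESTCOLOR, DESTBLEND = ZERO).
    // Note a plain DWORD colour can only express factors in [0, 1], so this pass
    // can only scale the image down; factors above 1 need a different trick.
    for (int i = 0; i < 4; ++i) quad[i].color = mulColor;
    device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_DESTCOLOR);
    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ZERO);
    device->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(ScreenVertex));

    device->SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
}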

For a game I made in XNA, I used a pixel shader. Here is the code. [disclaimer] I'm not entirely sure the logic is right and some settings can have weird effects (!) [/disclaimer]
float offsetBrightness = 0.0f;   // can be set from my C# code, [-1, 1]
float offsetContrast = 0.0f;     // can be set from my C# code, [-1, 1]
sampler2D screen : register(s0); // can be set from my C# code

float4 PSBrightnessContrast(float2 inCoord : TEXCOORD0) : COLOR0
{
    return (tex2D(screen, inCoord.xy) + offsetBrightness) * (1.0 + offsetContrast);
}

technique BrightnessContrast
{
    pass Pass1 { PixelShader = compile ps_2_0 PSBrightnessContrast(); }
}

You didn't specify which version of DirectX you're using, so I'll assume 9. Probably the best you can do is to use the gamma-ramp functionality. Pushing the ramp up and down increases or decreases brightness. Increasing or decreasing the gradient alters contrast.
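For illustration, a minimal sketch of that approach (untested, assuming a Direct3D 9 device named device; note that on D3D9 the gamma ramp is generally only honoured for full-screen swap chains, so it may not satisfy the windowed-mode requirement above):
// Builds a linear ramp where 'brightness' shifts the curve up or down and
// 'contrast' changes its slope around the midpoint, then uploads it.
void SetBrightnessContrastRamp(LPDIRECT3DDEVICE9 device,
                               float brightness,   // e.g. -0.5 .. +0.5
                               float contrast)     // e.g.  0.5 .. 2.0
{
    D3DGAMMARAMP ramp;
    for (int i = 0; i < 256; ++i)
    {
        float v = i / 255.0f;
        v = (v - 0.5f) * contrast + 0.5f + brightness;  // slope, then offset
        if (v < 0.0f) v = 0.0f;
        if (v > 1.0f) v = 1.0f;
        WORD w = (WORD)(v * 65535.0f);
        ramp.red[i] = ramp.green[i] = ramp.blue[i] = w;
    }
    device->SetGammaRamp(0, D3DSGR_NO_CALIBRATION, &ramp);
}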

Another approach is to edit all your textures. It would basically be the same approach as the gamma ramp, except you apply the ramp manually to the pixel data in each of your image files when generating the textures.
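As a rough sketch of what that could look like at load time (untested, assuming an A8R8G8B8 texture created in a lockable pool such as D3DPOOL_MANAGED, and a 256-entry look-up table built from the same kind of ramp as above):
void ApplyRampToTexture(LPDIRECT3DTEXTURE9 texture, const BYTE lut[256])
{
    D3DSURFACE_DESC desc;
    texture->GetLevelDesc(0, &desc);

    D3DLOCKED_RECT locked;
    if (FAILED(texture->LockRect(0, &locked, NULL, 0)))
        return;

    for (UINT y = 0; y < desc.Height; ++y)
    {
        BYTE* row = (BYTE*)locked.pBits + y * locked.Pitch;
        for (UINT x = 0; x < desc.Width; ++x)
        {
            BYTE* px = row + x * 4;  // A8R8G8B8 is stored as B, G, R, A bytes
            px[0] = lut[px[0]];      // blue
            px[1] = lut[px[1]];      // green
            px[2] = lut[px[2]];      // red
        }
    }
    texture->UnlockRect(0);
}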

Just use a shader. Anything that doesn't at least support SM 2.0 is ancient at this point.

The answers in this forum thread may be of help, although I have never tried this myself.
http://social.msdn.microsoft.com/Forums/en-US/windowsdirectshowdevelopment/thread/95df6912-1e44-471d-877f-5546be0eabf2

Related

Fading a 3D object into the background, using D3D9, SM3 & HLSL

I have a simple program that renders a couple of 3D objects, using Direct3D 9 and HLSL. I'm just starting off with HLSL; I have no experience with 3D rendering.
I am able to change the texture & color of the models and fade between two textures without problems, however I was wondering what the best way to simply fade a 3D object (blend it with the background) would be. I would assume that it wouldn't be done as fading between two textures (using lerp), since I want the object faded to the entire background, so there would be many different textures behind it.
I'm using the LPD3DXEFFECT as my effect class, DrawIndexedPrimitive as the drawing function in each pass, and I only have a single pass. I'm also using Shader Model 3, as this is an older project.
The only way I could think of would be to get the color of the pixel before applying any changes, and then do calculations on it with the color of the model's texture to produce a faded pixel. However, after looking around the internet, it does not appear to be possible to read the existing color of a pixel from within HLSL before writing to it.
Is it even possible to do something like this using HLSL? Am I missing something that could assist me here?
Any help is appreciated!
Forgive me if I'm misunderstanding, but it sounds like you're trying to simulate transparency instead of using built-in transparency.
If you're trying to get the color of the pixels behind the object and want to avoid using transparency, I'd start by trying to use the last rendered frame as a texture, then reference that texture in your current shader. There may be some way to do it within the same frame - to force all other rendering to go first, then handle the one object - but I don't know it.
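As a rough sketch of the "last rendered frame as a texture" idea in Direct3D 9 (untested; frameTex, width, height and the effect parameter name are only illustrative), you could copy the back buffer into a render-target texture after drawing the background and bind that texture to the object's effect:
// Created once: a render-target texture matching the back buffer size.
LPDIRECT3DTEXTURE9 frameTex = NULL;
device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                      D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &frameTex, NULL);

// Each frame, after the background has been drawn:
LPDIRECT3DSURFACE9 backBuffer = NULL;
LPDIRECT3DSURFACE9 texSurface = NULL;
device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);
frameTex->GetSurfaceLevel(0, &texSurface);
device->StretchRect(backBuffer, NULL, texSurface, NULL, D3DTEXF_NONE);
texSurface->Release();
backBuffer->Release();

// Bind it to the effect so the pixel shader can sample the "background".
effect->SetTexture("backgroundTex", frameTex);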
After a long grind, I finally found a very good workaround for my problem, and I will try to explain my understanding of it for anyone else who has a similar issue. Thanks to Alexander Stewart for suggesting that there may be a built-in way to do it.
Method Description
Instead of taking care of the background fade in the HLSL pixel shader, there is another way to do it, using a method called Frame Buffer Alpha Blending (full MS Docs documentation: https://learn.microsoft.com/en-us/windows/win32/direct3d9/frame-buffer-alpha).
The basic idea behind this method is to provide a simple way of blending a given pixel that is to be rendered, with the existing pixel on the screen. There is a formula that is followed: FinalColor = ObjectPixelColor * SourceBlendFactor + BackgroundPixelColor * DestinationBlendFactor, all of these "variables" being groups of 4 float values, in the format (R, G, B, A).
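For example, with SourceBlendFactor = (A, A, A, A) and DestinationBlendFactor = (1-A, 1-A, 1-A, 1-A) and A = 0.25, the object contributes 25% and the existing background 75% of the final colour, which is exactly the kind of fade I was after.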
How I Implemented it
Before doing anything with the actual shaders, in my Visual Studio C++ file I have to pass a few flags to my render device (I used LPDIRECT3DDEVICE9 as my device class). I had to set render states for both D3DRS_SRCBLEND and D3DRS_DESTBLEND, which correspond to SourceBlendFactor and DestinationBlendFactor respectively in the formula above. These are the factors that multiply my object and background pixel colors. There are many possible values that can be assigned to D3DRS_SRCBLEND and D3DRS_DESTBLEND; the full list is available in the MS Docs link above, but in order to achieve what I wanted (simply a way to fade an object into the background with an alpha number going from 0 to 1), I figured out the flags should be like this: SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA); SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);.
After setting these flags, before passing through my shaders & rendering, I just needed to set one more flag: SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);. I was also able to toggle between TRUE and FALSE here, without changing anything else, with no rendering problems (although my project was very simple, so it will probably cause issues on larger projects). You can then pass any arguments you want, such as the alpha number, to the HLSL shader as a global variable (I did it using SetValue()).
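Putting those calls together, a minimal sketch of the setup described above (assuming a valid LPDIRECT3DDEVICE9 named device and an LPD3DXEFFECT named effect; the parameter name fadeAlpha is only an example):
// Standard source-over blending:
// FinalColor = ObjectPixelColor * SrcAlpha + BackgroundPixelColor * (1 - SrcAlpha)
device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

// Pass the fade amount (0 = fully faded, 1 = opaque) to the shader as a global.
float fadeAlpha = 0.5f;
effect->SetValue("fadeAlpha", &fadeAlpha, sizeof(fadeAlpha));

// ...then BeginPass / DrawIndexedPrimitive / EndPass as usual, with the pixel
// shader writing fadeAlpha into the output colour's alpha channel.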
Going back to my HLSL shader: after these changes, passing a color float4 variable taken from the tex2D() function in my pixel shader with an alpha value between 0 and 1 yielded the correct alpha, provided there aren't other issues (another issue that I had but hadn't realized at the time was that my transparent object was actually rendering before the background, so I can only recommend checking the rendering order when working on rendering projects).
I'm sure there could have probably been a better way of implementing this with the latest DirectX, but my compiler only supports Shader Model 3 and lower.

Dissolve SKShader works as expected on simulator, strange behaviour on actual device

I encountered weird behaviour when trying to create a dissolve shader for iOS SpriteKit. I have this basic shader that for now only changes the alpha of the texture depending on the black value of a noise texture:
let shader = SKShader(source: """
void main() {
    vec4 colour = texture2D(u_texture, v_tex_coord);
    float noise = texture2D(noise_tex, v_tex_coord).r;
    gl_FragColor = colour * noise;
}
""", uniforms: [
    SKUniform(name: "noise_tex", texture: spriteSheet.textureNamed("dissolve_noise"))
])
Note that this code is called in the spriteSheet preload callback.
On the simulator this consistently gives the expected result, i.e. a texture with different alpha values all over the place. On an actual iOS 14.5.1 device it varies:
1. Applied directly to an SKSpriteNode - it makes the whole texture semi-transparent with a single value.
2. Applied to an SKEffectNode with the SKSpriteNode as its child - I see a miniaturized part of the whole sprite sheet.
3. Same as 2, but the texture is created from an image outside the sprite sheet - it works as on the simulator (and as expected).
Why does it behave like this? Considering this needs to work on iOS 9 devices, I'm worried that 3 won't work everywhere. So I'd like to understand why it happens and ideally find a sure way to force 1, or at least 2, to work on all devices.
After some more testing I finally figured out what is happening. On devices, the textures in the shader are whole sprite sheets instead of separate textures, so the coordinates go all over the place (which actually makes more sense than the simulator behaviour, now that I think about it).
So depending on whether I want 1 or 2, I need to apply different maths. 2 is easier, since the display texture is first rendered onto a buffer, so v_tex_coord will cover the full [0.0, 1.0] range and all I need is the noise texture's rect to do the appropriate transform. For 1, I additionally need to provide the display texture's rect, first map the coordinates into [0.0, 1.0] myself, and then apply that to the noise coordinates.
This will work both with sprite sheets loaded into the shader and with separate images; in the latter case it will just do some unnecessary calculations.
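(For reference, the normalisation for case 1 would be something along the lines of normCoord = (v_tex_coord - rect.origin) / rect.size, with rect being the sprite's texture rect within the sheet; the names here are only illustrative.)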

Firemonkey does strange, bizarre things with Alpha

Working with Delphi / Firemonkey XE8. Had some decent luck with it recently, although you have to hack the heck out of it to get it to do what you want. My current project is to evaluate its low-level 3D capabilities to see if I can use them as a starting point for a game project. I also know Unity3D quite well, and am considering using Unity3D instead, but I figure that Delphi / Firemonkey might give me some added flexibility in my game design because it is so minimal.
So I decided to dig into an Embarcadero-supplied sample... specifically the LowLevel3D sample. This is the cross-platform sample that shows you how to do nothing other than draw a rotating square on the screen with some custom shaders of your choice and have it look the same on all platforms (although it actually doesn't work AT ALL the same on all platforms... but we won't get into that).
Embc does not supply the original uncompiled shaders for the project (which I might add is really lame), and requires you to supply your own compatible shaders (some compiled, some not) for the various platforms you're targeting (also lame)...so my first job has been to create a shader that would work with their existing sample that does something OTHER than what the sample already does. Specifically, if I'm creating a 2D game, I wanted to make sure that I could do sprites with alpha transparency, basic stuff.... if I can get this working, I'll probably never have to write another shader for the whole game.
After pulling my hair out for many hours, I came up with this little shader that works with the same parameters as the demo.
Texture2D mytex0 : register(t0);
Texture2D mytex1 : register(t1);
float4 cccc : register(v0);

struct PixelShaderInput
{
    float4 Pos : COLOR;
    float2 Tex : TEXCOORDS;
};

SamplerState g_samLinear
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

RasterizerState MyCull
{
    FrontCounterClockwise = FALSE;
};

float4 main(PixelShaderInput pIn) : SV_TARGET
{
    float4 cc, c;
    float4 ci = mytex1.Sample(g_samLinear, pIn.Tex.xy);
    c = ci;
    c.a = 0; // <----- DOES NOT actually SET ALPHA TO ZERO ... THIS IS A PROBLEM
    cc = c;
    return cc;
}
Never mind that it doesn't actually do much with the parameters, but check out the line where I set the output's ALPHA to 0. Well... I found that this actually HAS NO EFFECT!
But it gets spookier than this. I found that turning on CULLING in the Delphi App FIXED this issue. So I figure... no big deal then, I'll just manually draw both sides of the sprite... right? Nope! When I manually drew a double sided sprite.. the problem came back!
Check this image: shader is ignoring alpha=0 when double-sided
In the above picture, clearly alpha is SOMEWHAT obeyed because the clouds are not surrounded by a black box; however, the cloud itself is super saturated. (I find that if I multiply rgb*a, then the colors come out approximately right, but I'm not going to do that in real life for obvious reasons.)
I'm new to the concept of writing custom shaders. Any insight is appreciated.

Image/Rectangle partial fade in xna

How do I partially fade a rectangle or an image in XNA like so:
I'm using XNA 3.1 and SpriteBatch.Draw(). I need it to be partially transparent so I can see what is behind it.
Just to add to Andrew's answer, there is a third (much slower) way to do this without writing a shader or a new batcher. Just use Texture2D's GetData method to extract the pixel data, go through it in a for loop changing the alpha values the way you want, and then use SetData to put it back. This is a HORRIBLE way to do things if you are constantly changing the alpha values, but it looks like you just want to change them once, so you'll just have additional overhead when loading the program and everything should work smoothly after that. Also, if you are only doing this to a small number of images, the performance difference is practically negligible. Here's some code to get you started:
Color[] texColors = new Color[myTexture.Width * myTexture.Height];
myTexture.GetData<Color>(texColors);
for (int i = 0; i < texColors.Length; i++)
{
    // change alpha values the way you want
}
myTexture.SetData<Color>(texColors);
The "correct" way to do this would be to stop using SpriteBatch and manually draw quads or write your own sprite batcher instead. This way you could individually control the vertex alpha values.
If you want a quick, somewhat hacky way to do it, add a custom pixel shader to your sprite batch. In this shader, take the texture-coordinates as input and use them to modulate the output alpha. Or alternately use a second texture to modulate the alpha values.

How to scale on-screen pixels?

I have written a 2D Jump&Run engine resulting in a 320x224 (320x240) image. To maintain the old-school "pixely" feel, I would like to scale the resulting image by 2, 3 or 4, according to the user's resolution.
I don't want to scale each and every sprite, but the resulting image!
Thanks in advance :)
Bob's answer is correct about changing the filtering mode to TextureFilter.Point to keep things nice and pixelated.
But possibly a better method than scaling each sprite (as you'd also have to scale the position of each sprite) is to just pass a matrix to SpriteBatch.Begin, like so:
sb.Begin(/* first three parameters */, Matrix.CreateScale(4f));
That will give you the scaling you want without having to modify all your draw calls.
However it is worth noting that, if you use floating-point offsets in your game, you will end up with things not aligned to pixel boundaries after you scale up (with either method).
There are two solutions to this. The first is to have a function like this:
public static Vector2 Floor(Vector2 v)
{
    return new Vector2((float)Math.Floor(v.X), (float)Math.Floor(v.Y));
}
And then pass your position through that function every time you draw a sprite. Although this might not work if your sprites use any rotation or offsets. And again you'll be back to modifying every single draw call.
The "correct" way to do this, if you want a plain point-wise scale-up of your whole scene, is to draw your scene to a render target at the original size. And then draw your render target to screen, scaled up (with TextureFilter.Point).
The function you want to look at is GraphicsDevice.SetRenderTarget. This MSDN article might be worth reading. If you're on or moving to XNA 4.0, this might be worth reading.
I couldn't find a simpler XNA sample for this quickly, but the Bloom Postprocess sample uses a render target that it then applies a blur shader to. You could simply ignore the shader entirely and just do the scale-up.
You could use a pixelation effect. Draw to a RenderTarget2D, then draw the result to the screen using a pixel shader. There's a tool called Shazzam Shader Editor that lets you try out pixel shaders, and it includes one that does pixelation:
http://shazzam-tool.com/
This may not be what you wanted, but it could be good for allowing a high-resolution mode and for having the same effect no matter what resolution was used...
I'm not exactly sure what you mean by "resulting in ... an image" but if you mean your end result is a texture then you can draw that to the screen and set a scale:
spriteBatch.Draw(texture, position, source, color, rotation, origin, scale, effects, depth);
Just replace the scale with whatever number you want (2, 3, or 4). I do something similar but scale per sprite and not the resulting image. If you mean something else let me know and I'll try to help.
XNA defaults to anti-aliasing the scaled image. If you want to retain the pixelated goodness you'll need to draw in immediate sort mode and set some additional parameters:
spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None);
GraphicsDevice.SamplerStates[0].MagFilter = TextureFilter.Point;
GraphicsDevice.SamplerStates[0].MinFilter = TextureFilter.Point;
GraphicsDevice.SamplerStates[0].MipFilter = TextureFilter.Point;
It's either the Point or the None TextureFilter. I'm at work so I'm trying to remember off the top of my head. I'll confirm one way or the other later today.
