Fading a 3D object into the background, using D3D9, SM3 & HLSL - directx

I have a simple program that renders a couple of 3D objects using Direct3D 9 and HLSL. I'm just starting off with HLSL and have no prior experience with 3D rendering.
I am able to change the texture and color of the models and fade between two textures without problems. However, I was wondering about the best way to simply fade a 3D object into the background, i.e. blend it with whatever is behind it. I assume it can't be done as a fade between two textures (using lerp), since I want the object faded against the entire background, and there could be many different textures behind it.
I'm using LPD3DXEFFECT as my effect type and DrawIndexedPrimitive as the drawing call, with a single pass. I'm also using Shader Model 3, as this is an older project.
The only approach I could think of was to read the color of the pixel already on screen before applying any changes, then combine it with the color sampled from the model's texture to produce a faded pixel. However, after searching around, it doesn't appear to be possible to read the destination pixel's color from within an HLSL pixel shader.
Is it even possible to do something like this using HLSL? Am I missing something that could assist me here?
Any help is appreciated!

Forgive me if I'm misunderstanding, but it sounds like you're trying to simulate transparency instead of using built-in transparency.
If you're trying to get the color of the pixels behind the object and want to avoid built-in transparency, I'd start by rendering the last frame to a texture and then referencing that texture in your current shader. There may be some way to do it within the same frame - forcing all other rendering to go first, then handling the one object - but I don't know it.
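If you try the last-frame approach, a minimal D3D9 sketch of copying the back buffer into a texture each frame might look like this (the device, effect, size, and g_Background variable names are my own assumptions, not something from the question):
// Created once, at the back buffer's size and format:
LPDIRECT3DTEXTURE9 backgroundTex = NULL;
device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                      D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &backgroundTex, NULL);
// Each frame, after everything behind the object has been drawn:
LPDIRECT3DSURFACE9 backBuffer = NULL, texSurface = NULL;
device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);
backgroundTex->GetSurfaceLevel(0, &texSurface);
device->StretchRect(backBuffer, NULL, texSurface, NULL, D3DTEXF_NONE);
backBuffer->Release();
texSurface->Release();
// Bind it so the object's shader can sample it:
effect->SetTexture("g_Background", backgroundTex);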

After a long grind, I finally found a very good workaround for my problem, and I will try to explain my understanding of it for anyone else who has a similar issue. Thanks to Alexander Stewart for suggesting that there may be a built-in way to do it.
Method Description
Instead of handling the background fade in the HLSL pixel shader, there is another way to do it, using a method called frame buffer alpha blending (full MS Docs documentation: https://learn.microsoft.com/en-us/windows/win32/direct3d9/frame-buffer-alpha).
The basic idea behind this method is to provide a simple way of blending a pixel that is about to be rendered with the pixel already on the screen. It follows the formula FinalColor = ObjectPixelColor * SourceBlendFactor + BackgroundPixelColor * DestinationBlendFactor, where each of these "variables" is a group of 4 float values in the format (R, G, B, A).
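To make the formula concrete with the factors I ended up using below (SourceBlendFactor = the object pixel's alpha, DestinationBlendFactor = 1 minus that alpha): an object pixel of (1.0, 0.0, 0.0) with alpha 0.25 over a grey background pixel of (0.5, 0.5, 0.5) gives 0.25 * (1.0, 0.0, 0.0) + 0.75 * (0.5, 0.5, 0.5) = (0.625, 0.375, 0.375), i.e. mostly background with a hint of the object.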
How I Implemented it
Before doing anything with the actual shaders, in my Visual Studio C++ file I had to pass a few flags to my render device (I used LPDIRECT3DDEVICE9 as my device type). I had to set render states for both D3DRS_SRCBLEND and D3DRS_DESTBLEND, which correspond to SourceBlendFactor and DestinationBlendFactor respectively in the formula above; these are the factors that multiply my object and background pixel colors. There are many possible values for D3DRS_SRCBLEND and D3DRS_DESTBLEND (the full list is available in the MS Docs link above), but to achieve what I wanted - simply a way to fade an object into the background with an alpha number going from 0 to 1 - I figured the flags should be: SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA); SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);.
After setting these flags, before running my shaders and rendering, I just needed to set one more: SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);. I was also able to toggle this between TRUE and FALSE without changing anything else and saw no rendering problems, though my project is very simple and doing so will probably cause issues on larger projects. You can then pass any arguments you want, such as the alpha number, to the HLSL shader as a global variable (I did it using SetValue()).
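Putting those calls together, the device-side setup is only a few lines (a sketch; the device/effect pointer names and the g_FadeAlpha variable are placeholders of mine):
device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
// fade amount for the shader: 0.0 = fully faded into the background, 1.0 = opaque
effect->SetFloat("g_FadeAlpha", fadeAlpha); // SetFloat() is the typed variant of SetValue()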
Going back to my HLSL shader: after these changes, returning a float4 color taken from tex2D() in my pixel shader with an alpha value between 0 and 1 yielded the correct fade, provided there are no other issues. (Another issue I had, but hadn't realized at the time, was that my transparent object was actually being rendered before the background, so I can only recommend checking the rendering order when working on rendering projects.)
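For reference, the shader side can stay tiny; a minimal SM3 pixel shader in this style (the sampler and variable names are mine, not from the original project):
float g_FadeAlpha;          // set from the application via SetFloat()/SetValue()
sampler2D g_ModelSampler;   // the model's texture
float4 FadePS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 color = tex2D(g_ModelSampler, uv);
    color.a *= g_FadeAlpha; // the blend render states above do the rest
    return color;
}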
I'm sure there is probably a better way of implementing this with the latest DirectX, but my compiler only supports Shader Model 3 and lower.

Related

SceneKit - How to use different blend modes - e.g. additive blending? [duplicate]

I can't see an obvious way to change the blending function (glBlendFunc) for a SceneKit node or geometry - it doesn't seem to be part of the material, and it isn't very obvious from the SceneKit documentation how it organises render passes.
Do I need to make a render delegate for the node which just changes the GL blending mode, or do I need to somehow set up different render passes? (It's not obvious from the documentation how I even control things like render passes.)
Will It Blend? - SceneKit
Yes! In iOS 9 and OS X 10.11 (currently in beta), blendMode is an attribute on materials, so you can render any SceneKit content with additive, multiplicative, or other kinds of blending.
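For example, in Objective-C (assuming you already have a reference to the material):
material.blendMode = SCNBlendModeAdd; // additive; see the SCNBlendMode enum for the other modes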
But while you're still supporting earlier OS versions... SceneKit in iOS 8.x and OS X 10.8 through 10.10 doesn't offer an API for blend modes.
There are a couple of options you can look at for working around this.
1. Set the GL state yourself
If you call glBlendFunc and friends before SceneKit draws, SceneKit will render using the blend state you've selected. The trick is setting the state at an appropriate time for drawing your blended content and leaving the state as SceneKit expects for un-blended content.
If you set your GL state in renderer:willRenderScene:atTime: and unset it in renderer:didRenderScene:atTime:, you'll apply blending to the entire scene. Probably not what you want. And you can't use a node renderer delegate for only the node you want blended because then SceneKit won't render your node content.
If you can find a good way to wedge those calls in, though, they should work. Maybe try rendering the relevant nodes with a custom program and setting your state in handleBindingOfSymbol:usingBlock:?
2. Use Programmable Blending (iOS only)
The graphics hardware in iOS devices supports reading the color value of a destination fragment in the shader. You can combine this value with the color you intend to write in any number of ways — for example, you can create Photoshop-style blend modes.
In SceneKit, you can use this with a fragment shader modifier - read from gl_LastFragData and write to _output. The example below uses that to do a simple additive blend.
#pragma transparent
#extension GL_EXT_shader_framebuffer_fetch : require
#pragma body
_output.color = gl_LastFragData[0] + _output.color;
From what I can tell after several hours of experimenting, there is no way to actually set the blend mode used to render a piece of geometry, or to control the overall blend mode used to render a pass using SCNTechnique.
SceneKit appears to have only two blending modes: one where blending is off, used when it considers the material opaque, and a "transparent" blending mode (GL_ONE, GL_ONE_MINUS_SRC_ALPHA), used when it considers a material transparent. This is bad news if you want to render things like glows, because there doesn't seem to be any way to get the (GL_ONE, GL_ONE) blend mode you'd want for rendering light beams or glows.
However, I've found a hack to get around this. It doesn't give you proper control over blending, but it works if you want to render glowing things like light beams:
Because SceneKit uses the (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) blending mode, all you should have to do is render your geometry with an alpha channel of 0. Unfortunately, it's not that simple, because the default SceneKit shader discards fragments with an alpha of 0, so nothing would actually get rendered. A quick-and-dirty workaround is to use a diffuse colour map with an alpha channel of 1 (assuming an 8-bit-per-channel map with values from 0-255). Because the alpha channel is nearly 0, pretty much all of the background image will show through. This mostly works, but because the alpha isn't quite zero it will still produce noticeable artefacts in bright areas.
So to work around that, you can instead use a standard texture map with a solid alpha channel, but attach a shader modifier at SCNShaderModifierEntryPointFragment which simply sets the alpha channel of the output colour to zero. This works because fragment shader modifiers run after the zero-alpha culling.
Here's that shader modifier in its entirety:
#pragma transparent
#pragma body
_output.color.a = 0;
Note the #pragma transparent declaration on the first line - this is necessary to force SceneKit to use its transparent blending mode even when it otherwise wouldn't.
This is not a complete solution, because it's not real control over blending - it's only a useful hack for producing light-beam glows and the like - and the shading process certainly isn't as optimal as it could be, but it works well for this case.

iOS: Smooth button Glow effect by blending between images

I am creating a custom button that needs to be able to glow to a varying degree.
How would I use these pictures to make a button that 'glows' the diamond when it is pressed, and have this glow gradually fade back to inert state?
I want to churn out several different colours of diamond as well... I am hoping to generate all different coloured diamonds from the same stock images presented here.
I would like to get my head around the basic methods available, in enough detail that I can see each one through and make a decision which path to take...
My tangled efforts so far... (I will delete all of this, or move it into possibly several answers, as a solution unfolds...)
I can see 3 potential solution paths:
GL
it looks as though GL has everything it takes to get complete fine-grained control over the process, although the functions exposed by Core Graphics come tantalisingly close, and using them would save several hundred lines of code spread over a bunch of source files, which seems a bit ridiculous for such a basic task.
Core Graphics, and Core Animation to accomplish the blending
the documentation goes on to say:
Anything underneath the unpainted samples, such as the current fill color or other drawing, shows through.
so I can chroma-key mask the left image, setting {0,0,0}, i.e. black, as the key.
this at least secures a transparent background; now I have to work on making it yellow instead of grey.
so maybe I could instead have started by setting a yellow background colour for my image context, then used some CGContextSetBlendMode(...) to imprint the diamond on the yellow, THEN used chroma-key masking to get a transparent background
ok, this covers at least getting the basic unlit image on-screen
now I could overlay the sparkly image using some blend mode; maybe I could keep it in its current greyscale state, and that would just boost the colours of the original
only problem with this is that it is a lot of heavy real-time blending
so maybe I could pre-calculate every image in the animation... this is looking increasingly mucky...
Cocos2D
if this allows me to set the blend mode to additive blending, then I could just composite the glowing image over the original with an appropriate alpha setting.
After digging through a lot of documentation, the optimal solution seems to be to use core graphics functions to get the source images into a single 2-component GL texture, and then use GL to blend between them.
I will need to pass a uniform value glow_factor into the shader
The obvious solution might seem to be simply
output.rgb = tint.rgb * ((1 - glow_factor) * inertPixel + glow_factor * shinyPixel)
(where inertPixel is the appropriate greyscale pixel of the inert diamond, shinyPixel that of the glowing one, and tint the diamond's characteristic colour)...
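A fragment shader along those lines might look like this (a GLSL ES sketch; the two-channel texture layout and the uniform names are my assumptions):
uniform sampler2D u_diamond;  // inert brightness in .r, shiny brightness in .g
uniform float glow_factor;    // 0.0 = inert, 1.0 = fully glowing
uniform vec3 u_bodyColour;    // the diamond's characteristic colour
varying vec2 v_texCoord;
void main()
{
    vec2 px = texture2D(u_diamond, v_texCoord).rg;
    float brightness = mix(px.r, px.g, glow_factor);
    gl_FragColor = vec4(brightness * u_bodyColour, 1.0);
}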
it looks like I would also do well to manufacture my own sparkles and add them over the top; a gem should sparkle white irrespective of its characteristic colour.
After having looked at this problem a little more, I can see several solutions
Solution A -- store the transition from glow=0 to glow=1 as 60 frames in memory, then load the appropriate frame into a GL texture every time it is required.
this has an obvious benefit that a graphic designer could construct the entire sequence and I could load it in as a bunch of PNG files.
another advantage is that these frames wouldn't need to be played in sequence... the appropriate frame can be chosen on-the-fly
however, it has a potential drawback of a lot of sending data RAM->VRAM
this can be optimised by using glTexSubImage2D; several frames can be sent simultaneously and then unpacked from within GL... in fact maybe the entire sequence. If so, it would make sense to use PVRTC texture compression.
iOS: playing a frame-by-frame greyscale animation in a custom colour
Solution B -- load glow=0 and glow=1 images as GL textures, and manually write shader code that takes in the glow factor as a uniform and performs the blend
this has the advantage that it is close to the wire and can be tweaked in all sorts of ways. Also, it is going to be very efficient. The disadvantage is that it is a big extra slice of code to maintain.
Solution C -- use glBlendFunc to set up additive blending.
then draw the glow=0 image, setting e.g. alpha=0.2 on each vertex.
then draw the glow=1 image, setting e.g. alpha=0.8 on each vertex.
this has the advantage that it can be achieved with a more generic code structure -- i.e. a very general 'draw textured quad / sprite' class.
the disadvantage is that without some sort of wrapper it is a bit messy... in my game I have a couple of dozen diamonds, and at any one time maybe 2 or 3 are likely to be glowing. so on the first pass I would render EVERYTHING with the glow=0 image (setting alpha appropriately for anything that is glowing), and then on the second pass I would draw the glow=1 sprite again, with appropriate alpha, for everything that IS glowing.
it is worth noting that if I pursue solution A, this would involve creating some sort of real-time movie player object, which could be a very useful reusable code component.

HLSL: Handle lack of TexCoords?

I'm in the process of writing my first few shaders, usually writing a shader for a feature once I realize that the main XNA library doesn't support it.
The trouble I'm running into is that not all of my models in a particular scene have texture data in them, and I can't figure out how to handle that. The main XNA libraries seem to handle it by using a wrapper class for BasicEffect, loading it through the content manager and selectively enabling or disabling texture processing accordingly.
How difficult is it to accomplish this for a custom shader? What I'm writing is a generic "hue shift" effect; that is, I want whatever gets drawn with this technique to have its texture colors (if any) and its vertex color hue-shifted by a certain amount. Do I need to write separate shaders, one with textures and one without? If so, when I'm looping through my MeshParts, is there any way to detect whether a given part has texture coordinates so that I can apply the correct effect?
Yes, you will need separate shaders, or rather different "techniques" - it can still be the same effect and use much of the same code. You can see how BasicEffect (at least the pre-XNA 4.0 version) does it by reading the source code.
To detect whether or not a model mesh part has texture coordinates, try this:
// Note: this allocates an array, so do it at load-time.
// (Any() is a LINQ extension method, so the file needs "using System.Linq;".)
var elements = meshPart.VertexBuffer.VertexDeclaration.GetVertexElements();
bool result = elements.Any(e =>
    e.VertexElementUsage == VertexElementUsage.TextureCoordinate);
The way the content pipeline sets up its BasicEffect is via BasicMaterialContent. The BasicEffect.TextureEnabled property is simply turned on if Texture is set.
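Once you have that flag, selecting the matching technique per mesh part is one line (the technique names here are hypothetical; use whatever your effect file defines):
// result comes from the texture-coordinate check above
effect.CurrentTechnique = effect.Techniques[
    result ? "HueShiftTextured" : "HueShiftVertexColor"];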

Image/Rectangle partial fade in xna

How do I partially fade a rectangle or an image in XNA, like so:
I'm using XNA 3.1 and SpriteBatch.Draw(). I need it to be partially transparent so I can see what is behind it.
Just to add to Andrew's answer: there is a third (much slower) way to do this without writing a shader or a new batcher. Use Texture2D's GetData method to extract the pixel data, go through it in a for loop changing the alpha values the way you want, and then use SetData to put it back. This is a HORRIBLE way to do things if you are constantly changing the alpha values, but it looks like you only want to change them once, so you'll just have additional overhead when loading the program, and everything should work smoothly after that. Also, if you are only doing this to a small number of images, the performance difference is practically negligible. Here's some code to get you started:
Color[] texColors = new Color[myTexture.Width * myTexture.Height];
myTexture.GetData<Color>(texColors);
for (int i = 0; i < texColors.Length; i++)
{
    // change the alpha values the way you want; for example, make every pixel half-transparent:
    texColors[i] = new Color(texColors[i].R, texColors[i].G, texColors[i].B, (byte)128);
}
myTexture.SetData<Color>(texColors);
The "correct" way to do this would be to stop using SpriteBatch and manually draw quads or write your own sprite batcher instead. This way you could individually control the vertex alpha values.
If you want a quick, somewhat hacky way to do it, add a custom pixel shader to your sprite batch. In this shader, take the texture coordinates as input and use them to modulate the output alpha. Alternatively, use a second texture to modulate the alpha values.
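Here's what such a pixel shader could look like as an .fx file (a sketch only; the technique name and the left-to-right fade along the texture's x coordinate are my own choices):
sampler TextureSampler : register(s0); // SpriteBatch binds the sprite's texture to sampler 0
float4 PartialFadePS(float2 uv : TEXCOORD0, float4 color : COLOR0) : COLOR0
{
    float4 texel = tex2D(TextureSampler, uv);
    texel.a *= 1.0 - uv.x; // opaque at the left edge, fully transparent at the right
    return texel * color;
}
technique PartialFade
{
    pass P0
    {
        PixelShader = compile ps_2_0 PartialFadePS();
    }
}
In XNA 3.1 you'd apply it by beginning the SpriteBatch in Immediate sort mode and wrapping the Draw call in effect.Begin()/pass.Begin() ... pass.End()/effect.End().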

Partial re-colorizing a Bitmap at runtime

I'm drawing some cars. They're Bitmaps, loaded from PNGs in the library. I need to be able to color the cars - red ones and green ones and blue ones, whatever. However, when you paint the car green, the tires should stay black, and the windows should stay window-colored.
I know of two ways to handle this, neither of which makes me happy. First, I could have two bitmaps for each car: one underneath for the body color, and one on top for the detail bits. The underneath bitmap gets its transform.colorTransform set to turn the white car body into whatever color I need. Not great, because I end up with twice as many Bitmaps running around on screen at runtime.
Second, I could programmatically search-and-replace "white" with "car-body" color when I load the bitmap for each car. Not great either, because the amount of memory I take up multiplies by however many colors I need.
What I would LIKE would be a way to say "draw this Bitmap with JUST THE WHITE PARTS turned into this other color" at runtime. Is there anything like this available? I will be less than surprised if the answer is "no," but I figure it's worth asking.
You might have answered the question yourself.
I think your first approach would need only two transparent images: one with the pixels of the parts that need to change colour, and one with the rest of the image. You would use colorTransform or a ColorMatrixFilter as appropriate. It might even work to cover the pixels that need the colour change with a Sprite that has a flat colour and its blend mode set to overlay?
The downside is that you will need to create a 'colour map' (a set of pixels to replace) for each different item that needs colour replacement.
For the second approach:
You might isolate the areas using something like threshold().
For speed, you might want to store the indices of the pixels you need to replace in a Vector.<int> object, used in conjunction with BitmapData's getVector() method. (You would loop once to fetch the pixel indices that need to be replaced.)
Since you will use the same image (same dimensions) to fill the same content with a different colour, you'll always loop through the same pixels. Also keep in mind that you will gain a bit of speed by calling lock() before your setPixel() loop and unlock() after it.
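A rough sketch of that caching idea (AS3; the bmd variable and the pure-white test are assumptions for illustration):
// Once, at load time: record which pixel indices are "white" (the recolourable body)
var pixels:Vector.<uint> = bmd.getVector(bmd.rect);
var bodyIndices:Vector.<int> = new Vector.<int>();
for (var i:int = 0; i < pixels.length; i++)
    if (pixels[i] == 0xFFFFFFFF)
        bodyIndices.push(i);
// Per recolour: touch only the cached indices, inside lock()/unlock()
bmd.lock();
for each (var idx:int in bodyIndices)
    bmd.setPixel32(idx % bmd.width, int(idx / bmd.width), 0xFF00AA00); // new ARGB body colour
bmd.unlock();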
Alternatively, you could use Pixel Bender and try some green-screen/background-subtraction techniques. It should be fast, and it wouldn't delay the execution of the rest of your AS3 code, as Pixel Bender code runs in its own thread.
Also check out Lee's Pixel Bender subtraction technique.
Although it's a bit old now, you can pick up some useful knowledge from @Quasimondo's article too.
HTH
I'm a little confused about where you see the difference between your second approach and the one you would like to have. You can go over your loaded bitmap pixel by pixel and read out the color; if it turns out to be white, replace it with another color. I don't see where the multiplied memory consumption would come from.
You might want to try my selective color transform: http://www.quasimondo.com/archives/000614.php - it's from 2006, so some parts of it could probably be replaced by a pixel bender filter now.
Why not just load the pieces separately, perform the color transform on the one you want to change, then do a BitmapData.copyPixels() with the result? The blit routine runs in machine code, so is wicked fast. Doing it pixel by pixel in ActionScript would be glacially slow in comparison.
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/BitmapData.html#copyPixels()
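A compact sketch of that approach (AS3; the two source BitmapData objects are placeholders):
import flash.display.BitmapData;
import flash.geom.ColorTransform;
import flash.geom.Point;
// Tint the white body layer toward the target colour (multipliers picked arbitrarily)...
bodyLayer.colorTransform(bodyLayer.rect, new ColorTransform(0.2, 0.8, 0.3));
// ...then blit body and detail layers together; mergeAlpha keeps the detail edges clean
var car:BitmapData = new BitmapData(bodyLayer.width, bodyLayer.height, true, 0);
car.copyPixels(bodyLayer, bodyLayer.rect, new Point(0, 0));
car.copyPixels(detailLayer, detailLayer.rect, new Point(0, 0), null, null, true);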
