Firemonkey does strange, bizarre things with Alpha - delphi

Working with Delphi / Firemonkey XE8. I've had some decent luck with it recently, although you have to hack the heck out of it to get it to do what you want. My current project is to evaluate its low-level 3D capabilities to see if I can use them as a starting point for a game project. I also know Unity3D quite well and am considering using it instead, but I figure that Delphi / Firemonkey might give me some added flexibility in my game design because it is so minimal.
So I decided to dig into an Embarcadero-supplied sample, specifically the LowLevel3D sample. This is the cross-platform sample that shows you how to do nothing other than draw a rotating square on the screen with custom shaders of your choice and have it look the same on all platforms (although it actually doesn't work AT ALL the same on all platforms... but we won't get into that).
Embarcadero does not supply the original uncompiled shaders for the project (which, I might add, is really lame) and requires you to supply your own compatible shaders (some compiled, some not) for the various platforms you're targeting (also lame). So my first job has been to create a shader that works with their existing sample but does something OTHER than what the sample already does. Specifically, since I'm creating a 2D game, I wanted to make sure that I could do sprites with alpha transparency: basic stuff. If I can get this working, I'll probably never have to write another shader for the whole game.
After pulling my hair out for many hours, I came up with this little shader that works with the same parameters as the demo.
Texture2D mytex0 : register(t0);
Texture2D mytex1 : register(t1);
float4 cccc : register(v0);

struct PixelShaderInput
{
    float4 Pos : COLOR;
    float2 Tex : TEXCOORDS;
};

SamplerState g_samLinear
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

RasterizerState MyCull
{
    FrontCounterClockwise = FALSE;
};

float4 main(PixelShaderInput pIn) : SV_TARGET
{
    float4 cc, c;
    float4 ci = mytex1.Sample(g_samLinear, pIn.Tex.xy);
    c = ci;
    c.a = 0; // <----- DOES NOT actually SET ALPHA TO ZERO ... THIS IS A PROBLEM
    cc = c;
    return cc;
}
Never mind that it doesn't actually do much with the parameters, but check out the line where I set the output's alpha to 0. Well... I found that this actually HAS NO EFFECT!
But it gets spookier than this. I found that turning on CULLING in the Delphi app FIXED this issue. So I figure... no big deal then, I'll just manually draw both sides of the sprite, right? Nope! When I manually drew a double-sided sprite, the problem came back!
Check this image: shader is ignoring alpha=0 when double-sided
In the above picture, alpha is clearly SOMEWHAT obeyed, because the clouds are not surrounded by a black box; however, the cloud itself is supersaturated. (I find that if I multiply rgb * a, the colors come out approximately right, but I'm not going to do that in real life for obvious reasons.)
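That supersaturation is the classic symptom of a straight-alpha image being composited by a pipeline whose blend state expects premultiplied alpha (whether FireMonkey's pipeline actually does this here is an assumption on my part). A small Python sketch of the arithmetic, with made-up texel values, shows why multiplying rgb by a makes the colors come out approximately right:

```python
# Sketch (not FireMonkey code): "over" compositing with straight vs
# premultiplied alpha, to show why multiplying rgb by a "fixes" the look.

def over_straight(src_rgb, src_a, dst_rgb):
    # Classic straight-alpha blend: src*a + dst*(1-a)
    return tuple(s * src_a + d * (1.0 - src_a) for s, d in zip(src_rgb, dst_rgb))

def over_premultiplied(src_rgb, src_a, dst_rgb):
    # Premultiplied blend: src + dst*(1-a); src_rgb must already be scaled by a
    return tuple(s + d * (1.0 - src_a) for s, d in zip(src_rgb, dst_rgb))

cloud = (0.9, 0.9, 1.0)   # pale cloud texel
alpha = 0.5
sky = (0.2, 0.4, 0.8)     # background

# Correct result under straight-alpha blending:
print(over_straight(cloud, alpha, sky))       # ≈ (0.55, 0.65, 0.9)

# If the blend state expects premultiplied input but the shader emits
# straight alpha, the cloud contributes at full strength: oversaturated.
print(over_premultiplied(cloud, alpha, sky))  # ≈ (1.0, 1.1, 1.4)

# Multiplying rgb by a in the shader restores the correct result:
premul = tuple(c * alpha for c in cloud)
print(over_premultiplied(premul, alpha, sky))  # ≈ (0.55, 0.65, 0.9)
```

If that diagnosis holds, the clean fix is either to output premultiplied color from the shader or to switch the blend state to a straight-alpha pair, rather than leaving alpha alone.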
I'm new to the concept of writing custom shaders. Any insight is appreciated.

Related

Render an SCNGeometry as a wireframe

I'm using SceneKit on iOS and I have a geometry I want to render as a wireframe. So basically I want to draw only the lines, so no textures.
I figured out that I could use the shaderModifiers property of the used SCNMaterial to accomplish this. Example of a shader modifier:
material.shaderModifiers = [
    SCNShaderModifierEntryPointFragment: "_output.color.rgb = vec3(1.0) - _output.color.rgb;"
]
This example apparently simply inverts the output colors. I know nothing about this 'GLSL' language I have to use for the shader fragment.
Can anybody tell me what code I should use as the shader fragment to only draw near the edges, to make the geometry look like a wireframe?
Or maybe there is a whole other approach to render a geometry as a wireframe. I would love to hear it.
Try setting the material fillMode to .lines (iOS 11+, and macOS 10.13+):
sphereNode.geometry?.firstMaterial?.fillMode = .lines
Now it is possible (at least in Cocoa) with:
gameView.debugOptions.insert(SCNDebugOptions.showWireframe)
or you can do it interactively if enabling the statistics with:
gameView.showsStatistics = true
(gameView is an instance of SCNView)
This is not (quite) an answer, because this is a question without an easy answer.
Doing wireframe rendering entirely in shader code is a lot more difficult than it seems like it should be, especially on mobile where you don't have a geometry shader. The problem is that the vertex shader (and subsequently the fragment shader) just doesn't have the information needed to know where polygon edges are.
I know nothing about this 'GLSL' language I have to use for the shader fragment.
If you really want to tackle this problem, you'll need to learn some more about GLSL (the OpenGL Shading Language). There are loads of books and tutorials out there for that.
Once you've got some GLSL under your belt, take a look at some of the questions (like this one pulled from the Related sidebar) and other stuff people have written about the problem. (Note that when you're looking for mobile-specific limitations, OpenGL ES has the same limitations as WebGL on the desktop.)
With SceneKit, you have the additional wrinkle that you probably don't have a barycentric-coordinates vertex attribute (aka SCNGeometrySource) for the geometry you're working with, and you probably don't want to do the hard work of generating one. In OS X, you can use an SCNProgram with a geometryShader to add barycentric coordinates before the vertex/fragment shaders run — but then you have to do your own shading (i.e. you can't piggyback on the SceneKit shading like you can with shader modifiers). And that isn't available in iOS — the hardware there doesn't do geometry shaders. You might be able to fake it using texture coordinates if those happen to be lined up right in your geometry.
It might be easier to just draw the object using lines — try making a new SCNGeometry from the sources and elements of your original (solid) geometry, but when recreating the SCNGeometryElement, use SCNPrimitiveTypeLine.

Getting the color of the back buffer in GLSL

I am trying to extract the color behind my shader fragment. I have searched around and found various examples of people doing this as such:
vec2 position = ( gl_FragCoord.xy / u_resolution.xy );
vec4 color = texture2D(u_backbuffer, v_texCoord);
This makes sense. However nobody has shown an example where you pass in the back buffer uniform.
I tried to do it like this:
int backbuffer = glGetUniformLocation(self.shaderProgram->program_, "u_backbuffer");
GLint textureID;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &textureID);//tried both of these one at a time
glGetIntegerv(GL_RENDERBUFFER_BINDING, &textureID);//tried both of these one at a time
glUniform1i(backbuffer, textureID);
But I just get black. This is in cocos2d on iOS, FYI.
Any suggestions?
You can do this, but only on iOS 6.0. Apple added an extension called GL_EXT_shader_framebuffer_fetch which lets you read the contents of the current framebuffer at the fragment you're rendering. This extension introduces a new variable, called gl_lastFragData, which you can read in your fragment shader.
This question by RayDeeA shows an example of this in action, although you'll need to change the name of the extension as combinatorial points out in their answer.
This should be supported on all devices running iOS 6.0 and is a great way to implement custom blend modes. I've heard that it's a very low cost operation, but haven't done much profiling myself yet.
That is not allowed. You cannot simultaneously sample from an image that you're currently writing to as part of an FBO.
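The reason framebuffer fetch is so handy is that it lets you compute blend functions that fixed-function blending can't express. As an illustration of the per-pixel math only (plain Python, not shader code), here are two custom blend modes you could implement by reading gl_lastFragData:

```python
# Per-pixel math of two custom blend modes that framebuffer fetch enables.
# src is the fragment your shader computed; dst stands in for gl_lastFragData.

def multiply_blend(src, dst):
    # multiply: darkens; white (1.0) is the identity
    return tuple(s * d for s, d in zip(src, dst))

def screen_blend(src, dst):
    # screen: 1 - (1 - src) * (1 - dst); brightens, never clips past 1.0
    return tuple(1.0 - (1.0 - s) * (1.0 - d) for s, d in zip(src, dst))

src, dst = (0.5, 0.5, 0.5), (0.25, 0.5, 0.75)
print(multiply_blend(src, dst))  # (0.125, 0.25, 0.375)
print(screen_blend(src, dst))    # (0.625, 0.75, 0.875)
```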

Distance Fog XNA 4.0

I've been working on a project that helps create a virtual reality experience on the laptop and/or desktop. I am using XNA 4.0 on Visual Studio 2010. The current scenario looks like this: I have interfaced the movements of a person's head through Kinect, so if the person moves his head right relative to the laptop, the scene is rotated towards the left, giving the effect of a virtual tour or a looking-through-the-window experience.
To enhance the visual appeal, I want to add darkness at the back plane, so the box looks as if it were a tunnel.
The box was made using triangle strips. The BasicEffect used for the planes of the box is called effect.
effect.VertexColorEnabled = true;
effect.EnableDefaultLighting();
effect.FogEnabled = true;
effect.FogStart = 35.0f;
effect.FogEnd = 100.0f;
effect.FogColor = new Vector3(0.0f, 0.0f, 0.0f);
effect.World = world;
effect.View = cam.view;
effect.Projection = cam.projection;
On compiling, the error is regarding some normals. I have no clue what they mean by that, and I have dug through the internet hard enough. (I was first under the impression that I'd put a black omni light at the back side of the box.)
The error is attached below:
'verts' is the VertexPositionColor[][] that is used to build the box.
How do I solve this error ? Is the method/approach correct ?
Any help shall be welcome.
Thanks.
Your vertex type has Position and Color channels, but it has no normals, so you have to provide a vertex type that has them.
You can use VertexPositionNormalTexture if you don't need the color, or build a custom struct that provides the normal...
Here is a custom implementation: VertexPositionNormalColor
You need to add a normal (vector3) to your vertex type.
Also, if you want distance fog you will have to write your own shader, as BasicEffect only implements depth fog (which, while not looking as good, is faster).
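For reference, the fog BasicEffect applies is a linear ramp between FogStart and FogEnd. The math is sketched below in Python rather than HLSL, using the values from the question; a hand-rolled distance-fog shader would do the same thing but feed in the true Euclidean distance from the camera instead of view-space depth:

```python
# Linear fog ramp, as configured with FogStart = 35 and FogEnd = 100.

def fog_factor(d, fog_start=35.0, fog_end=100.0):
    # 0 = no fog at fog_start and closer, 1 = full fog at fog_end and beyond
    t = (d - fog_start) / (fog_end - fog_start)
    return min(max(t, 0.0), 1.0)

def apply_fog(color, fog_color, d):
    f = fog_factor(d)
    return tuple(c * (1.0 - f) + fc * f for c, fc in zip(color, fog_color))

# A white wall fades to the black FogColor as it recedes down the tunnel:
for d in (35.0, 67.5, 100.0):
    print(apply_fog((1.0, 1.0, 1.0), (0.0, 0.0, 0.0), d))
# (1.0, 1.0, 1.0)
# (0.5, 0.5, 0.5)
# (0.0, 0.0, 0.0)
```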

Using a custom shader in Silverlight 5 XNA

I've been recently trying to create some 3D rendering code in Silverlight 5 with XNA. Unfortunately I have been having trouble getting anything ( using my custom shader ) to work.
The basic effect is used on a cube and uses only VertexPositionColor information, but when I switch to using a custom shader nothing seems to render (or it renders off-screen).
To try and help myself with this issue I even got hold of the BasicEffect hlsl code but it doesn't do anything I am not doing.
The code takes in a world, view and projection matrix and multiplies each one by a position in the following order:
float4 pos_ws = mul(position, World);
float4 pos_vs = mul(pos_ws, View);
float4 pos_ps = mul(pos_vs, Projection);
I changed my code to do the same thing ( instead of passing in a single WorldViewProjection matrix ) and my shader uses this to calculate a position and then just applies a color to the pixel. Yet nothing is rendering.
I'm pretty stuck on this. I'm passing ok at basic 3D, but passing ok doesn't seem to cut it! :)
So it turns out the issue is fairly simple!
I actually deleted this question initially because I knew the issue was likely my matrices and so it was unlikely I'd get much help!
After some stumbling on Google, and more coffee than I'd like to admit to, I found the answer.
XNA transposes its matrices on the sly and doesn't tell you! I had tried transposing the view and projection matrices in some vain hope that I knew what I was doing, but it wasn't helping.
Instead I now pass in a single WorldViewProjection_Transposed matrix which is calculated using the following.
Matrix worldViewProjection_Transpose = Matrix.Transpose(world * view * projection);
This seems to work at the moment and I am hoping it is this simple.
I am sure I will come across a million more problems as the models I need to render become more complex, but I decided to leave this up in case anyone in a similar situation (and at a similar experience level) is struggling :)
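For anyone wondering why the transpose is needed at all: XNA stores matrices row-major and transforms with row vectors (v * M), while HLSL's mul with the default column-major constant packing effectively computes M * v. A small pure-Python sketch (hypothetical values, no XNA involved) of the mismatch and the fix:

```python
# Row-vector vs column-vector matrix conventions, and why a transpose
# reconciles them. Matrices are 4x4 lists of rows.

def transpose(M):
    return [list(row) for row in zip(*M)]

def vec_mat(v, M):
    # XNA-style row-vector transform: v * M
    return [sum(v[k] * M[k][j] for k in range(4)) for j in range(4)]

def mat_vec(M, v):
    # Column-vector transform: M * v (what the shader effectively computes
    # when the constant registers hold the matrix in the other layout)
    return [sum(M[i][k] * v[k] for k in range(4)) for i in range(4)]

# A translation matrix in XNA's row-major, row-vector convention:
T = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [5, 6, 7, 1]]  # translation lives in the last row

v = [1, 2, 3, 1]
print(vec_mat(v, T))             # [6, 8, 10, 1]  (what XNA computes)
print(mat_vec(T, v))             # [1, 2, 3, 39]  (garbage without the transpose)
print(mat_vec(transpose(T), v))  # [6, 8, 10, 1]  (transposing fixes it)
```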

Adjust brightness/contrast/gamma of scene in DirectX?

In a Direct3D application filled with Sprite objects from D3DX, I would like to be able to globally adjust the brightness and contrast. The contrast is important, if at all possible.
I've seen this question here about OpenGL: Tweak the brightness/gamma of the whole scene in OpenGL
But that doesn't give me what I need in a DirectX context. I know this is something I could probably do with a pixel shader too, but that seems like shooting a fly with a bazooka, and I worry about backwards compatibility with older GPUs that would have to run any shaders in software. It seems like this should be possible; I remember even much older games, like the original Half-Life, having settings like this well before the days of shaders.
EDIT: Also, please note that this is not a fullscreen application, so this would need to be something that would just affect the one Direct3D device and that is not a global setting for the monitor overall.
A lot of games increase brightness by, literally, applying a full-screen poly over the screen with additive blending, i.e.
SRCBLEND = ONE
DESTBLEND = ONE
and then applying a texture with a colour of (1, 1, 1, 1) will increase the brightness(ish) of every pixel by 1.
To adjust contrast you need to do something similar, but MULTIPLY by a constant factor instead. That requires the following blend settings:
SRCBLEND = DESTCOLOR
DESTBLEND = ZERO
This way if you blend a pixel with a value of (2, 2, 2, 2) then you will change the contrast.
Gamma is a far more complicated beast, and I'm not aware of a way you can fake it like the above.
Neither of these solutions is entirely accurate, but they will give you a result that looks "slightly" correct. It's much more complicated to do either properly, but by using those methods you'll see effects that look INCREDIBLY similar to various games you've played, and that's because that's EXACTLY what they are doing ;)
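The arithmetic those two blend states perform per pixel is easy to check on paper. A quick Python sketch of the fixed-function blend equation (the values are made up for illustration; this is not D3D code):

```python
# Fixed-function blending: result = src*src_factor + dst*dst_factor,
# clamped to [0, 1] per channel.

def blend(src, dst, src_factor, dst_factor):
    return tuple(min(max(s * sf + d * df, 0.0), 1.0)
                 for s, d, sf, df in zip(src, dst, src_factor, dst_factor))

pixel = (0.25, 0.5, 0.75)  # what's already in the framebuffer

# Brightness: SRCBLEND = ONE, DESTBLEND = ONE with a constant source colour
boost = (0.125, 0.125, 0.125)
print(blend(boost, pixel, (1.0,) * 3, (1.0,) * 3))  # (0.375, 0.625, 0.875)

# Contrast: SRCBLEND = DESTCOLOR, DESTBLEND = ZERO, i.e. multiply by src;
# the source factor is the destination colour itself.
gain = (2.0, 2.0, 2.0)
print(blend(gain, pixel, pixel, (0.0,) * 3))  # (0.5, 1.0, 1.0) - top clamps
```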
For a game I made in XNA, I used a pixel shader. Here is the code. [disclaimer] I'm not entirely sure the logic is right and some settings can have weird effects (!) [/disclaimer]
float offsetBrightness = 0.0f; //can be set from my C# code, [-1, 1]
float offsetContrast = 0.0f; //can be set from my C# code [-1, 1]
sampler2D screen : register(s0); //can be set from my C# code
float4 PSBrightnessContrast(float2 inCoord : TEXCOORD0) : COLOR0
{
    return (tex2D(screen, inCoord.xy) + offsetBrightness) * (1.0 + offsetContrast);
}

technique BrightnessContrast
{
    pass Pass1 { PixelShader = compile ps_2_0 PSBrightnessContrast(); }
}
You didn't specify which version of DirectX you're using, so I'll assume 9. Probably the best you can do is use the gamma-ramp functionality. Pushing the ramp up and down increases or decreases brightness; increasing or decreasing the gradient alters contrast.
Another approach is to edit all your textures. It would basically be the same approach as the gamma ramp, except you apply the ramp manually to the pixel data in each of your image files to generate the textures.
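Either way, the ramp itself is just a 256-entry lookup table per channel. A rough Python sketch of building one in the shape IDirect3DDevice9::SetGammaRamp consumes (256 16-bit entries); the particular brightness/contrast parameterization here is my own choice, not a D3D convention:

```python
# Build a 256-entry, 16-bit gamma ramp: contrast scales the slope around
# mid-grey, brightness shifts the curve, gamma applies the power curve.

def gamma_ramp(gamma=1.0, brightness=0.0, contrast=1.0):
    ramp = []
    for i in range(256):
        v = i / 255.0
        v = (v - 0.5) * contrast + 0.5 + brightness  # slope + shift
        v = min(max(v, 0.0), 1.0) ** (1.0 / gamma)   # clamp, then gamma
        ramp.append(round(v * 65535))                 # D3D ramps are 16-bit
    return ramp

ramp = gamma_ramp(gamma=2.2)
print(ramp[0], ramp[255])  # 0 65535 - endpoints are preserved
```

The same table can be applied offline to each image's pixel data if you go the edit-the-textures route instead.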
Just use a shader. Anything that doesn't at least support SM 2.0 is ancient at this point.
The answers in this forum thread may be of help, as I have never tried this:
http://social.msdn.microsoft.com/Forums/en-US/windowsdirectshowdevelopment/thread/95df6912-1e44-471d-877f-5546be0eabf2
