I'm creating a 2D animation using Metal and LiquidFun, and I want to simulate petrol. I want my animation to be yellow with gray shadows, similar to this:
Here is my current animation; it's entirely yellow without any gray shadows, so it doesn't look realistic:
My fragment shader is very simple at the moment; I only output a yellow color from it:
fragment half4 fragment_shader(VertexOut in [[stage_in]],
                               float2 pointCoord [[point_coord]]) {
    float4 out_color = float4(0.7, 0.5, 0.1, 0.07);
    return half4(out_color);
}
I've checked various tutorials about adding shadows to an MTKView, but they all suggest things that don't work for me. The first thing that doesn't work is creating explicit vertices and setting a color for each of them. In my code I don't have fixed vertices; I have a particle system whose positions I pass to the vertex buffer:
particleCount = Int(LiquidFun.particleCount(forSystem: particleSystem))
let positions = LiquidFun.particlePositions(forSystem: particleSystem)
let bufferSize = MemoryLayout<Float>.size * particleCount * 2
vertexBuffer = device.makeBuffer(bytes: positions!, length: bufferSize, options: [])
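For context, the vertex stage that consumes this buffer is just a point-sprite setup over those float2 positions. A minimal version would look something like the sketch below (the names, the projection uniform and the point size are placeholders, not my exact code):

struct VertexOut {
    float4 position [[position]];
    float pointSize [[point_size]];
};

vertex VertexOut vertex_shader(const device float2 *positions [[buffer(0)]],
                               constant float4x4 &projectionMatrix [[buffer(1)]],
                               uint vid [[vertex_id]]) {
    VertexOut out;
    // Each particle position becomes one point sprite.
    out.position = projectionMatrix * float4(positions[vid], 0.0, 1.0);
    out.pointSize = 20.0; // placeholder particle radius in pixels
    return out;
}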
Another thing I’ve tried is setting ambient, diffuse and specular colors, but it also didn’t work because my animation is 2D, not 3D.
I've also tried setting the color based on particle position. My code inside the fragment shader was close to this:
if (in.position.y < 1500.0) {
    out_color = float4(0.7, 0.5, 0.1, 0.07);
} else if (in.position.y > 1500.0) {
    out_color = float4(0.6, 0.5, 0.1, 0.07);
}
But it also didn't work as expected: the color transitions were not smooth, so it didn't look like shadows. Also, my animation grows over time, so tying colors to fixed positions was not a good idea.
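(Even a smoother ramp, something like the sketch below with smoothstep, only softens the edge; it is still tied to fixed coordinates, which is exactly what I want to avoid. The y range here is arbitrary.)

    // Softer vertical ramp between the two tones; still anchored to fixed y values.
    float t = smoothstep(1200.0, 1800.0, in.position.y);
    float4 out_color = mix(float4(0.7, 0.5, 0.1, 0.07),
                           float4(0.6, 0.5, 0.1, 0.07),
                           t);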
Could you please suggest something? I feel like I’m missing something very important.
Any help is appreciated!
Above is an example of my problem. I have two alpha masks that are exactly the same, just a white circular gradient on a transparent background.
I am drawing to a RenderTarget2D that is rendered above the screen to create lighting. It clears to a semi-transparent black color, and then the alpha masks are drawn in the correct positions to appear like lights.
On their own they work fine, but if two clash, like the "torch" below against the blue glowing mushrooms, you can see the bounding-box transparency is overwriting the already-drawn orange glow.
Here is my approach:
This is creating the render target:
RenderTarget2D = new RenderTarget2D(Global.GraphicsDevice, Global.Resolution.X+4, Global.Resolution.Y+4);
SpriteBatch = new SpriteBatch(Global.GraphicsDevice);
This is drawing to the render target:
private void UpdateRenderTarget()
{
    Global.GraphicsDevice.SetRenderTarget(RenderTarget2D);
    Global.GraphicsDevice.Clear(ClearColor);

    // Draw textures
    float i = 0;
    foreach (DrawableTexture item in DrawableTextures)
    {
        i += 0.1f;
        item.Update?.Invoke(item);

        SpriteBatch.Begin(SpriteSortMode.Immediate, item.Blend,
            SamplerState.PointClamp, DepthStencilState.Default,
            RasterizerState.CullNone);

        SpriteBatch.Draw(
            item.Texture,
            (item.Position - Position) + (item.Texture.Size() / 2 * (1 - item.Scale)),
            null,
            item.Color,
            0,
            Vector2.Zero,
            item.Scale,
            SpriteEffects.None,
            i
        );

        SpriteBatch.End();
    }

    Global.GraphicsDevice.SetRenderTarget(null);
}
I have heard about depth stencils etc., and I feel like I have tried so many combinations of things, but I am still getting the issue. I haven't had any trouble with this while building all the other graphics in my game.
Any help is greatly appreciated thanks! :)
Ah, this turned out to be a problem with the BlendState itself rather than the SpriteBatch. I had created a custom "Multiply" BlendState, picked up online, and that was what was causing the issue.
"What's causing" the problem was the real question here.
This was the solution to get my effect without "overlapping":
public static BlendState Lighting = new BlendState
{
    ColorSourceBlend = Blend.One,
    ColorDestinationBlend = Blend.One,
    AlphaSourceBlend = Blend.Zero,
    AlphaDestinationBlend = Blend.InverseSourceColor
};
This allows the textures to overlap, and it also "subtracts" from the "darkness" layer. It would be easier to see if the darkness were more opaque.
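For reference, inside the same foreach as above the lights are then drawn with this state, roughly like so (trimmed down to the relevant calls):

SpriteBatch.Begin(SpriteSortMode.Immediate, Lighting,
    SamplerState.PointClamp, DepthStencilState.Default,
    RasterizerState.CullNone);

// Each light texture now accumulates into the darkness layer instead of overwriting it.
SpriteBatch.Draw(item.Texture, item.Position - Position, item.Color);

SpriteBatch.End();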
I have answered this just in case some other fool mistakes a blend state problem for a SpriteBatch problem.
Context: I'm doing all of the following using OpenGLES 2 on iOS 11
While implementing different blend modes used to blend two textures together I came across a weird issue that I managed to reduce to the following:
I'm trying to blend the following two textures together, only using the fragment shader and not the OpenGL blend functions or equations. GL_BLEND is disabled.
Bottom - dst:
Top - src:
(The bottom image is the same as the top image but rotated and blended onto an opaque white background using "normal" (as in Photoshop 'normal') blending)
In order to do the blending I use the
#extension GL_EXT_shader_framebuffer_fetch
extension, so that in my fragment shader I can write:
void main()
{
    highp vec4 dstColor = gl_LastFragData[0];
    highp vec4 srcColor = texture2D(textureUnit, textureCoordinateInterpolated);

    gl_FragColor = blend(srcColor, dstColor);
}
The blend function doesn't perform any blending itself. It only chooses the correct blend function to apply, based on a uniform blendMode integer value. In this case the first texture gets drawn with an already-tested normal blending function, and then the second texture gets drawn on top with the following blendTest function:
Now here's where the problem comes in:
highp vec4 blendTest(highp vec4 srcColor, highp vec4 dstColor) {
    highp float threshold = 0.7; // arbitrary

    highp float g = 0.0;
    if (dstColor.r > threshold && srcColor.r > threshold) {
        g = 1.0;
    }

    //return vec4(0.6, g, 0.0, 1.0); // no yellow lines (Case 1)
    return vec4(0.8, g, 0.0, 1.0); // shows yellow lines (Case 2)
}
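(For reference, the blend function called from main() is just a switch over the blendMode uniform, roughly along these lines; the mode value and the normal-blend implementation shown here are placeholders rather than my exact code:)

uniform int blendMode;

highp vec4 blendNormal(highp vec4 srcColor, highp vec4 dstColor) {
    // Photoshop-style "normal" blending of non-premultiplied colors.
    highp float outA = srcColor.a + dstColor.a * (1.0 - srcColor.a);
    highp vec3 outRGB = srcColor.rgb * srcColor.a
                      + dstColor.rgb * dstColor.a * (1.0 - srcColor.a);
    return vec4(outRGB / max(outA, 0.0001), outA);
}

highp vec4 blend(highp vec4 srcColor, highp vec4 dstColor) {
    if (blendMode == 0) {
        return blendNormal(srcColor, dstColor);
    }
    return blendTest(srcColor, dstColor);
}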
This is the output I would expect (made in Photoshop):
So red everywhere and green/yellow in the areas where both textures contain an amount of red that is larger than the arbitrary threshold.
However, the results I get are for some reason dependent on the output value I choose for the red component (0.6 or 0.8) and none of these outputs matches the expected one.
Here's what I see (The grey border is just the background):
Case 1:
Case 2:
So to summarize: if I return a red value that is larger than the threshold, e.g.
return vec4(0.8, g, 0.0, 1.0);
I see vertical yellow lines, whereas if the red component is less than the threshold there will be no yellow/green in the result whatsoever.
Question:
Why does the output of my fragment shader determine whether or not the conditional statement is executed and even then, why do I end up with green vertical lines instead of green boxes (which indicates that the dstColor is not being read properly)?
Does it have to do with the extension that I'm using?
I also want to point out that the textures are both being loaded and bound properly. I can see them just fine if I simply return the individual texture colors without blending, and even with a normal blending function that I've implemented, everything works as expected.
I found out what the problem was (and I realize that it's not something anyone could have known from just reading the question):
There is an additional fully transparent texture being drawn between the two textures you can see above, which I had forgotten about.
Instead of accounting for that and just returning the dstColor in case the srcColor alpha is 0, the transparent texture's color information (which is (0.0, 0.0, 0.0, 0.0)) was being used when blending, therefore altering the framebuffer content.
Both the transparent texture and the final texture were drawn with the blendTest function, so the output of the first function call was then being read in when blending the final texture.
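In other words, the fix was an early-out in the blend function before any mode-specific math runs, along these lines (a sketch of the idea rather than the exact code):

highp vec4 blendGuarded(highp vec4 srcColor, highp vec4 dstColor) {
    // A fully transparent source fragment should leave the framebuffer
    // content untouched instead of blending with (0.0, 0.0, 0.0, 0.0).
    if (srcColor.a == 0.0) {
        return dstColor;
    }
    return blendTest(srcColor, dstColor); // or whichever blend mode is selected
}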
I am creating a particle emitter with a texture that is a white circle with alpha. I am unable to color the sprites using the color passed to the fragment shader.
I tried the following:
gl_FragColor = texture2D(Sampler, gl_PointCoord) * colorVarying;
This seems to be doing some kind of additive coloring.
What I am attempting is porting this:
http://maniacdev.com/2009/07/source-code-particle-based-explosions-in-iphone-opengl-es/
from ES 1.1 to ES 2.0
With your code, consider the following example:
texture2D = (1, 0, 0, 1) = red, fully opaque
colorVarying = (0, 1, 0, 0.5) = green, half transparent
Then gl_FragColor would be (0, 0, 0, 0.5): black, half transparent.
Generally, you can use mix to interpolate values, but if I understood your problem correctly, it's even easier.
Basically, you only want to take the alpha channel from your texture and apply it to another color, right? Then you could do this:
gl_FragColor = vec4(colorVarying.rgb, texture2D(Sampler, gl_PointCoord).a);
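In context, the whole point-sprite fragment shader would then look roughly like this (the precision and declarations are assumptions based on the snippet above):

precision mediump float;

uniform sampler2D Sampler;
varying vec4 colorVarying;

void main()
{
    // Use only the coverage (alpha) of the white circle texture
    // and tint it with the per-particle color.
    gl_FragColor = vec4(colorVarying.rgb, texture2D(Sampler, gl_PointCoord).a);
}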
I have an application that renders multiple textured quads (images) in an essentially 2D context, which has worked fine. However, after modifying it so that portions of some textures are transparent, I've ground to a halt trying to get it to behave in a seemingly standard, theoretically simple fashion: I just want it to draw the textures sequentially (as it has been doing) and, when a texture has transparent pixels, to show whatever was previously drawn in those spots.
But what it is instead doing is showing a scaled version of each previously-drawn texture, behind the transparent sections, rather than the previously-rendered portion of the render target. So for instance if I tried to draw an opaque background texture and then a smaller entirely transparent texture, then the background would draw fine, but the transparent image would show the entire background scaled to the size/location of the new transparent image.
Subsequent rendered textures continue in this fashion, showing whatever the previous rendered texture ended up as (including elements from textures previous to it).
I'm obviously missing something fundamental about how textures/pixel shaders in DirectX work (which is no surprise, since I'm relatively new to it), but after reading everything online I could scrounge up, and experimenting in countless ways, I still can't figure out what I need to do.
I'm using one texture in the pixel shader, which may or may not be part of the problem. Each time I render the scene, I loop through all the textures I want to render, calling PSSetShaderResources() to bind a different texture to that pixel shader texture, each loop, and call DrawIndexed() after each time I change it. It seems like this is an effective way to proceed, since it doesn't seem to make sense to have a ton of shader textures when the pixel shader can't seem to be made to use an arbitrary one (it needs to be precompiled, no?).
At any rate, I'm hoping the symptoms will be sufficient for someone more knowledgeable than I to immediately realize the mistake I'm making. The code is pretty simple in these areas, but I might as well include a couple sections:
Every scene, for each shaderRV:
m_pd3d11ImmDevContext->PSSetShaderResources(0, 1, &shaderRV);
m_pd3d11ImmDevContext->DrawIndexed( ... );
Shader:
Texture2D aTexture : register(t0);
SamplerState samLinear : register(s0);

struct VS_INPUT
{
    float3 position  : POSITION;
    float3 texcoord0 : TEXCOORD0;
};

struct VS_OUTPUT
{
    float4 hposition : SV_POSITION;
    float3 texcoord0 : TEXCOORD0;
};

struct PS_OUTPUT
{
    float4 color : COLOR;
};

// vertex shader
VS_OUTPUT CompositeVS( VS_INPUT IN )
{
    VS_OUTPUT OUT;

    float4 v = float4( IN.position.x,
                       IN.position.y,
                       0.1f,
                       1.0f );

    OUT.hposition = v;
    OUT.texcoord0 = IN.texcoord0;
    OUT.texcoord0.z = IN.position.z;
    return OUT;
}

// pixel shader
PS_OUTPUT CompositePS( VS_OUTPUT IN ) : SV_Target
{
    PS_OUTPUT ps;
    ps.color = aTexture.Sample(samLinear, IN.texcoord0);
    return ps;
}
Blend Description settings (don't think the problem's here):
blendDesc.RenderTarget[0].BlendEnable = true;
blendDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
blendDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
blendDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ZERO;
blendDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
blendDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
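For completeness, that description gets turned into a state object and bound in the usual way, roughly (names here are placeholders):

ID3D11BlendState* pBlendState = nullptr;
m_pd3d11Device->CreateBlendState(&blendDesc, &pBlendState);

// Bound before drawing the quads; blend factor is unused, full sample mask.
float blendFactor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
m_pd3d11ImmDevContext->OMSetBlendState(pBlendState, blendFactor, 0xffffffff);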
Please let me know if any other code segments would be useful!
I am using render-to-texture to do postprocessing and then blending several 2D layers together.
Currently I am using a stencil mask to make "holes" in the render-to-texture targets, leaving some of the areas transparent. However, this is a little cumbersome in my case. I'd rather ignore the stencil mask and just use normal polygon-fill operations to draw the holes.
What kinds of methods exist for rendering "fill to alpha 0.0" areas in the scene? I.e. the existing render-to-texture destination alpha value would be ignored and simply replaced with 0.0. I assume you can set OpenGL mode bits (how?) so that this can be done without the need for a custom fragment shader.
I already know how to set the depth mask to ignore mode, so I can redraw over the top of the existing polygons.
You just have to use the THREE.NoBlending blending mode in the material used on the polygons you draw to make the holes. The material should be a ShaderMaterial so you can write the desired alpha, like here:
var r = 0.5;
var g = 0;
var b = 0;
var a = 0.8;

var material = new THREE.ShaderMaterial( {
    uniforms: {
        col: { type: "v4", value: new THREE.Vector4( r, g, b, a ) }
    },
    fragmentShader: "uniform vec4 col; void main() {\n\tgl_FragColor = col;\n}",
    side: THREE.DoubleSide
} );

material.transparent = true;
material.blending = THREE.NoBlending;
(Note that the DoubleSide parameter is not related to the problem but it is useful sometimes.)
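Assigning that material to the hole geometry is then the usual pattern, e.g. (the geometry and scene names here are just for illustration):

// The "hole" polygon is drawn with the non-blending material, so its
// alpha value replaces whatever is already in the render target.
var holeMesh = new THREE.Mesh( new THREE.PlaneGeometry( 10, 10 ), material );
scene.add( holeMesh );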