Alpha channel for SKShader shader.fsh is ignored - iOS

void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 0.0);
}
This example renders a solid red rectangle even though the alpha is set to zero. Why is the alpha ignored?
How can I render a semi-transparent object?

From Apple's documentation (emphasis added):
This function must set the gl_FragColor variable to a color value to use in the blend stage. Typically, the color value you return in this variable should already be premultiplied by the fragment’s alpha value.
So if you want full red at alpha=0.5, use vec4(0.5, 0.0, 0.0, 0.5)
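As a minimal sketch, the shader from the question rewritten with premultiplied alpha:

void main() {
    // 50% opaque red: the RGB components are premultiplied by the alpha of 0.5
    gl_FragColor = vec4(0.5, 0.0, 0.0, 0.5);
}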
Documentation here: https://developer.apple.com/documentation/spritekit/skshader/creating_a_custom_fragment_shader
See also this previous answer: Shader with SpriteKit only register alpha for black color

Related

Shadows in Metal, Swift

I’m creating 2d animation using Metal and LiquidFun. I want to simulate petrol. I want my animation to be yellow with gray shadows, similar to this:
Here is my current animation, it's totally yellow without any gray shadows, so it doesn't look realistic:
My fragment shader is very simple for now; it only outputs a yellow color:
fragment half4 fragment_shader(VertexOut in [[stage_in]],
                               float2 pointCoord [[point_coord]]) {
    float4 out_color = float4(0.7, 0.5, 0.1, 0.07);
    return half4(out_color);
}
I’ve checked various tutorials about adding shadows to an MTKView, but they all suggest things that don’t work for me. The first thing that doesn’t work is creating explicit vertices and setting a color for each of them. In my code, I don’t have predefined vertices; I have a particle system which I pass to the vertex buffer:
particleCount = Int(LiquidFun.particleCount(forSystem: particleSystem))
let positions = LiquidFun.particlePositions(forSystem: particleSystem)
let bufferSize = MemoryLayout<Float>.size * particleCount * 2
vertexBuffer = device.makeBuffer(bytes: positions!, length: bufferSize, options: [])
Another thing I’ve tried is setting ambient, diffuse and specular colors, but it also didn’t work because my animation is 2D, not 3D.
I’ve also tried setting the color based on particle position. My code inside the fragment shader was close to this:
if (in.position.y < 1500.0) {
    out_color = float4(0.7, 0.5, 0.1, 0.07);
} else if (in.position.y > 1500.0) {
    out_color = float4(0.6, 0.5, 0.1, 0.07);
}
But it also didn’t work as expected: the color transitions were not smooth, so it didn’t look like shadows. Plus my animation grows over time, so tying colors to fixed positions was not a good idea.
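I suppose a smoother variant would interpolate with smoothstep instead of switching at a hard threshold, something like the sketch below (the 1400–1600 transition band is an arbitrary choice), but that still ties the shading to fixed positions:

// Smoothly blend the two yellows over an arbitrary y band
float t = smoothstep(1400.0, 1600.0, in.position.y);
float4 out_color = mix(float4(0.7, 0.5, 0.1, 0.07),  // lighter yellow
                       float4(0.6, 0.5, 0.1, 0.07),  // darker yellow
                       t);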
Could you please suggest something? I feel like I’m missing something very important.
Any help is appreciated!

Fragment shader output interferes with conditional statement

Context: I'm doing all of the following using OpenGL ES 2 on iOS 11.
While implementing different blend modes to blend two textures together, I came across a weird issue that I managed to reduce to the following:
I'm trying to blend the following two textures together, only using the fragment shader and not the OpenGL blend functions or equations. GL_BLEND is disabled.
Bottom - dst:
Top - src:
(The bottom image is the same as the top image but rotated and blended onto an opaque white background using "normal" (as in Photoshop 'normal') blending)
In order to do the blending I use the
#extension GL_EXT_shader_framebuffer_fetch
extension, so that in my fragment shader I can write:
void main()
{
    highp vec4 dstColor = gl_LastFragData[0];
    highp vec4 srcColor = texture2D(textureUnit, textureCoordinateInterpolated);
    gl_FragColor = blend(srcColor, dstColor);
}
The blend function doesn't perform any blending itself. It only chooses the correct blend function to apply, based on a blendMode integer uniform. In this case the first texture gets drawn with an already tested normal blending function, and then the second texture gets drawn on top with the following blendTest function:
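A sketch of that dispatch (the blendMode values and the normalBlend helper are hypothetical stand-ins for code not shown in the question):

highp vec4 normalBlend(highp vec4 srcColor, highp vec4 dstColor); // "normal" blend, defined elsewhere
highp vec4 blendTest(highp vec4 srcColor, highp vec4 dstColor);   // shown below

uniform int blendMode;

highp vec4 blend(highp vec4 srcColor, highp vec4 dstColor) {
    // Hypothetical mode values; the real mapping is app-specific
    if (blendMode == 0) {
        return normalBlend(srcColor, dstColor);
    }
    return blendTest(srcColor, dstColor);
}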
Now here's where the problem comes in:
highp vec4 blendTest(highp vec4 srcColor, highp vec4 dstColor) {
    highp float threshold = 0.7; // arbitrary
    highp float g = 0.0;
    if (dstColor.r > threshold && srcColor.r > threshold) {
        g = 1.0;
    }
    //return vec4(0.6, g, 0.0, 1.0); // no yellow lines (Case 1)
    return vec4(0.8, g, 0.0, 1.0); // shows yellow lines (Case 2)
}
This is the output I would expect (made in Photoshop):
So red everywhere and green/yellow in the areas where both textures contain an amount of red that is larger than the arbitrary threshold.
However, the results I get are for some reason dependent on the output value I choose for the red component (0.6 or 0.8) and none of these outputs matches the expected one.
Here's what I see (The grey border is just the background):
Case 1:
Case 2:
So to summarize: if I return a red value that is larger than the threshold, e.g.
return vec4(0.8, g, 0.0, 1.0);
I see vertical yellow lines, whereas if the red component is less than the threshold, there is no yellow/green in the result whatsoever.
Question:
Why does the output of my fragment shader determine whether or not the conditional statement is executed and even then, why do I end up with green vertical lines instead of green boxes (which indicates that the dstColor is not being read properly)?
Does it have to do with the extension that I'm using?
I also want to point out that the textures are both being loaded and bound properly: I can see them just fine if I return the individual texture colors without blending, and with a normal blending function that I've implemented, everything works as expected.
I found out what the problem was (and I realize that it's not something anyone could have known from just reading the question):
There is an additional fully transparent texture being drawn between the two textures you can see above, which I had forgotten about.
Instead of accounting for that and simply returning dstColor whenever srcColor's alpha is 0, the transparent texture's color information (which is (0.0, 0.0, 0.0, 0.0)) was being used in the blend, thereby altering the framebuffer contents.
Both the transparent texture and the final texture were drawn with the blendTest function, so the output of the first function call was then being read in when blending the final texture.
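In other words, a corrected blendTest just passes the framebuffer color through for fully transparent source fragments. A sketch:

highp vec4 blendTest(highp vec4 srcColor, highp vec4 dstColor) {
    // Fully transparent fragments must not alter the framebuffer
    if (srcColor.a == 0.0) {
        return dstColor;
    }
    highp float threshold = 0.7;
    highp float g = 0.0;
    if (dstColor.r > threshold && srcColor.r > threshold) {
        g = 1.0;
    }
    return vec4(0.8, g, 0.0, 1.0);
}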

Adding semi-transparent images as textures in Scenekit

When I add a semi-transparent image (sample) as a texture for an SCNNode, how can I specify a color attribute for the node where the image is transparent? Since a material property accepts either a color or an image, I am unable to assign both to the node. Is there a way to specify both a color and an image for the material property, or is there a workaround for this problem?
If you are assigning the image to the contents of the transparent material property, you can change the material's transparencyMode to be either .AOne or .RGBZero.
.AOne means that transparency is derived from the image's alpha channel.
.RGBZero means that transparency is derived from the luminance (the total red, green, and blue) in the image.
You cannot configure an arbitrary color to be treated as transparency without a custom shader.
However, from the looks of your sample image, I would think that assigning the sample image to the transparent material property's contents and using the .AOne transparency mode would give you the result you are looking for.
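For illustration, a sketch of that setup in the Swift 2-era API used elsewhere on this page (the material and sampleImage names are assumptions):

let material = SCNMaterial()
material.diffuse.contents = sampleImage      // color comes from the image
material.transparent.contents = sampleImage  // transparency comes from the same image
material.transparencyMode = .AOne            // derive transparency from the alpha channel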
I'm posting this as a new answer because it's different from the other answer.
Based on your comment, I understand that you want to use an image with transparency as the diffuse content of a material, but use a background color wherever the image is transparent. In other words, you want to use a composite of the image over a color as the diffuse contents.
Using UIImage
There are a few different ways you can achieve this composited image. The easiest and likely most familiar solution is to create a new UIImage that draws the image over the color. This new image will have the same size and scale as your image, but can be opaque since it has a solid background color.
func imageByComposing(image: UIImage, over color: UIColor) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(image.size, true, image.scale)
    defer {
        UIGraphicsEndImageContext()
    }

    let imageRect = CGRect(origin: .zero, size: image.size)

    // fill with background color
    color.set()
    UIRectFill(imageRect)

    // draw image on top
    image.drawInRect(imageRect)

    return UIGraphicsGetImageFromCurrentImageContext()
}
Using this image as the contents of the diffuse material property will give you the effect that you're after.
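For example (the overlayImage and material names are assumptions):

let composited = imageByComposing(overlayImage, over: UIColor.redColor())
material.diffuse.contents = composited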
Using Shader Modifiers
If you find yourself having to change the color very frequently (possibly animating it), you could also use custom shaders or shader modifiers to composite the image over the color.
In that case, you want to composite the image A over the color B, so that the output color C_out is:

C_out = C_A + C_B * (1 - α_A)

By passing the image as the diffuse contents, and assigning the output to the diffuse content, the expression can be simplified as:

C_diffuse = C_diffuse + C_color * (1 - α_diffuse)
C_diffuse += C_color * (1 - α_diffuse)

Generally the output alpha would depend on the alpha of A and B, but since B (the color) is opaque (α_B = 1), the output alpha is also 1.
This can be written as a small shader modifier. Since the motivation for this solution was to be able to change the color, the color is created as a uniform variable which can be updated from code.
// Define a color that can be set/changed from code
uniform vec3 backgroundColor;

#pragma body

// Composite A (the image) over B (the color):
// output = image + color * (1 - alpha_image)
float alpha = _surface.diffuse.a;
_surface.diffuse.rgb += backgroundColor * (1.0 - alpha);

// make fully opaque (since the color is fully opaque)
_surface.diffuse.a = 1.0;
This shader modifier would then be read from its file and set in the material's shader modifier dictionary:
enum ShaderLoadingError: ErrorType {
    case FileNotFound, FailedToLoad
}

func shaderModifier(named shaderName: String, fileExtension: String = "glsl") throws -> String {
    guard let url = NSBundle.mainBundle().URLForResource(shaderName, withExtension: fileExtension) else {
        throw ShaderLoadingError.FileNotFound
    }

    do {
        return try String(contentsOfURL: url)
    } catch {
        throw ShaderLoadingError.FailedToLoad
    }
}
// later, in the code that configures the material ...
do {
    let modifier = try shaderModifier(named: "Composit") // the name of the shader modifier file (assuming the 'glsl' file extension)
    theMaterial.shaderModifiers = [SCNShaderModifierEntryPointSurface: modifier]
} catch {
    // Handle the error here
    print(error)
}
You would then be able to change the color by setting a new value for the "backgroundColor" of the material. Note that there is no initial value, so one would have to be set.
let backgroundColor = SCNVector3Make(1.0, 0.0, 0.7) // r, g, b

// Set the color components as an SCNVector3 wrapped in an NSValue,
// using the same key as the name of the uniform variable in the shader modifier
theMaterial.setValue(NSValue(SCNVector3: backgroundColor), forKey: "backgroundColor")
As you can see, the first solution is simpler and the one I would recommend if it suits your needs. The second solution is more complicated, but enables the background color to be animated.
Just in case someone comes across this in the future... for some tasks, rickster's solution is likely the easiest. In my case, I wanted to display a grid on top of an image that was mapped to a sphere. I originally composited the images into one and applied that, but over time I got fancier and this started getting complex. So I made two spheres, one inside the other: I put the grid on the inner one and the image on the outer one, and presto...
let outSphereGeometry = SCNSphere(radius: 20)
outSphereGeometry.segmentCount = 100
let outSphereMaterial = SCNMaterial()
outSphereMaterial.diffuse.contents = topImage
outSphereMaterial.isDoubleSided = true
outSphereGeometry.materials = [outSphereMaterial]
outSphere = SCNNode(geometry: outSphereGeometry)
outSphere.position = SCNVector3(x: 0, y: 0, z: 0)

let sphereGeometry = SCNSphere(radius: 10)
sphereGeometry.segmentCount = 100
let sphereMaterial = SCNMaterial()
sphereMaterial.diffuse.contents = gridImage
sphereMaterial.isDoubleSided = true
sphereGeometry.materials = [sphereMaterial]
sphere = SCNNode(geometry: sphereGeometry)
sphere.position = SCNVector3(x: 0, y: 0, z: 0)
I was surprised that I didn't need to set sphereMaterial.transparency, it seems to get this automatically.

How to draw a spritebatch without Color?

I'm drawing a Texture2D like this
// background_texture is white in color
spritebatch.Draw(content.Load<Texture2D>("background_texture"),
                 new Rectangle(10, 10, 100, 100),
                 Color.Red);
The texture is white; however, on screen it's displayed as red.
Why is the draw method requiring a Color?
How does one simply draw the texture, and only the texture without having Color.something distort the graphic?
Take a look at the documentation here:
http://msdn.microsoft.com/en-us/library/ff433986.aspx
You want to try Color.White. That additional color parameter is a tint, and a white "tint" displays the sprite without any tinting.
Color.White does not change the color of your image. Use
spritebatch.Draw(content.Load<Texture2D>("background_texture"),
                 new Rectangle(10, 10, 100, 100),
                 Color.White);
instead of Color.Red, which applies a tint.
Note: Be careful. Intellisense will want to make this Color.Wheat, so be sure to type the first 3 letters before you hit space.
Color.White is effectively a no-op because the default sprite pixel shader looks like this:

PixelShader ...
{
    ...
    return Texture * Color;
}

Here Color is the vertex color passed down from the vertex shader, defined by the Color you give to spritebatch.Draw. If it were black, it would produce invisible sprites. The whole point is that this parameter sets the vertex color of each vertex, which is multiplied with the texture you set for the sprite; that is why a white texture drawn with Color.Red comes out red.

Applying color to an OpenGL ES 2.0 Point Sprite texture in Fragment shader?

I am creating a particle emitter with a texture that is a white circle with alpha. I am unable to color the sprites using the color passed to the fragment shader.
I tried the following:
gl_FragColor = texture2D(Sampler, gl_PointCoord) * colorVarying;
This seems to be doing some kind of additive coloring.
What I am attempting is porting this:
http://maniacdev.com/2009/07/source-code-particle-based-explosions-in-iphone-opengl-es/
from ES 1.1 to ES 2.0
With your code, consider the following example:
texture2D = (1, 0, 0, 1) = red, fully opaque
colorVarying = (0, 1, 0, 0.5) = green, half transparent
Then gl_FragColor would be (0, 0, 0, 0.5), i.e. black, half transparent.
Generally, you can use mix to interpolate values, but if I understood your problem correctly, it's even easier.
Basically, you only want the alpha channel from your texture and apply it to another color, right? Then you could do this:
gl_FragColor = vec4(colorVarying.rgb, texture2D(Sampler, gl_PointCoord).a);
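If you also want the varying's own alpha (0.5 in the example above) to attenuate the result, a small variation multiplies the two alpha values:

gl_FragColor = vec4(colorVarying.rgb, colorVarying.a * texture2D(Sampler, gl_PointCoord).a);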
