Metal alphaBlendOperation .max weird behavior - iOS

I'm using Metal to draw some lines. My drawing canvas has a texture attached via an MTLRenderPassDescriptor, and when I draw into it, blending is enabled on the MTLRenderPipelineDescriptor with alphaBlendOperation = .max:
renderPassDescriptor = MTLRenderPassDescriptor()
let attachment = renderPassDescriptor?.colorAttachments[0]
attachment?.texture = self.texture
attachment?.loadAction = .load
attachment?.storeAction = .store
let rpd = MTLRenderPipelineDescriptor()
rpd.colorAttachments[0].pixelFormat = .rgba8Unorm
let rpdAttachment = rpd.colorAttachments[0]!
rpdAttachment.isBlendingEnabled = true
rpdAttachment.rgbBlendOperation = .max
rpdAttachment.alphaBlendOperation = .max
I can change the brush properties (size, opacity, hardness/"blur"). The first two brushes work really well, as in the image below.
But I get one weird behavior when I use the blurred brush with faded sides: where lines connect, the faded areas are not blended as expected, and a small empty line appears at the connection. The image below shows this issue; please check the single line and the single point, and then check the connections, where you can see this behavior very clearly.
The render pass should keep the higher alpha, whether from the underlying texture or from the brush, but when I tap the second and third points it creates an empty line instead of choosing one of the alphas. It's as if the alpha becomes zero in those areas.
This is my faded brush. You can see there is a gradient of color, but I don't know whether there is a problem with it.
Please share any ideas you have to solve this.
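For reference, MTLBlendOperation.max takes the per-component maximum of the source and destination values (for the min/max operations the blend factors are not applied). A minimal Python sketch of that per-pixel math (the function and variable names are mine, for illustration only):

```python
def max_blend(src, dst):
    """Per-component maximum, as computed by MTLBlendOperation.max."""
    return tuple(max(s, d) for s, d in zip(src, dst))

# Two overlapping faded brush dabs (premultiplied RGBA):
dab_a = (0.5, 0.0, 0.0, 0.5)   # half-faded red edge
dab_b = (0.3, 0.0, 0.0, 0.3)   # softer edge of the next dab
print(max_blend(dab_a, dab_b))  # -> (0.5, 0.0, 0.0, 0.5)
```

Note that max never produces a component lower than either input, so the blend operation by itself cannot drive the alpha toward zero where two dabs overlap.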

Related

Can I make shadow that can look through transparent object with scenekit and arkit?

I made a transparent object with SceneKit and linked it with ARKit.
I made a shadow with a lighting material, but I can't see the shadow through the transparent object.
I made a plane, placed the object on it, and gave the light to the transparent object.
The shadow appears behind the object but cannot be seen through the object.
Here's the code that makes the shadow:
let light = SCNLight()
light.type = .directional
light.castsShadow = true
light.shadowRadius = 200
light.shadowColor = UIColor(red: 0, green: 0, blue: 0, alpha: 0.3)
light.shadowMode = .deferred
let constraint = SCNLookAtConstraint(target: model)
lightNode = SCNNode()
lightNode!.light = light
lightNode!.position = SCNVector3(model.position.x + 10, model.position.y + 30, model.position.z+30)
lightNode!.eulerAngles = SCNVector3(45.0, 0, 0)
lightNode!.constraints = [constraint]
sceneView.scene.rootNode.addChildNode(lightNode!)
And the code below makes a floor under the bottle:
let floor = SCNFloor()
floor.reflectivity = 0
let material = SCNMaterial()
material.diffuse.contents = UIColor.white
material.colorBufferWriteMask = SCNColorMask(rawValue:0)
floor.materials = [material]
self.floorNode = SCNNode(geometry: floor)
self.floorNode!.position = SCNVector3(x, y, z)
self.sceneView.scene.rootNode.addChildNode(self.floorNode!)
I think it can be solved with a simple property, but I can't figure it out.
How can I solve this?
A known issue with deferred shading is that it doesn't work with transparency, so you may have to remove that line and use the default forward shading again. That said, the "simple property" you are looking for is the .renderingOrder property on SCNNode. Set it to 99, for example. Normally the rendering order doesn't matter, because the z-buffer is used to determine which pixel is in front of the others. For the shadow to show up through the transparent part of the object, you need to make sure the object is rendered last.
On a different note, assuming you used some of the material settings I posted on your other question, try setting the shininess value to something like 0.4.
Note that this will still create a shadow as if the object was not transparent at all, so it won’t create a darker shadow for the label and cap. For additional realism you could opt to fake the shadow entirely, as in using a texture for the shadow and drop that on a plane which you rotate and skew as needed. For even more realism, you could fake the caustics that way too.
You may also want to add a reflection map to the .reflective property of the material. It is almost the same as a texture map but in grayscale, where the label and cap are dark gray (not very reflective) and the glass portion is a lighter gray (otherwise it will look like the label is on the inside of the glass). Last tip: use a Shell modifier (that's what it's called in 3ds Max, anyway) to give the glass model some thickness.

Glass effect in SceneKit material

I want to make a glass effect in SceneKit.
I searched on Google, but there's no perfect answer.
So I'm looking for a SceneKit warrior who can solve my problem cleanly.
Here is an image of what I'm going to make.
It should look real.
The glass effect, reflection, and shadow are the main points here.
I already have obj and dae files.
So, is there anyone who can help me?
Create an SCNMaterial, configure the following properties, and assign it to the bottle geometry of an SCNNode:
.lightingModel = .blinn
.transparent.contents = // an image/texture whose alpha channel defines
                        // the area of partial transparency (the glass)
                        // and the opaque part (the label).
.transparencyMode = .dualLayer
.fresnelExponent = 1.5
.isDoubleSided = true
.specular.contents = UIColor(white: 0.6, alpha: 1.0)
.diffuse.contents = // texture image including the label (rest can be gray)
.shininess = // somewhere between 25 and 100
.reflective.contents = // glass won't look good unless it has something
                       // to reflect, so configure this as well:
                       // at least a gray color with value 0.7,
                       // but preferably an image.
Depending on what else is in your scene, the background, and the lighting used, you will probably have to tune the values above to get the desired results. If you want a bottle without the label, use the .transparency property (set its contents to a gray color) instead of the .transparent property.

Reverse a CALayer mask

I am trying to use a CALayer with an image as its contents for masking a UIView. For the mask I have a complex PNG image. If I apply the image as view.layer.mask, I get the opposite behavior of what I want.
Is there a way to invert the CALayer? Here is my code:
layerMask = CALayer()
guard let layerMask = layerMask else { return }
layerMask.contents = #imageLiteral(resourceName: "mask").cgImage
view.layer.mask = layerMask
// What I would like to to is
view.layer.mask = layerMask.inverse. // <---
I have seen several posts on reversing CAShapeLayers and mutable paths, but nothing about reversing a plain CALayer.
What I could do is invert the image in Photoshop so that the alpha is inverted, but the problem with that is that I won't be able to create an image with the exact size to fit all screen sizes. I hope that makes sense.
What I would do is construct the mask in real time. This is easy if you have a black image of the logo. Using standard techniques, you can draw the logo image into an image that you construct in real time, so that you are in charge of the size of the image and the size and placement of logo within it. Using a "Mask To Alpha" CIFilter, you can then convert the black to transparent for use as a layer mask.
So, to illustrate. Here's the background image: this is what we want to see wherever we punch a hole in the foreground:
Here's the foreground image, lying on top of the background and completely hiding it:
Here's the logo, in black (ignore the grey, which represents transparency):
Here's the logo drawn in code into a white background of the correct size:
And finally, here's that same image converted into a mask with the Mask To Alpha CIFilter and attached to the foreground image view as its mask:
Okay, I could have chosen my images a little better, but this is what I had lying around. You can see that wherever there was black in the logo, we are punching a hole in the foreground image and seeing the background image, which I believe is exactly what you said you wanted to do.
The key step is the last one, namely the conversion of the black-on-white image of the logo (im) to a mask; here's how I did that:
let cim = CIImage(image: im)
let filter = CIFilter(name: "CIMaskToAlpha")!
filter.setValue(cim, forKey: "inputImage")
let out = filter.outputImage!
let cgim = CIContext().createCGImage(out, from: out.extent)
let lay = CALayer()
lay.frame = self.iv.bounds
lay.contents = cgim
self.iv.layer.mask = lay
If you're using a CALayer as a mask for another CALayer, you can invert the mask by creating a large opaque layer and subtracting out the mask shape with the xor blend mode.
For example, this code subtracts a given layer from a large opaque layer to create an inverted mask layer:
// Create a large opaque layer to serve as the inverted mask
let largeOpaqueLayer = CALayer()
largeOpaqueLayer.bounds = .veryLargeRect
largeOpaqueLayer.backgroundColor = UIColor.black.cgColor
// Subtract out the mask shape using the `xor` blend mode
let maskLayer = ...
largeOpaqueLayer.addSublayer(maskLayer)
maskLayer.compositingFilter = "xor"
Then you can use that layer as the mask for some other CALayer. For example here I'm using it as the mask of a small blue rectangle:
smallBlueRectangle.mask = largeOpaqueLayer
So you can see the mask is inverted! On the other hand if you just use the un-inverted maskLayer directly as a mask, you can see the mask is not inverted:
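To see why the xor trick inverts the mask: the "xor" compositing filter implements Porter-Duff XOR, which keeps each layer only where the other is absent. With a fully opaque backdrop (alpha 1) and a mask of alpha m, the resulting alpha is 1 - m. A quick Python check of that formula (the function name is mine):

```python
def porter_duff_xor_alpha(src_a, dst_a):
    """Resulting alpha of Porter-Duff XOR compositing:
    src * (1 - dstA) + dst * (1 - srcA)."""
    return src_a * (1 - dst_a) + dst_a * (1 - src_a)

# Mask shape drawn with the "xor" filter over a fully opaque layer:
for m in (0.0, 0.25, 1.0):
    print(m, porter_duff_xor_alpha(m, 1.0))  # prints m, 1 - m
```

So wherever the mask was opaque, the combined layer is transparent, and vice versa, which is exactly the inversion the screenshots show.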

Swift Progress Indicator Image Mask

To start, this project has been built using Swift.
I want to create a custom progress indicator that "fills up" as the script runs. The script calls a JSON feed pulled from a remote server.
To better visualize what I'm after, I made this:
My guess would be to have two PNG images, one white and one red, and then simply do some masking based on the progress amount.
Any thoughts on this?
Masking is probably overkill for this. Just redraw the image each time. When you do, you draw the red rectangle to fill the lower half of the drawing, to whatever height you want it; then you draw the droplet image (a PNG), which has transparency in the middle so the red rectangle shows through. So, one PNG is enough because the red rectangle can be drawn "live" each time you redraw.
I liked your drawing so much that I wanted to bring it to life, so here's my working code (my PNG is called tear.png and iv is a UIImageView in my interface; percent should be a CGFloat between 0 and 1):
func redraw(percent: CGFloat) {
    guard let tear = UIImage(named: "tear") else { return }
    let sz = tear.size
    let top = sz.height * (1 - percent)
    UIGraphicsBeginImageContextWithOptions(sz, false, 0)
    // Fill the lower part of the context with red...
    UIColor.red.setFill()
    UIGraphicsGetCurrentContext()?.fill(CGRect(x: 0, y: top, width: sz.width, height: sz.height))
    // ...then draw the teardrop PNG on top; its transparent
    // middle lets the red show through.
    tear.draw(at: .zero)
    self.iv.image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}
I also hooked up a UISlider whose action method converts its value to a CGFloat and calls that method, so that moving the slider back and forth moves the red fill up and down in the teardrop. I could play with this for hours!

DirectX 11 Blending transparent objects, not desirable result

I'm trying to implement transparent objects in D3D11. I've set up my blend state like this:
D3D11_BLEND_DESC blendDesc;
ZeroMemory(&blendDesc, sizeof (D3D11_BLEND_DESC));
blendDesc.RenderTarget[0].BlendEnable = TRUE;
blendDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
blendDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
blendDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ZERO;
blendDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
blendDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL; //0x0f;
// set blending
m_d3dDevice->CreateBlendState(&blendDesc, &blendState);
float blendFactor[4] = {1,1,1, 1 };
m_d3dContext->OMSetBlendState(blendState, blendFactor, 0xffffffff);
Rendering a transparent object on top of a nontransparent object looks fine. The problem is, when I draw a transparent object and then another transparent object on top of it, their colors add up and the result is less transparent. How do I prevent this? Thank you very much.
Your alpha blending follows the formula ResultingColor = alpha * RenderedColor + (1 - alpha) * BackbufferColor. At the overlapping parts of your transparent objects this formula is applied twice. For example, if your alpha is 0.5, the first object replaces 50% of the backbuffer color. The second object then takes 50% of its color from the previous color, which is 50% background and 50% first object, leaving a total of 25% of your background. This is why overlapping transparent objects look more opaque.
If you want equal transparency over the whole screen, you could render your transparent objects onto an offscreen texture. Afterwards, you render this texture over the backbuffer with a fixed transparency, or encode the transparency in the texture if you need varying values.
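The arithmetic above can be checked directly: applying the "over" formula twice with alpha 0.5 leaves only 25% of the background, whereas merging the objects offscreen first and blending the merged result once keeps the background at 50%. A small Python sketch (function and variable names are mine, tracking only the background's contribution to one channel):

```python
def blend_over(src, dst, alpha):
    """Standard alpha blending: alpha * src + (1 - alpha) * dst."""
    return alpha * src + (1 - alpha) * dst

background = 1.0   # the background's contribution we track
obj = 0.0          # transparent objects contribute 0 to this channel

# Two overlapping transparent objects, blended one after the other:
once = blend_over(obj, background, 0.5)   # 50% background left
twice = blend_over(obj, once, 0.5)        # only 25% background left
print(once, twice)  # 0.5 0.25

# Offscreen approach: the overlapping objects are merged first,
# then the merged texture is blended over the backbuffer once:
final = blend_over(obj, background, 0.5)  # 50% background everywhere
print(final)  # 0.5
```

This is why the offscreen pass produces uniform transparency: each screen pixel is blended against the backbuffer exactly once, no matter how many transparent objects overlap there.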
