I have the following image:
On a UIImageView of the exact same frame size, I want to show everything but the red fill.
let mask = CALayer()
mask.contents = clippingImage.cgImage
self.myImageView.layer.mask = mask
I thought the black color would show through when applied as a mask, but when I set the mask the whole view is cleared. What's happening here?
When creating masks from images, the mask uses the alpha channel, not any of the RGB channels. Even if your mask is black, what matters is its alpha: pixels where the mask's alpha is 0 are hidden, and pixels where it is 1 show through, regardless of color. And by default black is [0, 0, 0, 255] in RGBA terms, i.e., fully opaque. If you load an RGB image, it is of course converted to RGBA with A = 1.
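As a minimal sketch of a working setup, assuming clippingImage carries a real alpha channel (opaque where the view should show, alpha 0 over the red fill); note that the mask layer also needs a frame, or nothing will render at all:

let mask = CALayer()
mask.frame = self.myImageView.bounds      // without a frame, everything is masked out
mask.contents = clippingImage.cgImage     // opaque = visible, alpha 0 = hidden
self.myImageView.layer.mask = mask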
Related
I am trying to use a CALayer with an image as contents for masking a UIView. For the mask I have a complex PNG image. If I apply the image as view.layer.mask I get the opposite behaviour of what I want.
Is there a way to invert the CALayer? Here is my code:
layerMask = CALayer()
guard let layerMask = layerMask else { return }
layerMask.contents = #imageLiteral(resourceName: "mask").cgImage
view.layer.mask = layerMask
// What I would like to do is
view.layer.mask = layerMask.inverse // <-- something like this
I have seen several posts on inverting CAShapeLayers and mutable paths, but nothing on inverting a plain CALayer.
What I could do is invert the image in Photoshop so that the alpha is inverted, but the problem with that is that I won't be able to create an image with the exact size to fit all screen sizes. I hope that makes sense.
What I would do is construct the mask in real time. This is easy if you have a black image of the logo. Using standard techniques, you can draw the logo image into an image that you construct in real time, so that you are in charge of the size of the image and of the size and placement of the logo within it. Using a "Mask To Alpha" CIFilter, you can then convert the black to transparent for use as a layer mask.
So, to illustrate. Here's the background image: this is what we want to see wherever we punch a hole in the foreground:
Here's the foreground image, lying on top of the background and completely hiding it:
Here's the logo, in black (ignore the grey, which represents transparency):
Here's the logo drawn in code into a white background of the correct size:
And finally, here's that same image converted into a mask with the Mask To Alpha CIFilter and attached to the foreground image view as its mask:
Okay, I could have chosen my images a little better, but this is what I had lying around. You can see that wherever there was black in the logo, we are punching a hole in the foreground image and seeing the background image, which I believe is exactly what you said you wanted to do.
The key step is the last one, namely the conversion of the black-on-white image of the logo (im) to a mask; here's how I did that:
let cim = CIImage(image: im)
let filter = CIFilter(name: "CIMaskToAlpha")!
filter.setValue(cim, forKey: "inputImage")
let out = filter.outputImage!
// Render the filter output into a CGImage we can use as layer contents
let cgim = CIContext().createCGImage(out, from: out.extent)
let lay = CALayer()
lay.frame = self.iv.bounds
lay.contents = cgim
self.iv.layer.mask = lay
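For completeness, here's one way the black-on-white image im might be produced in the first place; the logo placement is hypothetical and up to you, and "logo" is assumed to be the black logo image:

// Sketch: draw the black logo into a white canvas the size of the image view.
let size = self.iv.bounds.size
let renderer = UIGraphicsImageRenderer(size: size)
let im = renderer.image { ctx in
    UIColor.white.setFill()
    ctx.fill(CGRect(origin: .zero, size: size))
    // Placement chosen for illustration; pick whatever rect you need.
    logo.draw(in: CGRect(x: 30, y: 30, width: 120, height: 120))
}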
If you're using a CALayer as a mask for another CALayer, you can invert the mask by creating a large opaque layer and subtracting out the mask shape with the xor blend mode.
For example, this code subtracts a given layer from a large opaque layer to create an inverted mask layer:
// Create a large opaque layer to serve as the inverted mask
let largeOpaqueLayer = CALayer()
largeOpaqueLayer.bounds = .veryLargeRect // placeholder: any rect much larger than the masked layer
largeOpaqueLayer.backgroundColor = UIColor.black.cgColor
// Subtract out the mask shape using the `xor` blend mode
let maskLayer = ...
largeOpaqueLayer.addSublayer(maskLayer)
maskLayer.compositingFilter = "xor"
Then you can use that layer as the mask for some other CALayer. For example, here I'm using it as the mask of a small blue rectangle:
smallBlueRectangle.mask = largeOpaqueLayer
So you can see the mask is inverted! On the other hand, if you just use the un-inverted maskLayer directly as a mask, you can see the mask is not inverted:
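Putting the pieces together, a self-contained sketch might look like this (the circular shape and the view name are hypothetical, and the backdrop simply covers the masked view's bounds rather than being "very large"):

// Opaque backdrop covering the view to be masked
let backdrop = CALayer()
backdrop.frame = someView.bounds
backdrop.backgroundColor = UIColor.black.cgColor

// The shape to punch out of the backdrop
let maskLayer = CAShapeLayer()
maskLayer.path = CGPath(ellipseIn: CGRect(x: 40, y: 40, width: 80, height: 80),
                        transform: nil)
maskLayer.fillColor = UIColor.black.cgColor

// xor removes the shape's alpha from the opaque backdrop, inverting the mask
backdrop.addSublayer(maskLayer)
maskLayer.compositingFilter = "xor"

someView.layer.mask = backdrop   // visible everywhere except inside the circle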
I'm trying to do image masking; here are my output and code.
This is my masking reference image (the image size doesn't matter):
mask.png
This is the image on which I'm performing the masking:
imggo.png
This is the output.
I'm using Swift. Here is my code...
override func viewDidLoad() {
    super.viewDidLoad()
    var maskRefImg: UIImage? = UIImage(named: "mask.png")
    var maskImg: UIImage? = UIImage(named: "imggo.png")
    var imgView: UIImageView? = UIImageView(frame: CGRectMake(20, 50, 99, 99))
    imgView?.image = maskImage(maskRefImg!, maskImage: maskImg!)
    self.view.addSubview(imgView!)
}

func maskImage(image: UIImage, maskImage: UIImage) -> UIImage {
    var maskRef: CGImageRef = maskImage.CGImage
    var mask: CGImageRef = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                             CGImageGetHeight(maskRef),
                                             CGImageGetBitsPerComponent(maskRef),
                                             CGImageGetBitsPerPixel(maskRef),
                                             CGImageGetBytesPerRow(maskRef),
                                             CGImageGetDataProvider(maskRef),
                                             nil, true)
    var masked: CGImageRef = CGImageCreateWithMask(image.CGImage, mask)
    return UIImage(CGImage: masked)!
}
So, how can I make the Go! image colorful?
Could anyone provide code?
You are calling the function maskImage with the wrong order of arguments:
The maskImage function wants the image to mask first, and then the mask. But when you call maskImage(maskRefImg!, maskImage: maskImg!) you have it exactly swapped.
So you need to call maskImage(maskImg!, maskImage: maskRefImg!) instead.
I'm guessing that what you want to have is the tilted rectangle with the word "Go!" and that the result image should be exactly the same size as the mask image.
When you swap the images (as you must), the mask image is scaled to the size of the "Go!" image. But you probably want the mask image centered over your "Go!" image instead. So you need to create a new image with the same size as your "Go!" image and draw the mask centered into that temporary image. You then use the temporary image as the actual mask to apply.
The example image when you swap the arguments also shows that the "outside" is green as well. This is probably because your mask image is transparent there and CGImageMaskCreate converts the transparency to black. The documentation of CGImageCreateWithMask basically tells you that in the created image, the parts where your mask image is black will show the "Go!" image, and the parts where your mask image is white will be transparent.
The step-by-step instructions thus are:
Create a new, temporary image that is of the same size as your input image (the "Go!" image).
Fill it with white.
Draw your mask image centered into the temporary image.
Create the actual mask by calling CGImageMaskCreate with the temporary image.
Call CGImageCreateWithMask with the "Go!" image as the first argument and the actual mask we've just created as the second argument.
The result might be too big (have a lot of transparency surrounding it). If you don't want that you need to crop the result image (e.g. to the size of your original mask image; make sure to crop to the center).
You can probably skip the CGImageMaskCreate step if you create the temporary image directly in the DeviceGray color space, because CGImageCreateWithMask also accepts a grayscale image (without alpha) in that color space as its second argument. In that case, I suggest you modify your mask.png so it does not contain any transparency: it should be white where it's transparent now.
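Here's a rough Swift sketch of those steps, using the modern Swift names for the CG calls above (CGImage(maskWidth:...) for CGImageMaskCreate, cgImage.masking(_:) for CGImageCreateWithMask) and taking the DeviceGray shortcut just described; the function name is hypothetical:

func maskCentered(_ image: UIImage, with maskImage: UIImage) -> UIImage? {
    guard let imageCG = image.cgImage, let maskCG = maskImage.cgImage else { return nil }
    let width = imageCG.width, height = imageCG.height

    // Steps 1 + 2: a DeviceGray canvas the size of the "Go!" image, filled with white.
    guard let ctx = CGContext(data: nil, width: width, height: height,
                              bitsPerComponent: 8, bytesPerRow: 0,
                              space: CGColorSpaceCreateDeviceGray(),
                              bitmapInfo: CGImageAlphaInfo.none.rawValue) else { return nil }
    ctx.setFillColor(gray: 1, alpha: 1)
    ctx.fill(CGRect(x: 0, y: 0, width: width, height: height))

    // Step 3: draw the mask image centered into the canvas.
    ctx.draw(maskCG, in: CGRect(x: (width - maskCG.width) / 2,
                                y: (height - maskCG.height) / 2,
                                width: maskCG.width, height: maskCG.height))

    // Steps 4 + 5: a DeviceGray image without alpha can be passed to masking(_:) directly.
    guard let grayMask = ctx.makeImage(),
          let masked = imageCG.masking(grayMask) else { return nil }
    return UIImage(cgImage: masked)
}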
I have an image I want to partially mask (wallSprite), an image to act as a mask over it (wallMaskBox), and a node to hold both (wallCropNode). When I simply add both images as children of wallCropNode, both images display correctly:
var wallSprite = SKSpriteNode(imageNamed: "wall.png")
var wallCropNode = SKCropNode()
var wallMaskBox = SKSpriteNode(imageNamed: "blacksquaretiny.png")
wallMaskBox.zPosition = 100
wallCropNode.addChild(wallSprite)
wallCropNode.addChild(wallMaskBox)
gameplayContainerNode.addChild(wallCropNode)
But when I set the mask image as a maskNode property of the crop node:
var wallSprite = SKSpriteNode(imageNamed: "wall.png")
var wallCropNode = SKCropNode()
var wallMaskBox = SKSpriteNode(imageNamed: "blacksquaretiny.png")
wallMaskBox.zPosition = 100
wallCropNode.addChild(wallSprite)
wallCropNode.maskNode = wallMaskBox
gameplayContainerNode.addChild(wallCropNode)
the wallSprite image disappears entirely, instead of being partly cropped. Any ideas?
The issue is that your black square image is completely opaque. Some (or all) of its pixels should be transparent (i.e., alpha = 0). The pixels of the cropped node that correspond to the mask node's transparent pixels will be masked out (i.e., not rendered). To demonstrate this, I used your code to create the following.
Here's the original image:
Here's the mask image that I used for the maskNode. Note that the white regions are transparent (i.e., alpha = 0). From Apple's documentation:
When rendering its children, each pixel is verified against the corresponding pixel in the mask. If the pixel in the mask has an alpha value of less than 0.05, the image pixel is masked out. Any pixel not rendered by the mask node is automatically masked out.
And here's the cropped node. I took a screenshot of the scene from the iPhone 6 simulator.
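For instance, here's a minimal sketch (image names from the question) where the mask is built in code as an opaque rectangle covering only the lower half of the wall; only the pixels it covers survive the crop:

let wallSprite = SKSpriteNode(imageNamed: "wall.png")
let wallCropNode = SKCropNode()

// An opaque sprite works as a mask: pixels it covers are kept,
// pixels it doesn't render are masked out.
let visibleHalf = SKSpriteNode(color: .black,
                               size: CGSize(width: wallSprite.size.width,
                                            height: wallSprite.size.height / 2))
visibleHalf.position = CGPoint(x: 0, y: -wallSprite.size.height / 4)

wallCropNode.addChild(wallSprite)
wallCropNode.maskNode = visibleHalf
gameplayContainerNode.addChild(wallCropNode)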
I'm trying to implement transparent objects in D3D11. I've setup my blend state like this:
D3D11_BLEND_DESC blendDesc;
ZeroMemory(&blendDesc, sizeof (D3D11_BLEND_DESC));
blendDesc.RenderTarget[0].BlendEnable = TRUE;
blendDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
blendDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
blendDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ZERO;
blendDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
blendDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

// set blending
m_d3dDevice->CreateBlendState(&blendDesc, &blendState);
float blendFactor[4] = { 1, 1, 1, 1 };
m_d3dContext->OMSetBlendState(blendState, blendFactor, 0xffffffff);
Rendering a transparent object on top of a nontransparent object looks fine. The problem is that when I draw one transparent object and then another transparent object on top of it, their colors add up and the overlap is less transparent. How can I prevent this? Thank you very much.
Your alpha blending follows the formula ResultingColor = alpha * RenderedColor + (1 - alpha) * BackbufferColor (SrcBlend applies to the color you are rendering, DestBlend to what is already in the backbuffer). At the overlapping parts of your transparent objects, this formula is applied twice. For example, if your alpha is 0.5, the first object replaces 50% of the backbuffer color. The second object then takes 50% of its own color and 50% of the previous color, which is itself 50% background and 50% first object, leaving only 25% of your background. This is why overlapping transparent objects look more opaque.
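To see that 25% concretely, call the backbuffer color B and the two object colors C1 and C2, each drawn with alpha = 0.5:

result1 = 0.5 * C1 + 0.5 * B
result2 = 0.5 * C2 + 0.5 * result1 = 0.5 * C2 + 0.25 * C1 + 0.25 * B

After the second object, only 25% of the background B remains, so the overlap looks more opaque.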
If you want an equal transparency over the whole screen, you could render your transparent objects into an offscreen texture. Afterwards, you render this texture over the backbuffer with a fixed transparency, or encode the transparency in the texture if you need varying values.
Here's what I'm trying to do: On the left is a generic, uncolorized RGBA image that I've created off-screen and cached for speed (it's very slow to create initially, but very fast to colorize with any color later, as needed). It's a square image with a circular swirl. Inside the circle, the image has an alpha/opacity of 1. Outside the circle, it has an alpha/opacity of 0. I've displayed it here inside a UIView with a background color of [UIColor scrollViewTexturedBackgroundColor]. On the right is what happens when I attempt to colorize the image by filling a solid red rectangle over the top of it after setting CGContextSetBlendMode(context, kCGBlendModeColor).
That's not what I want, nor what I expected. Evidently, colorizing a completely transparent pixel (i.e., an alpha value of 0) results in the full-on fill color for some strange reason, rather than remaining transparent as I would have expected.
What I want is actually this:
Now, in this particular case, I can set the clipping region to a circle, so that the area outside the circle remains untouched — and that's what I've done here as a workaround.
But in my app, I also need to be able to colorize arbitrary shapes where I don't know the clipping/outline path. One example is colorizing white text by overlaying a gradient. How is this done? I suspect there must be some way to do it efficiently — and generally, with no weird path/clipping tricks — using image masks... but I have yet to find a tutorial on this. Obviously it's possible because I've seen colored-gradient text in other games.
Incidentally, what I can't do is start with a gradient and clip/clear away parts I don't need — because (as shown in the example above) my uncolorized source images are, in general, grayscale rather than pure white. So I really need to start with the uncolorized image and then colorize it.
p.s. — kCGBlendModeMultiply also has the same flaws / shortcomings / idiosyncrasies when it comes to colorizing partially transparent images. Does anyone know why Apple decided to do it that way? It's as if the Quartz colorizing code treats RGBA(0,0,0,0) as RGBA(0,0,0,1), i.e., it completely ignores and destroys the alpha channel.
One approach that will work is to construct a mask from the original image and then call CGContextClipToMask() before rendering your image with the multiply blend mode set. Here is the Core Graphics code that would set the mask before drawing the image to colorize.
CGContextRef context = [frameBuffer createBitmapContext];
CGRect bounds = CGRectMake( 0.0f, 0.0f, width, height );
CGContextClipToMask(context, bounds, maskImage.CGImage);
CGContextDrawImage(context, bounds, greyImage.CGImage);
The slightly more tricky part is taking the original image and generating the maskImage. What you can do is write a loop that examines each pixel and writes either a black or white pixel as the mask value. If the original pixel in the image to colorize is completely transparent, write a black pixel; otherwise write a white pixel. Note that the mask will be a 24BPP image. Here is some code to give you the right idea.
uint32_t *inPixels = (uint32_t*) MEMORY_ADDR_OF_ORIGINAL_IMAGE;
uint32_t *maskPixels = malloc(numPixels * sizeof(uint32_t));
uint32_t *maskPixelsPtr = maskPixels;

for (int rowi = 0; rowi < height; rowi++) {
  for (int coli = 0; coli < width; coli++) {
    uint32_t inPixel = *inPixels++;
    uint32_t inAlpha = (inPixel >> 24) & 0xFF;
    uint32_t cval = 0;
    if (inAlpha != 0) {
      cval = 0xFF;
    }
    // Opaque white where the original pixel has any alpha, opaque black elsewhere
    uint32_t outPixel = (0xFF << 24) | (cval << 16) | (cval << 8) | cval;
    *maskPixelsPtr++ = outPixel;
  }
}
You will of course need to fill in all the details and create the graphics contexts and so on. But the general idea is simply to create your own mask to filter out drawing of the red parts around the outside of the circle.
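For reference, here's the same idea sketched in Swift (greyImage, maskImage, and the red fill color are stand-ins for your own images and colorizing color):

let bounds = CGRect(origin: .zero, size: greyImage.size)
let renderer = UIGraphicsImageRenderer(size: greyImage.size)
let colored = renderer.image { ctx in
    let cg = ctx.cgContext
    // CGImage draws and masks are flipped relative to UIKit, so flip the context first.
    cg.translateBy(x: 0, y: bounds.height)
    cg.scaleBy(x: 1, y: -1)
    // Clip to the derived mask so fully transparent source pixels stay untouched.
    cg.clip(to: bounds, mask: maskImage.cgImage!)
    cg.draw(greyImage.cgImage!, in: bounds)
    // Colorize only inside the clip region.
    cg.setBlendMode(.color)
    cg.setFillColor(UIColor.red.cgColor)
    cg.fill(bounds)
}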