Create iOS mask image on the fly

I've got an image I want to mask on the fly. The mask is basically shaped like a part-circle and changes in size from time to time. Therefore I need to create an in-memory image, draw the mask circle into it, and apply the masking to the original image as described in How to Mask an UIImageView.
The thing is, I have no idea how to create an in-memory image that I can use for masking and apply basic drawing operations to.

If your mask is only a semi-circle, it might be easier to create a clipping path with CGContext* calls and draw your image as a CGImage with the clipping path applied. See the documentation for CGContextClip() for details.
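A minimal sketch of that approach, assuming the drawing happens in -drawRect: of a custom view and that image, startAngle and endAngle are hypothetical properties you keep up to date:

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGPoint center = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
    CGFloat radius = MIN(self.bounds.size.width, self.bounds.size.height) / 2.0;

    // Build the part-circle: center -> arc -> back to center.
    CGContextBeginPath(ctx);
    CGContextMoveToPoint(ctx, center.x, center.y);
    CGContextAddArc(ctx, center.x, center.y, radius, self.startAngle, self.endAngle, 0);
    CGContextClosePath(ctx);

    // Everything drawn after this call is confined to that path.
    CGContextClip(ctx);

    [self.image drawInRect:self.bounds];
}

Because the clipping happens at draw time, you can change the angles and call setNeedsDisplay instead of building a new mask image for every update.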

Mask/crop an image

What is the best approach to display a cropped/masked image in Flutter?
Let's say I have one image with a mask (e.g. an irregular star shape on a transparent background) and another image which I want to mask with this star, so that only the part of the original image inside the star is rendered.
I'm aiming for something like PorterDuffXfermode on Android (similar question here - Android how to apply mask on ImageView?).
For simple mask shapes, is going the RenderClipOval route a good approach?
I would just paint it using a CustomPainter, setting the blendMode on the Paint you pass to the method when you paint the image.
See https://docs.flutter.io/flutter/dart-ui/Canvas/drawImageRect.html and https://docs.flutter.io/flutter/dart-ui/Paint/blendMode.html and https://docs.flutter.io/flutter/widgets/CustomPaint-class.html.

iOS Extract Output Image from Masked Image

I have created an image mask based on user input, and the result includes both the foreground and the created mask (mixed).
Masked Image
I am trying to find a way to return only the masked image, and have been unable to find a way to change the alpha value of the mask. Can anyone help? I want to end up with a final image of just the selected area.
The code I am working with to create the mask and return the masked image is here: Code Sample
After continued research and testing, I used the getForeground:andMask method to separate the target into two sets of values. Then I inverted the mask and added the resulting image and inverted mask to a subview with a clear background color.

Masking performance

I'm creating an animation that uncovers the underlying image. There's a virtual shape (e.g. star) moving chaotically and uncovering different parts of the image.
So far I have two bitmaps:
mask (trace of a shape moving here'n'there)
image (underlying image)
So far, in every drawRect: I was:
creating a newMask bitmap by copying the current mask
drawing a stamp onto newMask
creating a resulting bitmap (applying newMask to the image)
drawing the resulting bitmap into the screen context
I'm struggling with performance in this approach. Any ideas how to improve it?
In particular:
Is it possible to skip steps 1 and 2 and draw onto the mask directly (rather than cloning it)?
Should I start experimenting with a CALayer approach (if this kind of masking is possible there at all)?
Should I use OpenGL?
Is there any other approach to tackle this?
No, you should not manipulate bitmaps. That is likely to be very CPU-intensive, as well as jerky (not a smooth animation).
Instead you should use a CAShapeLayer as a mask and Core Animation.
With a shape layer you can install a path (a CGPath, which can be created easily from a UIBezierPath) into the layer. Then you create a CABasicAnimation that switches the path to a new path. The trick is to always keep the same number and type of control points in the starting and ending paths of the animation. (If the number and/or type of control points in the two paths are different you get very, very strange results. Note that the path calls that create arcs of circles actually generate different numbers of control points based on how much of a circle your arc covers, so circle arcs require special handling.)
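A minimal sketch of that setup, assuming imageView is the view being revealed and fromPath/toPath are two UIBezierPaths you have built with matching control points (both hypothetical):

CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = imageView.bounds;
maskLayer.path = fromPath.CGPath;
imageView.layer.mask = maskLayer;

CABasicAnimation *reveal = [CABasicAnimation animationWithKeyPath:@"path"];
reveal.fromValue = (__bridge id)fromPath.CGPath;
reveal.toValue = (__bridge id)toPath.CGPath;
reveal.duration = 1.0;

// Set the model value to the end path so the mask keeps it after the animation.
maskLayer.path = toPath.CGPath;
[maskLayer addAnimation:reveal forKey:@"revealAnimation"];

Because the animation is handled by Core Animation rather than your drawRect:, the per-frame bitmap copying in steps 1-4 goes away entirely.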
I have a sample project on Github that demonstrates various Core Animation techniques, including a demonstration of a "clock wipe" animation that reveals/hides an image view much like the one you describe.
https://github.com/DuncanMC/iOS-CAAnimation-group-demo
The animation looks like this:
Note that the jerky nature of that image is because it's a GIF. The actual animation on a device is buttery-smooth. It's also possible to create very complex smooth animations like this one:
(That isn't a mask animation but it could be.)

How to create a non-rectangular UIImageView

I'm trying to make a ViewController that presents info from a webpage like this:
However, I'm confused about one thing. How did they get the imageView to display an image that's cut off at the corner, i.e. not rectangular? Do you think they created that player card in Photoshop and used it as the background for the imageView image, or did they create it programmatically?
I wonder because the image is behind the picture of the bear, so if they created the background in Photoshop, how would they get the image behind the bear head? They can't have just created the card with the player's picture baked in and loaded the whole thing, because I'm sure they pull the player info and picture from the web so they can have a card for every player, even one traded or acquired mid-season, without having to update the app (and add the finished image to images.xcassets).
This can be composed from two CALayers at runtime. Put the picture on the bottom layer; it can come from anywhere (the web, the bundle, etc.), so the image source can be dynamic.
Put another CALayer on top, with the frame rendered in opaque colors and a transparent cut-out for the picture in the middle.
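A minimal sketch of that composition, assuming photo is the dynamically loaded picture, frameImage is the pre-rendered frame with the transparent cut-out, and containerView is the view that hosts the card (all hypothetical names):

CALayer *photoLayer = [CALayer layer];
photoLayer.frame = containerView.bounds;
photoLayer.contents = (__bridge id)photo.CGImage;

CALayer *frameLayer = [CALayer layer];
frameLayer.frame = containerView.bounds;
frameLayer.contents = (__bridge id)frameImage.CGImage;

[containerView.layer addSublayer:photoLayer];   // bottom: the dynamic picture
[containerView.layer addSublayer:frameLayer];   // top: opaque frame with a transparent hole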
There are a bunch of ways to do this. A simple and flexible way is to create a CAShapeLayer that's the same size as the image view, with its origin at (0, 0), and add it as the UIImageView's layer's mask.
You'd create a filled UIBezierPath that maps out the part of the image you want to show, and install the bezier path's CGPath into the mask layer's path property.
The result would be that the image view is cropped so that only the part inside the shape is drawn.
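A minimal sketch of that approach, assuming imageView is your UIImageView and the cut-off corner is at the top right (the 40-point inset is just a placeholder):

CGRect bounds = imageView.bounds;

UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:CGPointMake(0.0, 0.0)];
[path addLineToPoint:CGPointMake(bounds.size.width - 40.0, 0.0)]; // start of the cut
[path addLineToPoint:CGPointMake(bounds.size.width, 40.0)];       // diagonal corner
[path addLineToPoint:CGPointMake(bounds.size.width, bounds.size.height)];
[path addLineToPoint:CGPointMake(0.0, bounds.size.height)];
[path closePath];

CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = bounds;            // same size as the image view, origin at (0, 0)
maskLayer.path = path.CGPath;
imageView.layer.mask = maskLayer;    // only the area inside the path is drawn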

How to mask a UIView to highlight a selection?

The problem that I am facing is simple (and less abstract than the question itself). I am looking for a solution to highlight an area of an image (the selection) while the rest of the image is faded or grayed out. You can compare the effect with the interface you see in, for example, Photoshop when you crop an image. The image is grayed out and the area that will be cropped is clear.
My initial idea was to use masking for this (hence the question), but I am not sure if this is a viable approach and, if it is, how to proceed.
Not sure if this is the best way, but it should work.
First, you create a screenshot of the view.
// Render the view you want to highlight into an image context and grab the result.
UIGraphicsBeginImageContextWithOptions(captureView.bounds.size, captureView.opaque, 0.0);
[captureView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This snippet is 'stolen' and slightly modified from here:
Low quality of capture view context on iPad
Then you could create a grayscale mask image of the same dimensions as the original (the screenshot).
Follow the clear & simple instructions on How to Mask an Image.
Then you create a UIImageView, set the masked image as its image, and add it on top of your original view. You also might want to set the backgroundColor of this UIImageView to your liking.
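A minimal sketch of that masking step, assuming screenshot is the capture from above and maskImage is a grayscale mask of the same size, black over the selection and white elsewhere (CGImageMaskCreate treats black as visible and white as hidden):

CGImageRef maskSource = maskImage.CGImage;
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskSource),
                                    CGImageGetHeight(maskSource),
                                    CGImageGetBitsPerComponent(maskSource),
                                    CGImageGetBitsPerPixel(maskSource),
                                    CGImageGetBytesPerRow(maskSource),
                                    CGImageGetDataProvider(maskSource),
                                    NULL,    // no decode array
                                    false);  // no interpolation
CGImageRef maskedRef = CGImageCreateWithMask(screenshot.CGImage, mask);
UIImage *maskedImage = [UIImage imageWithCGImage:maskedRef];
CGImageRelease(mask);
CGImageRelease(maskedRef);

UIImageView *overlay = [[UIImageView alloc] initWithImage:maskedImage];
overlay.frame = captureView.bounds;
overlay.backgroundColor = [UIColor colorWithWhite:0.0 alpha:0.5]; // dims everything outside the selection
[captureView addSubview:overlay];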
EDIT:
A simpler way would probably be using view.layer.mask, which is "an optional layer whose alpha channel is used as a mask to select between the layer's background and the result of compositing the layer's contents with its filtered background." (from CALayer class reference)
Some literature:
UIView Class Reference
CALayer Class Reference
And a simple example of how a mask can be made with another (possibly hidden) UIView:
Masking a UIView
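A minimal sketch of the layer-mask route for this particular highlight effect, assuming imageView shows the image and selectionRect is the area (in imageView coordinates) that should stay clear (both hypothetical):

UIView *dimmingView = [[UIView alloc] initWithFrame:imageView.bounds];
dimmingView.backgroundColor = [UIColor colorWithWhite:0.0 alpha:0.6];
[imageView addSubview:dimmingView];

// One path covering the whole view plus the selection; with the even-odd
// fill rule the selection becomes a hole in the mask.
UIBezierPath *path = [UIBezierPath bezierPathWithRect:dimmingView.bounds];
[path appendPath:[UIBezierPath bezierPathWithRect:selectionRect]];

CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = dimmingView.bounds;
maskLayer.fillRule = kCAFillRuleEvenOdd;
maskLayer.path = path.CGPath;
dimmingView.layer.mask = maskLayer;   // selection stays clear, the rest is dimmed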
