Swift Progress Indicator Image Mask (iOS)

To start, this project has been built using Swift.
I want to create a custom progress indicator that "fills up" as a script runs. The script pulls a JSON feed from a remote server.
To better visualize what I'm after, I made this:
My guess would be to have two PNG images, one white and one red, and then do some masking based on the progress amount.
Any thoughts on this?

Masking is probably overkill for this. Just redraw the image each time. When you do, you draw the red rectangle to fill the lower half of the drawing, to whatever height you want it; then you draw the droplet image (a PNG), which has transparency in the middle so the red rectangle shows through. So, one PNG is enough because the red rectangle can be drawn "live" each time you redraw.
I liked your drawing so much that I wanted to bring it to life, so here's my working code (my PNG is called tear.png and iv is a UIImageView in my interface; percent should be a CGFloat between 0 and 1):
func redraw(percent: CGFloat) {
    // Bail out if the PNG is missing instead of force-unwrapping it.
    guard let tear = UIImage(named: "tear") else { return }
    let sz = tear.size
    // Top edge of the red fill: percent == 0 is empty, percent == 1 is full.
    let top = sz.height * (1 - percent)
    UIGraphicsBeginImageContextWithOptions(sz, false, 0)
    UIColor.red.setFill()
    UIRectFill(CGRect(x: 0, y: top, width: sz.width, height: sz.height))
    // The droplet PNG is transparent in the middle, so the red shows through.
    tear.draw(at: .zero)
    self.iv.image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}
I also hooked up a UISlider whose action method converts its value to a CGFloat and calls that method, so that moving the slider back and forth moves the red fill up and down in the teardrop. I could play with this for hours!
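For reference, the slider hookup can be as simple as the sketch below; `sliderChanged` is a hypothetical action name, and it assumes the slider's range is left at the default 0...1:
@IBAction func sliderChanged(_ sender: UISlider) {
    // UISlider.value is a Float; redraw(percent:) takes a CGFloat.
    redraw(percent: CGFloat(sender.value))
}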

Related

Achieving erase/restore drawing on UIImage in Swift

I'm trying to make a simple image eraser tool, where the user can erase parts of an image and then restore them by drawing, just like in this image:
After many attempts and testing, I have achieved sufficient "erase" functionality with the following code on the UI side:
// Drawing code - on user touch
// `currentPath` is a `UIBezierPath` property of the containing class.
guard let image = pickedImage else { return }
UIGraphicsBeginImageContextWithOptions(imageView.frame.size, false, 0)
if let context = UIGraphicsGetCurrentContext() {
    mainImageView.layer.render(in: context)
    context.addPath(currentPath.cgPath)
    context.setBlendMode(.clear)
    context.setLineWidth(translatedBrushWidth)
    context.setLineCap(.round)
    context.setLineJoin(.round)
    context.setStrokeColor(UIColor.clear.cgColor)
    context.strokePath()
    let capturedImage = UIGraphicsGetImageFromCurrentImageContext()
    imageView.image = capturedImage
}
UIGraphicsEndImageContext()
Upon touch-up, I apply a scale transform to currentPath and re-render the image with the cutout at full size, to preserve UI performance.
What I'm trying to figure out now is how to approach the "restore" functionality. Essentially, the user should draw on the erased parts to reveal the original image.
I've tried looking at CGContextClipToMask but I'm not sure how to approach the implementation.
I've also looked at other approaches to achieving this "erase/restore" effect before rendering the actual images, such as masking the image with a CAShapeLayer, but restoring becomes a problem in that approach too.
Any help will be greatly appreciated, as well as alternative approaches to erase and restore with a path on the UI-level and rendering level.
Thank you!
Yes, I would recommend adding a CALayer to your image view's layer as a mask.
You can either make the mask layer a CAShapeLayer and draw geometric shapes into it, or use a simple CALayer as a mask, where the contents property of the mask layer is a CGImage. You'd then draw opaque pixels into the mask to reveal the image contents, or transparent pixels to "erase" the corresponding image pixels.
This approach is hardware accelerated and quite fast.
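As a minimal sketch of the idea (assuming `imageView` is the view to mask and `maskImage` is a CGImage you have drawn; both names are stand-ins):
let maskLayer = CALayer()
maskLayer.frame = imageView.bounds
// Opaque pixels in maskImage reveal the image; transparent pixels hide it.
maskLayer.contents = maskImage
imageView.layer.mask = maskLayer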
Handling undo/redo of eraser functions would require you to collect changes to your mask layer as well as the previous state of the mask.
Edit:
I created a small demo app on GitHub that shows how to use a CGImage as a mask on an image view.
Here is the ReadMe file from that project:
MaskableImageView
This project demonstrates how to use a CALayer to mask a UIView.
It defines a custom subclass of UIImageView, MaskableView.
The MaskableView class has a property maskLayer that contains a CALayer.
MaskableView defines a didSet method on its bounds property so that when the view's bounds change, it resizes the mask layer to match the size of the image view.
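Such an override might look like the sketch below; maskLayer is the property named above, and the actual project's code may differ:
override var bounds: CGRect {
    didSet {
        // Keep the mask layer the same size as the view.
        maskLayer.frame = layer.bounds
    }
}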
The MaskableView has a method installSampleMask which builds an image the same size as the image view, mostly filled with opaque black, but with a small rectangle in the center filled with black at an alpha of 0.7. The translucent center rectangle causes the image view to become partly transparent and show the view underneath.
The demo app installs a couple of subviews into the MaskableView: a sample image of Scampers, one of my dogs, and a UILabel. It also installs an image of a checkerboard under the MaskableView so that you can see the translucent parts more easily.
The MaskableView has properties circleRadius, maskDrawingAlpha, and drawingAction that it uses to let the user erase/un-erase the image by tapping on the view to update the mask.
The MaskableView attaches a UIPanGestureRecognizer and a UITapGestureRecognizer to itself, with an action of gestureRecognizerUpdate. The gestureRecognizerUpdate method takes the tap/drag location from the gesture recognizer and uses it to draw a circle onto the image mask that either decreases the image mask's alpha (to partly erase pixels) or increases it (to make those pixels more opaque).
The MaskableView's mask drawing is crude and only meant for demonstration purposes. It draws a series of discrete circles instead of rendering a path into the mask based on the user's drag gesture. A better solution would be to connect the points from the gesture recognizer and use them to render a smoothed curve into the mask.
The app's screen looks like this:
Edit #2:
If you want to export the resulting image to a file that preserves the transparency, you can convert the CGImage to a UIImage (using the init(cgImage:) initializer) and then use the UIImage function
func pngData() -> Data?
to convert the image to PNG data. That function returns nil if it is unable to convert the image to PNG data.
If it succeeds, you can then save the data to a file with a .png extension.
I updated the sample project to include the ability to save the resulting image to disk.
First I added an image computed property to the MaskableView. That looks like this:
public var image: UIImage? {
    guard let renderer = renderer else { return nil }
    let result = renderer.image { context in
        layer.render(in: context.cgContext)
    }
    return result
}
Then I added a save button to the view controller that fetches the image from the MaskableView and saves it to the app's Documents directory:
@IBAction func handleSaveButton(_ sender: UIButton) {
    print("In handleSaveButton")
    if let image = maskableView.image,
       let pngData = image.pngData() {
        print(image.description)
        let imageURL = getDocumentsDirectory().appendingPathComponent("image.png", isDirectory: false)
        do {
            try pngData.write(to: imageURL)
            print("Wrote png to \(imageURL.path)")
        } catch {
            print("Error writing file to \(imageURL.path)")
        }
    }
}
You could also save the image to the user's camera roll. It's been a while since I've done that so I'd have to dig up the steps for that.

Metal alphaBlendOperation .max weird behavior

I'm using Metal to draw some lines. My drawing canvas has a texture set on the MTLRenderPassDescriptor, and when I draw into it, blending is enabled on the MTLRenderPipelineDescriptor with alphaBlendOperation = .max:
renderPassDescriptor = MTLRenderPassDescriptor()
let passAttachment = renderPassDescriptor?.colorAttachments[0]
passAttachment?.texture = self.texture
passAttachment?.loadAction = .load
passAttachment?.storeAction = .store

let rpd = MTLRenderPipelineDescriptor()
rpd.colorAttachments[0].pixelFormat = .rgba8Unorm
let pipelineAttachment = rpd.colorAttachments[0]!
pipelineAttachment.isBlendingEnabled = true
pipelineAttachment.rgbBlendOperation = .max
pipelineAttachment.alphaBlendOperation = .max
I can change the brush's properties (size, opacity, hardness/"blur"). The first two brushes work really well, as shown in the image below.
But I get one weird behavior when I use a blurred brush with faded sides: where lines connect, the faded areas do not blend as expected, and a small empty line appears at the connection. The image below shows this issue; compare the single line and single point with the connections and you can see the behavior very clearly.
With .max blending, the result should keep either the underlying texture's alpha or the brush's alpha, whichever is larger; but when I tap the second and third points, an empty line appears instead of one of those alphas being chosen. It's as if the alpha were set to zero in those areas.
This is my faded brush; you can see there is a gradient of color, but I don't know if there is a problem with it.
Please share any ideas you have for solving this.

iOS UIImage.drawInRect (CGContextDrawImage) performance

I have an app that lets the user manipulate items like text, images, shapes (vector images). They are stacked on top of each other like layers are in Photoshop.
To manipulate means to translate, resize, rotate.
I use drawInRect to draw all my items. Unfortunately, when drawing images I see very poor performance; that's the conclusion after inspecting the code with the profiler.
From what I've read online, people recommend using a separate UIView (UIImageView) for all the image drawing and normal drawInRect for the other stuff in a separate view.
But this approach would be problematic because I can have a situation like this:
2 layers in the back with text
1 layer with an image, obstructing part of the text
2 layers in the front with text on top of the image
This would mean I would have to make a UIView for the first 2 text layers, a UIImageView for the middle image, and another UIView for the front items. This seems unreasonable.
How can I improve the performance of my image drawing?
Also, keep in mind that I need to export this whole thing to a certain final image size.
NOTE: where I mention layer I'm not referring to CALayer.
Use a method like this to draw what you want:
func drawSomeImage(size: CGSize, opaque: Bool, scale: CGFloat, completion: (UIImage) -> Void) {
    UIGraphicsBeginImageContextWithOptions(size, opaque, scale)
    // Your drawing...
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    if let image = image {
        completion(image)
    }
}
And run those lines off the main thread, like this:
func myImage() {
    DispatchQueue.global().async {
        // The size/opaque/scale values here are just examples;
        // a scale of 0 means "use the device's screen scale".
        self.drawSomeImage(size: CGSize(width: 1024, height: 1024), opaque: false, scale: 0) { image in
            DispatchQueue.main.async {
                // Use the image on the main thread, e.g.:
                // self.imageView.image = image
            }
        }
    }
}

Glass effect in SceneKit material

I want to create a glass effect in SceneKit.
I searched Google, but there's no perfect answer.
So I'm looking for a SceneKit warrior who can solve my problem clearly.
Here's an image of what I'm going to make.
It should look realistic.
The glass effect, reflection, and shadow are the main points here.
I already have obj and dae files.
So, is there anyone who can help me?
Create an SCNMaterial, configure the following properties, and assign it to the bottle geometry of an SCNNode:
.lightingModel = .blinn
.transparent.contents = // an image/texture whose alpha channel defines
                        // the area of partial transparency (the glass)
                        // and the opaque part (the label)
.transparencyMode = .dualLayer
.fresnelExponent = 1.5
.isDoubleSided = true
.specular.contents = UIColor(white: 0.6, alpha: 1.0)
.diffuse.contents = // texture image including the label (the rest can be gray)
.shininess = // somewhere between 25 and 100
.reflective.contents = // glass won't look good unless it has something
                       // to reflect, so configure this as well: at least
                       // a gray color with value 0.7, but preferably an image
Depending on what else is in your scene, the background, and the lighting used, you will probably have to tune the values above to get the desired results. If you want a bottle without the label, use the .transparency property (set its value to less than 1) instead of the .transparent property.
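Put together in code, that configuration might look like the sketch below; the asset names ("glassTransparency", "bottleDiffuse", "environment") and the bottleNode variable are placeholders for your own content:
import SceneKit

let material = SCNMaterial()
material.lightingModel = .blinn
// Alpha channel: transparent where the glass is, opaque where the label is.
material.transparent.contents = UIImage(named: "glassTransparency")
material.transparencyMode = .dualLayer
material.fresnelExponent = 1.5
material.isDoubleSided = true
material.specular.contents = UIColor(white: 0.6, alpha: 1.0)
material.diffuse.contents = UIImage(named: "bottleDiffuse")
material.shininess = 50
// Glass needs something to reflect; an environment image works best.
material.reflective.contents = UIImage(named: "environment")

bottleNode.geometry?.firstMaterial = material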

Erase only part of an image (currently entire image is erased)

I'm trying to use a finger tap or drag to erase part of a UIImageView.
Here's what I have so far:
let panErase = UIPanGestureRecognizer(target: self, action: #selector(erase(_:)))
let tapErase = UITapGestureRecognizer(target: self, action: #selector(erase(_:)))
imageBeingEdited.addGestureRecognizer(panErase)
imageBeingEdited.addGestureRecognizer(tapErase)
I'm not quite sure how to debug graphics context modifications, but this erases the whole image:
let erasurePoint = gesture.location(in: imageBeingEdited)
print("\(erasurePoint.x) \(erasurePoint.y)")
let image: UIImage = imageBeingEdited.image!
let s = image.size
UIGraphicsBeginImageContext(s)
let g = UIGraphicsGetCurrentContext()!
g.beginPath()
g.addEllipse(in: CGRect(x: erasurePoint.x, y: erasurePoint.y, width: 5, height: 5))
g.clip(using: .evenOdd)
image.draw(at: .zero)
imageBeingEdited.image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
The goal is to erase a circle at the tap location. What did I miss here? It looks like the image is cropped into a 5x5 circle, not necessarily at the tapped point.
It looks to me like the code you've posted would result in clipping your image to a 5x5 ellipse, exactly as you describe.
Did you write this code, or copy it from somewhere else?
It needs to be rearranged so it first draws the image into the context, then draws your ellipse using a clear color and the drawing mode where the alpha of the source pixels is written to the destination. Don't muck around with the context's clipping path at all.
This doesn't look like very efficient code to me. On every change in the pan gesture you're creating a context, drawing an image into it, and then copying out a new image. Then, presumably, you're drawing the resulting image. That's not likely to be fast enough to keep up with the user's pan gesture.
Instead I would probably add a CAShapeLayer as a mask layer to my image view's layer and modify that mask layer's path, appending an ellipse to the mask path for each point the user touches. Even that might not be fast enough for smooth drawing. You might need to write code that interpolates between the beginning and end touch positions and fills the whole segment.
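Here is a sketch of the rearranged drawing described above, wrapped in a hypothetical erase(at:) helper; the 20-point circle size and the names are illustrative:
func erase(at erasurePoint: CGPoint) {
    guard let image = imageBeingEdited.image else { return }
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    guard let g = UIGraphicsGetCurrentContext() else { return }
    // 1. Draw the existing image into the context first.
    image.draw(at: .zero)
    // 2. Punch a transparent hole: the .clear blend mode writes
    //    zero alpha to the destination pixels.
    g.setBlendMode(.clear)
    g.fillEllipse(in: CGRect(x: erasurePoint.x - 10, y: erasurePoint.y - 10,
                             width: 20, height: 20))
    imageBeingEdited.image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}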
