I have an app that lets the user manipulate items like text, images, and shapes (vector images). They are stacked on top of each other, like layers in Photoshop.
To manipulate means to translate, resize, rotate.
I use drawInRect to draw all my items. Unfortunately, image drawing performs very poorly; that is the conclusion from inspecting the code with the profiler.
From what I've read online, people recommend using a separate UIView (UIImageView) for all the image drawing and normal drawInRect for the other stuff in a separate view.
But this approach would be problematic because I can have a situation like this:
2 layers in the back with text
1 layer with an image, obstructing part of the text
2 layers in the front with text on top of the image
This would mean I would have to make a UIView for the first 2 text layers, a UIImageView for the middle image, and another UIView for the front items. This seems unreasonable.
How can I improve the performance of my image drawing?
Also, keep in mind that I need to export this whole thing to a certain final image size.
NOTE: where I mention layers, I'm not referring to CALayer.
Use a method like this to draw what you want:
func drawSomeImage(size: CGSize, completion: (UIImage?) -> Void) {
    // size, opaque, scale (0 means the device's screen scale)
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    // Your drawing...
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    completion(image)
}
And run those lines on a thread other than the main thread, like this:
func myImage() {
    let size = imageView.bounds.size   // read UIKit geometry on the main thread
    DispatchQueue.global().async {
        self.drawSomeImage(size: size) { image in
            DispatchQueue.main.async {
                // use this image on your main thread, e.g.:
                // self.imageView.image = image
            }
        }
    }
}
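On iOS 10 and later, the same pattern can also be written with UIGraphicsImageRenderer, which manages the context for you; a minimal sketch (the function name here is just for illustration):
func drawSomeImageWithRenderer(size: CGSize, completion: @escaping (UIImage) -> Void) {
    DispatchQueue.global().async {
        let renderer = UIGraphicsImageRenderer(size: size)
        let image = renderer.image { _ in
            // Your drawing...
        }
        DispatchQueue.main.async {
            completion(image)
        }
    }
}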
I'm trying to make a simple image eraser tool, where the user can erase and restore by drawing on an image, just like in this image:
After many attempts and testing, I have achieved sufficient "erase" functionality with the following code on the UI side:
// Drawing code - on user touch
// `currentPath` is a `UIBezierPath` property of the containing class.
guard let image = pickedImage else { return }
UIGraphicsBeginImageContextWithOptions(imageView.frame.size, false, 0)
if let context = UIGraphicsGetCurrentContext() {
mainImageView.layer.render(in: context)
context.addPath(currentPath.cgPath)
context.setBlendMode(.clear)
context.setLineWidth(translatedBrushWidth)
context.setLineCap(.round)
context.setLineJoin(.round)
context.setStrokeColor(UIColor.clear.cgColor)
context.strokePath()
let capturedImage = UIGraphicsGetImageFromCurrentImageContext()
imageView.image = capturedImage
}
UIGraphicsEndImageContext()
And upon user touch-up I apply a scale transform to currentPath, rendering the image with the cutout part at full size, to preserve UI performance.
What I'm trying to figure out now is how to approach the "restore" functionality. Essentially, the user should draw on the erased parts to reveal the original image.
I've tried looking at CGContextClipToMask but I'm not sure how to approach the implementation.
I've also looked at other approaches to achieving this "erase/restore" effect before rendering the actual images, such as masking a CAShapeLayer over the image, but restoring becomes a problem in that approach as well.
Any help will be greatly appreciated, as well as alternative approaches to erase and restore with a path on the UI-level and rendering level.
Thank you!
Yes, I would recommend adding a CALayer to your image view's layer as a mask.
You can either make the mask layer a CAShapeLayer and draw geometric shapes into it, or use a simple CALayer as a mask, where the contents property of the mask layer is a CGImage. You'd then draw opaque pixels into the mask to reveal the image contents, or transparent pixels to "erase" the corresponding image pixels.
This approach is hardware accelerated and quite fast.
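A minimal sketch of the idea (the helper name installMask and the maskImage parameter are just for illustration):
func installMask(_ maskImage: UIImage, on imageView: UIImageView) {
    // Opaque mask pixels reveal the image; transparent pixels "erase" it.
    let maskLayer = CALayer()
    maskLayer.frame = imageView.bounds
    maskLayer.contents = maskImage.cgImage
    imageView.layer.mask = maskLayer
}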
Handling undo/redo of eraser functions would require you to collect changes to your mask layer as well as the previous state of the mask.
Edit:
I created a small demo app on GitHub that shows how to use a CGImage as a mask on an image view.
Here is the ReadMe file from that project:
MaskableImageView
This project demonstrates how to use a CALayer to mask a UIView.
It defines a custom subclass of UIImageView, MaskableView.
The MaskableView class has a property maskLayer that contains a CALayer.
MaskableView defines a didSet method on its bounds property so that when the view's bounds change, it resizes the mask layer to match the size of the image view.
The MaskableView has a method installSampleMask which builds an image the same size as the image view, mostly filled with opaque black, but with a small rectangle in the center filled with black at an alpha of 0.7. The translucent center rectangle causes the image view to become partly transparent and show the view underneath.
The demo app installs a couple of subviews into the MaskableView, a sample image of Scampers, one of my dogs, and a UILabel. It also installs an image of a checkerboard under the MaskableView so that you can see the translucent parts more easily.
The MaskableView has properties circleRadius, maskDrawingAlpha, and drawingAction that it uses to let the user erase/un-erase the image by tapping on the view to update the mask.
The MaskableView attaches a UIPanGestureRecognizer and a UITapGestureRecognizer to itself, with an action of gestureRecognizerUpdate. The gestureRecognizerUpdate method takes the tap/drag location from the gesture recognizer and uses it to draw a circle onto the image mask that either decreases the image mask's alpha (to partly erase pixels) or increases the image mask's alpha (to make those pixels more opaque).
The MaskableView's mask drawing is crude, and only meant for demonstration purposes. It draws a series of discrete circles instead of rendering a path into the mask based on the user's drag gesture. A better solution would be to connect the points from the gesture recognizer and use them to render a smoothed curve into the mask.
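Purely as an illustration of that kind of mask drawing (this is not the project's actual code; the function name and parameters are made up), stamping one circle into a mask image could look roughly like this, with the result assigned back to the mask layer's contents:
func stampCircle(into mask: UIImage, at point: CGPoint, radius: CGFloat, erase: Bool) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: mask.size)
    return renderer.image { context in
        // Redraw the existing mask, then punch a transparent circle (erase)
        // or paint an opaque one (restore) at the touch location.
        mask.draw(at: .zero)
        let rect = CGRect(x: point.x - radius, y: point.y - radius,
                          width: radius * 2, height: radius * 2)
        if erase { context.cgContext.setBlendMode(.clear) }
        UIColor.black.setFill()
        context.cgContext.fillEllipse(in: rect)
    }
}
// e.g. maskLayer.contents = stampCircle(into: currentMask, at: location, radius: circleRadius, erase: true).cgImage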
The app's screen looks like this:
Edit #2:
If you want to export the resulting image to a file that preserves the transparency, you can convert the CGImage to a UIImage (using the init(cgImage:) initializer) and then use the UIImage function
func pngData() -> Data?
to convert the image to PNG data. That function returns nil if it is unable to convert the image to PNG data.
If it succeeds, you can then save the data to a file with a .png extension.
I updated the sample project to include the ability to save the resulting image to disk.
First I added an image computed property to the MaskableView. That looks like this:
public var image: UIImage? {
    guard let renderer = renderer else { return nil }
    return renderer.image { context in
        layer.render(in: context.cgContext)
    }
}
Then I added a save button to the view controller that fetches the image from the MaskableView and saves it to the app's Documents directory:
@IBAction func handleSaveButton(_ sender: UIButton) {
    print("In handleSaveButton")
    if let image = maskableView.image,
       let pngData = image.pngData() {
        print(image.description)
        let imageURL = getDocumentsDirectory().appendingPathComponent("image.png", isDirectory: false)
        do {
            try pngData.write(to: imageURL)
            print("Wrote png to \(imageURL.path)")
        } catch {
            print("Error writing file to \(imageURL.path)")
        }
    }
}
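getDocumentsDirectory() is just the usual small helper, something like:
func getDocumentsDirectory() -> URL {
    // The first (and only) entry is the app's Documents directory.
    return FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
}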
You could also save the image to the user's camera roll. It's been a while since I've done that so I'd have to dig up the steps for that.
I'm taking a snapshot of every frame, applying a filter, and updating the background contents of the ARSCNView with the filtered image. Everything is working fine, but there is a lot of latency with all the UI elements on the screen. No latency on the ARSCNView.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
guard let image = CIImage(image: sceneView.snapshot()) else { return }
// I'm setting a filter on each image here, which has no effect on the latency.
sceneView.scene.background.contents = context.createCGImage(image, from: image.extent)
}
I know I can use frame.capturedImage, which makes latency go away. However, I also place AR objects on the screen which frame.capturedImage ignores for some reason, and sceneView.scene.background.contents cannot be reset to its original source. So, I cannot turn off the image filter. That's why I need to take a snapshot.
Is there anything I can do that will reduce latency on the UI elements? I have a few UIScrollViews on the screen that have tremendous lag.
I'm also in the middle of looking for a way to do this with no lag, but I was able to at least reduce the lag by rendering the view into an image manually:
extension ARSCNView {
/// Performs a screen snapshot manually; seems faster than the built-in snapshot() function, but still somewhat noticeable
var snapshot: UIImage? {
let renderer = UIGraphicsImageRenderer(size: self.bounds.size)
let image = renderer.image(actions: { context in
self.drawHierarchy(in: self.bounds, afterScreenUpdates: true)
})
return image
}
}
It's frustrating that this is faster than the built-in snapshot function, but it seems to be, and also still captures all the SceneKit graphics in the snapshot. (Doing this every frame will still be expensive though, FYI, and the only real solution for that would likely be a custom Metal shader.)
I'm also trying to work with ARSCNView.snapshotView(afterScreenUpdates: Bool) because that seems to have essentially no lag for my purposes, but whenever I try to turn the resulting view into a UIImage, it's totally blank. Either way, the above method cut the lag roughly in half for me, so you might have some luck with that.
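With that extension in place, the delegate method from the question might look roughly like this (context is assumed to be the same CIContext the question already uses):
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Use the manual snapshot property from the extension instead of snapshot().
    guard let snapshot = sceneView.snapshot,
          let image = CIImage(image: snapshot) else { return }
    // ... apply the filter to `image` here ...
    sceneView.scene.background.contents = context.createCGImage(image, from: image.extent)
}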
I want to display a map in my iOS application. Therefore, I have a floorplan image (UIImage) and use the following code to render paths (which represent the buildings or rooms) onto the map image:
static func draw(paths: [[CGPoint]], toImage image: UIImage?) -> UIImage? {
if let image = image {
let renderer = UIGraphicsImageRenderer(size: image.size)
return renderer.image { context in
image.draw(at: CGPoint(x: 0, y: 0))
context.cgContext.setFillColor(UIColor.init(white: 0.1, alpha: 0.5).cgColor)
for path in paths {
if path.count > 2 {
context.cgContext.move(to: path[0])
for point in path {
context.cgContext.addLine(to: point)
}
context.cgContext.addLine(to: path[0])
}
}
context.cgContext.drawPath(using: .fill)
}
} else {
return nil
}
}
The result of this method is then set on a UIImageView. However, this takes about two seconds, which is way too long.
I am new to iOS development and this was the only way I found.
Does anyone know a faster way? Maybe using custom views or something?
I would suggest having a look at CAShapeLayer; it is usually quite fast, although I can't say whether it outperforms UIGraphicsImageRenderer in your case. My guess is that it will, because it also scales as needed, which removes the need to create a large image.
In case you are new to layers: they are like views, except they don't have a user-input part. They are easy to work with, since every UIView actually has a .layer for its rendering, which can also be used as a layer parent.
To make a layer work with a view, you just add it to your view's layer property as a sublayer, and then make sure it has the right size. The best way to size the layer is either to use the layer's .contentsGravity or to set it manually in the view's layoutSubviews.
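For the floorplan above, a rough sketch of that idea (assuming paths: [[CGPoint]] as in the question and a view that shows the floorplan image) might look like this:
func addPathsLayer(_ paths: [[CGPoint]], to floorplanView: UIView) {
    let shapeLayer = CAShapeLayer()
    shapeLayer.frame = floorplanView.bounds   // re-set this in layoutSubviews if the view can resize
    shapeLayer.fillColor = UIColor(white: 0.1, alpha: 0.5).cgColor

    // Combine all building/room outlines into a single path.
    // Note: if the path points are in image coordinates, scale them to the view's coordinate space first.
    let combined = UIBezierPath()
    for path in paths where path.count > 2 {
        combined.move(to: path[0])
        for point in path.dropFirst() { combined.addLine(to: point) }
        combined.close()
    }
    shapeLayer.path = combined.cgPath
    floorplanView.layer.addSublayer(shapeLayer)
}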
Read more about CAShapeLayer in the docs
A tutorial on layers
I’m doing video processing with GPUImage2. When the app starts, I create a hexagonal grid and add it to my cameraView. The grid is fullscreen and consists of about 100 hexagons.
In general, what I’m trying to achieve is
For each frame I want to find an average color (in RGB or even better HSV) within each cell of the grid.
When the color is determined, I want to draw something in the center of each hexagon depending on its average color.
I have an array of hexagons, each of which knows its vertices' coordinates and center.
I also have an array of UIBezierPaths containing the bounds of these hexagons (just in case).
So my code looks like this
class ViewController: UIViewController {
var hexagons = [HKHexagon]()
var hexagonsBounds = [UIBezierPath]()
let averageColorExtractor = AverageColorExtractor()
override func viewDidLoad() {
super.viewDidLoad()
do {
camera = try Camera(sessionPreset:AVCaptureSessionPreset1920x1080)
camera.delegate = self
cameraView.orientation = .landscapeLeft
camera --> cameraView
camera.startCapture()
drawGrid()
} catch {
fatalError("Could not initialize rendering pipeline: \(error)")
}
}
}
extension ViewController: CameraDelegate {
func didCaptureBuffer(_ sampleBuffer: CMSampleBuffer) {
for hexagon in hexagons {
}
}
}
I guess didCaptureBuffer() should be the place to apply averageColorExtractor to each hexagon, but I don't have an idea what to do next.
I am new to iOS development and it's the first time I'm using GPUImage2… Please guide me in the right direction.
Not coding for your platform at all, but GPU architecture allows you to do it like this:
1. Pass the image as a texture.
2. Render only the hexagon center points, as points.
3. In the fragment shader, compute the average color of the hexagon around the actual position.
This is the hardest and most performance-demanding part. If you compute just the inscribed circle it is easy, but for a hexagon you need to work out which texels are inside and which are not. For axis-aligned hexagons you can divide the hex into regions (2x rectangle, 4x triangle); for rotated hexagons you need to add a transformation matrix.
4. Compute/render the output inside the center point.
I do not know how much of this your framework can do for you. If the stuff you render is bigger than just the center point, then you either need another pass in your rendering or a bigger primitive than points in #2, but that means you would compute the average color for each rendered pixel, which can slow things down a lot.
Take a look at a GLSL shader that uses this technique (for an entirely different task, but the technique is the same):
How to implement 2D raycasting light effect in GLSL
If this is not adaptable to your platform then ignore this answer ...
To start, this project has been built using Swift.
I want to create a custom progress indicator that "fills up" as the script runs. The script calls a JSON feed that is pulled from a remote server.
To better visualize what I'm after, I made this:
My guess would be to have two PNG images, one white and one red, and then simply do some masking based on the progress amount.
Any thoughts on this?
Masking is probably overkill for this. Just redraw the image each time. When you do, you draw the red rectangle to fill the lower part of the drawing, up to whatever height you want; then you draw the droplet image (a PNG), which has transparency in the middle so the red rectangle shows through. So one PNG is enough, because the red rectangle can be drawn "live" each time you redraw.
I liked your drawing so much that I wanted to bring it to life, so here's my working code (my PNG is called tear.png and iv is a UIImageView in my interface; percent should be a CGFloat between 0 and 1):
func redraw(percent: CGFloat) {
    // Load the droplet outline; bail out if it's missing instead of force-unwrapping.
    guard let tear = UIImage(named: "tear") else { return }
    let sz = tear.size
    let top = sz.height * (1 - percent)
    UIGraphicsBeginImageContextWithOptions(sz, false, 0)
    let con = UIGraphicsGetCurrentContext()
    UIColor.red.setFill()
    con?.fill(CGRect(x: 0, y: top, width: sz.width, height: sz.height))
    tear.draw(at: .zero)
    self.iv.image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}
I also hooked up a UISlider whose action method converts its value to a CGFloat and calls that method, so that moving the slider back and forth moves the red fill up and down in the teardrop. I could play with this for hours!
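The slider hookup is just a one-line action (the method name here is arbitrary):
@IBAction func sliderChanged(_ sender: UISlider) {
    redraw(percent: CGFloat(sender.value))   // slider configured with a 0...1 range
}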