Duplicate CALayer - iOS

I want to create a PDF from a UIView.
Now I want to copy the layer and resize it to a DIN A4 page size.
CGRect A4PageRect = CGRectMake(0, 0, 595, 842);
CALayer *myLayer = [pdfView layer];
myLayer.bounds = A4PageRect;
But this code resizes the visible layer on my screen.
How can I copy the layers contents to resize it to fit an A4 page?
Thanks for the help, Julian

FWIW, you can make a shallow copy of a CALayer via:
(Swift 3)
let tmp = NSKeyedArchiver.archivedData(withRootObject: myLayer)
let copiedCA = NSKeyedUnarchiver.unarchiveObject(with: tmp) as! CALayer
(or whatever subclass you are using, e.g. CAShapeLayer)
Thanks to: https://stackoverflow.com/a/35345819/1452758
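On newer SDKs the same trick works with the secure-coding API. A minimal sketch, assuming iOS 11+ (CALayer adopts NSSecureCoding):

// Assumption: iOS 11+; as above, only archivable state is copied
// (delegates and contents set at runtime may not survive the round trip).
if let data = try? NSKeyedArchiver.archivedData(withRootObject: myLayer, requiringSecureCoding: false),
   let copiedLayer = try? NSKeyedUnarchiver.unarchivedObject(ofClass: CALayer.self, from: data) {
    // copiedLayer is an independent layer with the same archived state
}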

There is no way to duplicate a CALayer.
(It would be difficult for CoreAnimation to implement that in a sensible way. There might be a whole tree of sublayers, and they all might have delegates that influence their behavior, which wouldn't expect to suddenly get requests from the copies of the layers.)
I can only guess at a better solution, because I don't understand your exact situation. Do you have a PDF that you are trying to resize, or do you just want to take an arbitrary existing layer and make a PDF document out of it?
If the latter:
1. Use UIGraphicsBeginPDFContextToData or UIGraphicsBeginPDFContextToFile to create a PDF drawing context
2. Call UIGraphicsBeginPDFPage or UIGraphicsBeginPDFPageWithInfo to create a page
3. Call UIGraphicsGetCurrentContext to get the PDF drawing context
4. Scale using CGContextScaleCTM so your layer will fit in the PDF page
5. Call -[CALayer renderInContext:] to draw the layer into the PDF context
6. Call UIGraphicsEndPDFContext to finish the PDF
Note that this may look terrible. Layers are bitmap-based, so you'll get a bitmap in your PDF. Also, -[CALayer renderInContext:] doesn't render exactly the same as it does on-screen -- see the note in the documentation.
If this is a problem, you'll need to add a separate drawing path that bypasses CALayer. In step 5, you would do your own drawing using CoreGraphics.
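Put together, a minimal Swift sketch of steps 1-6, assuming the question's pdfView and a 595 x 842 point A4 page:

let pageRect = CGRect(x: 0, y: 0, width: 595, height: 842)
let pdfData = NSMutableData()
UIGraphicsBeginPDFContextToData(pdfData, pageRect, nil)        // 1. create the PDF context
UIGraphicsBeginPDFPage()                                       // 2. create a page
if let context = UIGraphicsGetCurrentContext() {               // 3. get the PDF context
    let scale = min(pageRect.width / pdfView.bounds.width,     // 4. scale to fit the page
                    pageRect.height / pdfView.bounds.height)
    context.scaleBy(x: scale, y: scale)
    pdfView.layer.render(in: context)                          // 5. draw the layer
}
UIGraphicsEndPDFContext()                                      // 6. finish the PDF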

In your case it's indeed better to go with the proposed solution, but for the record: yes, you can duplicate a CALayer, in a way, using CAReplicatorLayer.
It's a very powerful API, not necessarily built for duplicating a single layer, but it works for that too. It works even better for making astonishing visual effects by duplicating series of layers.
For more information you can have a look at the official Apple documentation; here is an extract:
A layer that creates a specified number of copies of its sublayers (the source layer), each copy potentially having geometric, temporal, and color transformations applied to it.
And here is a great tutorial by John Sundell; another extract:
CAReplicatorLayer specializes in drawing multiple copies of an original layer (hence it being a "replicator"), in an efficient - hardware accelerated - manner. It's super useful when drawing things like tiled backgrounds, patterns or other things that should be repeated multiple times. I even use it to implement the texture tiling feature of my upcoming open source Swift game engine.
And finally, here is a great example of what can be done with this API.
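A minimal sketch of the one-layer case (all names are illustrative; each copy is offset and tinted via the instance* properties):

let replicator = CAReplicatorLayer()
replicator.frame = CGRect(x: 0, y: 0, width: 300, height: 100)

let sourceLayer = CALayer()
sourceLayer.frame = CGRect(x: 0, y: 0, width: 40, height: 40)
sourceLayer.backgroundColor = UIColor.red.cgColor
replicator.addSublayer(sourceLayer)

replicator.instanceCount = 5                                          // the original plus 4 copies
replicator.instanceTransform = CATransform3DMakeTranslation(50, 0, 0) // shift each copy right
replicator.instanceBlueOffset = 0.2                                   // tint each copy differently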

Masking performance

I'm creating an animation that uncovers the underlying image. There's a virtual shape (e.g. star) moving chaotically and uncovering different parts of the image.
So I had two bitmaps so far:
mask (trace of a shape moving here'n'there)
image (underlying image)
So far in every drawRect() I was:
1. creating a newMask bitmap by copying the current mask
2. drawing a stamp on the newMask
3. creating a resulting bitmap (applying newMask onto the image)
4. drawing the resulting bitmap to the screen context
I'm struggling with performance in this approach. Any ideas how to improve it?
In particular:
Is it possible to skip steps 1 and 2 and draw onto the mask directly (rather than cloning it)?
Should I start experimenting with a CALayer approach (if this kind of masking is at all possible there)?
Should I use OpenGL?
Is there any other approach to tackle this?
No, you should not manipulate bitmaps. That is likely to be very CPU-intensive, as well as jerky (not smooth animation).
Instead you should use a CAShapeLayer as a mask and Core Animation.
With a shape layer you can install a path (a CGPath, which can be created easily from a UIBezierPath) into the layer. Then you create a CABasicAnimation that switches the path to a new path. The trick is to always keep the same number and type of control points in the starting and ending paths of the animation. (If the number and/or type of control points in the two paths are different you get very, very strange results. Note that the path calls that create arcs of circles actually generate different numbers of control points based on how much of a circle your arc covers, so circle arcs require special handling.)
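A minimal sketch of that setup, assuming startPath and endPath are UIBezierPaths with matching control-point structure and imageView is the view being revealed:

let maskLayer = CAShapeLayer()
maskLayer.path = startPath.cgPath           // the image shows only where the path fills
imageView.layer.mask = maskLayer

let reveal = CABasicAnimation(keyPath: "path")
reveal.fromValue = startPath.cgPath
reveal.toValue = endPath.cgPath
reveal.duration = 1.0
maskLayer.path = endPath.cgPath             // set the final model value so it sticks
maskLayer.add(reveal, forKey: "path")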
I have a sample project on Github that demonstrates various Core Animation techniques, including a demonstration of a "clock wipe" animation that reveals/hides an image view much like you describe.
https://github.com/DuncanMC/iOS-CAAnimation-group-demo
The animation looks like this:
Note that the jerky nature of that image is because it's a GIF. The actual animation on a device is buttery-smooth. It's also possible to create very complex smooth animations like this one:
(That isn't a mask animation but it could be.)

Set blendmodes on UIImageViews like Photoshop

I've been trying to apply blend modes to my UIImageViews to replicate a PSD mock-up file (sorry, I can't provide it). The PSD file has 3 layers: a base color with 60% normal blend, an image layer with 55% multiply blend, and a gradient layer with 35% overlay.
I've been trying several tutorials over the internet but still could not get the colors/image to be exactly the same.
One thing I noticed is that the color of my iPhone is different from my Mac's screen.
I found the documentation for Quartz 2D, which I think is the right way to go, but I could not find any sample/tutorial about using blend modes with images.
https://developer.apple.com/library/mac/documentation/graphicsimaging/conceptual/drawingwithquartz2d/dq_images/dq_images.html#//apple_ref/doc/uid/TP30001066-CH212-CJBIJEFG
Can anyone provide a good tutorial that does the same as the one in the documentation, so I can at least try to mix things up, in case nobody gives me a straightforward answer to my question?
This question was asked ages ago, but if someone is looking for the same answer, you can use compositingFilter on the backing layer of your view to get what you want:
overlayView.layer.compositingFilter = "multiplyBlendMode"
Suggested by @SeanA, here: https://stackoverflow.com/a/46676507/1147286
Filters
The complete list of filters is here
Or you can print out the compositing filter types with
print("Filters:\n",CIFilter.filterNames(inCategory: kCICategoryCompositeOperation))
This is now built into iOS. Historic answer:
You don't want to use UIImageView for this since it doesn't really support blend modes.
Instead, the way to go would be to create a UIView subclass (which will act just like UIImageView). So make a UIView subclass called something like BlendedImageView. It should have an image property, and then in the drawRect: method you can do this:
- (void)drawRect:(CGRect)rect
{
    // Calculate a rect based on self.contentMode to replicate UIImageView behaviour.
    // For example, for UIViewContentModeCenter, you want a rect that may extend
    // outside 'rect' and is centered in it.
    CGRect actualRect = self.bounds; // placeholder: compute from self.contentMode
    [self.image drawInRect:actualRect blendMode:kCGBlendModeOverlay alpha:0.5];
}
Replace the alpha and blend mode to fit your preference. Good practice would be to give your BlendedImageView a blendMode property.
It may be that in your case you need a slightly more advanced view that draws all 3 of your images, using the same code as above.
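A minimal Swift sketch of that three-image variant, using the blend modes and percentages from the question (the class and its image properties are hypothetical):

import UIKit

class BlendedCompositeView: UIView {
    var baseColorImage: UIImage?   // 60% normal
    var photoImage: UIImage?       // 55% multiply
    var gradientImage: UIImage?    // 35% overlay

    override func draw(_ rect: CGRect) {
        baseColorImage?.draw(in: bounds, blendMode: .normal, alpha: 0.60)
        photoImage?.draw(in: bounds, blendMode: .multiply, alpha: 0.55)
        gradientImage?.draw(in: bounds, blendMode: .overlay, alpha: 0.35)
    }
}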

How can I draw an image with many tiny modifications?

I am drawing many audio meters on a view and finding that drawRect cannot keep up with the speed of the audio changes. In practice only a very small part of the image changes at a time, so I really only want to draw the incremental changes.
I have created a CGLayer and when the data changes I use CGContextBeginPath, CGContextMoveToPoint, CGContextAddLineToPoint and CGContextStrokePath to draw in the CGLayer.
In drawRect in the view I use CGContextDrawLayerAtPoint to display the layer.
When the data changes I draw just the difference by drawing a line over the top in the CGLayer. I had assumed it was like Photoshop, where new data simply draws over the old, but I now believe that all the lines I have ever drawn remain present in the layer. Is that correct?
If so is there a way to remove lines from a CGLayer?
What exactly do you mean by 'audio meter'? Show some snapshots of your intended designs, and show us some code...
These are my suggestions:
1) Yes, the new data just draws on top of the CGLayer unless you release it with CGLayerRelease(layer).
2) CGContextStrokePath is an expensive operation. You may want to create a generic line stroke and store it in a UIImage, then reuse the UIImage every time your data changes.
3) Simplest solution: use UIProgressView if you just want to show audio levels.
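A minimal sketch of suggestion 2 using UIGraphicsImageRenderer (sizes and colors are illustrative):

// Render the stroke once and cache it as a UIImage.
let strokeSize = CGSize(width: 100, height: 4)
let renderer = UIGraphicsImageRenderer(size: strokeSize)
let strokeImage = renderer.image { ctx in
    ctx.cgContext.setStrokeColor(UIColor.green.cgColor)
    ctx.cgContext.setLineWidth(strokeSize.height)
    ctx.cgContext.move(to: CGPoint(x: 0, y: strokeSize.height / 2))
    ctx.cgContext.addLine(to: CGPoint(x: strokeSize.width, y: strokeSize.height / 2))
    ctx.cgContext.strokePath()
}
// strokeImage can now be drawn repeatedly instead of re-stroking the path
// every time the data changes.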
I now believe that all the lines I have ever drawn remain present in the layer. Is that correct?
Yes.
If so is there a way to remove lines from a CGLayer?
No, there is not. You would create a new layer instead. Generally, you create a layer for content that is drawn repeatedly.
Your drawing might be simplified by drawing rects rather than paths.
For some audio meters, dividing the meter into multiple pieces may help (you could use a CGLayer here). Similarly, you may be able to just draw rectangles selectively and/or clip drawing, images, and/or layers.
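A minimal sketch of both ideas, drawing meters as plain rects into a CGLayer that is recreated rather than erased (all names are illustrative):

// Draw one rect per meter; much cheaper than building and stroking paths.
func drawMeters(_ levels: [CGFloat], into cgLayer: CGLayer, barWidth: CGFloat) {
    guard let ctx = cgLayer.context else { return }
    ctx.setFillColor(UIColor.green.cgColor)
    for (i, level) in levels.enumerated() {
        ctx.fill(CGRect(x: CGFloat(i) * barWidth, y: 0,
                        width: barWidth - 2, height: level))
    }
}

// Since a CGLayer cannot be selectively erased, start over with a new one:
func freshLayer(matching context: CGContext, size: CGSize) -> CGLayer? {
    return CGLayer(context, size: size, auxiliaryInfo: nil)
}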

What is the best way to use layers and partial rendering in iOS for speed

I'm working on a graphing application which I wrote using Core Graphics. I have a buffer which accumulates data, and I render it on the screen. It's super slow, and I want to avoid going to OpenGL if possible. According to the profiler, drawing my graph data is what's killing me (it consists of a number of points which are converted to a path, followed by the calls AddPath and DrawPath).
This is what I want to do, my question is how to best implement it using layers / views / etc..
I have a grid and some text. I want these to be rendered in a CALayer (or some other layer/view?) and only updated when required (when the graph is rescaled).
Only a portion of the data needs to be refreshed. I want to take the previous screen buffer, erase a rectangle's worth of data (or cover it with a white box) and then draw only the portion of the graphs that have changed.
I then want to merge the background layer with the foreground graphs to generate the composite image. This requires the graph layer to have a transparent background so as not to obscure the grid.
I've looked at using CALayer as a sublayer, but it doesn't seem to provide a simple way to draw a line. CAShapeLayer seems a bit better, but it looks like it can only draw a single line, and I want the grid to be composed of multiple lines.
What's the best approach and combination of objects to allow me to do this?
Thanks,
Reza
I'd have a CGLayerRef that is used for drawing the path into. For each new point I'd draw just the new segment. When the graph got to full width I'd create a new CGLayerRef and start drawing the new line segments into that.
What happens to the previous layer as it's drawn over by the new layer depends on how your graph is displayed, but you could clear the section which is now underneath the new layer (using CGContextSetBlendMode(context, kCGBlendModeClear);), or you could choose to blend them together in some other way.
Drawing the layers each time you make a change to the lines they contain is relatively cheap compared to drawing all of the line segments themselves.
Technically, there would also be CALayers used to manage drawing the CGLayerRefs to the screen (via the delegate method drawLayer:inContext:), but all of the line drawing is done in the CGLayerRef's context, and then the CGLayerRef is drawn as a whole into the CALayer's context (CGContextDrawLayerInRect(context, frame, backingCGLayer);).
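A minimal Swift sketch of that arrangement (names are illustrative; the per-segment line drawing itself is elided):

import UIKit

class GraphLayerDelegate: NSObject, CALayerDelegate {
    var backingCGLayer: CGLayer?

    func draw(_ layer: CALayer, in ctx: CGContext) {
        // Lazily create the backing CGLayer, matched to the CALayer's size.
        if backingCGLayer == nil {
            backingCGLayer = CGLayer(ctx, size: layer.bounds.size, auxiliaryInfo: nil)
        }
        guard let cgLayer = backingCGLayer else { return }
        // New line segments would be stroked into cgLayer.context here;
        // the CGLayer is then drawn as a whole into the CALayer's context.
        ctx.draw(cgLayer, in: layer.bounds)
    }
}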

-[CALayer setNeedsDisplayInRect:] causes the whole layer to be redrawn

I'm subclassing CALayer to provide my own drawing in -drawInContext:. For optimization I call -[MyLayer setNeedsDisplayInRect:] instead of -[MyLayer setNeedsDisplay]. In the drawing method I get the rect which should be redrawn via CGContextGetClipBoundingBox().
If I use this layer as the base layer of a UIView, everything works as expected. The problem arises as soon as I use my custom layer as a sublayer of another CALayer. Then CGContextGetClipBoundingBox() always returns the rect of the bounds of that layer.
Any ideas?
[EDIT]
It seems that there is no guarantee that the content of the CALayer is cached and only the dirty part gets redrawn. I did a small test and stored the rect that needs display as a separate property. The result was that only this part was visible on the screen.
I'll now render to an image context and keep that image as a cache. In the draw method I'll only display the cached image.
Apple's documentation is unfortunately conflicting, as the docs on -setNeedsDisplayInRect: do not indicate how the method behaves in practice. Based on my own experience, this technote sets it straight:
Note that, because of the way that iPhone/iPod touch/iPad updates its screen, the entire view will be redrawn if you call -setNeedsDisplayInRect: or -setNeedsDisplay:.
That being said, there are a number of things you can look into if you think that you are hitting a wall due to redundant drawing.
If drawing images, the biggest performance improvement you can make is to use images of the same dimensions at which you draw. If they're not, try to cache your image by rendering it to an offscreen bitmap context and bringing it back later on.
Check out the shouldRasterize property on CALayer. This can be a godsend if you are trying to manipulate a layer whose sublayers constitute a complex layer hierarchy. Be sure to check how you're doing in Instruments by ticking the Color Hits Green and Misses Red box in the Core Animation instrument. If you see a lot of red, chances are using shouldRasterize is hurting more than it's helping.
Even better than shouldRasterize is to flatten your layer hierarchy, as you can then avoid the extra overhead that shouldRasterize incurs when flattening your layer hierarchy in real time. Of course this is not always possible, but don't be afraid to try :)
If you're drawing images, try experimenting with your blend mode. If you happen to be drawing opaque images, there's no need to use normal source-over compositing (which uses both read and write bandwidth). Try kCGBlendModeCopy, which allows you to eliminate the read-bandwidth overhead.
Check out CGLayerRef, which allows you to cache Core Graphics output across various calls to your drawing methods. My experience is that, unless you're doing some hardcore pixel pushing, this ends up being more costly than just redrawing. See this for an interesting read.
Above all, Instruments is your friend. Check out a couple videos from past WWDCs (2012, 2011, and 2010); they all have great info about how to fine-tune performance.
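A minimal sketch of two of those tips (the layer and image names are illustrative):

// Rasterize a complex layer tree once instead of re-compositing it every frame.
func rasterize(_ complexContainerLayer: CALayer) {
    complexContainerLayer.shouldRasterize = true
    complexContainerLayer.rasterizationScale = UIScreen.main.scale  // avoid blur on Retina
}

// Inside a draw(_:) override, draw an opaque image with .copy to skip the
// read bandwidth of normal source-over compositing.
func drawOpaque(_ opaqueImage: UIImage, in bounds: CGRect) {
    opaqueImage.draw(in: bounds, blendMode: .copy, alpha: 1.0)
}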
Please feel free to ask any further questions if something I've said makes little sense.
