Adding UIView Laggy on Specific iPad Models - iOS

I have a PDF viewer for sheet music, which is based on PDFKit. PDFKit has an option to use an internal UIPageViewController, but it is very problematic - you cannot set the transition type, and worse, there is no way to check whether a page swipe succeeded or failed. You can end up seeing one page while the reported page index is another.
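For context, a minimal sketch of the PDFView setup this approach assumes (property names are illustrative, not the exact code from my app):

import PDFKit

func configurePDFView() {
    pdfView.usePageViewController(false)   // skip the problematic built-in page view controller
    pdfView.displayMode = .singlePage      // one sheet of music per page flip
    pdfView.displayDirection = .horizontal
    pdfView.autoScales = true
}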
Therefore I decided to create my own page flipping method. I added a UITapGestureRecognizer, and when the right or left edge is tapped, the page flips programmatically. To achieve a curl animation, I add a UIView showing the same image as what's underneath it, run the curl animation to the PDFView, and then remove the view. Here is part of the code:
// Function to flip pages with a page curl animation
func flipPage(direction: String) {
    // Snapshot the current contents of the PDF view
    let renderer = UIGraphicsImageRenderer(size: pdfView.bounds.size)
    let image = renderer.image { _ in
        pdfView.drawHierarchy(in: pdfView.bounds, afterScreenUpdates: true)
    }
    let imageView = UIImageView(image: image)
    imageView.frame = pdfView.frame
    imageView.tag = 830
    self.view.addSubview(imageView)
    self.view.bringSubviewToFront(imageView)

    if direction == "forward" && pdfView.canGoToNextPage() {
        pdfView.goToNextPage(nil)
        // Find the snapshot we just added and curl it away to reveal the new page
        let currentImageView = self.view.subviews.filter { $0.tag == 830 }
        if currentImageView.count > 0 {
            UIView.transition(from: currentImageView[0],
                              to: pdfView, duration: 0.3,
                              options: [.transitionCurlUp, .allowUserInteraction],
                              completion: { finished in
                                  currentImageView[0].removeFromSuperview()
                              })
        }
    }
    // ... ("backward" branch omitted)
}
Now comes the weird part. On my own 12.9-inch iPad Pro (1st generation), this method of flipping is blazing fast. No matter the build configuration or optimization level, it simply works. If I tap in fast succession, the pages flip as fast as I tap.
I have users with the 2nd gen 12.9-inch iPad Pro, and they experience a terrible lag when the UIView is drawn on top of the PDFView. This happens in all build configurations - it happened with a release build, and also when I installed a debug build from my computer on such a device (sadly, I could not keep the device to explore things further).
There are several other instances in the app in which I add a UIView on top - to add a semi-transparent veil, or to capture UIGestureRecognizers. On my own device, these are all very fast. On the 2nd gen iPad Pro, each and every one causes a lag. Incidentally, a user with a 3rd gen iPad Pro reported that performance was very fast on his device, without any lags. On the simulator the animation is sometimes incomplete, but the response is as fast as it should be - for all iPad models.
I searched for answers, and found absolutely no references to such a weird situation. Has anyone experienced anything like this? Any quick fixes, or noticeable problems in the logic of my code?
I am afraid that if I try to draw the custom UIViews ahead of time, and only bring them to the front when needed, I'll end up with a ridiculously large number of UIViews in the background, and simply move the delay elsewhere.

After doing a bit of research, I can provide a solution for people who face similar issues. The problem appears to be scheduling.
I still do not know why the 2017 models schedule their threads differently. Any ideas about why this problem reared its head in the first place are welcome. However -
I was not, in fact, following best practice. Changes to the UI should always happen on the main thread, so if you encounter a lag like this, encapsulate the actual adding and removing of the UIView like this:
DispatchQueue.main.async {
    self.view.addSubview(imageView)
    self.view.bringSubviewToFront(imageView)
}
My users report the problem just vanished after that.
EDIT
Be sure to put both the adding of the UIView and the animation block in the same DispatchQueue block, otherwise they will compete for the execution slot. My final code looks like this:
func flipPage(direction: String) {
    // Snapshot the current contents of the PDF view
    let renderer = UIGraphicsImageRenderer(size: pdfView.bounds.size)
    let image = renderer.image { _ in
        pdfView.drawHierarchy(in: self.pdfView.bounds, afterScreenUpdates: true)
    }
    let imageView = UIImageView(image: image)
    imageView.frame = pdfView.frame

    if direction == "forward" && pdfView.canGoToNextPage() {
        // Add the snapshot, flip the page, and curl the snapshot away - all in one main-queue block
        DispatchQueue.main.async {
            self.view.addSubview(imageView)
            self.view.bringSubviewToFront(imageView)
            self.pdfView.goToNextPage(nil)
            UIView.transition(from: imageView,
                              to: self.pdfView, duration: 0.3,
                              options: [.transitionCurlUp, .allowUserInteraction],
                              completion: { finished in
                                  imageView.removeFromSuperview()
                              })
        }
    }
    // ... ("backward" branch omitted)
}
P.S. If possible, avoid using drawHierarchy - it is not a very fast method.
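If the snapshot only needs to show the current PDF page (and not any UI layered on top of it), one presumably cheaper route is to ask PDFKit for a rendered page image instead of snapshotting the view hierarchy. A sketch, assuming the same pdfView property:

// Sketch: render the current page via PDFKit instead of drawHierarchy
if let page = pdfView.currentPage {
    let pageImage = page.thumbnail(of: pdfView.bounds.size, for: .cropBox)
    let imageView = UIImageView(image: pageImage)
    imageView.frame = pdfView.frame
    // ... add and animate as before
}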
In any case, if you need to code differently for specific devices, check out DeviceKit. A wonderful project that gives you the simplest interface possible.
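A rough illustration of what that looks like (the exact case names depend on your DeviceKit version):

import DeviceKit

// Sketch: branch on the device model (case names may differ between DeviceKit versions)
let device = Device.current
if device == .iPadPro12Inch2 {
    // e.g. use a cheaper cross-dissolve instead of the page curl on this model
}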

Related

ARSCNView snapshot() causes latency

I'm taking a snapshot of every frame, applying a filter, and updating the background contents of the ARSCNView with the filtered image. Everything is working fine, but there is a lot of latency with all the UI elements on the screen. No latency on the ARSCNView.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let image = CIImage(image: sceneView.snapshot()) else { return }
    // I'm setting a filter to each image here. Which has no effect on the latency.
    sceneView.scene.background.contents = context.createCGImage(image, from: image.extent)
}
I know I can use frame.capturedImage, which makes latency go away. However, I also place AR objects on the screen which frame.capturedImage ignores for some reason, and sceneView.scene.background.contents cannot be reset to its original source. So, I cannot turn off the image filter. That's why I need to take a snapshot.
Is there anything I can do that will reduce latency on the UI elements? I have a few UIScrollViews on the screen that have tremendous lag.
I'm also in the middle of looking for a way to do this with no lag, but I was able to at least reduce the lag by rendering the view into an image manually:
extension ARSCNView {
    /// Performs screen snapshot manually, seems faster than built in snapshot() function, but still somewhat noticeable
    var snapshot: UIImage? {
        let renderer = UIGraphicsImageRenderer(size: self.bounds.size)
        let image = renderer.image(actions: { context in
            self.drawHierarchy(in: self.bounds, afterScreenUpdates: true)
        })
        return image
    }
}
It's frustrating that this is faster than the built-in snapshot function, but it seems to be, and also still captures all the SceneKit graphics in the snapshot. (Doing this every frame will still be expensive though, FYI, and the only real solution for that would likely be a custom Metal shader.)
I'm also trying to work with ARSCNView.snapshotView(afterScreenUpdates: Bool) because that seems to have essentially no lag for my purposes, but whenever I try to turn the resulting View into a UIImage, it's totally blank. Either way, the above method cut the lag in about half for me, so you might have some luck with that.
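Not something I have fully verified, but if your use case tolerates the filtered background updating at a slightly lower rate, throttling how often you snapshot can also take pressure off the main thread. A rough sketch based on the code in the question:

// Sketch: only refresh the filtered background every few frames to reduce main-thread load
var frameCount = 0

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    frameCount += 1
    guard frameCount % 3 == 0 else { return }   // update roughly every third frame
    guard let image = CIImage(image: sceneView.snapshot()) else { return }
    sceneView.scene.background.contents = context.createCGImage(image, from: image.extent)
}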

What is the best way to rotate a view in swift and convert it to an image?

I have been given the task of creating a dynamic "ticket" in Swift. I am passed the ticket number, amount, etc from our servers API, and I am to generate the barcode, along with all labels associated with this ticket. I am able to generate all the necessary data without any issues.
The problem
The issue arises with laying it out. I need to have a thumbnail view for this ticket, along with a fullscreen view. This seems to be best done by converting the view into an image (right?) as it allows for features like zooming, having the thumbnail view etc. The main cause of the issue is the ticket labels and barcode need to be laid out vertically, or basically in landscape mode.
What I've tried
UIGraphicsBeginImageContext
I have created the image manually with UIGraphicsBeginImageContext() and associated APIs. This allows me to flip each view and convert it to an image. However, this method forces me to manually create a frame for each view, loses all accuracy, and does not seem like the right way to do it when I have to add 10-15 labels to a blank image.
AutoLayout
Next I tried laying everything out in a UIView with autolayout and applying a CGAffineTransform to each view and then converting the whole view to an image. This seems to work with the exception that I lose precision and can't line up views correctly. CGAffineTransform throws off constraints completely and I have to experiment with constraint constants until I get the view looking somewhat right and even then that doesn't translate all that well to all device sizes.
Landscape Mode
Lastly, I tried laying out the views normally, and forcing the view into landscape mode. Aside from the number of issues that arose because my app only supports portrait mode, I got it to work when the view is presented, but I have no idea how to get the thumbnail view which is supposed to show before the ticket view is presented to be in landscape mode. If I try doing so the thumbnail comes out in portrait mode and not landscape.
Do you guys have any ideas on a better way to accomplish this or should I stick to one of the methods that I've tried and try to work out all the bugs? I can provide code as needed but there's a lot that goes into it so I didn't want to just throw all the code in here if it wasn't necessary.
The following is an example of what I need to create, except I need to add additional labels such as issue date, expiration date, etc.
Any help would be appreciated!
You asked:
What is the best way to rotate a view in swift and convert it to an image?
If you want to create a rotated snapshot of a view, apply a rotate and a translateBy to the context:
func clockwiseSnapshot(of subview: UIView) -> UIImage {
    var rect = subview.bounds
    swap(&rect.size.width, &rect.size.height)
    return UIGraphicsImageRenderer(bounds: rect).image { context in
        context.cgContext.rotate(by: .pi / 2)
        context.cgContext.translateBy(x: 0, y: -rect.width)
        subview.drawHierarchy(in: subview.bounds, afterScreenUpdates: true)
    }
}
Or
func counterClockwiseSnapshot(of subview: UIView) -> UIImage {
    var rect = subview.bounds
    swap(&rect.size.width, &rect.size.height)
    return UIGraphicsImageRenderer(bounds: rect).image { context in
        context.cgContext.rotate(by: -.pi / 2)
        context.cgContext.translateBy(x: -rect.height, y: 0)
        subview.drawHierarchy(in: subview.bounds, afterScreenUpdates: true)
    }
}
Obviously, if you want the Data associated with the image instead, use pngData or jpegData:
func clockwiseSnapshotData(of subview: UIView) -> Data {
    var rect = subview.bounds
    swap(&rect.size.width, &rect.size.height)
    return UIGraphicsImageRenderer(bounds: rect).pngData { context in
        context.cgContext.rotate(by: .pi / 2)
        context.cgContext.translateBy(x: 0, y: -rect.width)
        subview.drawHierarchy(in: subview.bounds, afterScreenUpdates: true)
    }
}
Or
func counterClockwiseSnapshotData(of subview: UIView) -> Data {
    var rect = subview.bounds
    swap(&rect.size.width, &rect.size.height)
    return UIGraphicsImageRenderer(bounds: rect).pngData { context in
        context.cgContext.rotate(by: -.pi / 2)
        context.cgContext.translateBy(x: -rect.height, y: 0)
        subview.drawHierarchy(in: subview.bounds, afterScreenUpdates: true)
    }
}
If you don’t really need the image, but just want to rotate it in the UI, then apply a transform to the view that contains all of these subviews:
someView.transform = .init(rotationAngle: .pi / 2)
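As a hypothetical call site (ticketView and thumbnailImageView are placeholders for your own views), generating the rotated thumbnail would look like:

// Hypothetical usage: render the landscape-laid-out ticket view into a rotated thumbnail
let thumbnailImage = clockwiseSnapshot(of: ticketView)
thumbnailImageView.image = thumbnailImage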

Move objects around, with gesture recognizer for multiple Objects

I am trying to make an app where you can use stickers like on Snapchat and Instagram. I found a technique that adds the images and it works fully, but now I want the object to change its position when you drag it around (I also want to add scale / rotate functionality).
My code looks like this:
@objc func StickerLaden() {
    for i in 0 ..< alleSticker.count {
        let imageView = UIImageView(image: alleSticker[i])
        imageView.frame = CGRect(x: StickerXScale[i], y: StickerYScale[i], width: StickerScale[i], height: StickerScale[i])
        ImageViewsSticker.append(imageView)
        ImageView.addSubview(imageView)
        imageView.isUserInteractionEnabled = true
        let aSelector: Selector = "SlideFunc"
        let slideGesture = UISwipeGestureRecognizer(target: self, action: aSelector)
        imageView.addGestureRecognizer(slideGesture)
    }
}

func SlideFunc(fromPoint: CGPoint, toPoint: CGPoint) {
}
Here are the high-level steps you need to take:
1. Add one UIPanGestureRecognizer to the parent view that has the images on it.
2. Implement UIGestureRecognizerDelegate methods to keep track of the user touching and releasing the screen.
3. On first touch, loop through all your images and call image.frame.contains(touchPoint). Add all images that are under the touch point to an array.
4. Loop through the list of touched images and calculate the distance of the touch point to the center of the image. Choose the image whose center is closest to the touched point.
5. Move the chosen image to the top of the view stack. [You now have selected an image and made it visible.]
6. Next, when you receive pan events, change the frame of the chosen image accordingly.
7. Once the user releases the screen, reset any state variables you may have, so that you can start again when the next touch happens.
The above will give you a nicely working pan solution. It's a fair number of things to sort out, but none of it is very difficult.
As I said in my comment, scale and rotate are very tricky. I advise you to forget that for a bit and first implement other parts of your app.
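A minimal sketch of those steps, using the gesture recognizer's own state instead of separate delegate callbacks (containerView and the property names here are illustrative, not from the original code):

// Sketch of the pan-based dragging described above - containerView holds the sticker image views
var selectedSticker: UIImageView?

func setupPanGesture() {
    let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
    containerView.addGestureRecognizer(pan)
}

@objc func handlePan(_ gesture: UIPanGestureRecognizer) {
    let point = gesture.location(in: containerView)
    switch gesture.state {
    case .began:
        // Pick the sticker whose center is closest to the touch point
        let touched = containerView.subviews
            .compactMap { $0 as? UIImageView }
            .filter { $0.frame.contains(point) }
        selectedSticker = touched.min(by: {
            hypot($0.center.x - point.x, $0.center.y - point.y) <
            hypot($1.center.x - point.x, $1.center.y - point.y)
        })
        if let sticker = selectedSticker {
            containerView.bringSubviewToFront(sticker)
        }
    case .changed:
        // Move the chosen sticker with the finger
        let translation = gesture.translation(in: containerView)
        selectedSticker?.center.x += translation.x
        selectedSticker?.center.y += translation.y
        gesture.setTranslation(.zero, in: containerView)
    default:
        // Touch ended or was cancelled - reset state for the next drag
        selectedSticker = nil
    }
}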

viewWillTransitionToSize causes non-rotating view controller to resize and reposition

A lot of people have discussed techniques for mimicking the native iOS camera app (where the UI elements pivot in-place as the device rotates). I actually asked a question about it before here. Most people have you lock the orientation of the UI, but then force a transformation on just the elements that you want to pivot. This works, but the elements don't pivot with the smooth animations you see in the native iOS app, and it leads to some issues. Specifically, part of my interface allows users to share without leaving this interface, but when the sharing view gets rotated, it comes out off-center. So I wanted to find a different way to do this.
I found a link to Apple's AVCam sample, which got me off to a start here. I'm new to this stuff, but I managed to convert it from Obj-C to Swift already. Below is the key element of what I'm currently using:
override func viewWillTransitionToSize(size: CGSize, withTransitionCoordinator coordinator: UIViewControllerTransitionCoordinator) {
    coordinator.animateAlongsideTransition({ (UIViewControllerTransitionCoordinatorContext) -> Void in
        // Code here performs animations during the rotation
        let deltaTransform: CGAffineTransform = coordinator.targetTransform()
        let deltaAngle: CGFloat = atan2(deltaTransform.b, deltaTransform.a)
        var currentRotation: CGFloat = self.nonRotatingView.layer.valueForKeyPath("transform.rotation.z") as! CGFloat
        // Adding a small value to the rotation angle forces the animation to occur in the desired direction,
        // preventing an issue where the view would appear to rotate 2PI radians during a rotation from LandscapeRight -> LandscapeLeft.
        currentRotation += -1 * deltaAngle + 0.0001
        self.nonRotatingView.layer.setValue(currentRotation, forKeyPath: "transform.rotation.z")
    }, completion: { (UIViewControllerTransitionCoordinatorContext) -> Void in
        print("rotation completed")
        // Code here will execute after the rotation has finished
        // Integralize the transform to undo the extra 0.0001 added to the rotation angle.
        var currentTransform: CGAffineTransform = self.nonRotatingView.transform
        currentTransform.a = round(currentTransform.a)
        currentTransform.b = round(currentTransform.b)
        currentTransform.c = round(currentTransform.c)
        currentTransform.d = round(currentTransform.d)
        self.nonRotatingView.transform = currentTransform
    })
    super.viewWillTransitionToSize(size, withTransitionCoordinator: coordinator)
}
I then have separate view controllers for icons that do the same transformation in the opposite direction. The icons actually pivot in place properly now (with smooth animations and everything) and the camera preview stays properly oriented.
The problem is, the non-rotating view gets resized and everything gets misplaced when I rotate the device from portrait to landscape or vice-versa. How do I fix that?
I got it working (though it took me way more time than it should have). I needed to set the width and height constraints of the non-rotating view to be removed at build time (placeholder constraints in Interface Builder) and then added the following to viewDidLoad():
let nonRotatingWidth = nonRotatingView.widthAnchor.constraintEqualToConstant(UIScreen.mainScreen().bounds.width)
let nonRotatingHeight = nonRotatingView.heightAnchor.constraintEqualToConstant(UIScreen.mainScreen().bounds.height)
nonRotatingView.addConstraints([nonRotatingWidth, nonRotatingHeight])
This uses the new Swift 2 constraint syntax, which is relatively concise and easy to read.
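For readers on current Swift, a rough equivalent with today's anchor syntax would be (not part of the original answer):

// Present-day Swift equivalent of the constraints above
NSLayoutConstraint.activate([
    nonRotatingView.widthAnchor.constraint(equalToConstant: UIScreen.main.bounds.width),
    nonRotatingView.heightAnchor.constraint(equalToConstant: UIScreen.main.bounds.height)
])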
I've also tinkered with only using the new constraint syntax (and no transformations) to achieve the same type of interface via updateViewConstraints() (see my question about it here). I believe that approach is a cleaner way to create this type of interface, but I have not yet been able to make it work without crashing at runtime.

SKEffectNode - CIFilter Blur Size Limit - Big Black Box

I am trying to blur multiple SKNode objects. I do this by having a parent SKEffectNode with a CIFilter set to @"CIGaussianBlur". Like so:
- (SKEffectNode *)createBlurNode
{
    SKEffectNode *blurNode = [[SKEffectNode alloc] init];
    blurNode.shouldRasterize = YES;
    [blurNode setShouldEnableEffects:NO];
    [blurNode setFilter:[CIFilter filterWithName:@"CIGaussianBlur"
                                   keysAndValues:@"inputRadius", @10.0f, nil]];
    return blurNode;
}
This works fine for a bunch of nodes currently onscreen. But when I space these nodes far away from each other (about 3000 pixels), the blurring no longer happens and I get a big black box. This happens regardless of whether the SKNodes I'm blurring are SKShapeNodes or SKSpriteNodes. Here's a sample project with this issue: Sample Project. (By the way, thanks to BobMoff for the initial version found here):
Here's happy blur (when nodes are less than 3000 pixels away from each other):
Sad blur (when nodes are more than 3000 pixels away from each other):
UPDATE
This behavior occurs whenever an SKEffectNode is the parent. It doesn't matter if it's enabling effects, blurring, etc. If the parent node is an SKNode, it's fine. i.e. Even if the parent blur node is created like it is below, you will get the blackness:
- (SKEffectNode *)createBlurNode
{
    SKEffectNode *blurNode = [[SKEffectNode alloc] init];
    // blurNode.shouldRasterize = YES;
    // [blurNode setShouldEnableEffects:NO];
    // [blurNode setFilter:[CIFilter filterWithName:@"CIGaussianBlur"
    //                                keysAndValues:@"inputRadius", @10.0f, nil]];
    return blurNode;
}
I had a similar problem, with a very wide, panning scene that I wanted to blur.
To get the blur effect to work, I removed any nodes that were sticking out too far past the edges of the scene:
// Property declarations, elsewhere in the class:
var blurNode: SKEffectNode
var mainScene: SKScene
var exParents: [SKNode : SKNode] = [:]

/**
 * Remove outlying nodes from the scene and activate the SKEffectNode
 */
func blurScene() {
    let FILTER_MARGIN: CGFloat = 100
    let widthMax: CGFloat = mainScene.size.width + FILTER_MARGIN
    let heightMax: CGFloat = mainScene.size.height + FILTER_MARGIN

    // Recursively iterate through all blurNode's children
    blurNode.enumerateChildNodesWithName(".//*", usingBlock: {
        [unowned self]
        node, stop in
        if node.parent != nil && node.scene != nil { // Ignore nodes we already removed
            if let sprite = node as? SKSpriteNode {
                // Calculate sprite node position in scene coordinates
                let sceneOrig = sprite.scene!.convertPoint(sprite.position, fromNode: sprite.parent!)
                // Find left, right, bottom and top edges of sprite
                let l = sceneOrig.x - sprite.size.width * sprite.anchorPoint.x
                let r = l + sprite.size.width
                let b = sceneOrig.y - sprite.size.height * sprite.anchorPoint.y
                let t = b + sprite.size.height
                if l < -FILTER_MARGIN || r > widthMax || b < -FILTER_MARGIN || t > heightMax {
                    self.exParents[sprite] = sprite.parent!
                    sprite.removeFromParent()
                }
            }
        }
    })
    blurNode.shouldEnableEffects = true
}

/**
 * Disable blur and reparent nodes we removed earlier
 */
func removeBlur() {
    self.blurNode.shouldEnableEffects = false
    for (kid, parent) in exParents {
        parent.addChild(kid)
    }
    exParents = [:]
}
NOTES:
This does remove content from your effect node, so extremely wide nodes won't show up in the final result:
You can see the mountain highlighted in red stuck out too far and was removed from the resulting blur.
This code only considers SKSpriteNodes. Empty SKNodes don't seem to break the effect node, but if you're using other visible nodes like SKShapeNodes or SKLabelNodes, you'll have to modify this code to include them.
If you have ignoreSiblingOrder = false, this code might mess up your z-ordering since you can't guarantee what order the nodes are added back to the scene.
Stuff I tried that didn't work
Simply saying node.hidden = true instead of using removeFromParent() doesn't work. That would be WAY too easy ;)
Using an SKCropNode to crop out outlying content didn't work for me. I tried having the SKEffectNode parent the SKCropNode and the other way around, but the black square appeared no matter how small I made the cropped area. This might still be worth looking into if you're desperate for a cleaner solution.
As noted here, SKScenes are secretly SKEffectNodes and you can set their filter just like our blurNode above. SKScenes don't show a black screen when their content is too big. Unfortunately, they seem to just silently disable the filter instead. Again, I might have missed something, so you could explore this option further if you're trying to apply an effect across the entire scene.
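For reference, setting the filter on the scene itself looks roughly like this; as noted above, it seemed to silently drop the filter once the content got too big:

// SKScene inherits from SKEffectNode, so you can filter the whole scene directly
scene.filter = CIFilter(name: "CIGaussianBlur", parameters: [kCIInputRadiusKey: 10.0])
scene.shouldEnableEffects = true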
Alternate Solutions
You can capture an image of the whole screen and apply a filter to that, as suggested here. I ended up going with an even simpler solution; I took a generic screenshot of the stuff I wanted to blur, then applied a very heavy blur so you can't see the precise details. I used that as the blurred background and you can hardly tell it's not the real thing ;) This also saves a healthy chunk of memory and avoids a small UI hiccup.
Musings
This is a pretty nasty bug, and I hope Apple comes up with a solution soon. You can click this cute picture of a camera to get a GPU trace and some insight on what's happening:
The device seems to be discarding the framebuffer for the effect node because it takes up too much memory. This is affirmed by the fact that when there's more memory pressure on the device, it's easier to get the 'black square' on smaller content in the SKEffectNode.
I used a method that worked for my game but it requires the blurred area to be static without movement.
On iOS 10 using Swift 3 I used SKSpriteNode, SKView, SKEffectNode, CIFilter. I created a sprite from a texture returned from the SKView method "texture from node" and passed the current scene as the parameter because it inherits from SKNode. So essentially I was taking a "screenshot" of the scene and creating a sprite from it. I then put it in an SKEffectNode with a blur filter. (set "should rasterize" to true for better performance as I only needed to blur once). Finally I added the new sprite to the scene. From there you could add sprites to the scene and place them above the new blurred node.
let blurFilter = CIFilter(name: "CIGaussianBlur")!
let blurAmount = 15.0
blurFilter.setValue(blurAmount, forKey: kCIInputRadiusKey)
let blurEffect = SKEffectNode()
blurEffect.shouldRasterize = true
let screenshotNode = SKSpriteNode(texture: gameScene.view!.texture(from: gameScene))
blurEffect.addChild(screenshotNode)
blurEffect.filter = blurFilter
gameScene.addChild(blurEffect)
Possible workaround for the bug:
Use a camera, zoom WAY out so that you can see most or all of your background, and take a screenshot-style rendering of this image. Crop it to your needs, and then blur it. Then rasterise this.
Then scale this image back up, and slice it up if needs be, and place accordingly.
SKEffectNode renders into a texture. In most iOS systems the maximum size for a texture is 2048x2048. If an SKEffectNode is trying to render content larger than that, it will just use a 2048x2048 texture and anything outside of it will just not appear in the texture. It won't give you any error or warning about this happening; it simply does it silently.
And no, there is no way to tell SKEffectNode to use a texture of a specific size, and pan&clamp the content into it. It always uses a texture that will cover all the child nodes, and if the texture would be too large, it just silently uses that 2048x2048 texture.
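There is no official constant exposing this limit, but as a rough guard you can measure the effect node's content before enabling effects and fall back to something else when it is too large. A sketch, assuming the 2048x2048 figure applies to your devices:

// Sketch: skip the effect (or fall back to a screenshot-based blur) when the content
// is larger than the typical 2048x2048 texture limit
let maxTextureDimension: CGFloat = 2048
let contentFrame = blurNode.calculateAccumulatedFrame()

if contentFrame.width > maxTextureDimension || contentFrame.height > maxTextureDimension {
    blurNode.shouldEnableEffects = false   // too big - fall back to another approach
} else {
    blurNode.shouldEnableEffects = true
}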
