How to drawInRect asynchronously - ios

I'm trying to load a huge image (131,072 x 131,072 pixels) that is tiled up nicely into a 512 x 512 grid of 256 x 256 pixel tiles, each fetched from its own URL.
Once my function returns the image, I want to draw it into a rect at the proper position.
Since this process takes a while, I want to run the whole thing asynchronously.
Below is what I've tried so far:
override func drawRect(rect: CGRect) {
    let firstColumn = Int(CGRectGetMinX(rect) / sideLength)
    let lastColumn = Int(CGRectGetMaxX(rect) / sideLength)
    let firstRow = Int(CGRectGetMinY(rect) / sideLength)
    let lastRow = Int(CGRectGetMaxY(rect) / sideLength)
    let qos = Int(QOS_CLASS_USER_INITIATED.rawValue)
    dispatch_async(dispatch_get_global_queue(qos, 0)) { () -> Void in
        for row in firstRow...lastRow {
            for column in firstColumn...lastColumn {
                let url = NSURL(string: "https://someURL/\(row)/\(column).jpg")
                let tile = UIImage(data: NSData(contentsOfURL: url!)!)!
                let x = self.sideLength * CGFloat(column)
                let y = self.sideLength * CGFloat(row)
                let point = CGPoint(x: x, y: y)
                let size = CGSize(width: self.sideLength, height: self.sideLength)
                var tileRect = CGRect(origin: point, size: size)
                tileRect = CGRectIntersection(self.bounds, tileRect)
                dispatch_async(dispatch_get_main_queue()) {
                    tile.drawInRect(tileRect)
                }
            }
        }
    }
}
And I'm getting this error:
<Error>: CGContextRestoreGState: invalid context 0x0. Backtrace:
<-[UIImage drawInRect:]+66>
<_TFFFC6H1Z1DB15MyClass8drawRectFS0_FVSC6CGRectT_U_FT_T_U_FT_T_+122>
<_TTRXFo__dT__XFdCb__dT__+39>
<_dispatch_call_block_and_release+12>
<_dispatch_client_callout+8>
<_dispatch_main_queue_callback_4CF+1738>
<__CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__+9>
<__CFRunLoopRun+2073>
<CFRunLoopRunSpecific+488>
<GSEventRunModal+161>
<UIApplicationMain+171>
<main+109>
Can anybody give me a hint on how to retrieve and draw the image asynchronously?

Don't use drawRect. There's no reason to use drawRect in this scenario. Simply use views, or layers, or SpriteKit, or OpenGL ES (there are probably more possible choices). In the first two cases, you'll probably have to add/remove bits and pieces based on the part of the view which is visible on screen, but using standard views/layers will get you much better performance. Apple strongly recommends against using drawRect.
If you do use drawRect, certainly don't load data while in there. Apple clearly states that you should only be drawing, not doing anything else, while in there. And you certainly don't want to start asynchronous tasks from there; that will just lead to a catastrophe. Load the data beforehand, store it somewhere, and just do the drawing while in drawRect. If you load data as the user moves around, do the loading as the user moves, not when you draw. You'll probably need to invalidate rects once an image has actually been loaded so that drawRect is then called. But again, don't use drawRect. Just add/remove views/layers.
Also, I recommend not using NSData(contentsOfURL:). Use an NSURLSession data task with the appropriate completion handler instead. This way, all of your loads happen concurrently (up to the configured limits) rather than one after the other.
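A minimal sketch of that approach, written in the Swift 2 style of the question's code: fetch each tile with an NSURLSession data task off the main thread, then add a UIImageView on the main queue instead of drawing in drawRect. The URL pattern and sideLength come from the question; tileContainer is a hypothetical container view (e.g. inside a UIScrollView).
func loadTile(row row: Int, column: Int) {
    guard let url = NSURL(string: "https://someURL/\(row)/\(column).jpg") else { return }
    NSURLSession.sharedSession().dataTaskWithURL(url) { data, _, error in
        guard let data = data, image = UIImage(data: data) where error == nil else { return }
        dispatch_async(dispatch_get_main_queue()) {
            // Each tile becomes a cheap view that can be added/removed as it scrolls on/off screen
            let tileView = UIImageView(image: image)
            tileView.frame = CGRect(x: self.sideLength * CGFloat(column),
                                    y: self.sideLength * CGFloat(row),
                                    width: self.sideLength,
                                    height: self.sideLength)
            self.tileContainer.addSubview(tileView)
        }
    }.resume()
}
Nothing here ever touches a drawing context off the main thread, and the tile requests run concurrently instead of blocking one another.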

Related

MTKView frequently displaying scrambled MTLTextures

I am working on an MTKView-backed paint program which can replay painting history via an array of MTLTextures that store keyframes. I am having an issue in which sometimes the content of these MTLTextures is scrambled.
As an example, say I want to store a section of the drawing below as a keyframe:
During playback, sometimes the drawing will display exactly as intended, but sometimes, it will display like this:
Note the distorted portion of the picture. (The undistorted portion constitutes a static background image that's not part of the keyframe in question)
I describe the way I create individual MTLTextures from the MTKView's currentDrawable below. Because of color-depth issues I won't go into, the process may seem a little roundabout.
I first get a CGImage of the subsection of the screen that constitutes a keyframe.
I use that CGImage to create an MTLTexture tied to the MTKView's device.
I store that MTLTexture into a MTLTextureStructure that stores the MTLTexture and the keyframe's bounding-box (which I'll need later)
Lastly, I store it in an array of MTLTextureStructures (keyframeMetalArray). During playback, when I hit a keyframe, I get it from this keyframeMetalArray.
The associated code is outlined below.
let keyframeCGImage = weakSelf!.canvasMetalViewPainting.mtlTextureToCGImage(bbox: keyframeBbox, copyMode: copyTextureMode.textureKeyframe) // convert from MetalTexture to CGImage
let keyframeMTLTexture = weakSelf!.canvasMetalViewPainting.CGImageToMTLTexture(cgImage: keyframeCGImage)
let keyframeMTLTextureStruc = mtlTextureStructure(texture: keyframeMTLTexture, bbox: keyframeBbox, strokeType: brushTypeMode.brush)
weakSelf!.keyframeMetalArray.append(keyframeMTLTextureStruc)
Without providing specifics about how each conversion is happening, I wonder if, from an architectural design standpoint, I'm overlooking something that is corrupting the data stored in keyframeMetalArray. It may be unwise to try to store these MTLTextures in volatile arrays, but I don't know that for a fact. I just figured using MTLTextures would be the quickest way to update content.
By the way, when I swap the arrays of keyframes out for arrays of UIImage.pngData, I have no display issues, but it's a lot slower. On the plus side, it tells me that the initial capture from currentDrawable to keyframeCGImage is working just fine.
Any thoughts would be appreciated.
p.s. adding a bit of detail based on the feedback:
mtlTextureToCGImage:
func mtlTextureToCGImage(bbox: CGRect, copyMode: copyTextureMode) -> CGImage {
    let kciOptions = [convertFromCIContextOption(CIContextOption.outputPremultiplied): true,
                      convertFromCIContextOption(CIContextOption.useSoftwareRenderer): false] as [String : Any]
    let bboxStrokeScaledFlippedY = CGRect(x: (bbox.origin.x * self.viewContentScaleFactor),
                                          y: ((self.viewBounds.height - bbox.origin.y - bbox.height) * self.viewContentScaleFactor),
                                          width: (bbox.width * self.viewContentScaleFactor),
                                          height: (bbox.height * self.viewContentScaleFactor))
    let strokeCIImage = CIImage(mtlTexture: metalDrawableTextureKeyframe,
                                options: convertToOptionalCIImageOptionDictionary(kciOptions))!.oriented(CGImagePropertyOrientation.downMirrored)
    let imageCropCG = cicontext.createCGImage(strokeCIImage, from: bboxStrokeScaledFlippedY, format: CIFormat.RGBA8, colorSpace: colorSpaceGenericRGBLinear)
    cicontext.clearCaches()
    return imageCropCG!
} // end of func mtlTextureToCGImage(bbox: CGRect)
CGImageToMTLTexture:
func CGImageToMTLTexture (cgImage: CGImage) -> MTLTexture {
    // Note that we forego the more direct method of creating stampTexture:
    //let stampTexture = try! MTKTextureLoader(device: self.device!).newTexture(cgImage: strokeUIImage.cgImage!, options: nil)
    // because MTKTextureLoader seems to be doing additional processing which messes with the resulting texture/colorspace
    let width = Int(cgImage.width)
    let height = Int(cgImage.height)
    let bytesPerPixel = 4
    let rowBytes = width * bytesPerPixel
    //
    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                                 width: width,
                                                                 height: height,
                                                                 mipmapped: false)
    texDescriptor.usage = MTLTextureUsage(rawValue: MTLTextureUsage.shaderRead.rawValue)
    texDescriptor.storageMode = .shared
    guard let stampTexture = device!.makeTexture(descriptor: texDescriptor) else {
        return brushTextureSquare // return SOMETHING
    }
    let dstData: CFData = (cgImage.dataProvider!.data)!
    let pixelData = CFDataGetBytePtr(dstData)
    let region = MTLRegionMake2D(0, 0, width, height)
    print("[MetalViewPainting]: w= \(width) | h= \(height) region = \(region.size)")
    stampTexture.replace(region: region, mipmapLevel: 0, withBytes: pixelData!, bytesPerRow: Int(rowBytes))
    return stampTexture
} // end of func CGImageToMTLTexture (cgImage: CGImage)
The type of distortion looks like a bytes-per-row alignment issue between CGImage and MTLTexture. You're probably only seeing this issue when your image is a certain size that falls outside of the bytes-per-row alignment requirement of your MTLDevice. If you really need to store the texture as a CGImage, ensure that you are using the bytesPerRow value of the CGImage when copying back to the texture.
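A minimal sketch of that fix, reusing the names from the CGImageToMTLTexture code above: instead of computing rowBytes as width * bytesPerPixel, pass the CGImage's own bytesPerRow, which already accounts for any row padding the system added.
let dstData: CFData = (cgImage.dataProvider!.data)!
let pixelData = CFDataGetBytePtr(dstData)
let region = MTLRegionMake2D(0, 0, cgImage.width, cgImage.height)
stampTexture.replace(region: region,
                     mipmapLevel: 0,
                     withBytes: pixelData!,
                     bytesPerRow: cgImage.bytesPerRow) // the CGImage's actual stride, not width * 4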

Is there a faster way to render paths onto an UIImage than UIGraphicsImageRenderer?

I want to display a map in my iOS application. For this, I have a floorplan image (UIImage) and use the following code to render paths (which represent the buildings or rooms) onto the map image:
static func draw(paths: [[CGPoint]], toImage image: UIImage?) -> UIImage? {
    if let image = image {
        let renderer = UIGraphicsImageRenderer(size: image.size)
        return renderer.image { context in
            image.draw(at: CGPoint(x: 0, y: 0))
            context.cgContext.setFillColor(UIColor.init(white: 0.1, alpha: 0.5).cgColor)
            for path in paths {
                if path.count > 2 {
                    context.cgContext.move(to: path[0])
                    for point in path {
                        context.cgContext.addLine(to: point)
                    }
                    context.cgContext.addLine(to: path[0])
                }
            }
            context.cgContext.drawPath(using: .fill)
        }
    } else {
        return nil
    }
}
The result of this method is then set on a UIImageView. However, this takes about two seconds, which is way too long.
I am new to iOS development and this was the only way I found.
Does anyone know a faster way? Maybe using custom views or something?
I would suggest having a look at CAShapeLayer; it is usually quite fast, although I can't say whether it outperforms UIGraphicsImageRenderer in your case. My guess is that it will, because it also scales as needed and so removes the need to create a large image.
In case you are new to layers: they are like views, except they don't have a user-input part. They are easy to work with, since every UIView actually has a .layer for its rendering, which can also be used as a layer parent.
To make a layer work with a view, you just add it to your view's layer property as a sublayer and then make sure it has the right size. The best way to size the layer is either to use the layer's .contentsGravity or to set the size manually in the view's layoutSubviews.
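A minimal sketch of that idea, assuming the same paths: [[CGPoint]] as in the question and an image view named mapImageView showing the floorplan (the function and view names are illustrative):
func addPathsLayer(paths: [[CGPoint]], to mapImageView: UIImageView) {
    // Combine all paths into one bezier path, mirroring the question's fill logic
    let combined = UIBezierPath()
    for path in paths where path.count > 2 {
        combined.move(to: path[0])
        for point in path.dropFirst() {
            combined.addLine(to: point)
        }
        combined.close()
    }
    // One CAShapeLayer draws the whole overlay; no offscreen image is created
    let shapeLayer = CAShapeLayer()
    shapeLayer.path = combined.cgPath
    shapeLayer.fillColor = UIColor(white: 0.1, alpha: 0.5).cgColor
    shapeLayer.frame = mapImageView.bounds // re-set this in layoutSubviews if the view can resize
    mapImageView.layer.addSublayer(shapeLayer)
}
Updating the overlay then only means assigning a new cgPath to the layer; the floorplan image itself never has to be re-rendered.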
Read more about CAShapeLayer in the docs
A tutorial on layers

How to invert colors of a specific area of a UIView?

It's easy to blur a portion of the view, keeping in mind that if the contents of views behind change, the blur changes too in realtime.
My questions:
How do I create an invert effect that I can put over a view, so that the content behind it shows with inverted colors?
How do I add an effect that knows the average color of the pixels behind it?
In general, how do I access the pixels and manipulate them?
My question is not about UIImageView; I'm asking about UIView in general.
There are libraries that do something similar, but they are slow and don't run as smoothly as blur!
Thanks.
If you know how to code a CIColorKernel, you'll have what you need.
Core Image has several blur filters, all of which use the GPU, which will give you the performance you need.
The CIAreaAverage filter will give you the average color for a specified rectangular area.
Core Image Filters
Here is about the simplest CIColorKernel you can write. It swaps the red and green values for every pixel in an image (note the "grba" instead of "rgba"):
kernel vec4 swapRedAndGreenAmount(__sample s) {
    return s.grba;
}
To put this into a CIColorKernel, just use this line of code:
let swapKernel = CIKernel(string:
    "kernel vec4 swapRedAndGreenAmount(__sample s) {" +
    "return s.grba;" +
    "}")!
@tww003 has good code to convert a view's layer into a UIImage. Assuming you call your image myUiImage, to execute this swapKernel, you can:
let myInputCi = CIImage(image: myUiImage)!
let myOutputCi = swapKernel.apply(withExtent: myInputCi.extent, arguments: [myInputCi])!
let myNewImage = UIImage(ciImage: myOutputCi)
That's about it. You can do a lot more (including using Core Graphics, etc.), but this is a good start.
One last note: you can chain individual filters (including hand-written color, warp, and general kernels). If you want, you can chain your color average over the underlying view with a blur and do whatever kind of inversion you wish as a single filter/effect.
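For the average-color part of the question, here is a hedged sketch of CIAreaAverage usage. It assumes you already have a CIImage named inputImage of the area behind your view and a reusable CIContext named context (both names are illustrative):
let extentVector = CIVector(x: inputImage.extent.origin.x,
                            y: inputImage.extent.origin.y,
                            z: inputImage.extent.size.width,
                            w: inputImage.extent.size.height)
let filter = CIFilter(name: "CIAreaAverage")!
filter.setValue(inputImage, forKey: kCIInputImageKey)
filter.setValue(extentVector, forKey: kCIInputExtentKey)
let averagePixel = filter.outputImage! // a 1x1 image holding the average color
var bitmap = [UInt8](repeating: 0, count: 4)
context.render(averagePixel,
               toBitmap: &bitmap,
               rowBytes: 4,
               bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
               format: CIFormat.RGBA8,
               colorSpace: nil)
let averageColor = UIColor(red: CGFloat(bitmap[0]) / 255.0,
                           green: CGFloat(bitmap[1]) / 255.0,
                           blue: CGFloat(bitmap[2]) / 255.0,
                           alpha: CGFloat(bitmap[3]) / 255.0)
Reading the single output pixel back through render(_:toBitmap:...) keeps everything on the GPU until the very last step.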
I don't think I can fully answer your question, but maybe I can point you in the right direction.
Apple has some documentation on accessing the pixel data of a CGImage, but of course that requires that you have an image to work with in the first place. Fortunately, you can create an image from a UIView like this:
UIGraphicsBeginImageContext(view.frame.size)
view.layer.render(in: UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
From this image you created, you'll be able to manipulate the pixel data how you want to. It may not be the cleanest way to solve your problem, but maybe it's something worth exploring.
Unfortunately, the link I provided is written in Objective-C and is a few years old, but maybe you can figure out how to make good use of it.
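Since that reference is Objective-C, here is a rough Swift sketch of the same idea, assuming the snapshot image from the snippet above and an 8-bit-per-channel RGBA backing (the actual channel order depends on the image's bitmapInfo):
if let cgImage = image?.cgImage,
   let data = cgImage.dataProvider?.data,
   let bytes = CFDataGetBytePtr(data) {
    let bytesPerRow = cgImage.bytesPerRow
    let bytesPerPixel = cgImage.bitsPerPixel / 8
    // Read the pixel at (x, y)
    let x = 10, y = 10
    let offset = y * bytesPerRow + x * bytesPerPixel
    let red = bytes[offset]
    let green = bytes[offset + 1]
    let blue = bytes[offset + 2]
    print("pixel at (\(x), \(y)):", red, green, blue)
}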
First, I will recommend that you extend UIImageView for this purpose (ref).
Good reference by Joe.
You have to override the drawRect method:
import UIKit

@IBDesignable class PortholeView: UIView {
    @IBInspectable var innerCornerRadius: CGFloat = 10.0
    @IBInspectable var inset: CGFloat = 20.0
    @IBInspectable var fillColor: UIColor = UIColor.grayColor()
    @IBInspectable var strokeWidth: CGFloat = 5.0
    @IBInspectable var strokeColor: UIColor = UIColor.blackColor()

    override func drawRect(rect: CGRect) {
        super.drawRect(rect)
        // Prep constants
        let roundRectWidth = rect.width - (2 * inset)
        let roundRectHeight = rect.height - (2 * inset)
        // Use EvenOdd rule to subtract portalRect from outerFill
        // (See https://stackoverflow.com/questions/14141081/uiview-drawrect-draw-the-inverted-pixels-make-a-hole-a-window-negative-space)
        let outerFill = UIBezierPath(rect: rect)
        let portalRect = CGRectMake(
            rect.origin.x + inset,
            rect.origin.y + inset,
            roundRectWidth,
            roundRectHeight)
        fillColor.setFill()
        let portal = UIBezierPath(roundedRect: portalRect, cornerRadius: innerCornerRadius)
        outerFill.appendPath(portal)
        outerFill.usesEvenOddFillRule = true
        outerFill.fill()
        strokeColor.setStroke()
        portal.lineWidth = strokeWidth
        portal.stroke()
    }
}
Your answer is here

Need serious help on creating a comet with animations in SpriteKit

I'm currently working on a SpriteKit project and need to create a comet with a fading tail that animates across the screen. I am having serious issues with SpriteKit in this regard.
Attempt 1. It:
Draws a CGPath and creates an SKShapeNode from the path
Creates a square SKShapeNode with gradient
Creates an SKCropNode and assigns its maskNode as line, and adds square as a child
Animates the square across the screen, while being clipped by the line/SKCropNode
func makeCometInPosition(from: CGPoint, to: CGPoint, color: UIColor, timeInterval: NSTimeInterval) {
    ... (...s are (definitely) irrelevant lines of code)
    let path = CGPathCreateMutable()
    ...
    let line = SKShapeNode(path: path)
    line.lineWidth = 1.0
    line.glowWidth = 1.0
    var squareFrame = line.frame
    ...
    let square = SKShapeNode(rect: squareFrame)
    //Custom SKTexture extension. I've tried adding a normal image and the leak happens either way. The extension is not the problem
    square.fillTexture = SKTexture(color1: UIColor.clearColor(), color2: color, from: from, to: to, frame: line.frame)
    square.fillColor = color
    square.strokeColor = UIColor.clearColor()
    square.zPosition = 1.0
    let maskNode = SKCropNode()
    maskNode.zPosition = 1.0
    maskNode.maskNode = line
    maskNode.addChild(square)
    //self is an SKScene, background is an SKSpriteNode
    self.background?.addChild(maskNode)
    let lineSequence = SKAction.sequence([SKAction.waitForDuration(timeInterval), SKAction.removeFromParent()])
    let squareSequence = SKAction.sequence([SKAction.waitForDuration(1), SKAction.moveBy(CoreGraphics.CGVectorMake(deltaX * 2, deltaY * 2), duration: timeInterval), SKAction.removeFromParent()])
    square.runAction(SKAction.repeatActionForever(squareSequence))
    maskNode.runAction(lineSequence)
    line.runAction(lineSequence)
}
This works, as shown below.
The problem is that after 20-40 other nodes come on the screen, weird things happen. Some of the nodes on the screen disappear, some stay. Also, the fps and node count (toggled in the SKView and never changed)
self.showsFPS = true
self.showsNodeCount = true
disappear from the screen. This makes me assume it's a bug with SpriteKit. SKShapeNode has been known to cause issues.
Attempt 2. I tried changing square from an SKShapeNode to an SKSpriteNode (adding and removing lines related to the two as necessary):
let tex = SKTexture(color1: UIColor.clearColor(), color2: color, from: from, to: to, frame: line.frame)
let square = SKSpriteNode(texture: tex)
The rest of the code is basically identical. This produces a similar effect with no performance/memory bugs. However, something odd happens with SKCropNode and it looks like this:
It has no antialiasing, and the line is thicker. I have tried changing anti-aliasing, glow width, and line width. There is a minimum width that cannot change for some reason, and setting the glow width larger does this.
According to other Stack Overflow questions, maskNodes are either 1 or 0 in alpha. This is confusing, since an SKShapeNode can have different line/glow widths.
Attempt 3. After some research, I discovered I might be able to use the clipping effect and preserve line width/glow using an SKEffectNode instead of SKCropNode.
//Not the exact code to what I tried, but very similar
let maskNode = SKEffectNode()
maskNode.filter = customLinearImageFilter
maskNode.addChild(line)
This produced the (literally) exact same effect as attempt 1. It created the same lines and animation, but the same bugs with other nodes/fps/nodeCount occurred. So it seems to be a bug with SKEffectNode, and not SKShapeNode.
I do not know how to bypass the bugs with attempt 1/3 or 2.
Does anybody know if there is something I am doing wrong, if there is a bypass around this, or a different solution altogether for my problem?
Edit: I considered emitters, but there could potentially be hundreds of comets/other nodes coming in within a few seconds and didn't think they would be feasible performance-wise. I have not used SpriteKit before this project so correct me if I am wrong.
This looks like a problem for a custom shader attached to the comet path. If you are not familiar with OpenGL Shading Language (GLSL) in SpriteKit, it lets you jump right into the GPU fragment shader, specifically to control the drawing behavior of the nodes it is attached to, via SKShader.
Conveniently, SKShapeNode has a strokeShader property for hooking up an SKShader to draw the path. When connected to this property, the shader gets passed the length of the path and the point on the path currently being drawn, in addition to the color value at that point.*
controlFadePath.fsh
void main() {
    //uniforms and varyings
    vec4 inColor = v_color_mix;
    float length = u_path_length;
    float distance = v_path_distance;
    float start = u_start;
    float end = u_end;
    float mult;
    mult = smoothstep(end, start, distance/length);
    if (distance/length > start) { discard; }
    gl_FragColor = vec4(inColor.r, inColor.g, inColor.b, inColor.a) * mult;
}
To control the fade along the path, pass a start and end point into the custom shader using two SKUniform objects named u_start and u_end. These get added to the custom shader during initialization of a custom SKShapeNode subclass, CometPathShape, and animated via a custom action.
class CometPathShape:SKShapeNode
class CometPathShape: SKShapeNode {
    //custom shader for fading
    let pathShader: SKShader
    let fadeStartU = SKUniform(name: "u_start", float: 0.0)
    let fadeEndU = SKUniform(name: "u_end", float: 0.0)
    let fadeAction: SKAction

    override init() {
        pathShader = SKShader(fileNamed: "controlFadePath.fsh")
        let fadeDuration: NSTimeInterval = 1.52
        fadeAction = SKAction.customActionWithDuration(fadeDuration, actionBlock:
            { (node: SKNode, time: CGFloat) -> Void in
                let D = CGFloat(fadeDuration)
                let t = time/D
                var Ps: CGFloat = 0.0
                var Pe: CGFloat = 0.0
                Ps = 0.25 + (t*1.55)
                Pe = (t*1.5) - 0.25
                let comet: CometPathShape = node as! CometPathShape
                comet.fadeRange(Ps, to: Pe) })
        super.init()
        path = makeComet...(...) //custom method that creates path for comet shape
        strokeShader = pathShader
        pathShader.addUniform(fadeStartU)
        pathShader.addUniform(fadeEndU)
        hidden = true
        //set up for path shape, eg. strokeColor, strokeWidth...
        ...
    }

    func fadeRange(from: CGFloat, to: CGFloat) {
        fadeStartU.floatValue = Float(from)
        fadeEndU.floatValue = Float(to)
    }

    func launch() {
        hidden = false
        runAction(fadeAction, completion: { () -> Void in self.hidden = true })
    }
    ...
The SKScene initializes the CometPathShape objects, caches them, and adds them to the scene. During update:, the scene simply calls .launch() on the chosen CometPathShapes.
class GameScene:SKScene
...
override func didMoveToView(view: SKView) {
    /* Setup your scene here */
    self.name = "theScene"
    ...
    //create a big bunch of paths with custom shaders
    print("making cache of path shape nodes")
    for i in 0...shapeCount {
        let shape = CometPathShape()
        let ext = String(i)
        shape.name = "comet_".stringByAppendingString(ext)
        comets.append(shape)
        shape.position.y = CGFloat(i * 3)
        print(shape.name)
        self.addChild(shape)
    }
}

override func update(currentTime: CFTimeInterval) {
    //pull from cache and launch comets, skip busy ones
    for _ in 1...launchCount {
        let shape = self.comets[Int(arc4random_uniform(UInt32(shapeCount)))]
        if shape.hasActions() { continue }
        shape.launch()
    }
}
This cuts the number of SKNodes per comet from 3 to 1, simplifying your code and the runtime environment, and it opens the door for much more complex effects via the shader. The only drawback I can see is having to learn some GLSL.**
*Not always correctly in the device simulator; the simulator does not pass the distance and length values to the custom shader.
**That, and some idiosyncrasies in CGPath/GLSL behavior: path construction affects the way the fade performs, and it looks like v_path_distance is not blended smoothly across curve segments. Still, with care in constructing the curve this should work.

How to render a complex UIView into a PDF Context with high resolution?

There are several questions on SO asking how to render a UIView into a PDF context, but they all use view.layer.renderInContext(pdfContext), which results in a 72 DPI image (and one that looks terrible when printed). What I'm looking for is a technique to somehow get the UIView to render at something like 300 DPI.
In the end, I was able to take hints from several prior posts and put together a solution. I'm posting this since it took me a long time to get working, and I really hope to save someone else time and effort doing the same.
This solution uses two basic techniques:
Render the UIView into a scaled bitmap context to produce a large image
Draw the image into a PDF Context which has been scaled down, so that the drawn image has a high resolution
Build your view:
let v = UIView()
... // then add subviews, constraints, etc
Create the PDF Context:
UIGraphicsBeginPDFContextToData(data, docRect, stats.headerDict) // zero == (612 by 792 points)
defer { UIGraphicsEndPDFContext() }
UIGraphicsBeginPDFPage();
guard let pdfContext = UIGraphicsGetCurrentContext() else { return nil }
// I tried 300.0/72.0 but was not happy with the results
let rescale: CGFloat = 4 // 288 DPI rendering of view
// You need to change the scale factor on all subviews, not just the top view!
// This is a vital step, and there may be other types of views that need to be excluded
Then create a large bitmap image of the view at an expanded scale:
func scaler(v: UIView) {
    if !v.isKindOfClass(UIStackView.self) {
        v.contentScaleFactor = 8
    }
    for sv in v.subviews {
        scaler(sv)
    }
}
scaler(v)
// Create a large Image by rendering the scaled view
let bigSize = CGSize(width: v.frame.size.width*rescale, height: v.frame.size.height*rescale)
UIGraphicsBeginImageContextWithOptions(bigSize, true, 1)
let context = UIGraphicsGetCurrentContext()!
CGContextSetFillColorWithColor(context, UIColor.whiteColor().CGColor)
CGContextFillRect(context, CGRect(origin: CGPoint(x: 0, y: 0), size: bigSize))
// Must increase the transform scale
CGContextScaleCTM(context, rescale, rescale)
v.layer.renderInContext(context)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Now we have a large image with each point representing one pixel.
To get it drawn into the PDF at high resolution, we need to scale the PDF down while drawing the image at its large size:
CGContextSaveGState(pdfContext)
CGContextTranslateCTM(pdfContext, v.frame.origin.x, v.frame.origin.y) // where the view should be shown
CGContextScaleCTM(pdfContext, 1/rescale, 1/rescale)
let frame = CGRect(origin: CGPoint(x: 0, y: 0), size: bigSize)
image.drawInRect(frame)
CGContextRestoreGState(pdfContext)
... // Continue with adding other items
You can see that the left "S", contained in the cream-colored bitmap, looks pretty nice compared to an "S" drawn by an attributed string:
If the same view is rendered into the PDF without all the scaling, this is what you would see:
