label creation not positioned in the dwg - ios

I have a function that gets called to create labels and position them to simulate dimensions in a drawing (like a CAD front view); however, they all get positioned in the top left even though the x and y coordinates are being passed in correctly by the calling function. I've hit a wall on this one and would appreciate any help. See the screenshot I took and you'll notice that all the labels are scrunched up in the top left corner.
All the drawing gets added to the CGContext.
Here's my function:
public func drawText(ctx: CGContext,
                     txtCol: UIColor,
                     text: String,
                     posX: Double,
                     posY: Double,
                     rotation: Double,
                     fill: Bool) -> CGContext {
    let label = UILabel()
    label.textAlignment = .center
    label.text = text
    label.textColor = txtCol
    label.transform = CGAffineTransform(rotationAngle: CGFloat.pi / 2)
    if fill {
        label.backgroundColor = UIColor.yellow
    }
    label.draw(CGRect(x: CGFloat(posX), y: CGFloat(posY), width: 100, height: 30))
    label.layer.render(in: ctx)
    return ctx
}

As the render(in:) documentation states: "Renders in the coordinate space of the layer." This means the frame origin of your layer/view is ignored.
You might want to translate the CTM to (posX, posY) with ctx.translateBy(x:y:) before calling label.layer.render(in: ctx). Remember to save and restore the graphics state on each call using ctx.saveGState() and ctx.restoreGState().
Also, I doubt you need the call to label.draw(CGRect(x: CGFloat(posX), y: CGFloat(posY), width: 100, height: 30)), since draw(_:) should not be called manually.
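A minimal sketch of that suggestion (assumptions: the label is given an explicit 100 × 30 frame, and the rotation parameter is applied through the context, since render(in:) also ignores the layer's transform):
public func drawText(ctx: CGContext,
                     txtCol: UIColor,
                     text: String,
                     posX: Double,
                     posY: Double,
                     rotation: Double,
                     fill: Bool) -> CGContext {
    let label = UILabel(frame: CGRect(x: 0, y: 0, width: 100, height: 30))
    label.textAlignment = .center
    label.text = text
    label.textColor = txtCol
    if fill {
        label.backgroundColor = .yellow
    }
    ctx.saveGState()
    // Position and rotate via the context, because render(in:)
    // ignores the layer's own frame origin and transform.
    ctx.translateBy(x: CGFloat(posX), y: CGFloat(posY))
    ctx.rotate(by: CGFloat(rotation))
    label.layer.render(in: ctx)
    ctx.restoreGState()
    return ctx
}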

Related

How to pin a label / UIView to a node in ARKit

I'd like to have a UILabel stay on top of a node in ARKit, similar to the dimension labels in iOS 12's Measure app.
I’ve tried adding the label as a plane node and using a billboard constraint, but then the text gets smaller as you move away, which isn’t ideal.
At WWDC, they referred to this as Screen Space, but didn’t say how to achieve it.
Thanks in advance!
I've come up with the following solution to make it work.
First, I create the UILabel and add it as a subview. Next, in renderer(_:updateAtTime:), I convert the position of the node I want to follow to screen coordinates. Now the label follows the node correctly and stays fixed in scale. However, the label stays horizontal relative to the screen, which looks weird; to keep it horizontal relative to the world, I rotate the label according to the ARCamera's yaw (z rotation).
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // Convert the node's position to screen coordinates
    let screenCoordinate = self.sceneView.projectPoint(node.position)
    DispatchQueue.main.async {
        // Move the label
        label.center = CGPoint(x: CGFloat(screenCoordinate.x), y: CGFloat(screenCoordinate.y))
        // Hide the label if the node is "behind the screen"
        label.isHidden = (screenCoordinate.z > 1)
        // Rotate the label
        if let rotation = sceneView.session.currentFrame?.camera.eulerAngles.z {
            label.transform = CGAffineTransform(rotationAngle: CGFloat(rotation + Float.pi / 2))
        }
    }
}
Try with an SCNText geometry; I used the following code some months ago:
var txtNote = SCNText()
txtNote.string = "Hello AR word!"
txtNote.font = UIFont.systemFont(ofSize: 2)
txtNote.extrusionDepth = 1.0
txtNote.containerFrame = CGRect(x: 2.5, y: 0, width: 16, height: 10)
txtNote.isWrapped = true
txtNote.alignmentMode = kCAAlignmentLeft
txtNote.materials.first?.diffuse.contents = UIColor.black.cgColor
var txtNode = SCNNode(geometry: txtNote)
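Note that SCNText geometry is sized in scene units (meters), so the node usually needs to be scaled down before being added; a small sketch (the 0.01 factor and the sceneView name are assumptions to adapt to your scene):
txtNode.scale = SCNVector3(0.01, 0.01, 0.01) // assumed scale factor; tune for your scene
sceneView.scene.rootNode.addChildNode(txtNode) // assumes an ARSCNView named sceneView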

UIImageView mask updating too slowly in response to gesture

I need to create a component that lets users choose between 2 images. At first, you see the 2 images side by side with a "handle" in the middle. If you move the handle to the left, you see more of the right image and less of the left one, so as to reveal the right image, and vice versa.
Technically, I have 2 full-size UIImageViews, one on top of the other, and both are masked. I have a pan gesture, and when the user slides the handle, the handle moves and the masks update themselves to adjust to "the new middle".
Here's the code responsible for adjusting the image masks. The constant is calculated in the method called by the gesture. I know my calculation of that constant is good because the "handle" and the masks are updated correctly.
BUT
the masks get updated too late, and while dragging you can see them adjust with a visible lag.
func adjustImagesMasks(to constant: CGFloat) {
    choiceImageA.mask?.willChangeValue(forKey: "frame")
    choiceImageB.mask?.willChangeValue(forKey: "frame")
    let separationPoint: CGFloat = self.frame.width / 2.0 + constant
    maskA.backgroundColor = UIColor.black.cgColor
    maskA.frame = CGRect(origin: .zero, size: CGSize(width: separationPoint, height: self.frame.size.height))
    maskB.backgroundColor = UIColor.black.cgColor
    maskB.frame = CGRect(x: separationPoint, y: 0, width: self.frame.width - separationPoint, height: self.frame.size.height)
    choiceImageA.mask?.didChangeValue(forKey: "frame")
    choiceImageB.mask?.didChangeValue(forKey: "frame")
    maskA.drawsAsynchronously = true
    maskB.drawsAsynchronously = true
    self.setNeedsDisplay()
    maskA.setNeedsDisplay()
    maskA.displayIfNeeded()
    maskB.setNeedsDisplay()
    maskB.displayIfNeeded()
}
The image views have their masks setup like this:
maskA = CALayer()
maskB = CALayer()
choiceImageA.layer.mask = maskA
choiceImageA.layer.masksToBounds = true
choiceImageB.layer.mask = maskB
choiceImageB.layer.masksToBounds = true
So to recap, my question is really about performance. The image views are being correctly adjusted, but too slowly. The "handle", which is positioned with constraints, gets updated really quickly.
So apparently, CALayer tries to animate most changes to its properties, and the delay I was seeing was in fact due to that implicit animation.
I resolved my issue by wrapping the call to adjustImagesMasks() in a CATransaction with kCATransactionDisableActions set to true, which asks Core Animation not to animate the changes for that transaction. Because the updates are continuous (driven by the pan gesture), the result is seamless.
Full code here:
CATransaction.begin()
CATransaction.setValue(kCFBooleanTrue, forKey: kCATransactionDisableActions)
adjustImagesMasks(to: newConstant)
CATransaction.commit()
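As a side note, CATransaction also exposes a typed convenience for the same key; an equivalent sketch:
CATransaction.begin()
CATransaction.setDisableActions(true) // same effect as setting kCATransactionDisableActions
adjustImagesMasks(to: newConstant)
CATransaction.commit()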
This other post helped me a lot. There's a nice explanation too.
Hope this helps someone else.

Swift 3 - NSString.draw(in: rect, withAttributes:) -- Text not being drawn at expected point

Teeing off of this Stack Overflow post, which was very helpful, I've been able to successfully draw text onto a full-screen image (I'm tagging the image with pre-canned, short strings, e.g., "Trash"). However, the text isn't appearing where I want it: centered on the exact point the user tapped. Here's my code, based on some code from the above post but updated for Swift 3 --
func addTextToImage(text: NSString, inImage: UIImage, atPoint: CGPoint) -> UIImage {
    // Set up the font-specific variables
    let textColor: UIColor = UIColor.red
    let textFont: UIFont = UIFont(name: "Helvetica Bold", size: 80)!
    // Set up the font attributes that will later dictate how the text should be drawn
    let textFontAttributes = [
        NSFontAttributeName: textFont,
        NSForegroundColorAttributeName: textColor,
    ]
    // Create a bitmap-based graphics context
    UIGraphicsBeginImageContextWithOptions(inImage.size, false, 0.0)
    // Put the image into a rectangle as large as the original image
    inImage.draw(in: CGRect(x: 0, y: 0, width: inImage.size.width, height: inImage.size.height))
    // Create the rectangle where the text will be written
    let rect: CGRect = CGRect(x: atPoint.x, y: atPoint.y, width: inImage.size.width, height: inImage.size.height)
    // Draw the text in the rectangle
    text.draw(in: rect, withAttributes: textFontAttributes)
    // Get the image from the graphics context
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage!
}
In the above, atPoint is the location of the user's tap. This is where I want the text to be drawn. However, the text is always written toward the upper-left corner of the image. For example, in the attached image, I tapped halfway down the waterfall, as that is where I want the text string "Trash" to be written. But instead, you can see that it is written way up in the left-hand corner. I've tried a bunch of stuff but can't get a solution. I appreciate any help.
How are you setting atPoint? If you are using the same coordinate space as the screen, that won't work... which is what I suspect is happening.
Suppose your image is 1000 x 2000, and you are showing it in a UIImageView that is 100 x 200. If you tap at x: 50 y: 100 in the view (at the center), and then send that point to your function, it will draw the text at x: 50 y: 100 of the image -- which will be in the upper-left corner, instead of in the center.
So, you need to convert your point from the image view's coordinate space to the actual image's coordinate space, either before you call your function or by modifying your function to handle it.
An example (not necessarily the best way to do it):
// assume:
// View Size is 100 x 200
// Image Size is 1000 x 2000
// tapPoint is CGPoint(x: 50, y: 100)
let xFactor = image.size.width / imageView.frame.size.width
// xFactor now equals 10
let yFactor = image.size.height / imageView.frame.size.height
// yFactor now equals 10
let convertedPoint = CGPoint(x: tapPoint.x * xFactor, y: tapPoint.y * yFactor)
convertedPoint now equals CGPoint(x: 500, y: 1000), and you can send that as the atPoint value in your call to addTextToImage.
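If you'd rather fold that conversion into a helper, here is a minimal sketch (convertToImagePoint is a hypothetical name, and it assumes the image view's contentMode is .scaleToFill, i.e. no aspect-fit letterboxing):
// Hypothetical helper: maps a tap point in the image view's coordinate
// space to the corresponding point in the image's own coordinate space.
func convertToImagePoint(_ tapPoint: CGPoint, in imageView: UIImageView) -> CGPoint {
    guard let image = imageView.image else { return tapPoint }
    let xFactor = image.size.width / imageView.frame.size.width
    let yFactor = image.size.height / imageView.frame.size.height
    return CGPoint(x: tapPoint.x * xFactor, y: tapPoint.y * yFactor)
}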

iOS Screen coordinates and scaling

I'm trying to draw on top of an image in a CALayer and am having trouble with where the drawing shows up on different size displays.
func drawLayer() {
    let circleLayer = CAShapeLayer()
    let radius: CGFloat = 30
    let x = Thermo.frame.origin.x
    let y = Thermo.frame.origin.y
    let XX = Thermo.frame.width
    let YY = Thermo.frame.height
    print("X: \(x) Y: \(y) Width: \(XX) Height: \(YY)")
    circleLayer.path = UIBezierPath(roundedRect: CGRect(x: 0, y: 0, width: 2.0 * radius, height: 2.0 * radius), cornerRadius: radius).CGPath
    circleLayer.fillColor = UIColor.redColor().CGColor
    circleLayer.shadowOffset = CGSizeMake(0, 3)
    circleLayer.shadowRadius = 5.0
    circleLayer.shadowColor = UIColor.blackColor().CGColor
    circleLayer.shadowOpacity = 0.8
    circleLayer.frame = CGRectMake(0, 410, 0, 192)
    self.Thermo.layer.addSublayer(circleLayer)
    circleLayer.setNeedsDisplay()
}
That draws a circle, in the correct place ... for an iPhone 6s. But when the enclosing UIImageView component is scaled for a smaller device, it clearly doesn't. I added the print() to see what the image size and position were, and ... well, it's exactly the same on every device I run it on: X: 192.0 Y: 8.0 Width: 216.0 Height: 584.0. But clearly it's being scaled by the constraints in the Auto Layout manager.
So, my question is: how can I figure out the proper ratio and position for different screen sizes if I can't use the enclosing view's size and position, since that seems to never change?
Here is the image I am starting with, in a UIImageView, and trying to draw over.
I'm of course trying to color it in based on data from an external device. Any suggestions/sample code most appreciated!
CALayer and its subclasses, including CAShapeLayer, have a property
var contentsScale: CGFloat
From the class reference:
For layers you create and manage yourself, you must set the value of this property yourself based on the resolution of the screen and the content you are providing. Core Animation uses the value you specify as a cue to determine how to render your content.
So what you need to do is set the scale on the layer; you get the screen's scale from the UIScreen class:
circleLayer.contentsScale = UIScreen.mainScreen().scale
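As for the positioning question itself (this goes beyond the contentsScale point above), one common approach is to recompute the layer's geometry from the image view's current bounds after Auto Layout has run, instead of hard-coding point values. A rough sketch, assuming circleLayer is stored in a property and 0.7 is an example fraction:
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // Reposition relative to the image view's current, post-layout bounds
    circleLayer.position = CGPoint(x: Thermo.bounds.midX,
                                   y: Thermo.bounds.height * 0.7)
}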

Subtract UIView from another UIView in Swift

I'm sure this is a very simple thing to do, but I can't seem to wrap my head around the logic.
I have two UIViews. One black, semi-transparent, and "full-screen" ("overlayView"); another on top, smaller and resizable ("cropView"). It's pretty much a crop-view setup, where I want to dim out the areas of an underlying image that are not being cropped.
My question is: How do I go about this? I'm sure my approach should be with CALayers and masks, but no matter what I try, I can't get behind the logic.
This is what I have right now:
This is what I would want it to look like:
How do I achieve this result in Swift?
Although you won't find a method such as subtract(...), you can easily build a screen with an overlay and a transparent cutout with the following code:
Swift 4.2
private func addOverlayView() {
    let overlayView = UIView(frame: self.bounds)
    let targetMaskLayer = CAShapeLayer()
    let squareSide = frame.width / 1.6
    let squareSize = CGSize(width: squareSide, height: squareSide)
    let squareOrigin = CGPoint(x: CGFloat(center.x) - (squareSide / 2),
                               y: CGFloat(center.y) - (squareSide / 2))
    let square = UIBezierPath(roundedRect: CGRect(origin: squareOrigin, size: squareSize), cornerRadius: 16)
    let path = UIBezierPath(rect: self.bounds)
    path.append(square)
    targetMaskLayer.path = path.cgPath
    // Exclude intersected paths
    targetMaskLayer.fillRule = CAShapeLayerFillRule.evenOdd
    overlayView.layer.mask = targetMaskLayer
    overlayView.clipsToBounds = true
    overlayView.alpha = 0.6
    overlayView.backgroundColor = UIColor.black
    addSubview(overlayView)
}
Just call this method inside your custom view's constructor or inside your ViewController's viewDidLoad().
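For example, a hypothetical host view (the CropOverlayView name is an assumption, not from the answer):
class CropOverlayView: UIView {
    override init(frame: CGRect) {
        super.init(frame: frame)
        addOverlayView()
    }
    required init?(coder: NSCoder) {
        super.init(coder: coder)
        addOverlayView()
    }
}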
Walkthrough
First I create a raw overlayView, then a CAShapeLayer which I called "targetMaskLayer". The ultimate goal is to cut a square hole in that overlayView with the help of UIBezierPath. After defining the square's dimensions, I append it to a path covering the whole bounds and set that combined path's cgPath as the targetMaskLayer's path.
Now comes an important part:
targetMaskLayer.fillRule = CAShapeLayerFillRule.evenOdd
Here I basically configure the fill rule to exclude the intersection.
Finally, I provide some styling to the overlayView and add it as a subview.
P.S.: don't forget to import UIKit.
There might be another drawing solution, but basically you have 4 areas that need to be handled: take the full-width areas above and below the cutout, then add the left and right strips between them, with constraints to each other.
