How to project 3D joint points (transforms) onto the 2D screen with ARKit? - ios

I am trying to project the 3D human joint points onto the iPhone's screen using ARKit.
I am extracting the global transforms:
let rightArmPosition = skeleton.modelTransform(for: ARSkeleton.JointName(rawValue: "right_arm_joint"))!
let rootPosition = skeleton.modelTransform(for: .root)!
I am calculating the offsets:
let rightOffset = simd_make_float3(rightArmPosition.columns.3)
let rootOffset = simd_make_float3(rootPosition.columns.3)
I am projecting the points:
let pMatrix = camera.projectionMatrix
let pRightOffset = camera.projectPoint(rightOffset, orientation: .portrait, viewportSize: CGSize(width: UIScreen.main.bounds.width, height: UIScreen.main.bounds.height))
let pRootOffset = camera.projectPoint(rootOffset, orientation: .portrait, viewportSize: CGSize(width: UIScreen.main.bounds.width, height: UIScreen.main.bounds.height))
humanJointsView.frame = UIScreen.main.bounds
humanJointsView.points = [pRightOffset, pRootOffset]
I am trying to draw the points in the target view:
override func draw(_ rect: CGRect) {
    path.removeAllPoints()

    self.points.forEach { point in
        path = UIBezierPath(ovalIn: CGRect(x: point.x, y: point.y, width: CGFloat(30), height: CGFloat(30)))
        UIColor.green.setFill()
        path.fill()
    }
}
However, this approach is not working. Where is my mistake?
Thank you!

The right-arm and root transforms are in the skeleton's model space; you need to multiply them by the body anchor's transform to get world-space transforms before projecting:
let rightArmWorldTransform = body.transform * rightArmPosition
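For completeness, here is a fuller sketch of the whole flow (not from the original answer; the projectJoints method name and calling it from the ARSession delegate are assumptions, and humanJointsView is the overlay view from the question):

func projectJoints(bodyAnchor: ARBodyAnchor, frame: ARFrame) {
    let skeleton = bodyAnchor.skeleton

    guard
        let rightArmModel = skeleton.modelTransform(for: ARSkeleton.JointName(rawValue: "right_arm_joint")),
        let rootModel = skeleton.modelTransform(for: .root)
    else { return }

    // Joint transforms are in the skeleton's model space;
    // multiply by the body anchor's transform to get world space.
    let rightArmWorld = bodyAnchor.transform * rightArmModel
    let rootWorld = bodyAnchor.transform * rootModel

    let viewportSize = UIScreen.main.bounds.size

    // Project the world-space translation (columns.3) onto the screen.
    let rightArmScreen = frame.camera.projectPoint(simd_make_float3(rightArmWorld.columns.3),
                                                   orientation: .portrait,
                                                   viewportSize: viewportSize)
    let rootScreen = frame.camera.projectPoint(simd_make_float3(rootWorld.columns.3),
                                               orientation: .portrait,
                                               viewportSize: viewportSize)

    humanJointsView.points = [rightArmScreen, rootScreen]
    humanJointsView.setNeedsDisplay()
}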

Related

How to display a Vision output in a UI?

I am relatively new to coding, and I have recently been working on a program that lets a user scan a crystal using the iPhone's rear camera and identifies what kind of crystal it is. I used CreateML to build the model and Vision to identify the crystal. What I can't figure out is how to get the results into the UI I built; right now the results only print to the Xcode console.
Here's a picture of the Storyboard:
I assume you want to draw a box around the detected crystal?
You should be getting a boundingBox of your crystal that looks something like this:
(0.166666666666667, 0.35, 0.66666666666667, 0.3)
These are "normalized" coordinates, which means that they are relative to the image that you send to Vision. I explain this more in detail here...
(The diagrams contrast "What you are used to" — UIKit coordinates with the origin at the top-left — with "What Vision returns" — normalized coordinates with the origin at the bottom-left.)
You need to convert these "normalized" coordinates to UIKit coordinates that you can use. To do that, I have this converting function:
func getConvertedRect(boundingBox: CGRect, inImage imageSize: CGSize, containedIn containerSize: CGSize) -> CGRect {
    let rectOfImage: CGRect

    let imageAspect = imageSize.width / imageSize.height
    let containerAspect = containerSize.width / containerSize.height

    if imageAspect > containerAspect { /// image extends left and right
        let newImageWidth = containerSize.height * imageAspect /// the width of the overflowing image
        let newX = -(newImageWidth - containerSize.width) / 2
        rectOfImage = CGRect(x: newX, y: 0, width: newImageWidth, height: containerSize.height)
    } else { /// image extends top and bottom
        let newImageHeight = containerSize.width * (1 / imageAspect) /// the height of the overflowing image
        let newY = -(newImageHeight - containerSize.height) / 2
        rectOfImage = CGRect(x: 0, y: newY, width: containerSize.width, height: newImageHeight)
    }

    /// Vision's normalized origin is at the bottom-left; flip it to UIKit's top-left origin.
    let newOriginBoundingBox = CGRect(
        x: boundingBox.origin.x,
        y: 1 - boundingBox.origin.y - boundingBox.height,
        width: boundingBox.width,
        height: boundingBox.height
    )

    var convertedRect = VNImageRectForNormalizedRect(newOriginBoundingBox, Int(rectOfImage.width), Int(rectOfImage.height))

    /// add the margins
    convertedRect.origin.x += rectOfImage.origin.x
    convertedRect.origin.y += rectOfImage.origin.y

    return convertedRect
}
You can use it like this:
let convertedRect = self.getConvertedRect(
    boundingBox: observation.boundingBox,
    inImage: image.size, /// image is the image that you feed into Vision
    containedIn: self.previewView.bounds.size /// the size of your camera feed's preview view
)
self.drawBoundingBox(rect: convertedRect)

/// draw the rectangle
func drawBoundingBox(rect: CGRect) {
    let uiView = UIView(frame: rect)
    previewView.addSubview(uiView)

    uiView.backgroundColor = UIColor.orange.withAlphaComponent(0.2)
    uiView.layer.borderColor = UIColor.orange.cgColor
    uiView.layer.borderWidth = 3
}
Result (I'm doing a VNDetectRectanglesRequest):
If you want to "track" the detected object while your phone is moving, check out my answer here
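If, instead of a box, you only need to show the classification text in your UI, a minimal sketch would be to update a label from the request's completion handler on the main thread (the visionModel and resultLabel names are assumptions for illustration, not from the original question):

let request = VNCoreMLRequest(model: visionModel) { request, _ in
    // Take the top classification returned by the CreateML model.
    guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
    DispatchQueue.main.async {
        // UI work must happen on the main thread, not in the Vision callback.
        self.resultLabel.text = "\(top.identifier) (\(Int(top.confidence * 100))%)"
    }
}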

Adding the same layer to a subview changes its visual position

I added a subview (with a black border) to a view and centered it.
Then I generated two identical triangles with CAShapeLayer and added one to the subview and the other to the main view.
Here is the visual result in a Playground, where we can see that the green triangle is totally off even though it should have been centered.
And here is the code:
let view = UIView()
let borderedView = UIView()
var containedFrame = CGRect(x: 0, y: 0, width: 100, height: 100)

func setupUI() {
    view.frame = CGRect(x: 0, y: 0, width: 300, height: 600)
    view.backgroundColor = .white

    borderedView.frame = containedFrame
    borderedView.center = view.center
    borderedView.backgroundColor = .clear
    borderedView.layer.borderColor = UIColor.black.cgColor
    borderedView.layer.borderWidth = 1

    view.addSubview(borderedView)
    setupTriangles()
}

private func setupTriangles() {
    view.layer.addSublayer(createTriangle(color: .red)) // RED triangle
    borderedView.layer.addSublayer(createTriangle(color: .green)) // GREEN triangle
}

private func createTriangle(color: UIColor) -> CAShapeLayer {
    let layer = CAShapeLayer()

    let bezierPath = UIBezierPath()
    bezierPath.move(to: CGPoint(x: 0, y: 0))
    bezierPath.addLine(to: CGPoint(x: -containedFrame.width, y: 0))
    bezierPath.addLine(to: CGPoint(x: 0, y: -containedFrame.height))
    bezierPath.close()

    layer.position = borderedView.center
    layer.path = bezierPath.cgPath
    layer.fillColor = color.cgColor

    return layer
}
Note: All positions (of view, borderedView, and both triangles) are the same: (150.0, 300.0).
Question: Why is the green layer not in the right position?
@DuncanC is right that each view has its own coordinate system. Your problem is this line:
layer.position = borderedView.center
That sets the layer's position to the center of borderedView's frame, which is expressed in the coordinate system of view. When you create the green triangle, it needs to use the coordinate system of borderedView.
You can fix this by passing the view to your createTriangle function, and then use the center of the bounds of that view as the layer position:
private func setupTriangles() {
    view.layer.addSublayer(createTriangle(color: .red, for: view)) // RED triangle
    borderedView.layer.addSublayer(createTriangle(color: .green, for: borderedView)) // GREEN triangle
}

private func createTriangle(color: UIColor, for view: UIView) -> CAShapeLayer {
    let layer = CAShapeLayer()

    let bezierPath = UIBezierPath()
    bezierPath.move(to: CGPoint(x: 0, y: 0))
    bezierPath.addLine(to: CGPoint(x: -containedFrame.width, y: 0))
    bezierPath.addLine(to: CGPoint(x: 0, y: -containedFrame.height))
    bezierPath.close()

    layer.position = CGPoint(x: view.bounds.midX, y: view.bounds.midY)
    layer.path = bezierPath.cgPath
    layer.fillColor = color.cgColor

    return layer
}
Note: When you do this, the green triangle appears directly below the red one, so it isn't visible.
Every view/layer uses the coordinate system of its superview/superlayer. If you add a layer to self.view.layer, it will be positioned in self.view.layer's coordinate system. If you add a layer to borderedView.layer, it will be in borderedView.layer's coordinate system.
Think of the view/layer hierarchy as stacks of pieces of graph paper. You place a new piece of paper on the current piece (the superview/layer) using the current piece's coordinate system, but if you then draw on the new view/layer, or add new views/layers inside it, you use the new view/layer's coordinate system.
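A small numeric illustration of the same point (not from the original answer, using the views from the question):

// The same on-screen point has different coordinates in different views.
let centerInView = CGPoint(x: view.bounds.midX, y: view.bounds.midY)   // (150, 300) in view's coordinates
let centerInBordered = view.convert(centerInView, to: borderedView)    // (50, 50) in borderedView's coordinates
// A sublayer of borderedView.layer must be positioned with the second value,
// which is exactly what borderedView.bounds.midX / midY gives you.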

Views drawn from code (PaintCode) are pixelated, very pixelated when scaled

I am building an app that overlays views drawn with code (output from PaintCode) onto photos. I have added gesture recognizers to rotate and scale the views drawn with code.
There is some mild pixelation on the views drawn on top. If I rotate or scale the image up even slightly, there is a lot more pixelation.
Here is a comparison of the images:
No rotating or scaling:
A small amount of rotation/scaling:
Here is the UIView extension I'm using to output the composited view:
extension UIView {
    func printViewToImage() -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.scale = 2.0

        let renderer = UIGraphicsImageRenderer(bounds: self.bounds, format: format)
        return renderer.image { rendererContext in
            self.drawHierarchy(in: self.bounds, afterScreenUpdates: true)
        }
    }
}
Even if I set the scale to something like 4.0, there is no difference.
Here is the code I'm using for the scale/rotation gesture recognizers:
@IBAction func handlePinch(recognizer: UIPinchGestureRecognizer) {
    guard let view = recognizer.view else {
        return
    }
    view.transform = view.transform.scaledBy(x: recognizer.scale, y: recognizer.scale)
    recognizer.scale = 1
}

@IBAction func handleRotate(recognizer: UIRotationGestureRecognizer) {
    guard let view = recognizer.view else {
        return
    }
    view.transform = view.transform.rotated(by: recognizer.rotation)
    recognizer.rotation = 0
}
I have experimented with making the canvasses very large in PaintCode (3000x3000), and there is no difference, so I don't think it has to do with that.
How can I draw/export these views so that they are not pixelated?
Edit: Here's what some of the drawing code looks like...
public dynamic class func drawCelebrateDiversity(frame targetFrame: CGRect = CGRect(x: 0, y: 0, width: 3000, height: 3000), resizing: ResizingBehavior = .aspectFit, color: UIColor = UIColor(red: 1.000, green: 1.000, blue: 1.000, alpha: 1.000)) {
    //// General Declarations
    let context = UIGraphicsGetCurrentContext()!

    //// Resize to Target Frame
    context.saveGState()
    let resizedFrame: CGRect = resizing.apply(rect: CGRect(x: 0, y: 0, width: 3000, height: 3000), target: targetFrame)
    context.translateBy(x: resizedFrame.minX, y: resizedFrame.minY)
    context.scaleBy(x: resizedFrame.width / 3000, y: resizedFrame.height / 3000)

    //// Bezier 13 Drawing
    let bezier13Path = UIBezierPath()
    bezier13Path.move(to: CGPoint(x: 2915.18, y: 2146.51))
    bezier13Path.addCurve(to: CGPoint(x: 2925.95, y: 2152.38), controlPoint1: CGPoint(x: 2919.93, y: 2147.45), controlPoint2: CGPoint(x: 2924.05, y: 2147.91))
When scaling UIViews (or custom CALayers), you should set their contentsScale to match the desired density of their content. UIView sets its layer's contentsScale to the screen scale (2 on Retina), and you need to multiply that by the extra scale you apply via the transform.
view.layer.contentsScale = UIScreen.main.scale * gesture.scale
Even if the drawing code is resolution independent, everything on screen must be converted to a bitmap at some point. UIView allocates a bitmap of size bounds.size * contentsScale and then invokes the -drawRect: / draw(_:) method.
It is important to set contentsScale on the view that actually draws, even if that view itself is not scaled (but one of its parents is). A common solution is to recursively set contentsScale on all sublayers of the scaled view.
– PaintCode Support
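A minimal sketch of that recursive approach (the helper name and the accumulated totalScale variable are assumptions, not part of the quoted answer):

func updateContentsScale(of layer: CALayer, to scale: CGFloat) {
    layer.contentsScale = scale
    layer.setNeedsDisplay() // redraw the layer's backing bitmap at the new density
    layer.sublayers?.forEach { updateContentsScale(of: $0, to: scale) }
}

// In handlePinch, after applying the transform, where totalScale is the
// accumulated pinch scale you keep track of yourself:
// updateContentsScale(of: view.layer, to: UIScreen.main.scale * totalScale)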

CGPath-based CPTPlotSymbol inverted and distorted

I'm currently working with custom markers on a scatter plot and ran into an issue where a CPTPlotSymbol created from a CGPath is drawn upside down and distorted.
I've tested the path-creating code in a playground and it works without issues, drawing the path with the correct shape and orientation.
Here's the path drawing code:
private func getOuterPathInRect(rect: CGRect) -> CGPath {
    let circlePath: CGPath = {
        let p = CGMutablePath()
        let topHundred = CGRect(x: 0, y: 0, width: 100, height: 100)
        p.addEllipse(in: topHundred)
        return p
    }()

    let arrowPath: CGPath = {
        let p = CGMutablePath()
        p.move(to: CGPoint(x: rect.midX, y: rect.maxY - 5.0))
        p.addLine(to: CGPoint(x: rect.midX - 7.5, y: rect.maxY - 15.0))
        p.addLine(to: CGPoint(x: rect.midX + 7.5, y: rect.maxY - 15.0))
        return p
    }()

    let path = CGMutablePath()
    path.addPath(circlePath)
    path.addPath(arrowPath)
    return path
}
And the code that creates the CPTPlotSymbol is:
func symbol(for plot: CPTScatterPlot, record idx: UInt) -> CPTPlotSymbol? {
    let index = Int(idx)
    guard items[index].requiresMarker else { return nil }

    let rect = CGRect(x: 0, y: 0, width: 120, height: 120)
    let marker = BallMarkerView()
    marker.contentMode = .center

    let path = marker.pathIn(rect: rect)
    let symbol = CPTPlotSymbol.customPlotSymbol(with: path)
    symbol.size = rect.size
    symbol.fill = CPTFill(color: CPTColor.red())

    return symbol
}
My goal was to use a custom UIView as a marker, but I couldn't find an API to do so, so I resorted to providing a path-based marker and filling it with an image representation of the marker.
Is this the proper way of doing it?
Why is my path being drawn distorted and upside down? The path being upside down could be explained by the difference in coordinate systems between UIKit and Core Graphics, but that doesn't explain the distortion.
Thanks!
Because Core Plot shares drawing code between the Mac and iOS, it uses the same drawing coordinate system on both platforms where (0, 0) is the lower-left corner of the drawing canvas. This is flipped from the normal drawing coordinate system on iOS.
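One way to compensate for the flipped orientation (an illustration on my part, not stated in the answer above) is to flip the UIKit-built path vertically before handing it to Core Plot:

// Flip a path vertically within its bounding rect so a UIKit-oriented path
// appears upright in Core Plot's lower-left-origin coordinate system.
func flippedForCorePlot(_ path: CGPath, in rect: CGRect) -> CGPath {
    var flip = CGAffineTransform(translationX: 0, y: rect.maxY + rect.minY)
        .scaledBy(x: 1, y: -1)
    return path.copy(using: &flip) ?? path
}

// e.g. in symbol(for:record:):
// let symbol = CPTPlotSymbol.customPlotSymbol(with: flippedForCorePlot(path, in: rect))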

Union UIBezierPaths rather than append paths

I have an app where I take a UIBezierPath and use it as a brush via a series of appendPath: calls. After a few goes, and with really complex brush shapes, the memory runs out and the app grinds to a halt. What I really want is a full union, exactly like PaintCode does, but I can't find any way of doing this.
How would I go about unioning two or more UIBezierPaths?
EDIT:
Here is a visual of what I want to achieve dynamically.
In PaintCode you take two paths and overlap them like this:
BUT I want to merge / union them into one new single path, like:
Note that in the bottom panel of PaintCode there is code for what is now one single shape, and this is what I want to be able to get to programmatically, with maybe 1000 original paths.
You can get the desired result easily by combining two Core Graphics concepts:
i) CGBlendMode
ii) overlapping two layers (the overLap2Layer helper below)
Blend modes tell a context how to apply new content to itself. They determine how pixel data is digitally blended.
class UnionUIBezierPaths: UIView {
    var firstBeizerPath: UIImage!
    var secondBeizerPath: UIImage!

    override func draw(_ rect: CGRect) {
        super.draw(rect)
        firstBeizerPath = drawOverLapPath(firstBeizerpath: drawCircle(), secondBeizerPath: polygon())
        secondBeizerPath = drawOverLapPath(firstBeizerpath: polygon(), secondBeizerPath: drawCircle())
        let image = UIImage().overLap2Layer(firstLayer: firstBeizerPath, secondLayer: secondBeizerPath)
        image.draw(at: .zero) // draw the combined image into the view (missing in the original snippet)
    }

    func drawCircle() -> UIBezierPath {
        let path = UIBezierPath(ovalIn: CGRect(x: 40, y: 120, width: 100, height: 100))
        return path
    }

    func polygon() -> UIBezierPath {
        let beizerPath = UIBezierPath()
        beizerPath.move(to: CGPoint(x: 100, y: 10))
        beizerPath.addLine(to: CGPoint(x: 200.0, y: 40.0))
        beizerPath.addLine(to: CGPoint(x: 160, y: 140))
        beizerPath.addLine(to: CGPoint(x: 40, y: 140))
        beizerPath.addLine(to: CGPoint(x: 0, y: 40))
        beizerPath.close()
        return beizerPath
    }

    func drawOverLapPath(firstBeizerpath: UIBezierPath, secondBeizerPath: UIBezierPath) -> UIImage {
        UIGraphicsBeginImageContext(self.frame.size)
        let firstpath = firstBeizerpath
        UIColor.white.setFill()
        UIColor.black.setStroke()
        firstpath.stroke()
        firstpath.fill()

        // .sourceAtop (rawValue 20): new content is drawn only where it overlaps existing content
        UIGraphicsGetCurrentContext()!.setBlendMode(.sourceAtop)

        let secondPath = secondBeizerPath
        UIColor.white.setFill()
        UIColor.white.setStroke()
        secondPath.fill()
        secondPath.stroke()

        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image!
    }

    func drawImage(image1: UIImage, secondImage: UIImage) -> UIImage {
        UIGraphicsBeginImageContext(self.frame.size)
        image1.draw(in: CGRect(x: 0, y: 0, width: frame.size.width, height: frame.size.height))
        secondImage.draw(in: CGRect(x: 0, y: 0, width: frame.size.width, height: frame.size.height))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
// OverLap2Layer
extension UIImage {
    func overLap2Layer(firstLayer: UIImage, secondLayer: UIImage) -> UIImage {
        UIGraphicsBeginImageContext(firstLayer.size)
        firstLayer.draw(in: CGRect(x: 0, y: 0, width: firstLayer.size.width, height: firstLayer.size.height))
        secondLayer.draw(in: CGRect(x: 0, y: 0, width: firstLayer.size.width, height: firstLayer.size.height))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
First path:
Second path:
Final result:
References: Blend in Core Graphics, Creating Image, GitHub Demo
Finally a solution!!
Using https://github.com/adamwulf/ClippingBezier you can find the intersecting points. Then you can walk through the path, turning left if clockwise or vice-versa to stay on the outside. Then you can generate a new path using the sequence of points.
You can use GPCPolygon, an Objective-C wrapper for GPC:
- (GPCPolygonSet*) initWithPolygons:(NSMutableArray*)points;
or
- (GPCPolygonSet*) unionWithPolygonSet:(GPCPolygonSet*)p2;
