How to display Vision output in a UI? - iOS

I am relatively new to coding, and I have recently been working on a program that lets a user scan a crystal with the iPhone's rear camera and identifies what kind of crystal it is. I used CreateML to build the model and Vision to identify the crystal. What I can't figure out is how to get the results into the UI I built; right now they only print to the Xcode console.
Here's a picture of the Storyboard:

I assume you want to draw a box around the detected crystal?
You should be getting a boundingBox of your crystal that looks something like this:
(0.166666666666667, 0.35, 0.66666666666667, 0.3)
These are "normalized" coordinates, which means that they are relative to the image that you send to Vision. I explain this more in detail here...
What you are used to
What Vision returns
You need to convert these "normalized" coordinates to UIKit coordinates that you can use. To do that, I have this converting function:
func getConvertedRect(boundingBox: CGRect, inImage imageSize: CGSize, containedIn containerSize: CGSize) -> CGRect {
    let rectOfImage: CGRect

    let imageAspect = imageSize.width / imageSize.height
    let containerAspect = containerSize.width / containerSize.height

    if imageAspect > containerAspect { /// image extends left and right
        let newImageWidth = containerSize.height * imageAspect /// the width of the overflowing image
        let newX = -(newImageWidth - containerSize.width) / 2
        rectOfImage = CGRect(x: newX, y: 0, width: newImageWidth, height: containerSize.height)
    } else { /// image extends top and bottom
        let newImageHeight = containerSize.width * (1 / imageAspect) /// the height of the overflowing image
        let newY = -(newImageHeight - containerSize.height) / 2
        rectOfImage = CGRect(x: 0, y: newY, width: containerSize.width, height: newImageHeight)
    }

    /// Vision's origin is at the bottom left; flip to UIKit's top-left origin
    let newOriginBoundingBox = CGRect(
        x: boundingBox.origin.x,
        y: 1 - boundingBox.origin.y - boundingBox.height,
        width: boundingBox.width,
        height: boundingBox.height
    )

    var convertedRect = VNImageRectForNormalizedRect(newOriginBoundingBox, Int(rectOfImage.width), Int(rectOfImage.height))

    /// add the margins
    convertedRect.origin.x += rectOfImage.origin.x
    convertedRect.origin.y += rectOfImage.origin.y

    return convertedRect
}
You can use it like this:
let convertedRect = self.getConvertedRect(
boundingBox: observation.boundingBox,
inImage: image.size, /// image is the image that you feed into Vision
containedIn: self.previewView.bounds.size /// the size of your camera feed's preview view
)
self.drawBoundingBox(rect: convertedRect)
/// draw the rectangle
func drawBoundingBox(rect: CGRect) {
    let uiView = UIView(frame: rect)
    previewView.addSubview(uiView)

    uiView.backgroundColor = UIColor.orange.withAlphaComponent(0.2)
    uiView.layer.borderColor = UIColor.orange.cgColor
    uiView.layer.borderWidth = 3
}
Result (I'm doing a VNDetectRectanglesRequest):
If you want to "track" the detected object while your phone is moving, check out my answer here

Related

How to project 3D Joint Points (Transformation) onto 2D Screen ARKit?

I am trying to project the 3D human joint points onto the iPhone's screen using ARKit.
I am extracting the global transforms:
let rightArmPosition = skeleton.modelTransform(for: ARSkeleton.JointName(rawValue: "right_arm_joint"))!
let rootPosition = skeleton.modelTransform(for: .root)!
I am calculating the offsets:
let rightOffset = simd_make_float3(rightArmPosition.columns.3)
let rootOffset = simd_make_float3(rootPosition.columns.3)
I am projecting the points:
let pMatrix = camera.projectionMatrix
let pRightOffset = camera.projectPoint(rightOffset, orientation: .portrait, viewportSize: UIScreen.main.bounds.size)
let pRootOffset = camera.projectPoint(rootOffset, orientation: .portrait, viewportSize: UIScreen.main.bounds.size)
humanJointsView.frame = UIScreen.main.bounds
humanJointsView.points = [pRightOffset, pRootOffset]
I am trying to draw the points in the target view:
override func draw(_ rect: CGRect) {
    path.removeAllPoints()
    self.points.forEach { point in
        path = UIBezierPath(ovalIn: CGRect(x: point.x, y: point.y, width: 30, height: 30))
        UIColor.green.setFill()
        path.fill()
    }
}
This approach is not working, however. Where is my mistake?
Thank you!
The right arm and root transforms are in skeleton model space; you need to multiply them by the body anchor's transform.
let rightArmWorldTransform = body.transform * rightArmPosition // body: ARBodyAnchor
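Putting it together, a minimal sketch (assuming you receive the ARBodyAnchor in your ARSessionDelegate and take the camera from the current frame):

// Sketch: convert a joint from skeleton model space to world space via the
// body anchor, then project it to screen coordinates.
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    for case let bodyAnchor as ARBodyAnchor in anchors {
        let skeleton = bodyAnchor.skeleton
        guard let rightArmModel = skeleton.modelTransform(
                for: ARSkeleton.JointName(rawValue: "right_arm_joint")),
              let frame = session.currentFrame else { continue }

        // model space -> world space
        let rightArmWorld = bodyAnchor.transform * rightArmModel
        let worldPosition = simd_make_float3(rightArmWorld.columns.3)

        // world space -> screen space
        let screenPoint = frame.camera.projectPoint(
            worldPosition,
            orientation: .portrait,
            viewportSize: UIScreen.main.bounds.size)
        // hand screenPoint to humanJointsView here
    }
}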

Makeup using MLKit - iOS Swift

Any idea how I can put lipstick on the lips using face detection? I can already fill them with a flat color, but I want the result to look glossy and shiny.
Any idea how to use textures and shades in an MLKit iOS app?
Currently I find the lip points, create a CAShapeLayer, and fill it with color.
In ARKit we can use SceneKit and an SCNView, so we can easily add materials and textures, but how can we do that with MLKit?
Q1. Is it possible to use an SCNView with MLKit for materials and textures?
Q2. Or how can I draw an image over all the points (lips, eyes, eyebrows)?
// MARK: Contour func
private func addContours(for face: VisionFace, width: CGFloat, height: CGFloat) {
    guard let facez = SCNScene(named: "8.scn") else {
        return
    }
    facez.rootNode.scale = SCNVector3(1, 1, 1)

    let multipl: CGFloat = 200.0
    let xoff: CGFloat = 0.98
    let yoff: CGFloat = 1.76
    let xp = (face.frame.origin.x / multipl) - xoff
    let yp = (face.frame.origin.y / multipl) - yoff
    facez.rootNode.position = SCNVector3(xp, yp, -1)
    facez.rootNode.eulerAngles = SCNVector3(-1, 1, 0)

    cameraView.allowsCameraControl = true
    cameraView.autoenablesDefaultLighting = true
    cameraView.scene = facez

    let materials = facez.rootNode.geometry?.firstMaterial
    materials?.diffuse.contents = UIColor.red
}
I tried it like this, and also:
public static func addleftImage(
    atPoint point: CGPoint,
    to view: UIView,
    color: UIColor,
    radius: CGFloat
) {
    let divisor: CGFloat = 2.0
    let xCoord = point.x - radius / divisor
    let yCoord = point.y - radius / divisor
    let circleRect = CGRect(x: xCoord, y: yCoord, width: radius, height: radius)
    let circleView = UIImageView(frame: circleRect)
    circleView.image = #imageLiteral(resourceName: "leftEye")
    circleView.layer.cornerRadius = radius / divisor
    circleView.alpha = Constants.circleViewAlpha
    circleView.backgroundColor = color
    view.addSubview(circleView)
}
But no luck!
Q1: I'm afraid that's out of the scope of MLKit.
Q2: If it helps, MLKit has a Swift quickstart app that shows contour points on the face: https://github.com/firebase/quickstart-ios/tree/master/mlvision/MLVisionExample
A transformMatrix is applied, and this is the line that adds the contour points:
https://github.com/firebase/quickstart-ios/blob/master/mlvision/MLVisionExample/ViewController.swift#L845
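Building on that, here is a minimal sketch of filling the lips with a translucent CAShapeLayer. It assumes a Firebase VisionFace with contours enabled and points already converted to view coordinates (e.g. via the sample app's transformMatrix); the contour-type names follow the Firebase MLVision API and may need adjusting for your SDK version:

// Sketch: trace the outer lip contours into one path and fill it with a
// translucent color. `face` is a VisionFace; points are assumed to be
// already mapped into `view`'s coordinate space.
private func addLipstickLayer(for face: VisionFace, in view: UIView) {
    let path = UIBezierPath()
    // Note: depending on the SDK's point order, you may need to reverse the
    // second contour so the outline doesn't cross itself (assumption).
    for type in [FaceContourType.upperLipTop, FaceContourType.lowerLipBottom] {
        guard let contour = face.contour(ofType: type) else { continue }
        for visionPoint in contour.points {
            let point = CGPoint(x: CGFloat(visionPoint.x.floatValue),
                                y: CGFloat(visionPoint.y.floatValue))
            if path.isEmpty {
                path.move(to: point)
            } else {
                path.addLine(to: point)
            }
        }
    }
    path.close()

    let lipstick = CAShapeLayer()
    lipstick.path = path.cgPath
    lipstick.fillColor = UIColor.red.withAlphaComponent(0.4).cgColor
    view.layer.addSublayer(lipstick)
}

A flat alpha fill won't look glossy on its own; for shine you would typically composite a highlight texture or blend mode on top, which is why SceneKit feels tempting, but that compositing can also be done with plain Core Animation or Core Graphics.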

Create a frame image from a single image

I want to make a frame image from a single image. Below is the code I'm using:
func createFrameFromImage(image: UIImage, size: CGSize) -> UIImage {
    let imageSize = CGSize(width: size.width, height: size.height)
    UIGraphicsBeginImageContext(imageSize)
    let width = imageSize.width
    let height = imageSize.height

    var leftTop = image
    let rightTop = rotateImageByAngles(image: &leftTop, angles: .pi/2) // correct
    let rightBottom = rotateImageByAngles(image: &leftTop, angles: -.pi) // correct
    let leftBottom = rotateImageByAngles(image: &leftTop, angles: -.pi/2) // correct

    leftTop.draw(in: CGRect(x: 0, y: 0, width: width/2, height: height/2))
    rightTop.draw(in: CGRect(x: width/2, y: 0, width: width/2, height: height/2))
    leftBottom.draw(in: CGRect(x: 0, y: height/2, width: width/2, height: height/2))
    rightBottom.draw(in: CGRect(x: width/2, y: height/2, width: width/2, height: height/2))

    guard let finalImage = UIGraphicsGetImageFromCurrentImageContext() else { return rightTop }
    UIGraphicsEndImageContext()
    return finalImage
}
The function above takes one piece of an image, creates four different images by rotating it to specific angles, and merges them to make a frame image. The issue I'm facing is maintaining the image's aspect ratio. For example, if I create a final image of size 320 × 120, the image gets squeezed horizontally. I'm attaching a screenshot of the output. I want to show this generated image on a wall using ARKit.
Final frame image
Given Image
// Adding the frame
// 1 inch = 72 points
// converting the size from inches to points to create the frame image
let frameWidth = (size.width + 1)
let frameHeight = (size.height + 1)
let imgFrameUnit = UIImage(named: "img.png")!
let imgFrame = Singleton.shared.createFrameFromImage(image: imgFrameUnit, size: CGSize(width: frameWidth, height: frameHeight))
let frame = SCNNode(geometry: SCNPlane(width: (frameWidth * 2.54) / 100, height: (frameHeight * 2.54) / 100)) // in meters
frame.geometry?.firstMaterial?.diffuse.contents = imgFrame
frame.name = "frame"
nodeWeCanChange?.addChildNode(frame)
Any help would be really appreciated!
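One way to avoid the squeeze, sketched under the assumption that the corner tile makes up half of each side: compose the frame once as a square (so nothing distorts), then let UIKit's 9-slice resizing stretch only the edge strips while the corners stay rigid:

// Sketch, not the asker's code: build a square frame from the corner tile,
// then stretch it to the target size without distorting the corners.
func framedImage(from cornerTile: UIImage, targetSize: CGSize) -> UIImage {
    // 1. Compose a distortion-free square frame (reusing createFrameFromImage).
    let side = min(targetSize.width, targetSize.height)
    let squareFrame = Singleton.shared.createFrameFromImage(
        image: cornerTile,
        size: CGSize(width: side, height: side))

    // 2. Mark the corners as fixed; only the thin middle strips may stretch.
    let inset = side / 2 - 1 // assumption: everything up to the center is corner
    let stretchable = squareFrame.resizableImage(
        withCapInsets: UIEdgeInsets(top: inset, left: inset, bottom: inset, right: inset),
        resizingMode: .stretch)

    // 3. Rasterize at the final (non-square) size.
    return UIGraphicsImageRenderer(size: targetSize).image { _ in
        stretchable.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}

With .tile instead of .stretch, a repeating edge pattern tiles rather than smearing, which often looks better for decorative frames.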

How to resize a UIImage without antialiasing?

I am developing an iOS board game. I am trying to give the board a kind of "texture".
What I did was I created this very small image (really small, be sure to look carefully):
And I passed this image to the UIColor.init(patternImage:) initializer to create a UIColor that is this image. I used this UIColor to fill some square UIBezierPaths, and the result looks like this:
All copies of that image line up perfectly and form many straight diagonal lines. So far so good.
Now on the iPad, the squares that I draw will be larger, and the borders of those squares will be larger too. I have successfully calculated what the stroke width and size of the squares should be, so that is not a problem.
However, since the squares are larger on an iPad, there will be more diagonal lines per square. I do not want that. I need to resize the very small image to a bigger one, and that the size depends on the stroke width of the squares. Specifically, the width of the resized image should be twice as much as the stroke width.
I wrote this extension to resize the image, adapted from this post:
extension UIImage {
    func resized(toWidth newWidth: CGFloat) -> UIImage {
        let scale = newWidth / size.width
        let newHeight = size.height * scale
        UIGraphicsBeginImageContextWithOptions(CGSize(width: newWidth, height: newHeight), false, 0)
        self.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
And called it like this:
// this is the code I used to draw a single square
let path = UIBezierPath(rect: CGRect(origin: point(for: Position(x, y)), size: CGSize(width: squareLength, height: squareLength)))
UIColor.black.setStroke()
path.lineWidth = strokeWidth
// this is the line that's important!
UIColor(patternImage: #imageLiteral(resourceName:
"texture").resized(toWidth: strokeWidth * 2)).setFill()
path.fill()
path.stroke()
Now the game board looks like this on an iPhone:
You might need to zoom in on the webpage a bit to see what I mean. The board now looks extremely ugly. You can see the "borders" of each copy of the image. I don't want this. On an iPad, though, the board looks fine. I suspect that this only happens when I downsize the image.
I figured that this might be due to the antialiasing that happens when I use the extension. I found this post and this post about removing antialiasing, but the former seems to be doing this in an image view, while I am doing this in the draw(_:) method of my custom GameBoardView. The latter's solution seems to be exactly the same as what I am using.
How can I resize without antialiasing? Or, at a higher level of abstraction, how can I make my board look pretty?
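Rather than resizing a tiny pattern image at all, you can draw the diagonal stripes directly in draw(_:):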
class Ruled: UIView {
    override func draw(_ rect: CGRect) {
        let T: CGFloat = 15 // desired thickness of lines
        let G: CGFloat = 30 // desired gap between lines
        let W = rect.size.width
        let H = rect.size.height

        guard let c = UIGraphicsGetCurrentContext() else { return }
        c.setStrokeColor(UIColor.orange.cgColor)
        c.setLineWidth(T)

        var p = -(W > H ? W : H) - T
        while p <= W {
            c.move(to: CGPoint(x: p - T, y: -T))
            c.addLine(to: CGPoint(x: p + T + H, y: T + H))
            c.strokePath()
            p += G + T + T
        }
    }
}
Enjoy.
Note that you would, obviously, clip that view. If you want a number of them on the screen, or in a pattern, just do that.
To clip to a given rectangle: the class above simply draws at the size of the UIView. However, you often want to draw a number of "boxes" within the view, at different coordinates (a good example is a calendar). Furthermore, the example below explicitly draws "both stripes" rather than drawing one stripe over the background color:
func simpleStripes(x: CGFloat, y: CGFloat, width: CGFloat, height: CGFloat) {
    let stripeWidth: CGFloat = 20.0 // whatever you want
    let m = stripeWidth / 2.0

    guard let c = UIGraphicsGetCurrentContext() else { return }
    c.setLineWidth(stripeWidth)

    let r = CGRect(x: x, y: y, width: width, height: height)
    let longerSide = width > height ? width : height

    c.saveGState()
    c.clip(to: r)

    var p = x - longerSide
    while p <= x + width {
        c.setStrokeColor(UIColor(red: 0.75, green: 0.85, blue: 1.0, alpha: 1).cgColor) // pale blue
        c.move(to: CGPoint(x: p - m, y: y - m))
        c.addLine(to: CGPoint(x: p + m + height, y: y + m + height))
        c.strokePath()
        p += stripeWidth

        c.setStrokeColor(UIColor(white: 0.9, alpha: 1).cgColor) // pale gray
        c.move(to: CGPoint(x: p - m, y: y - m))
        c.addLine(to: CGPoint(x: p + m + height, y: y + m + height))
        c.strokePath()
        p += stripeWidth
    }

    c.restoreGState()
}
extension UIImage {
    func ResizeImage(targetSize: CGSize) -> UIImage {
        let size = self.size
        let widthRatio = targetSize.width / size.width
        let heightRatio = targetSize.height / size.height

        // Figure out which ratio to use so the image keeps its aspect ratio
        var newSize: CGSize
        if widthRatio > heightRatio {
            newSize = CGSize(width: size.width * heightRatio, height: size.height * heightRatio)
        } else {
            newSize = CGSize(width: size.width * widthRatio, height: size.height * widthRatio)
        }

        // This is the rect that we've calculated out and is what is actually used below
        let rect = CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height)

        // Actually do the resizing to the rect using the image context
        UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
        self.draw(in: rect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
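To also answer the title question directly, here is a minimal sketch of resizing without antialiasing, i.e. nearest-neighbor scaling, so the tiny tile's pixels stay crisp (the method name is hypothetical):

// Sketch: resize without smoothing by turning off interpolation.
extension UIImage {
    func resizedWithoutSmoothing(toWidth newWidth: CGFloat) -> UIImage {
        let newSize = CGSize(width: newWidth, height: size.height * newWidth / size.width)
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1 // assumption: 1 point == 1 pixel for the pattern tile
        return UIGraphicsImageRenderer(size: newSize, format: format).image { ctx in
            // nearest-neighbor sampling instead of smoothing
            ctx.cgContext.interpolationQuality = .none
            ctx.cgContext.setShouldAntialias(false)
            draw(in: CGRect(origin: .zero, size: newSize))
        }
    }
}

You could then swap this in for the resized(toWidth:) call when building the pattern color.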

Draw image and text into a single image

I'm attempting to combine an image and a text field into a single image while still keeping the text's initial positioning.
I'm using UIGraphicsBeginImageContext to create a bitmap context and UIGraphicsGetImageFromCurrentImageContext to draw the final image.
So far, I have the following contained within a function:
let size = CGSize(width: self.takenImage.size.width, height: self.takenImage.size.height)
UIGraphicsBeginImageContextWithOptions(takenImage.size, false, takenImage.scale)

let areaSize = CGRect(x: 0, y: 0, width: self.takenImage.size.width, height: self.takenImage.size.height)
takenImage.draw(in: areaSize)

let imageViewSize = self.takenImage.size
let multiWidth = areaSize.width / imageViewSize.width
let multiHeight = areaSize.height / imageViewSize.height
print("multiWidth = \(multiWidth)")
print("multiHeight = \(multiHeight)")

let textSize = CGRect(x: textOverlay.frame.origin.x * multiWidth, y: textOverlay.frame.origin.y * multiHeight, width: textOverlay.frame.size.width * multiWidth, height: textOverlay.frame.size.height * multiHeight)
textOverlay.drawText(in: textSize)

let outputImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
self.finalImage = outputImage
takenImage is the image taken from my camera (never nil), and textOverlay is a UITextField containing the desired text.
I first create the bitmap and draw takenImage using both its original width/height.
If I draw just this to my finalImage, all works fine. The problem stems from trying to add the text and keep it in the same position.
I've tried creating a second CGRect with the x, y, width, and height of the UITextField textOverlay, but when viewing the final image I get weird results.
The images can be seen here.
How would I go about preserving the text's position in the merged image?
You need to recalculate the area size used to draw the text.
let areaSize = CGRect(x: 0, y: 0, width: self.takenImage.size.width, height: self.takenImage.size.height)
takenImage.draw(in: areaSize)

let imageViewSize = self.imageView.bounds.size
let multiWidth = areaSize.width / imageViewSize.width
let multiHeight = areaSize.height / imageViewSize.height

let textSize = CGRect(x: textOverlay.frame.origin.x * multiWidth, y: textOverlay.frame.origin.y * multiHeight, width: textOverlay.frame.size.width * multiWidth, height: textOverlay.frame.size.height * multiHeight)
textOverlay.drawText(in: textSize)
Here I use self.imageView, the image view that displays your takenImage.
Hope it helps.
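For reference, the same idea as a self-contained sketch using UIGraphicsImageRenderer; imageView is assumed to be the on-screen UIImageView, and any aspect-fit letterboxing offsets are ignored for brevity:

// Sketch: burn the text field into the photo at the matching position.
// `takenImage`, `imageView`, and `textOverlay` are assumed properties/outlets.
func mergedImage(takenImage: UIImage, imageView: UIImageView, textOverlay: UITextField) -> UIImage {
    // Scale factors from on-screen view coordinates to image pixels.
    let scaleX = takenImage.size.width / imageView.bounds.width
    let scaleY = takenImage.size.height / imageView.bounds.height

    let renderer = UIGraphicsImageRenderer(size: takenImage.size)
    return renderer.image { _ in
        takenImage.draw(in: CGRect(origin: .zero, size: takenImage.size))
        let textRect = CGRect(
            x: textOverlay.frame.origin.x * scaleX,
            y: textOverlay.frame.origin.y * scaleY,
            width: textOverlay.frame.width * scaleX,
            height: textOverlay.frame.height * scaleY)
        textOverlay.drawText(in: textRect)
    }
}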
