How to draw a 2D matrix of squares on iOS?

What is the best way to draw a matrix of squares on iOS using SpriteKit? I am a complete iOS beginner and I'm not sure what the correct approach is.
I was thinking of creating a sprite representing a square and then adding that sprite several times at specific locations. The data in the matrix will change over time and I need to reflect that in what is drawn on the screen, though the shape of the matrix will remain the same.

This will get a basic 2D matrix of squares on the screen. If you need to keep track of these tile sprites, you can create an array of tiles and modify them elsewhere in your code; see the sketch after the Swift 3 version below. Hope this is helpful.
func setupMap() {
    let tilesWide = 10
    let tilesTall = 10
    for i in 0..<tilesWide {
        for j in 0..<tilesTall {
            let tile = SKSpriteNode(color: SKColor.redColor(), size: CGSize(width: 5, height: 5))
            tile.anchorPoint = CGPointZero
            let x = CGFloat(i) * tile.size.width
            let y = CGFloat(j) * tile.size.height
            tile.position = CGPoint(x: x, y: y)
            self.addChild(tile)
        }
    }
}

Update for Swift 3:
func setupMap() {
    let tilesWide = 10
    let tilesTall = 10
    for i in 0..<tilesWide {
        for j in 0..<tilesTall {
            let tile = SKSpriteNode(color: SKColor.red, size: CGSize(width: 5, height: 5))
            tile.anchorPoint = .zero
            let x = CGFloat(i) * tile.size.width
            let y = CGFloat(j) * tile.size.height
            tile.position = CGPoint(x: x, y: y)
            self.addChild(tile)
        }
    }
}
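If the matrix data changes over time, one way to reflect that (a minimal sketch, assuming this lives in the same SKScene subclass as setupMap(); the [[Bool]] data matrix and refreshMap(from:) are hypothetical names) is to keep the tiles in a 2D array so they can be recolored later without rebuilding the scene:
var tiles: [[SKSpriteNode]] = []

func setupMap() {
    let tilesWide = 10
    let tilesTall = 10
    for i in 0..<tilesWide {
        var column: [SKSpriteNode] = []
        for j in 0..<tilesTall {
            let tile = SKSpriteNode(color: .red, size: CGSize(width: 5, height: 5))
            tile.anchorPoint = .zero
            tile.position = CGPoint(x: CGFloat(i) * tile.size.width,
                                    y: CGFloat(j) * tile.size.height)
            addChild(tile)
            column.append(tile)
        }
        tiles.append(column)
    }
}

// Recolor the existing tiles whenever the backing data changes.
// `matrix` is a hypothetical [[Bool]] with the same dimensions as `tiles`.
func refreshMap(from matrix: [[Bool]]) {
    for (i, column) in tiles.enumerated() {
        for (j, tile) in column.enumerated() {
            tile.color = matrix[i][j] ? .green : .red
        }
    }
}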

Related

Trying makeup using MLKit (iOS, Swift)

Any idea how I can put lipstick on the lips using face detection? I have managed to fill the lips with a color, but I want to make them look glossy and shiny. Any idea how to use textures and shading in an MLKit iOS app? Currently I simply find the lip points, create a CAShapeLayer, and fill it with a color. In ARKit we can use SceneKit and an SCNView, so we can easily add materials, textures and so on, but how can we do that with MLKit?
Q1. Is it possible to use an SCNView with MLKit for materials and textures?
Q2. Or how can I draw an image at all the points (lips, eyes, eyebrows)?
// MARK: Contour func
private func addContours(for face: VisionFace, width: CGFloat, height: CGFloat) {
    guard let facez = SCNScene(named: "8.scn") else {
        return
    }
    facez.rootNode.scale = SCNVector3(1, 1, 1)
    // Empirical offsets to map the detected face frame into scene space.
    let multipl: CGFloat = 200.0
    let xoff: CGFloat = 0.98
    let yoff: CGFloat = 1.76
    let xp = (face.frame.origin.x / multipl) - xoff
    let yp = (face.frame.origin.y / multipl) - yoff
    facez.rootNode.position = SCNVector3(xp, yp, -1)
    facez.rootNode.eulerAngles = SCNVector3(-1, 1, 0)
    cameraView.allowsCameraControl = true
    cameraView.autoenablesDefaultLighting = true
    cameraView.scene = facez
    let materials = facez.rootNode.geometry?.firstMaterial
    materials?.diffuse.contents = UIColor.red
}
I also tried it like this:
public static func addleftImage(atPoint point: CGPoint,
                                to view: UIView,
                                color: UIColor,
                                radius: CGFloat) {
    let divisor: CGFloat = 2.0
    let xCoord = point.x - radius / divisor
    let yCoord = point.y - radius / divisor
    let circleRect = CGRect(x: xCoord, y: yCoord, width: radius, height: radius)
    let circleView = UIImageView(frame: circleRect)
    circleView.image = #imageLiteral(resourceName: "leftEye")
    circleView.layer.cornerRadius = radius / divisor
    circleView.alpha = Constants.circleViewAlpha
    circleView.backgroundColor = color
    view.addSubview(circleView)
}
But no luck!
Q1: I'm afraid that's out of the scope of MLKit.
Q2: If it helps, MLKit has a quickstart app in Swift that can show contour points on the face: https://github.com/firebase/quickstart-ios/tree/master/mlvision/MLVisionExample
A transformMatrix is applied, and this is the line that adds the contour points:
https://github.com/firebase/quickstart-ios/blob/master/mlvision/MLVisionExample/ViewController.swift#L845
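Building on the CAShapeLayer approach from the question, one way to fake gloss without SceneKit (a minimal sketch, not an MLKit API; addGlossyLips and lipPoints are hypothetical names, and the points are assumed to already be converted to the view's coordinate space) is to use the lip contour as a mask over a gradient instead of a flat fill:
import UIKit

/// Overlays a simple "gloss" on the lips: the contour path masks a
/// gradient so a lighter band across the lips reads as a highlight.
func addGlossyLips(lipPoints: [CGPoint], on view: UIView) {
    guard lipPoints.count > 2 else { return }
    let path = UIBezierPath()
    path.move(to: lipPoints[0])
    lipPoints.dropFirst().forEach { path.addLine(to: $0) }
    path.close()

    let mask = CAShapeLayer()
    mask.path = path.cgPath

    let gradient = CAGradientLayer()
    gradient.frame = view.bounds
    gradient.colors = [UIColor.red.withAlphaComponent(0.9).cgColor,
                       UIColor.white.withAlphaComponent(0.6).cgColor,
                       UIColor.red.withAlphaComponent(0.9).cgColor]
    gradient.locations = [0.0, 0.45, 1.0]  // white band stands in for a specular highlight
    gradient.mask = mask                   // only the lip shape shows through
    view.layer.addSublayer(gradient)
}
The white band here only imitates a specular highlight; truly shiny materials need real lighting, which is what SceneKit/ARKit provide and MLKit does not.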

How do I make a hexagon with 6 triangular SCNNodes?

I'm trying to make a hexagon grid with triangles without altering any pivot points, but I can't seem to position the triangles correctly to make a single hexagon. I'm creating SCNNodes with UIBezierPaths to form triangles and then rotating the bezier paths. This seems to work fine UNTIL I try to use a parametric equation to position the triangles around a circle to form the hexagon; then they don't end up in the correct position. Can you help me spot what I'm doing wrong here?
class TrianglePlane: SCNNode {
    var size: CGFloat = 0.1
    var coords: SCNVector3 = SCNVector3Zero
    var innerCoords: Int = 0

    init(coords: SCNVector3, innerCoords: Int, identifier: Int) {
        super.init()
        self.coords = coords
        self.innerCoords = innerCoords
        setup()
    }

    init(identifier: Int) {
        super.init()
        setup()
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func setup() {
        let myPath = path()
        let geo = SCNShape(path: myPath, extrusionDepth: 0)
        geo.firstMaterial?.diffuse.contents = UIColor.red
        geo.firstMaterial?.blendMode = .multiply
        self.geometry = geo
    }

    func path() -> UIBezierPath {
        let max: CGFloat = self.size
        let min: CGFloat = 0
        let bPath = UIBezierPath()
        bPath.move(to: .zero)
        bPath.addLine(to: CGPoint(x: max / 2,
                                  y: UIBezierPath.middlePeak(height: max)))
        bPath.addLine(to: CGPoint(x: max, y: min))
        bPath.close()
        return bPath
    }
}
extension TrianglePlane {
    static func generateHexagon() -> [TrianglePlane] {
        var myArr: [TrianglePlane] = []
        let colors = [UIColor.red, UIColor.green,
                      UIColor.yellow, UIColor.systemTeal,
                      UIColor.cyan, UIColor.magenta]
        for i in 0 ..< 6 {
            let tri = TrianglePlane(identifier: 0)
            tri.geometry?.firstMaterial?.diffuse.contents = colors[i]
            tri.position = SCNVector3(-0.05, 0, -0.5)
            // Rotate the bezier path
            let angleInDegrees = (Float(i) + 1) * 180.0
            print(angleInDegrees)
            let angle = CGFloat(deg2rad(angleInDegrees))
            let geo = tri.geometry as! SCNShape
            let path = geo.path!
            path.rotateAroundCenter(angle: angle)
            geo.path = path
            // Position the triangle in the hexagon
            let radius = Float(tri.size) / 2
            let deg: Float = Float(i) * 60
            let radians = deg2rad(-deg)
            let x1 = tri.position.x + radius * cos(radians)
            let y1 = tri.position.y + radius * sin(radians)
            tri.position.x = x1
            tri.position.y = y1
            myArr.append(tri)
        }
        return myArr
    }

    static func deg2rad(_ number: Float) -> Float {
        return number * Float.pi / 180
    }
}
extension UIBezierPath {
    func rotateAroundCenter(angle: CGFloat) {
        let center = self.bounds.center
        var transform = CGAffineTransform.identity
        transform = transform.translatedBy(x: center.x, y: center.y)
        transform = transform.rotated(by: angle)
        transform = transform.translatedBy(x: -center.x, y: -center.y)
        self.apply(transform)
    }

    /// Height of an equilateral triangle with the given side length.
    static func middlePeak(height: CGFloat) -> CGFloat {
        return sqrt(3.0) / 2 * height
    }
}

extension CGRect {
    var center: CGPoint {
        return CGPoint(x: self.midX, y: self.midY)
    }
}
What it currently looks like:
What it SHOULD look like:
I created two versions: SceneKit and RealityKit.
SceneKit (macOS version)
The simplest way to compose a hexagon is to use six non-uniformly scaled, flat SCNPyramids with shifted pivot points. Each "triangle" must be rotated in 60-degree increments (.pi/3).
import SceneKit

class ViewController: NSViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let sceneView = self.view as! SCNView
        let scene = SCNScene()
        sceneView.scene = scene
        sceneView.allowsCameraControl = true
        sceneView.backgroundColor = NSColor.white

        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        scene.rootNode.addChildNode(cameraNode)
        cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)

        for i in 1...6 {
            let triangleNode = SCNNode(geometry: SCNPyramid(width: 1.15,
                                                            height: 1,
                                                            length: 1))
            // The depth of the pyramid is almost zero.
            triangleNode.scale = SCNVector3(5, 5, 0.001)
            // Move the pivot point from the pyramid's base to its apex.
            triangleNode.simdPivot.columns.3.y = 1
            triangleNode.geometry?.firstMaterial?.diffuse.contents = NSColor(
                calibratedHue: CGFloat(i) / 6,
                saturation: 1.0,
                brightness: 1.0,
                alpha: 1.0)
            triangleNode.rotation = SCNVector4(0, 0, 1,
                                               -CGFloat.pi / 3 * CGFloat(i))
            scene.rootNode.addChildNode(triangleNode)
        }
    }
}
RealityKit (iOS version)
In this project I generated a triangle with the help of MeshDescriptor and copied it 5 more times.
import UIKit
import RealityKit

class ViewController: UIViewController {
    @IBOutlet var arView: ARView!
    let anchor = AnchorEntity()
    let camera = PointOfView()
    let indices: [UInt32] = [0, 1, 2]

    override func viewDidLoad() {
        super.viewDidLoad()
        self.arView.environment.background = .color(.black)
        self.arView.cameraMode = .nonAR
        self.camera.position.z = 9

        let positions: [simd_float3] = [[ 0.00, 0.00, 0.00],
                                        [ 0.52, 0.90, 0.00],
                                        [-0.52, 0.90, 0.00]]
        var descriptor = MeshDescriptor(name: "Hexagon's side")
        descriptor.materials = .perFace(self.indices)
        descriptor.primitives = .triangles(self.indices)
        descriptor.positions = MeshBuffers.Positions(positions[0...2])

        var material = UnlitMaterial()
        let mesh: MeshResource = try! .generate(from: [descriptor])
        let colors: [UIColor] = [.systemRed, .systemGreen, .yellow,
                                 .systemTeal, .cyan, .magenta]
        for i in 0...5 {
            material.color = .init(tint: colors[i], texture: nil)
            let triangleModel = ModelEntity(mesh: mesh,
                                            materials: [material])
            let trianglePivot = Entity() // made to control the pivot point
            trianglePivot.addChild(triangleModel)
            trianglePivot.orientation = simd_quatf(angle: -.pi / 3 * Float(i),
                                                   axis: [0, 0, 1])
            self.anchor.addChild(trianglePivot)
        }
        self.anchor.addChild(self.camera)
        self.arView.scene.anchors.append(self.anchor)
    }
}
There are a few problems with the code as it stands. Firstly, as pointed out in the comments, the parametric equation for the translations needs to be rotated by 90 degrees:
let deg: Float = (Float(i) * 60) - 90.0
The next issue is that the centre of the bounding box of the triangle and the centroid of the triangle are not the same point. This is important because the parametric equation calculates where the centroids of the triangles must be located, not the centres of their bounding boxes. So we're going to need a way to calculate the centroid. This can be done by adding the following extension method to TrianglePlane:
extension TrianglePlane {
    /// Calculates the centroid of the triangle
    func centroid() -> CGPoint {
        let max: CGFloat = self.size
        let min: CGFloat = 0
        let peak = UIBezierPath.middlePeak(height: max)
        let xAvg = (min + max / CGFloat(2.0) + max) / CGFloat(3.0)
        let yAvg = (min + peak + min) / CGFloat(3.0)
        return CGPoint(x: xAvg, y: yAvg)
    }
}
This allows the correct radius for the parametric equation to be calculated:
let height = Float(UIBezierPath.middlePeak(height: tri.size))
let centroid = tri.centroid()
let radius = height - Float(centroid.y)
The final correction is to calculate the offset between the origin of the triangle and the centroid. This correction depends on whether the triangle has been flipped by the rotation or not:
let x1 = radius * cos(radians)
let y1 = radius * sin(radians)
let dx = Float(-centroid.x)
let dy = (i % 2 == 0) ? Float(centroid.y) - height : Float(-centroid.y)
tri.position.x = x1 + dx
tri.position.y = y1 + dy
Putting all this together gives the desired result.
A full working ViewController can be found in this gist.
Note the code can be greatly simplified by making the origin of the triangle be the centroid.
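For instance, a variant of path() whose origin is already the centroid (a sketch; centeredPath() is a hypothetical name, built from the middlePeak helper above) removes the need for the dx/dy correction entirely:
extension TrianglePlane {
    /// Hypothetical variant of path() whose local origin is the triangle's
    /// centroid, so no dx/dy correction is needed when positioning.
    func centeredPath() -> UIBezierPath {
        let side: CGFloat = self.size
        let peak = UIBezierPath.middlePeak(height: side)
        let cx = side / 2   // centroid x = (0 + side/2 + side) / 3
        let cy = peak / 3   // centroid y = (0 + peak + 0) / 3
        let bPath = UIBezierPath()
        bPath.move(to: CGPoint(x: -cx, y: -cy))
        bPath.addLine(to: CGPoint(x: 0, y: peak - cy))
        bPath.addLine(to: CGPoint(x: side - cx, y: -cy))
        bPath.close()
        return bPath
    }
}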

How to extract the outer lips from feature points using the Vision framework in Swift

I implemented an addFaceLandmarksToImage function to crop the outer lips from an image. The function first detects the face in the image using Vision and converts the face bounding box size and origin to the image's size and origin. Then I use the outerLips face landmark to get its normalized points and draw a line along the outer lips of the image by connecting all of them.
Then I implemented the logic to crop the image. I first converted the normalized points of the outer lips into image coordinates, used a findPoint method to get the left-, right-, top- and bottom-most points, and extracted the outer lips by cropping, showing the result in the processed image view. The issue with this function is that the output is not as expected. Also, if I use an image other than the one in the following link, it crops something other than the outer lips. I could not figure out where I have gone wrong: is it in the calculation of the cropping rect, or should I use another approach (such as OpenCV and a region of interest (ROI)) to extract the outer lips from the image?
video of an application
func addFaceLandmarksToImage(_ face: VNFaceObservation) {
    UIGraphicsBeginImageContextWithOptions(image.size, true, 0.0)
    let context = UIGraphicsGetCurrentContext()

    // Draw the image.
    image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
    context?.translateBy(x: 0, y: image.size.height)
    context?.scaleBy(x: 1.0, y: -1.0)

    // Convert the face rect to image coordinates.
    let w = face.boundingBox.size.width * image.size.width
    let h = face.boundingBox.size.height * image.size.height
    let x = face.boundingBox.origin.x * image.size.width
    let y = face.boundingBox.origin.y * image.size.height
    let cropFace = self.image.cgImage?.cropping(to: CGRect(x: x, y: y, width: w, height: h))
    let ii = UIImage(cgImage: cropFace!)

    // Outer lips.
    context?.saveGState()
    context?.setStrokeColor(UIColor.yellow.cgColor)
    if let landmark = face.landmarks?.outerLips {
        var actualCordinates = [CGPoint]()
        print(landmark.normalizedPoints)
        for i in 0...landmark.pointCount - 1 {
            // The last point is (0, 0).
            let point = landmark.normalizedPoints[i]
            actualCordinates.append(CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))
            if i == 0 {
                context?.move(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))
            } else {
                context?.addLine(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))
            }
        }
        // Find the left-, right-, top- and bottom-most of the actual coordinate points.
        let leftMostPoint = self.findPoint(points: actualCordinates, position: .leftMost)
        let rightMostPoint = self.findPoint(points: actualCordinates, position: .rightMost)
        let topMostPoint = self.findPoint(points: actualCordinates, position: .topMost)
        let buttonMostPoint = self.findPoint(points: actualCordinates, position: .buttonMost)
        print("actualCordinates:", actualCordinates,
              "leftMostPoint:", leftMostPoint,
              "rightMostPoint:", rightMostPoint,
              "topMostPoint:", topMostPoint,
              "buttonMostPoint:", buttonMostPoint)
        let widthDistance = -(leftMostPoint.x - rightMostPoint.x)
        let heightDistance = -(topMostPoint.y - buttonMostPoint.y)

        // Cropping the image (self.image is the original image).
        let cgCroppedImage = self.image.cgImage?.cropping(to: CGRect(x: leftMostPoint.x, y: leftMostPoint.x - heightDistance, width: 1000, height: topMostPoint.y + heightDistance + 500))
        let jj = UIImage(cgImage: cgCroppedImage!)
        self.processedImageView.image = jj
    }
    context?.closePath()
    context?.setLineWidth(8.0)
    context?.drawPath(using: .stroke)
    context?.saveGState()

    // Get the final image.
    let finalImage = UIGraphicsGetImageFromCurrentImageContext()
    // End the drawing context.
    UIGraphicsEndImageContext()
    imageView.image = finalImage
}
Normalized points of the outer lips of an image:
[(0.397705078125, 0.3818359375),
(0.455322265625, 0.390625),
(0.5029296875, 0.38916015625),
(0.548828125, 0.40087890625),
(0.61279296875, 0.3984375),
(0.703125, 0.37890625),
(0.61474609375, 0.21875),
(0.52294921875, 0.1884765625),
(0.431640625, 0.20166015625),
(0.33203125, 0.34423828125)]
Actual coordinate points of the outer lips of an image:
[(3025.379819973372, 1344.4951847679913),
(3207.3986613331363, 1372.2607707381248),
(3357.7955853380263, 1367.633173076436),
(3502.7936454042792, 1404.6539543699473),
(3704.8654099646956, 1396.9412916004658),
(3990.2339324355125, 1335.2399894446135),
(3711.035540180281, 829.2893117666245),
(3421.039420047775, 733.6522934250534),
(3132.5858324691653, 775.3006723802537),
(2817.9091914743185, 1225.7201781179756)]
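As an aside, the extreme points used for the crop rect can be obtained in one pass as the enclosing rectangle of the coordinate list (a sketch; enclosingRect is a hypothetical helper, and note these coordinates are in a bottom-left-origin space while CGImage.cropping(to:) expects a top-left origin, so the rect's y would still need flipping by the image height):
/// Smallest rect enclosing all landmark points (bottom-left origin).
func enclosingRect(of points: [CGPoint]) -> CGRect? {
    guard let first = points.first else { return nil }
    var minX = first.x, maxX = first.x
    var minY = first.y, maxY = first.y
    for p in points.dropFirst() {
        minX = min(minX, p.x); maxX = max(maxX, p.x)
        minY = min(minY, p.y); maxY = max(maxY, p.y)
    }
    return CGRect(x: minX, y: minY, width: maxX - minX, height: maxY - minY)
}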
I also tried the following method, which uses CIDetector to get the mouth position and extracts the outer lips by cropping. The output wasn't good either.
func focusonMouth() {
    let ciimage = CIImage(cgImage: image.cgImage!)
    let options = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: options)!
    let faces = faceDetector.features(in: ciimage)
    if let face = faces.first as? CIFaceFeature {
        if face.hasMouthPosition {
            let crop = image.cgImage?.cropping(to: CGRect(x: face.mouthPosition.x, y: face.mouthPosition.y, width: face.bounds.width - face.mouthPosition.x, height: 200))
            processedImageView.image = imageRotatedByDegrees(oldImage: UIImage(cgImage: crop!), deg: 90)
        }
    }
}
There are two problems:
An input image on iOS can be rotated, with its orientation property marking how it is rotated. The Vision framework will still do its job, but the resulting coordinates will be rotated as well. The simplest solution is to supply an image that is up-oriented (normal rotation); see the sketch below.
The position and size of the located landmarks are relative to the position and size of the detected face. So the found positions and sizes should be scaled by the size and offset of the detected face, not of the whole image.
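For the first point, a minimal sketch (upOriented is a hypothetical helper name) that redraws a UIImage so its orientation metadata becomes .up before it is handed to Vision:
import UIKit

/// Returns a copy of the image redrawn so that imageOrientation == .up,
/// so Vision's coordinates are not rotated relative to the pixels.
func upOriented(_ image: UIImage) -> UIImage {
    guard image.imageOrientation != .up else { return image }
    let format = UIGraphicsImageRendererFormat.default()
    format.scale = image.scale
    return UIGraphicsImageRenderer(size: image.size, format: format).image { _ in
        // draw(in:) applies the orientation, baking it into the pixels.
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}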

ARKit distance between eyes

Hello guys, I'm trying to integrate the Vision framework to detect a face and keep track of both eyes, and then to measure the distance between the centers of the two eyes in the detected face. My problem at the moment is how to get this distance in the real world from the two eye-center points that Vision gives me. After the face is recognised, I use the following code to calculate the distance between the centers of the eyes:
guard let sceneView = self.view as? ARSKView, let currentFrame = sceneView.session.currentFrame else {
    return
}
let w = face.boundingBox.size.width * image.size.width
let h = face.boundingBox.size.height * image.size.height
let x = face.boundingBox.origin.x * image.size.width
let y = face.boundingBox.origin.y * image.size.height

// Left pupil.
context?.saveGState()
context?.setStrokeColor(UIColor.yellow.cgColor)
var leftIrisLocation = CGPoint()
if let landmark = face.landmarks?.leftPupil {
    for i in 0...landmark.pointCount - 1 { // the last point is (0, 0)
        let point = landmark.normalizedPoints[i]
        if i == 0 {
            leftIrisLocation = CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h)
            context?.move(to: leftIrisLocation)
            context?.addEllipse(in: CGRect(origin: leftIrisLocation, size: CGSize(width: 10, height: 10)))
        }
    }
}
context?.closePath()
context?.setLineWidth(8.0)
context?.drawPath(using: .stroke)
context?.saveGState()

// Right pupil.
context?.saveGState()
context?.setStrokeColor(UIColor.yellow.cgColor)
var rightIrisLocation = CGPoint()
if let landmark = face.landmarks?.rightPupil {
    for i in 0...landmark.pointCount - 1 { // the last point is (0, 0)
        let point = landmark.normalizedPoints[i]
        if i == 0 {
            rightIrisLocation = CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h)
            context?.move(to: rightIrisLocation)
            context?.addEllipse(in: CGRect(origin: rightIrisLocation, size: CGSize(width: 10, height: 10)))
        }
    }
}
let distanceFromIris = self.distanceFrom(p1: leftIrisLocation, to: rightIrisLocation)
At this point I have the distance on the image but not the real-world distance, so I try a hitTest on the current frame to get the distance from each eye to the camera, and then calculate the final distance using the Pythagorean theorem, as follows:
var leftDistance: CGFloat?
var rightDistance: CGFloat?
for leftResult in currentFrame.hitTest(leftIrisLocation, types: [.featurePoint, .existingPlaneUsingExtent]) {
    print("Distance left iris to camera")
    print(leftResult.distance)
    leftDistance = leftResult.distance
}
for rightResult in currentFrame.hitTest(rightIrisLocation, types: [.featurePoint, .existingPlaneUsingExtent]) {
    print("Distance right iris to camera")
    print(rightResult.distance)
    rightDistance = rightResult.distance
}
// Note: the original snippet referenced the loop variables outside the loops;
// here the last hit distances are captured so the line below compiles.
let distanceBetweenIris = sqrt((leftDistance! * leftDistance!) - (rightDistance! * rightDistance!))
But this distanceBetweenIris comes out really big, around 20 cm, where it should be closer to 5-6 cm. Any ideas how to reduce this big error?
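One thing worth checking: sqrt(a² − b²) only gives the eye separation if the two camera rays meet at a right angle at one of the eyes, which is far from true here since the rays are nearly parallel, so the error blows up. A sketch of an alternative (assuming both hit tests actually land on the face) is to take the world positions from the hit-test results and measure the distance directly:
import ARKit
import simd

/// Measures the world-space distance between the two hit points directly,
/// instead of combining the two ray lengths with Pythagoras.
/// `left`/`right` are assumed to be the first ARHitTestResult per pupil.
func eyeSeparation(left: ARHitTestResult, right: ARHitTestResult) -> Float {
    let l = left.worldTransform.columns.3   // translation column = hit position
    let r = right.worldTransform.columns.3
    return simd_distance(SIMD3(l.x, l.y, l.z), SIMD3(r.x, r.y, r.z))
}
Note that .featurePoint hits on a moving face are noisy; on devices that support it, ARFaceTrackingConfiguration exposes eye transforms on the face anchor directly.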

Xcode - Swift SpriteKit: SKTileMapNode with a "physics gap"

I am playing around with SpriteKit and SKTileMapNode, and I've got an annoying problem.
That's how it should look.
That's how it actually looks in the simulator/on a device.
The white spaces are terrible and I have no idea how to get rid of them.
My tiles are about 70x70 in the sprite atlas part of the asset catalog, and I've configured my tile map node with a scale of 0.5 and a tile size of 70x70.
While testing some cases, I figured out that this part of the code triggers the error, but I have no idea what could be wrong. Changing the SKPhysicsBody size to a smaller one did not help.
guard let tilemap = childNode(withName: "LevelGround") as? SKTileMapNode else { return }
let tileSize = tilemap.tileSize
let halfWidth = CGFloat(tilemap.numberOfColumns) / 2.0 * tileSize.width
let halfHeight = CGFloat(tilemap.numberOfRows) / 2.0 * tileSize.height
for row in 0..<tilemap.numberOfRows {
    for col in 0..<tilemap.numberOfColumns {
        if tilemap.tileDefinition(atColumn: col, row: row) != nil {
            let x = CGFloat(col) * tileSize.width - halfWidth
            let y = CGFloat(row) * tileSize.height - halfHeight
            let rect = CGRect(x: 0, y: 0, width: tileSize.width, height: tileSize.height)
            let tileNode = SKShapeNode(rect: rect)
            tileNode.position = CGPoint(x: x, y: y)
            tileNode.physicsBody = SKPhysicsBody(rectangleOf: CGSize(width: 70, height: 70), center: CGPoint(x: tileSize.width / 2.0, y: tileSize.height / 2.0))
            tileNode.physicsBody?.isDynamic = false
            tileNode.physicsBody?.collisionBitMask = 2
            tileNode.physicsBody?.categoryBitMask = 1
            tileNode.physicsBody?.contactTestBitMask = 2 | 1
            tileNode.name = "Ground"
            tilemap.addChild(tileNode)
        }
    }
}
Setting tileNode.strokeColor = .clear solved the problem.
Update:
Problem not solved... just moved :/
When checking whether the ground and the player are in contact, the status switches between "contact" and "no contact" at every new tile.
When using a cube instead of a circle, the cube begins to rotate. It seems the corner of the cube gets stuck in the minimal space between the tiles.
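One commonly suggested workaround (a sketch only, reusing tilemap, tileSize, halfWidth and halfHeight from the snippet above) is to give each contiguous horizontal run of ground tiles a single physics body instead of one body per tile, so there are no seams between bodies for the player's corners to catch on:
// One body per contiguous horizontal run of tiles removes the seams
// between per-tile bodies that the cube's corners were catching on.
for row in 0..<tilemap.numberOfRows {
    var col = 0
    while col < tilemap.numberOfColumns {
        guard tilemap.tileDefinition(atColumn: col, row: row) != nil else {
            col += 1
            continue
        }
        let start = col
        while col < tilemap.numberOfColumns,
              tilemap.tileDefinition(atColumn: col, row: row) != nil {
            col += 1
        }
        let runWidth = CGFloat(col - start) * tileSize.width
        let x = CGFloat(start) * tileSize.width - halfWidth
        let y = CGFloat(row) * tileSize.height - halfHeight
        let groundNode = SKNode()
        // SKPhysicsBody(rectangleOf:) is centered on the node, so place
        // the node at the center of the run.
        groundNode.position = CGPoint(x: x + runWidth / 2,
                                      y: y + tileSize.height / 2)
        groundNode.physicsBody = SKPhysicsBody(
            rectangleOf: CGSize(width: runWidth, height: tileSize.height))
        groundNode.physicsBody?.isDynamic = false
        groundNode.physicsBody?.categoryBitMask = 1
        groundNode.physicsBody?.collisionBitMask = 2
        groundNode.name = "Ground"
        tilemap.addChild(groundNode)
    }
}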
