Any idea how I can put lipstick on the lips detected by face detection? I can already fill the lips with a flat color, but I want them to look glossy and shiny.
How can I use textures and shades in an ML Kit iOS app?
Currently I find the lip contour points, create a CAShapeLayer from them, and fill it with a color.
In ARKit we can use SceneKit and an SCNView, so adding materials and textures is easy.
But how can we do that with ML Kit?
Q1. Is it possible to use an SCNView with ML Kit for materials and textures?
Q2. Alternatively, how can I draw an image over a set of contour points (lips, eyes, eyebrows)?
// MARK: Contour func
private func addContours(for face: VisionFace, width: CGFloat, height: CGFloat) {
    // Load the 3D face overlay scene
    guard let facez = SCNScene(named: "8.scn") else {
        return
    }
    facez.rootNode.scale = SCNVector3(1, 1, 1)

    // Map the detected face frame into scene coordinates
    let multipl: CGFloat = 200.0
    let xoff: CGFloat = 0.98
    let yoff: CGFloat = 1.76
    let xp = (face.frame.origin.x / multipl) - xoff
    let yp = (face.frame.origin.y / multipl) - yoff
    facez.rootNode.position = SCNVector3(xp, yp, -1)
    facez.rootNode.eulerAngles = SCNVector3(-1, 1, 0)

    cameraView.allowsCameraControl = true
    cameraView.autoenablesDefaultLighting = true
    cameraView.scene = facez

    let materials = facez.rootNode.geometry?.firstMaterial
    materials?.diffuse.contents = UIColor.red
}
I also tried drawing images at the landmark points like this:
public static func addleftImage(
    atPoint point: CGPoint,
    to view: UIView,
    color: UIColor,
    radius: CGFloat
) {
    let divisor: CGFloat = 2.0
    let xCoord = point.x - radius / divisor
    let yCoord = point.y - radius / divisor
    let circleRect = CGRect(x: xCoord, y: yCoord, width: radius, height: radius)
    let circleView = UIImageView(frame: circleRect)
    circleView.image = #imageLiteral(resourceName: "leftEye")
    circleView.layer.cornerRadius = radius / divisor
    circleView.alpha = Constants.circleViewAlpha
    circleView.backgroundColor = color
    view.addSubview(circleView)
}
But no luck!
Q1: I'm afraid that's out of the scope of ML Kit.
Q2: If it helps, ML Kit has a quickstart app in Swift that can show contour points on the face: https://github.com/firebase/quickstart-ios/tree/master/mlvision/MLVisionExample
A transformMatrix is applied, and this is the line that adds the contour points:
https://github.com/firebase/quickstart-ios/blob/master/mlvision/MLVisionExample/ViewController.swift#L845
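To get a glossy look rather than a flat fill while staying in 2D (ML Kit only gives you contour points, so SceneKit materials don't apply here), one option is to use the lip contour as a mask over a gradient. A minimal sketch of that idea, assuming you already have the lip contour as an array of CGPoints in view coordinates:

import UIKit

/// Fills the lip contour with a vertical gradient plus a bright top stop,
/// which reads as "gloss" much more than a flat fill does.
func addGlossyLips(points: [CGPoint], to view: UIView, color: UIColor) {
    guard points.count > 2 else { return }

    // Closed path through the contour points (in view coordinates)
    let path = UIBezierPath()
    path.move(to: points[0])
    points.dropFirst().forEach { path.addLine(to: $0) }
    path.close()

    // Move the path into the gradient layer's local coordinate space
    let frame = path.bounds
    path.apply(CGAffineTransform(translationX: -frame.origin.x, y: -frame.origin.y))

    // The lip shape masks the gradient, so color only shows inside the lips
    let mask = CAShapeLayer()
    mask.path = path.cgPath

    let gradient = CAGradientLayer()
    gradient.frame = frame
    gradient.colors = [
        UIColor.white.withAlphaComponent(0.8).cgColor, // specular highlight
        color.cgColor,
        color.withAlphaComponent(0.9).cgColor
    ]
    gradient.locations = [0.0, 0.35, 1.0]
    gradient.mask = mask
    view.layer.addSublayer(gradient)
}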
I am relatively new to coding, and I have recently been working on a program that lets a user scan a crystal with the iPhone's rear camera and identifies what kind of crystal it is. I used Create ML to build the model and Vision to identify the crystal. What I can't figure out is how to get the results into the UI I built; right now they only print to the Xcode console.
Here's a picture of the Storyboard:
I assume you want to draw a box around the detected crystal?
You should be getting a boundingBox of your crystal that looks something like this:
(0.166666666666667, 0.35, 0.66666666666667, 0.3)
These are "normalized" coordinates, which means that they are relative to the image that you send to Vision. I explain this more in detail here...
What you are used to (UIKit: origin at the top left, coordinates in points)
What Vision returns (origin at the bottom left, coordinates normalized to 0–1)
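As a quick numeric check (my own example, ignoring aspect-fit margins for a moment): for a 300 × 300 image displayed 1:1, the box above works out like this:

import Vision

// Vision's box: normalized, origin at the bottom left
let box = CGRect(x: 0.1667, y: 0.35, width: 0.6667, height: 0.3)

// Flip to UIKit's top-left origin while still normalized
let flipped = CGRect(x: box.origin.x,
                     y: 1 - box.origin.y - box.height,
                     width: box.width,
                     height: box.height)

// Scale up to pixel coordinates
let rect = VNImageRectForNormalizedRect(flipped, 300, 300)
// rect is now roughly (50, 105, 200, 90) in UIKit coordinates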
You need to convert these "normalized" coordinates to UIKit coordinates that you can use. To do that, I have this converting function:
func getConvertedRect(boundingBox: CGRect, inImage imageSize: CGSize, containedIn containerSize: CGSize) -> CGRect {
    let rectOfImage: CGRect

    let imageAspect = imageSize.width / imageSize.height
    let containerAspect = containerSize.width / containerSize.height

    if imageAspect > containerAspect { /// image extends left and right
        let newImageWidth = containerSize.height * imageAspect /// the width of the overflowing image
        let newX = -(newImageWidth - containerSize.width) / 2
        rectOfImage = CGRect(x: newX, y: 0, width: newImageWidth, height: containerSize.height)
    } else { /// image extends top and bottom
        let newImageHeight = containerSize.width * (1 / imageAspect) /// the height of the overflowing image
        let newY = -(newImageHeight - containerSize.height) / 2
        rectOfImage = CGRect(x: 0, y: newY, width: containerSize.width, height: newImageHeight)
    }

    /// flip the y-axis: Vision's origin is at the bottom left, UIKit's at the top left
    let newOriginBoundingBox = CGRect(
        x: boundingBox.origin.x,
        y: 1 - boundingBox.origin.y - boundingBox.height,
        width: boundingBox.width,
        height: boundingBox.height
    )

    var convertedRect = VNImageRectForNormalizedRect(newOriginBoundingBox, Int(rectOfImage.width), Int(rectOfImage.height))

    /// add the margins
    convertedRect.origin.x += rectOfImage.origin.x
    convertedRect.origin.y += rectOfImage.origin.y

    return convertedRect
}
You can use it like this:
let convertedRect = self.getConvertedRect(
    boundingBox: observation.boundingBox,
    inImage: image.size, /// image is the image that you feed into Vision
    containedIn: self.previewView.bounds.size /// the size of your camera feed's preview view
)
self.drawBoundingBox(rect: convertedRect)
/// draw the rectangle
func drawBoundingBox(rect: CGRect) {
    let uiView = UIView(frame: rect)
    previewView.addSubview(uiView)
    uiView.backgroundColor = UIColor.orange.withAlphaComponent(0.2)
    uiView.layer.borderColor = UIColor.orange.cgColor
    uiView.layer.borderWidth = 3
}
Result (I'm doing a VNDetectRectanglesRequest):
If you want to "track" the detected object while your phone is moving, check out my answer here
I am trying to draw borders around the frames of lines (obtained from Google Cloud Vision text recognition) on an image in iOS with Swift.
I get the lines from each block and read each line's frame via the line.frame property provided by Google.
I found a transformation algorithm and applied it:
private func createScaledFrame(
    featureFrame: CGRect,
    imageSize: CGSize, viewFrame: CGRect)
    -> CGRect {
    let viewSize = viewFrame.size

    // 2: aspect ratios of the view and the image
    let resolutionView = viewSize.width / viewSize.height
    let resolutionImage = imageSize.width / imageSize.height

    // 3: scale factor of an aspect-fit image inside the view
    var scale: CGFloat
    if resolutionView > resolutionImage {
        scale = viewSize.height / imageSize.height
    } else {
        scale = viewSize.width / imageSize.width
    }

    // 4: scale the feature's size
    let featureWidthScaled = featureFrame.size.width * scale
    let featureHeightScaled = featureFrame.size.height * scale

    // 5: letterbox margins of the scaled image inside the view
    let imageWidthScaled = imageSize.width * scale
    let imageHeightScaled = imageSize.height * scale
    let imagePointXScaled = (viewSize.width - imageWidthScaled) / 2
    let imagePointYScaled = (viewSize.height - imageHeightScaled) / 2

    // 6: scale the feature's origin and add the margins
    let featurePointXScaled = imagePointXScaled + featureFrame.origin.x * scale
    let featurePointYScaled = imagePointYScaled + featureFrame.origin.y * scale

    // 7
    return CGRect(x: featurePointXScaled,
                  y: featurePointYScaled,
                  width: featureWidthScaled,
                  height: featureHeightScaled)
}
The problem is that as the number of lines and blocks increases, the boxes come out too small, as shown in the image. The computed scale is around 2.0–2.25. I don't know where I am going wrong; perhaps the issue is with the aspect-ratio handling or something else I've missed.
The code where I am applying the above method:
let r = self.createScaledFrame(featureFrame: line.frame, imageSize: imageSize, viewFrame: self.imageView.frame)
let lyr = self.createShapeLayer(frame: r)
self.imageView.layer.addSublayer(lyr)
The createShapeLayer method:
private func createShapeLayer(frame: CGRect) -> CAShapeLayer {
    // 1
    let bpath = UIBezierPath(rect: frame)
    let shapeLayer = CAShapeLayer()
    shapeLayer.path = bpath.cgPath

    // 2
    shapeLayer.strokeColor = Constants.lineColor
    shapeLayer.fillColor = Constants.fillColor
    shapeLayer.lineWidth = Constants.lineWidth

    return shapeLayer
}
Thanks in advance!
I'm displaying some basic 3D geometry within my SpriteKit scene using an instance of SK3DNode to display the contents of a SceneKit scene, as explained in Apple's article here.
I have been able to position the node and 3D contents as I want using SceneKit node transforms and the position/viewport size of the SK3DNode.
Next, I want to display some other sprites in my SpriteKit scene overlaid on top of the 3D content, but I am unable to do so: The contents of the SK3DNode are always drawn on top.
I have tried specifying the zPosition property of both the SK3DNode and the SKSpriteNode, to no avail.
From Apple's documentation on SK3DNode:
Use SK3DNode objects to incorporate 3D SceneKit content into a SpriteKit-based game. When SpriteKit renders the node, the SceneKit scene is animated and rendered first. Then this rendered image is composited into the SpriteKit scene. Use the scnScene property to specify the SceneKit scene to be rendered.
(emphasis mine)
It is a bit ambiguous with regard to z-order (it only seems to describe the temporal order in which rendering takes place).
I have put together a minimal demo project on GitHub; the relevant code is:
1. SceneKit Scene
import SceneKit
class SceneKitScene: SCNScene {

    override init() {
        super.init()

        let box = SCNBox(width: 10, height: 10, length: 10, chamferRadius: 0)
        let material = SCNMaterial()
        material.diffuse.contents = UIColor.green
        box.materials = [material]

        let boxNode = SCNNode(geometry: box)
        boxNode.transform = SCNMatrix4MakeRotation(.pi/2, 1, 1, 1)
        self.rootNode.addChildNode(boxNode)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
2. SpriteKit Scene
import SpriteKit
class SpriteKitScene: SKScene {

    override init(size: CGSize) {
        super.init(size: size)

        // Scene Background
        self.backgroundColor = .red

        // 3D Node
        let objectNode = SK3DNode(viewportSize: size)
        objectNode.scnScene = SceneKitScene()
        addChild(objectNode)

        objectNode.position = CGPoint(x: size.width/2, y: size.height/2)

        let camera = SCNCamera()
        let cameraNode = SCNNode()
        cameraNode.camera = camera
        objectNode.pointOfView = cameraNode
        objectNode.pointOfView?.position = SCNVector3(x: 0, y: 0, z: 60)
        objectNode.zPosition = -100

        // 2D Sprite
        let sprite = SKSpriteNode(color: .yellow, size: CGSize(width: 250, height: 60))
        sprite.position = objectNode.position
        sprite.zPosition = +100
        addChild(sprite)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
...And the rendered result is:
(I want the yellow rectangle above the green box)
I filed a Technical Support Incident with Apple about this, and they just got back to me. The solution is actually very simple.
If you want 2D sprites to render on top of SK3DNodes, you need to stop the contents of the SK3DNodes from writing to the depth buffer.
To do this, you just need to set writesToDepthBuffer to false on the SCNMaterial.
...
let material = SCNMaterial()
material.diffuse.contents = UIColor.green
material.writesToDepthBuffer = false
...
Boom. Works.
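If your SceneKit content is loaded from a file rather than built in code, one way to apply this to every material (my own addition, not from Apple's reply) is to walk the node tree:

// Disable depth writes on every material in the scene so that 2D sprites
// composited later are not depth-tested against the 3D content
scene.rootNode.enumerateHierarchy { node, _ in
    node.geometry?.materials.forEach { $0.writesToDepthBuffer = false }
}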
Please note that this is just something I stumbled upon. I have no idea why it works and I probably wouldn't trust it without further understanding, but maybe it'll help find an explanation or a real solution.
It seems that having an SKShapeNode (with a fill) alongside an SK3DNode (either as a sibling, part of a sibling tree, or a child) makes everything draw in the proper order. The SKShapeNode doesn't seem to need to intersect with the SK3DNode either.
The fill is important: a transparent fill makes this not work. Stroke doesn't seem to have any effect.
An SKShapeNode of extremely small size and almost zero alpha fill works too.
Here's my playground:
import PlaygroundSupport
import SceneKit
import SpriteKit
let viewSize = CGSize(width: 300, height: 150)
let viewportSize: CGFloat = viewSize.height * 0.75
func skview(color: UIColor, index: Int) -> SKView {
    let scene = SKScene(size: viewSize)
    scene.backgroundColor = color

    let view = SKView(
        frame: CGRect(
            origin: CGPoint(x: 0, y: CGFloat(index) * viewSize.height),
            size: viewSize
        )
    )
    view.presentScene(scene)
    view.showsDrawCount = true

    // Draw the box of the 3d node view port
    let viewport = SKSpriteNode(color: .orange, size: CGSize(width: viewportSize, height: viewportSize))
    viewport.position = CGPoint(x: viewSize.width / 2, y: viewSize.height / 2)
    scene.addChild(viewport)

    return view
}

func cube() -> SK3DNode {
    let mat = SCNMaterial()
    mat.diffuse.contents = UIColor.green

    let box = SCNBox(width: viewportSize, height: viewportSize, length: viewportSize, chamferRadius: 0)
    box.firstMaterial = mat

    let boxNode3d = SCNNode(geometry: box)
    boxNode3d.runAction(.repeatForever(.rotateBy(x: 10, y: 10, z: 10, duration: 10)))

    let scene = SCNScene()
    scene.rootNode.addChildNode(boxNode3d)

    let boxNode2d = SK3DNode(viewportSize: CGSize(width: viewportSize, height: viewportSize))
    boxNode2d.position = CGPoint(x: viewSize.width / 2, y: viewSize.height / 2)
    boxNode2d.scnScene = scene

    return boxNode2d
}

func shape() -> SKShapeNode {
    let shape = SKShapeNode(rectOf: CGSize(width: viewSize.height / 4, height: viewSize.height / 4))
    shape.strokeColor = .clear
    shape.fillColor = .purple
    return shape
}

func rect(_ color: UIColor) -> SKSpriteNode {
    let sp = SKSpriteNode(texture: nil, color: color, size: CGSize(width: 200, height: viewSize.height / 4))
    sp.position = CGPoint(x: viewSize.width / 2, y: viewSize.height / 2)
    return sp
}

// The original issue, untouched.
func v1() -> SKView {
    let v = skview(color: .red, index: 0)
    v.scene?.addChild(cube())
    v.scene?.addChild(rect(.yellow))
    return v
}

// Shape added as sibling after the 3d node. Notice that it doesn't overlap the SK3DNode.
func v2() -> SKView {
    let v = skview(color: .blue, index: 1)
    v.scene?.addChild(cube())
    v.scene?.addChild(shape())
    v.scene?.addChild(rect(.yellow))
    return v
}

// Shape added to the 3d node.
func v3() -> SKView {
    let v = skview(color: .magenta, index: 2)
    let box = cube()
    box.addChild(shape())
    v.scene?.addChild(box)
    v.scene?.addChild(rect(.yellow))
    return v
}

// 3d node added after, but zPos set to -1.
func v4() -> SKView {
    let v = skview(color: .cyan, index: 3)
    v.scene?.addChild(shape())
    v.scene?.addChild(rect(.yellow))
    let box = cube()
    box.zPosition = -1
    v.scene?.addChild(box)
    return v
}

// Shape added after the 3d node, but not as a sibling.
func v5() -> SKView {
    let v = skview(color: .green, index: 4)
    let parent = SKNode()
    parent.addChild(cube())
    parent.addChild(rect(.yellow))
    v.scene?.addChild(parent)
    v.scene?.addChild(shape())
    return v
}
let container = UIView(frame: CGRect(origin: .zero, size: CGSize(width: viewSize.width, height: viewSize.height * 5)))
container.addSubview(v1())
container.addSubview(v2())
container.addSubview(v3())
container.addSubview(v4())
container.addSubview(v5())
PlaygroundPage.current.liveView = container
TL;DR
In your code, try:
...
let shape = SKShapeNode(rectOf: CGSize(width: 0.01, height: 0.01))
shape.strokeColor = .clear
shape.fillColor = UIColor.black.withAlphaComponent(0.01)
// 3D Node
let objectNode = SK3DNode(viewportSize: size)
objectNode.addChild(shape)
...
I'm trying to make a hexagon grid out of triangles without altering any pivot points, but I can't seem to position the triangles correctly to make a single hexagon. I'm creating SCNNodes with UIBezierPaths to form triangles and then rotating the bezier paths. This works fine UNTIL I try to use a parametric equation to position the triangles around a circle to form the hexagon; then they don't end up in the correct positions. Can you help me spot where I'm going wrong?
class TrianglePlane: SCNNode {
    var size: CGFloat = 0.1
    var coords: SCNVector3 = SCNVector3Zero
    var innerCoords: Int = 0

    init(coords: SCNVector3, innerCoords: Int, identifier: Int) {
        super.init()
        self.coords = coords
        self.innerCoords = innerCoords
        setup()
    }

    init(identifier: Int) {
        super.init()
        // super.init(identifier: identifier)
        setup()
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func setup() {
        let myPath = path()
        let geo = SCNShape(path: myPath, extrusionDepth: 0)
        geo.firstMaterial?.diffuse.contents = UIColor.red
        geo.firstMaterial?.blendMode = .multiply
        self.geometry = geo
    }

    func path() -> UIBezierPath {
        let max: CGFloat = self.size
        let min: CGFloat = 0

        let bPath = UIBezierPath()
        bPath.move(to: .zero)
        bPath.addLine(to: CGPoint(x: max / 2,
                                  y: UIBezierPath.middlePeak(height: max)))
        bPath.addLine(to: CGPoint(x: max, y: min))
        bPath.close()
        return bPath
    }
}
extension TrianglePlane {
    static func generateHexagon() -> [TrianglePlane] {
        var myArr: [TrianglePlane] = []
        let colors = [UIColor.red, UIColor.green,
                      UIColor.yellow, UIColor.systemTeal,
                      UIColor.cyan, UIColor.magenta]

        for i in 0 ..< 6 {
            let tri = TrianglePlane(identifier: 0)
            tri.geometry?.firstMaterial?.diffuse.contents = colors[i]
            tri.position = SCNVector3(-0.05, 0, -0.5)

            // Rotate bezier path
            let angleInDegrees = (Float(i) + 1) * 180.0
            print(angleInDegrees)
            let angle = CGFloat(deg2rad(angleInDegrees))
            let geo = tri.geometry as! SCNShape
            let path = geo.path!
            path.rotateAroundCenter(angle: angle)
            geo.path = path

            // Position triangle in hexagon
            let radius = Float(tri.size) / 2
            let deg: Float = Float(i) * 60
            let radians = deg2rad(-deg)

            let x1 = tri.position.x + radius * cos(radians)
            let y1 = tri.position.y + radius * sin(radians)
            tri.position.x = x1
            tri.position.y = y1

            myArr.append(tri)
        }
        return myArr
    }

    static func deg2rad(_ number: Float) -> Float {
        return number * Float.pi / 180
    }
}
extension UIBezierPath {
    func rotateAroundCenter(angle: CGFloat) {
        let center = self.bounds.center
        var transform = CGAffineTransform.identity
        transform = transform.translatedBy(x: center.x, y: center.y)
        transform = transform.rotated(by: angle)
        transform = transform.translatedBy(x: -center.x, y: -center.y)
        self.apply(transform)
    }

    static func middlePeak(height: CGFloat) -> CGFloat {
        return sqrt(3.0) / 2 * height
    }
}

extension CGRect {
    var center: CGPoint {
        return CGPoint(x: self.midX, y: self.midY)
    }
}
What it currently looks like:
What it SHOULD look like:
I created two versions – SceneKit and RealityKit.
SceneKit (macOS version)
The simplest way to compose a hexagon is to use six non-uniformly scaled, flat SCNPyramids with shifted pivot points. Each "triangle" must be rotated in 60-degree increments (.pi/3).
import SceneKit
class ViewController: NSViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        let sceneView = self.view as! SCNView
        let scene = SCNScene()
        sceneView.scene = scene
        sceneView.allowsCameraControl = true
        sceneView.backgroundColor = NSColor.white

        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        scene.rootNode.addChildNode(cameraNode)
        cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)

        for i in 1...6 {
            let triangleNode = SCNNode(geometry: SCNPyramid(width: 1.15,
                                                            height: 1,
                                                            length: 1))
            // the depth of the pyramid is almost zero
            triangleNode.scale = SCNVector3(5, 5, 0.001)

            // move the pivot point from the pyramid's base to its upper vertex
            triangleNode.simdPivot.columns.3.y = 1

            triangleNode.geometry?.firstMaterial?.diffuse.contents = NSColor(
                calibratedHue: CGFloat(i)/6,
                saturation: 1.0,
                brightness: 1.0,
                alpha: 1.0)

            triangleNode.rotation = SCNVector4(0, 0, 1,
                                               -CGFloat.pi/3 * CGFloat(i))
            scene.rootNode.addChildNode(triangleNode)
        }
    }
}
RealityKit (iOS version)
In this project I generated a triangle with the help of MeshDescriptor and copied it 5 more times.
import UIKit
import RealityKit
class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    let anchor = AnchorEntity()
    let camera = PointOfView()
    let indices: [UInt32] = [0, 1, 2]

    override func viewDidLoad() {
        super.viewDidLoad()

        self.arView.environment.background = .color(.black)
        self.arView.cameraMode = .nonAR
        self.camera.position.z = 9

        let positions: [simd_float3] = [[ 0.00, 0.00, 0.00],
                                        [ 0.52, 0.90, 0.00],
                                        [-0.52, 0.90, 0.00]]

        var descriptor = MeshDescriptor(name: "Hexagon's side")
        descriptor.materials = .perFace(self.indices)
        descriptor.primitives = .triangles(self.indices)
        descriptor.positions = MeshBuffers.Positions(positions[0...2])

        var material = UnlitMaterial()
        let mesh: MeshResource = try! .generate(from: [descriptor])

        let colors: [UIColor] = [.systemRed, .systemGreen, .yellow,
                                 .systemTeal, .cyan, .magenta]

        for i in 0...5 {
            material.color = .init(tint: colors[i], texture: nil)
            let triangleModel = ModelEntity(mesh: mesh,
                                            materials: [material])

            let trianglePivot = Entity() // made to control the pivot point
            trianglePivot.addChild(triangleModel)
            trianglePivot.orientation = simd_quatf(angle: -.pi/3 * Float(i),
                                                   axis: [0, 0, 1])
            self.anchor.addChild(trianglePivot)
        }
        self.anchor.addChild(self.camera)
        self.arView.scene.anchors.append(self.anchor)
    }
}
There are a few problems with the code as it stands. Firstly, as pointed out in the comments, the parametric equation for the translations needs to be rotated by 90 degrees:
let deg: Float = (Float(i) * 60) - 90.0
The next issue is that the centre of the bounding box of the triangle and the centroid of the triangle are not the same point. This is important because the parametric equation calculates where the centroids of the triangles must be located, not the centres of their bounding boxes. So we're going to need a way to calculate the centroid. This can be done by adding the following extension method to TrianglePlane:
extension TrianglePlane {
    /// Calculates the centroid of the triangle
    func centroid() -> CGPoint {
        let max: CGFloat = self.size
        let min: CGFloat = 0
        let peak = UIBezierPath.middlePeak(height: max)

        let xAvg = (min + max / CGFloat(2.0) + max) / CGFloat(3.0)
        let yAvg = (min + peak + min) / CGFloat(3.0)

        return CGPoint(x: xAvg, y: yAvg)
    }
}
This allows the correct radius for the parametric equation to be calculated:
let height = Float(UIBezierPath.middlePeak(height: tri.size))
let centroid = tri.centroid()
let radius = height - Float(centroid.y)
The final correction is to calculate the offset between the origin of the triangle and the centroid. This correction depends on whether the triangle has been flipped by the rotation or not:
let x1 = radius * cos(radians)
let y1 = radius * sin(radians)
let dx = Float(-centroid.x)
let dy = (i % 2 == 0) ? Float(centroid.y) - height : Float(-centroid.y)
tri.position.x = x1 + dx
tri.position.y = y1 + dy
Putting all this together gives the desired result.
A full working ViewController can be found in this gist.
Note the code can be greatly simplified by making the origin of the triangle be the centroid.
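As a rough sketch of that simplification (my own variant, not part of the original answer): if path() emits the vertices relative to the centroid, the dx/dy correction disappears and each triangle can be positioned at (radius · cos, radius · sin) directly:

// Hypothetical replacement for path(): the triangle's centroid sits at the
// origin, so the path rotates around the centroid and no offset is needed.
func path() -> UIBezierPath {
    let max: CGFloat = self.size
    let peak = UIBezierPath.middlePeak(height: max)
    // Centroid of the vertices (0, 0), (max/2, peak), (max, 0)
    let centroid = CGPoint(x: max / 2, y: peak / 3)

    let bPath = UIBezierPath()
    bPath.move(to: CGPoint(x: -centroid.x, y: -centroid.y))
    bPath.addLine(to: CGPoint(x: max / 2 - centroid.x, y: peak - centroid.y))
    bPath.addLine(to: CGPoint(x: max - centroid.x, y: -centroid.y))
    bPath.close()
    return bPath
}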
I gave myself an exercise to learn Swift, based on an example I found on the Apple Swift website:
As you can see, there's a river with a few dots right in the middle of it, forming a path. So I looked for a similar river image on the internet and created an Xcode playground. This is what I have now:
So basically I have a UIView with a subview consisting of the river image I found and a dot made with a UIBezierPath.
My first question is: is this the right way to draw on a UIView, i.e. using a UIBezierPath? My second question is: how do I draw the dot at a precise coordinate inside the UIView? (With a UIBezierPath or something else?)
To be more precise, my intent is to write an algorithm that recognizes the image and, based on pixel color, draws a line of dots from the start to the end of the river, passing through its middle.
To draw a UIBezierPath on a UIView, do this:
let xCoord: CGFloat = 10
let yCoord: CGFloat = 20
let radius: CGFloat = 8

let dotPath = UIBezierPath(ovalInRect: CGRectMake(xCoord, yCoord, radius, radius))

let layer = CAShapeLayer()
layer.path = dotPath.CGPath
layer.strokeColor = UIColor.blueColor().CGColor

drawingView.layer.addSublayer(layer)
This code will draw a dot at coordinates (10, 20) on your view; note that (10, 20) is the top-left corner of the oval's bounding rect, and the rect's width and height (here 8) are the dot's diameter.
Update: Swift 4+
let xCoord = 10
let yCoord = 20
let radius = 8
let dotPath = UIBezierPath(ovalIn: CGRect(x: xCoord, y: yCoord, width: radius, height: radius))
let layer = CAShapeLayer()
layer.path = dotPath.cgPath
layer.strokeColor = UIColor.blue.cgColor
drawingView.layer.addSublayer(layer)
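On the "precise coordinate" part of the question: ovalIn treats the rect's origin as the top-left corner of the oval's bounding box, not its center. A small helper along those lines (my own addition, not part of the original answer) that centers a filled dot on a point:

// Draws a filled dot whose center is exactly at `center`
func drawDot(at center: CGPoint, radius: CGFloat, on view: UIView) {
    let rect = CGRect(x: center.x - radius,
                      y: center.y - radius,
                      width: radius * 2,
                      height: radius * 2)
    let dotPath = UIBezierPath(ovalIn: rect)

    let layer = CAShapeLayer()
    layer.path = dotPath.cgPath
    layer.fillColor = UIColor.blue.cgColor
    view.layer.addSublayer(layer)
}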
Here is an attempt at the lines part of the equation:
let offset: CGFloat = 0
let squareWidth = 20
let squareRows = Int(view.frame.size.width / CGFloat(squareWidth))
let squareColumns = Int(view.frame.size.height / CGFloat(squareWidth))

for row in 0...squareRows {
    for column in 0...squareColumns {
        // Build the square
        let rectanglePath = UIBezierPath(roundedRect: CGRect(
            x: view.frame.minX + CGFloat(squareWidth * row) - offset,
            y: view.frame.minY + CGFloat(column * squareWidth),
            width: 20, height: 20
        ), cornerRadius: 0.00)

        // Style the square
        let a = CAShapeLayer()
        a.path = rectanglePath.cgPath
        a.strokeColor = UIColor.white.cgColor
        a.fillColor = nil
        a.opacity = 0.3
        a.lineWidth = 1.5
        view.layer.insertSublayer(a, at: 1)
    }
}