I'm playing around with Apple's CIDetector to detect a face from live video using the front camera of the phone. I've been following this article and have nearly got it working. The problem I'm having is that a new red box is created on every frame rather than the same one being reused.
The tutorial I'm following is meant to include code that stops that from happening, but it doesn't seem to be working. I'm still very new to Swift and am struggling to work it out.
Here's the code I'm using:
func drawFaceMasksFor(features: [CIFaceFeature], bufferFrame: CGRect) {
    CATransaction.begin()
    CATransaction.setValue(kCFBooleanTrue, forKey: kCATransactionDisableActions)

    // Hide all current masks
    view.layer.sublayers?.filter({ $0.name == "MaskFace" }).forEach { $0.isHidden = true }

    // Do nothing if no face is detected
    guard !features.isEmpty else {
        CATransaction.commit()
        return
    }

    // The faces are detected at the video image's size, but the screen we
    // show them on might be smaller or bigger than the video, so we need
    // to re-calculate the face bounds to fit the screen.
    let xScale = view.frame.width / bufferFrame.width
    let yScale = view.frame.height / bufferFrame.height
    let transform = CGAffineTransform(rotationAngle: .pi).translatedBy(x: -bufferFrame.width,
                                                                       y: -bufferFrame.height)

    for feature in features {
        var faceRect = feature.bounds.applying(transform)
        faceRect = CGRect(x: faceRect.minX * xScale,
                          y: faceRect.minY * yScale,
                          width: faceRect.width * xScale,
                          height: faceRect.height * yScale)

        // Reuse the face's layer
        var faceLayer = view.layer.sublayers?
            .filter { $0.name == "MaskFace" && $0.isHidden == true }
            .first
        if faceLayer == nil {
            // prepare layer
            faceLayer = CALayer()
            faceLayer?.backgroundColor = UIColor.clear.cgColor
            faceLayer?.borderColor = UIColor.red.cgColor
            faceLayer?.borderWidth = 3.0
            faceLayer?.frame = faceRect
            faceLayer?.masksToBounds = true
            faceLayer?.contentsGravity = kCAGravityResizeAspectFill
            view.layer.addSublayer(faceLayer!)
        } else {
            faceLayer?.frame = faceRect
            faceLayer?.position = faceRect.origin
            faceLayer?.isHidden = false
        }
        // You can add more masks for the left eye, right eye, mouth, etc.
    }
    CATransaction.commit()
}
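One thing worth checking here (my own observation, not from the tutorial): the layer created in the if faceLayer == nil branch is never given the name "MaskFace", so the reuse filter can never find it on later frames and a fresh layer is added every time. A one-line sketch of a possible fix inside that branch:

// Possible fix: name the layer so the reuse filter can find it next frame.
faceLayer?.name = "MaskFace"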
I have it set up so that the mesh I'm adding to my scene renders correctly (right side up, so that the text I add as a child is legible), but the rotation is always the same globally, whereas I want the rotation (on the XZ plane) to be towards the camera. I'm not exactly sure how to go about this.
My code looks like this:
@objc func handleTap() {
    guard let view = self.view else { return }
    guard let query = view.makeRaycastQuery(from: view.center, allowing: .estimatedPlane, alignment: .any) else { return }
    guard let raycastResult = view.session.raycast(query).first else { return }

    // set a transform to an existing entity
    var transform = Transform(matrix: raycastResult.worldTransform)

    // Create a new anchor to add content to
    if oldAnchor != nil {
        view.scene.removeAnchor(oldAnchor!)
    }
    let anchor = AnchorEntity()
    oldAnchor = anchor
    view.scene.anchors.append(anchor)

    let material = SimpleMaterial(color: .lightGray, isMetallic: false)

    // Add a curve entity
    let curveEntity = try! ModelEntity.loadModel(named: "curve")
    curveEntity.transform = transform
    curveEntity.transform.rotation = simd_quatf(angle: 0, axis: SIMD3<Float>(1, 1, 1))
    curveEntity.scale = [0.0002, 0.0002, 0.0002]
    let curveRadians = 90.0 * Float.pi / 180.0
    curveEntity.setOrientation(simd_quatf(angle: curveRadians, axis: SIMD3<Float>(1, 0, 0)), relativeTo: curveEntity)

    // Adding text and children and materials etc.
    ...
    anchor.addChild(curveEntity)
}
Without the curveEntity.transform.rotation = simd_quatf(angle: 0, axis: SIMD3<Float>(1, 1, 1)) line, the rotation of the curve is relative to the normal of the surface the raycast hits, instead of being constant.
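One possible approach (my sketch, not from the original post; the helper name is hypothetical): RealityKit's Entity.look(at:from:relativeTo:) can orient an entity toward a target, and flattening the target's Y component keeps the rotation on the XZ plane so the entity only yaws toward the camera:

// Hypothetical helper: yaw an entity toward the camera on the XZ plane only.
func faceCameraOnXZPlane(_ entity: Entity, in arView: ARView) {
    let entityPosition = entity.position(relativeTo: nil)
    // Pin the target to the entity's height so only the yaw changes.
    var target = arView.cameraTransform.translation
    target.y = entityPosition.y
    entity.look(at: target, from: entityPosition, relativeTo: nil)
}

Depending on which axis your model treats as forward, you may need to apply an extra fixed rotation after the look(at:) call.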
I am developing an ARKit application using 3D models, with gestures added to move, rotate, and zoom them.
Now I am facing one issue, but I am not sure what it relates to: is there a problem with the 3D model, or is something missing in my program?
The issue is that the 3D model I am using appears very big and goes off the screen. I am trying to scale it down, but it is still very big.
Here is my code:
@IBOutlet var mySceneView: ARSCNView!
var selectedNode = SCNNode()
var prevLoc = CGPoint()
var touchCount: Int = 0

override func viewDidLoad() {
    super.viewDidLoad()
    self.lblTitle.text = self.sceneTitle
    let mySCN = SCNScene(named: "art.scnassets/\(self.sceneImagename).scn")!
    self.mySceneView.scene = mySCN

    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    cameraNode.position = SCNVector3Make(0, 0, 0)
    self.mySceneView.scene.rootNode.addChildNode(cameraNode)
    self.mySceneView.allowsCameraControl = true
    self.mySceneView.autoenablesDefaultLighting = true

    let tapGesture = UITapGestureRecognizer(target: self, action: #selector(detailPage.doHandleTap(_:)))
    let panGesture = UIPanGestureRecognizer(target: self, action: #selector(detailPage.doHandlePan(_:)))
    let gesturesArray = NSMutableArray()
    gesturesArray.add(tapGesture)
    gesturesArray.add(panGesture)
    gesturesArray.addObjects(from: self.mySceneView.gestureRecognizers!)
    self.mySceneView.gestureRecognizers = (gesturesArray as! [UIGestureRecognizer])
}
//MARK:- Handle Gesture
@objc func doHandlePan(_ sender: UIPanGestureRecognizer) {
    var delta = sender.translation(in: self.view)
    let loc = sender.location(in: self.view)

    if sender.state == .began {
        self.prevLoc = loc
        self.touchCount = sender.numberOfTouches
    } else if sender.state == .changed {
        delta = CGPoint(x: loc.x - prevLoc.x, y: loc.y - prevLoc.y)
        prevLoc = loc
        if self.touchCount != sender.numberOfTouches {
            return
        }

        var rotMat = SCNMatrix4()
        if touchCount == 2 {
            rotMat = SCNMatrix4MakeTranslation(Float(delta.x * 0.025), Float(delta.y * -0.025), 0)
        } else {
            let rotMatX = SCNMatrix4Rotate(SCNMatrix4Identity, Float((1.0 / 100) * delta.y), 1, 0, 0)
            let rotMatY = SCNMatrix4Rotate(SCNMatrix4Identity, Float((1.0 / 100) * delta.x), 0, 1, 0)
            rotMat = SCNMatrix4Mult(rotMatX, rotMatY)
        }

        let transMat = SCNMatrix4MakeTranslation(selectedNode.position.x, selectedNode.position.y, selectedNode.position.z)
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, SCNMatrix4Invert(transMat))

        let parentNodeTransMat = SCNMatrix4MakeTranslation((selectedNode.parent?.worldPosition.x)!, (selectedNode.parent?.worldPosition.y)!, (selectedNode.parent?.worldPosition.z)!)
        let parentNodeMatWOTrans = SCNMatrix4Mult(selectedNode.parent!.worldTransform, SCNMatrix4Invert(parentNodeTransMat))
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, parentNodeMatWOTrans)

        let camorbitNodeTransMat = SCNMatrix4MakeTranslation((self.mySceneView.pointOfView?.worldPosition.x)!, (self.mySceneView.pointOfView?.worldPosition.y)!, (self.mySceneView.pointOfView?.worldPosition.z)!)
        let camorbitNodeMatWOTrans = SCNMatrix4Mult(self.mySceneView.pointOfView!.worldTransform, SCNMatrix4Invert(camorbitNodeTransMat))
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, SCNMatrix4Invert(camorbitNodeMatWOTrans))
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, rotMat)
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, camorbitNodeMatWOTrans)

        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, SCNMatrix4Invert(parentNodeMatWOTrans))
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, transMat)
    }
}
@objc func doHandleTap(_ sender: UITapGestureRecognizer) {
    let p = sender.location(in: self.mySceneView)
    let hitResults = self.mySceneView.hitTest(p, options: nil)
    if p.x > self.mySceneView.frame.size.width - 100 || p.y < 100 {
        self.mySceneView.allowsCameraControl = !self.mySceneView.allowsCameraControl
    }
    if hitResults.count > 0 {
        let result = hitResults[0]
        let material = result.node.geometry?.firstMaterial
        selectedNode = result.node
        SCNTransaction.begin()
        SCNTransaction.animationDuration = 0.3
        SCNTransaction.completionBlock = {
            SCNTransaction.begin()
            SCNTransaction.animationDuration = 0.3
            SCNTransaction.commit()
        }
        material?.emission.contents = UIColor.white
        SCNTransaction.commit()
    }
}
My question is:
Can we make a 3D object model of any size aspect-fit the screen, centred on the screen? Please suggest if there is a way to do this.
Any guidance or suggestions will be highly appreciated.
What you need to do is use getBoundingSphereCenter to get the bounding sphere's size, then project that to the screen. Alternatively, take the ratio of that radius over the distance between the SceneKit camera and the object's position. Either way you will know how big the object will look on the screen. To scale it down, you simply set the scale property of your object.
For the second part, you can use projectPoint.
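As a rough sketch of that idea (my code, not the answerer's; node and scnView are placeholder names for your model node and SCNView): in Swift, getBoundingSphereCenter surfaces as the boundingSphere tuple property on SCNNode:

let (center, radius) = node.boundingSphere
let worldCenter = node.convertPosition(center, to: nil)
let cameraPosition = scnView.pointOfView!.worldPosition

// Distance between the camera and the object's centre.
let dx = worldCenter.x - cameraPosition.x
let dy = worldCenter.y - cameraPosition.y
let dz = worldCenter.z - cameraPosition.z
let distance = (dx * dx + dy * dy + dz * dz).squareRoot()

// The radius-to-distance ratio is a rough measure of how big the
// object will look on screen; scale the node down if it is too large.
let apparentRatio = Float(radius) / distance

// Screen-space position of the object's centre (the "second part").
let screenCenter = scnView.projectPoint(worldCenter)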
The way I handled this is by making sure the 3D model always has a fixed size.
For example, whether the 3D model is a small cup or a large house, I ensure it always has a width of 25 cm in the scene's coordinate space (while maintaining the ratios between x, y, and z).
You can calculate the width of the bounding box of the node like this:
let mySCN = SCNScene(named: "art.scnassets/\(self.sceneImagename).scn")!
let minX = mySCN.rootNode.boundingBox.min.x
let maxX = mySCN.rootNode.boundingBox.max.x
// change 0.25 to whatever you need
// this value is in meters
let scaleValue = 0.25 / abs(minX - maxX)
// scale all axes of the node using `scaleValue`
// this maintains ratios and does not stretch the model
mySCN.rootNode.scale = SCNVector3(scaleValue, scaleValue, scaleValue)
self.mySceneView.scene = mySCN
You can also calculate the scale value based on height or depth by using the y or z value of the bounding box.
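For example, a height-based variant of the same calculation (a small sketch along the same lines) would be:

// Same idea, but normalising the model's height instead of its width.
let minY = mySCN.rootNode.boundingBox.min.y
let maxY = mySCN.rootNode.boundingBox.max.y
let scaleValue = 0.25 / abs(maxY - minY)
mySCN.rootNode.scale = SCNVector3(scaleValue, scaleValue, scaleValue)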
My question pertains to how to mimic this carousel view (YouTube video) using only a UIView, not its layer or a CALayer, which means actually transforming the UIViews themselves.
I found a Stack Overflow question with an answer that can actually convert a CATransform3D into a CGAffineTransform. It was written by some genius here, but my problem is a little unique.
The animation you see below is created using CALayer. I need to create this same animation but by transforming the UIView instead of its layer.
What it's Supposed to look like:
Code (Creates animation using Layers):
This takes an image card, which is a CALayer() with an image attached to it, and transforms it, placing it in the carousel of images.
Note: turnCarousel() is also called when the user pans, which moves/animates the carousel.
let transformLayer = CATransformLayer()

func turnCarousel() {
    guard let transformSubLayers = transformLayer.sublayers else { return }

    let segmentForImageCard = CGFloat(360 / transformSubLayers.count)
    var angleOffset = currentAngle

    for layer in transformSubLayers {
        var transform = CATransform3DIdentity
        transform.m34 = -1 / 500
        transform = CATransform3DRotate(transform, degreeToRadians(deg: angleOffset), 0, 1, 0)
        transform = CATransform3DTranslate(transform, 0, 0, 175)
        CATransaction.setAnimationDuration(0)
        layer.transform = transform
        angleOffset += segmentForImageCard
    }
}
What It Currently Looks Like:
So basically it's close, but there seems to be a scaling issue between the cards that should appear at the front of the carousel and the cards at the back.
For this, what I did is use a UIImageView as the base view for the carousel and then added more UIImageViews to it as cards. So now we are trying to do the transformation on a UIImageView/UIView.
Code:
var carouselTestView = UIImageView()

func turnCarouselTestCarousel() {
    let segmentForImageCard = CGFloat(360 / carouselTestView.subviews.count)
    var angleOffset = currentAngleTestView

    for subview in carouselTestView.subviews {
        var transform2 = CATransform3DIdentity
        transform2.m34 = -1 / 500
        transform2 = CATransform3DRotate(transform2, degreeToRadians(deg: angleOffset), 0, 1, 0)
        transform2 = CATransform3DTranslate(transform2, 0, 0, 175)
        CATransaction.setAnimationDuration(0)

        // m13, m23, m33, m43 are not important since the destination is a flat XY plane.
        // m31, m32 are not important since they would multiply with z = 0.
        // m34 is zeroed here, so that neutralizes foreshortening. We can't avoid that.
        // m44 is implicitly 1 as CGAffineTransform's m33.
        let fullTransform: CATransform3D = transform2
        let affine = CGAffineTransform(a: fullTransform.m11, b: fullTransform.m12, c: fullTransform.m21, d: fullTransform.m22, tx: fullTransform.m41, ty: fullTransform.m42)
        subview.transform = affine
        angleOffset += segmentForImageCard
    }
}
The sub-images that actually make up the carousel are added with this function, which simply loops through images named 1...6 in my assets folder.
Code:
func CreateCarousel() {
    carouselTestView.frame.size = CGSize(width: self.view.frame.width, height: self.view.frame.height / 2.9)
    carouselTestView.center = CGPoint(x: self.view.frame.width * 0.5, y: self.view.frame.height * 0.5)
    carouselTestView.alpha = 1.0
    carouselTestView.backgroundColor = UIColor.clear
    carouselTestView.isUserInteractionEnabled = true
    self.view.insertSubview(carouselTestView, at: 0)

    for i in 1 ... 6 {
        addImageCardTestCarousel(name: "\(i)")
    }

    // Lay out the carousel for the first time so that it looks like an actual carousel.
    turnCarouselTestCarousel()

    let panGestureRecognizerTestCarousel = UIPanGestureRecognizer(target: self, action: #selector(self.performPanActionTestCarousel(recognizer:)))
    panGestureRecognizerTestCarousel.delegate = self
    carouselTestView.addGestureRecognizer(panGestureRecognizerTestCarousel)
}
The addImageCardTestCarousel function is here:
Code:
func addImageCardTestCarousel(name: String) {
    let imageCardSize = CGSize(width: carouselTestView.frame.width / 2, height: carouselTestView.frame.height)

    let cardPanel = UIImageView()
    cardPanel.frame.size = CGSize(width: imageCardSize.width, height: imageCardSize.height)
    cardPanel.frame.origin = CGPoint(x: carouselTestView.frame.size.width / 2 - imageCardSize.width / 2, y: carouselTestView.frame.size.height / 2 - imageCardSize.height / 2)

    guard let imageCardImage = UIImage(named: name) else { return }
    cardPanel.image = imageCardImage
    cardPanel.contentMode = .scaleAspectFill
    cardPanel.layer.masksToBounds = true
    cardPanel.layer.borderColor = UIColor.white.cgColor
    cardPanel.layer.borderWidth = 1
    cardPanel.layer.cornerRadius = cardPanel.frame.height / 50

    carouselTestView.addSubview(cardPanel)
}
Purpose:
The purpose of this is that I want to build a UI that can place UIViews on the rotating cards you see, and a CALayer cannot add a UIView as a subview; it can only add the UIView's layer to its own layer. So to solve this problem I need to achieve this animation with UIViews, not CALayers.
I solved it. The views that appear to be behind the frontmost view were actually grabbing all the touches: even if you touched a card right at the front, the card behind it would swallow the touches. So I made a function that calculates which views are at the front, then enables touches for them and disables touches for the rest. When two cards are stacked on top of each other, this stops the card at the back from stealing user interaction from the card at the front.
Code:
func DetermineFrontViews(view subview: UIView, angle angleOffset: CGFloat) {
    let looped = Int(angleOffset / 360) // must round down to Int()
    let loopSubtractingReset = CGFloat(360 * looped) // subtract 360 for each full loop made
    let finalAngle = angleOffset - loopSubtractingReset

    if (finalAngle >= -70 && finalAngle <= 70) || finalAngle >= 290 || finalAngle <= -260 {
        print("In front of view")
        if subview.isUserInteractionEnabled == false {
            subview.isUserInteractionEnabled = true
        }
    } else {
        print("Back of view")
        if subview.isUserInteractionEnabled == true {
            subview.isUserInteractionEnabled = false
        }
    }
}
I added this to the turn function to see if it could keep track of the first card being either in the back of the carousel or the front.
if subview.layer.name == "1" {
    DetermineFrontViews(view: subview, angle: angleOffset)
}
I'm trying to make a game where you tilt your phone to keep a ball inside a boundary. I have the tilt-to-move working, but the ball just goes through all my boundaries, and I can't figure out how to make it stop when it comes into contact with one. Here is my code:
override func didMove(to view: SKView) {
    let border = SKPhysicsBody(edgeLoopFrom: self.frame)
    self.physicsBody = border

    boundary = (self.childNode(withName: "boundry") as! SKSpriteNode) // the boundary is spelled wrong

    airplane = SKSpriteNode(imageNamed: "ball image")
    airplane.physicsBody = SKPhysicsBody(circleOfRadius: 10)
    airplane.position = CGPoint(x: -211.163, y: 367.3)
    airplane.size = CGSize(width: 50, height: 50)
    airplane.physicsBody?.isDynamic = true
    airplane.physicsBody?.affectedByGravity = false
    airplane.physicsBody?.allowsRotation = true
    airplane.physicsBody?.pinned = false
    self.addChild(airplane)

    if motionManager.isAccelerometerAvailable {
        // 2
        motionManager.accelerometerUpdateInterval = 0.01
        motionManager.startAccelerometerUpdates(to: .main) { (data, error) in
            guard let data = data, error == nil else {
                return
            }
            // 3
            let currentX = self.airplane.position.x
            self.destX = currentX + CGFloat(data.acceleration.x * 500)
            let currentY = self.airplane.position.y
            self.destY = currentY + CGFloat(data.acceleration.y * 500)
        }
    }
}

override func update(_ currentTime: TimeInterval) {
    let action = SKAction.moveTo(x: destX, duration: 1)
    let action2 = SKAction.moveTo(y: destY, duration: 1)
    airplane.run(action)
    airplane.run(action2)
}
In SpriteKit, things don't collide unless you give them matching collision bit masks, i.e. one body's collisionBitMask needs at least one non-zero bit in common with the other body's categoryBitMask. Try setting the masks on both bodies to 1 to begin with.
Another thing is that you may need to add a distance constraint to the airplane so that it cannot escape from the border if its speed is too high.
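A minimal sketch of both suggestions, reusing the border and airplane bodies from the question (the category constants and the 300-point limit are placeholders I've introduced):

// Hypothetical category constants, not from the original question.
let ballCategory: UInt32 = 1 << 0
let borderCategory: UInt32 = 1 << 1

// Matching masks: each body's collision mask references the other's category.
airplane.physicsBody?.categoryBitMask = ballCategory
airplane.physicsBody?.collisionBitMask = borderCategory
border.categoryBitMask = borderCategory
border.collisionBitMask = ballCategory

// Optional safety net: keep the ball within a fixed distance of the scene's
// centre so a fast-moving body cannot tunnel out of the border.
let range = SKRange(lowerLimit: 0, upperLimit: 300) // placeholder radius
airplane.constraints = [SKConstraint.distance(range, to: CGPoint(x: 0, y: 0))]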
The code to accomplish this is pretty straightforward:
var cropNode = SKCropNode()
var shape = SKShapeNode(rectOf: CGSize(width: 100, height: 100))
shape.fillColor = SKColor.orange
var shape2 = SKShapeNode(rectOf: CGSize(width: 25, height: 25))
shape2.fillColor = SKColor.red
shape2.blendMode = .subtract
shape.addChild(shape2)
cropNode.addChild(shape)
cropNode.position = CGPoint(x: 150, y: 170)
cropNode.maskNode = shape
container.addChild(cropNode)
Same code, same iOS, different results = no bueno
Here is a method that will generate a maskNode for you using shaders:
func generateMaskNode(from mask: SKNode) -> SKNode {
    var returningNode: SKNode!
    autoreleasepool {
        let view = SKView()
        // First let's flatten the node
        let texture = view.texture(from: mask)
        let node = SKSpriteNode(texture: texture)
        // Next apply the shader to the flattened node to allow for color swapping
        node.shader = SKShader(fileNamed: "shader.fsh")
        let texture2 = view.texture(from: node)
        returningNode = SKSpriteNode(texture: texture2)
    }
    return returningNode
}
It requires you to create a file called shader.fsh; the code inside looks like this:
void main() {
    // Find the pixel at the coordinate of the actual texture
    vec4 val = texture2D(u_texture, v_tex_coord);
    // If the color value of that pixel is 0,0,0
    if (val.r == 0.0 && val.g == 0.0 && val.b == 0.0) {
        // Turn the pixel off
        gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
    } else {
        // Otherwise, keep the original color
        gl_FragColor = val;
    }
}
To use it, you need black pixels instead of alpha as the means of determining what gets cropped, so your code should now look like this:
var cropNode = SKCropNode()
var shape = SKShapeNode(rectOf: CGSize(width: 100, height: 100))
shape.fillColor = SKColor.orange
var shape2 = SKShapeNode(rectOf: CGSize(width: 25, height: 25))
shape2.fillColor = SKColor.orange
shape2.blendMode = .subtract
shape.addChild(shape2)
let mask = generateMaskNode(from: shape)
cropNode.addChild(shape)
cropNode.position = CGPoint(x: 150, y: 170)
cropNode.maskNode = mask
container.addChild(cropNode)
The reason why subtract works on the simulator and not the device is that the simulator subtracts the alpha channel, whereas the device does not. The device is actually behaving correctly, since alpha is not supposed to be subtracted; it is supposed to be ignored.
Do note, you do not have to use black as your crop color; you can change the shader to crop on any color of your choosing. Just change the line:
if (val.r == 0.0 && val.g == 0.0 && val.b == 0.0)
to a color you desire. (In your case, you could check val.r == 0.0 && val.g == 1.0 && val.b == 0.0 to crop only on green.)
Result of above code on a device
Edit: I wanted to note that subtract blending is not necessary; this would also work:
var cropNode = SKCropNode()
var shape = SKShapeNode(rectOf: CGSize(width: 100, height: 100))
shape.fillColor = SKColor.orange
var shape2 = SKShapeNode(rectOf: CGSize(width: 25, height: 25))
shape2.fillColor = SKColor.black
shape2.blendMode = .replace
shape.addChild(shape2)
let mask = generateMaskNode(from: shape)
cropNode.addChild(shape)
cropNode.position = CGPoint(x: 150, y: 170)
cropNode.maskNode = mask
container.addChild(cropNode)
Which begs the question, now that I cannot test: is my function even needed?
The following code should in theory work, since .replace swaps the underlying pixels for the ones above, so the alpha should transfer over. If anybody can test this, please let me know whether it works.
var cropNode = SKCropNode()
var shape = SKShapeNode(rectOf: CGSize(width: 100, height: 100))
shape.fillColor = SKColor.orange
var shape2 = SKShapeNode(rectOf: CGSize(width: 25, height: 25))
shape2.fillColor = SKColor(red: 0, green: 0, blue: 0, alpha: 0)
shape2.blendMode = .replace
shape.addChild(shape2)
cropNode.addChild(shape)
cropNode.position = CGPoint(x: 150, y: 170)
cropNode.maskNode = shape.copy() as! SKNode
container.addChild(cropNode)
.replace only replaces the color, not the alpha.
Since inverse masking doesn't seem to be inherently available in SpriteKit (at least not in a way that works on devices), I think the following is the closest thing to an answer:
let background = SKSpriteNode(imageNamed:"stocksnap")
background.position = CGPoint(x:65, y:background.size.height/2)
addChild(background)
let container = SKNode()
let cropNode = SKCropNode()
let bgCopy = SKSpriteNode(imageNamed:"stocksnap")
bgCopy.position = background.position
cropNode.addChild(bgCopy)
let cover = SKShapeNode(rect: CGRect(x:0,y:0,width:200,height:200))
cover.position = CGPoint(x:80,y:150)
cover.fillColor = SKColor.orange
container.addChild(cover)
let highlight = SKShapeNode(rectOf: CGSize(width:100,height:100))
highlight.position = CGPoint(x:cover.position.x+cover.frame.size.width/2,y:cover.position.y+cover.frame.size.height/2)
highlight.fillColor = SKColor.red
cropNode.maskNode = highlight
container.addChild(cropNode)
addChild(container)
Here is a screenshot from a device using the above technique.
This just uses a duplicate of the background, masks it, and overlays it in the same position to create the inverse masking effect. In situations where you want to duplicate whatever is on the screen, you could use something like this:
func captureScreen() -> SKSpriteNode {
    var image = UIImage()
    if let view = self.view {
        UIGraphicsBeginImageContextWithOptions(view.bounds.size, false, UIScreen.main.scale)
        view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
        if let imageFromContext = UIGraphicsGetImageFromCurrentImageContext() {
            image = imageFromContext
        }
        UIGraphicsEndImageContext()
    }
    let texture = SKTexture(image: image)
    let sprite = SKSpriteNode(texture: texture)
    // scale is applicable if using fixed screen sizes that aren't the actual width and height
    sprite.scale(to: CGSize(width: size.width, height: size.height))
    sprite.anchorPoint = CGPoint(x: 0, y: 0)
    return sprite
}
Hopefully someone finds a better way or an update is made to SpriteKit to support inverse masking, but in the meantime this works fine for my use case.