Hi, I need to take a set of photos with my iPhone and read the corresponding projection matrix and the rotation and translation matrices for post-processing. I have never used ARKit or programmed in Swift/Objective-C before. What is the best way to get started?
If you create a default AR project in Xcode then you'll be able to study the basics of getting an AR app running in Swift.
To read the camera parameters you need to implement this ARSession delegate method:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // camera.transform is a non-optional simd_float4x4; column 3 holds the translation
    let transform = frame.camera.transform
    let position = SCNVector3Make(transform.columns.3.x, transform.columns.3.y, transform.columns.3.z)
    let rotation = frame.camera.eulerAngles
    let projection = frame.camera.projectionMatrix
    print(position)
    print(rotation)
    print(projection)
    print("=================")
}
(I.e., copy/paste that code as another method in your ViewController class in the default app; you'll also need to conform to ARSessionDelegate and set the session's delegate to your view controller. The rotation is given in Euler angles.)
Given you've not used Swift before this may not be that helpful. But in theory it's all you need.
ARKit grabs frames at 60 frames per second (you can change this) - this function is called automatically for every new frame.
If you're using it to take photos you'll have to add more code to get all the timing right etc.
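If you want per-photo capture rather than a 60 fps log, here's a minimal sketch of one way to do it, assuming a shouldCapture flag you set from a button tap and a hypothetical saveCapture helper (neither is an Apple API):

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard shouldCapture else { return } // assumed flag, set from a button action
    shouldCapture = false

    let transform = frame.camera.transform          // 4x4 pose: rotation in the upper-left 3x3, translation in column 3
    let projection = frame.camera.projectionMatrix  // 4x4 projection matrix

    // The raw camera image for this frame, as a CVPixelBuffer.
    let image = UIImage(ciImage: CIImage(cvPixelBuffer: frame.capturedImage))

    saveCapture(image: image, transform: transform, projection: projection) // hypothetical helper
}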
I'm trying to get some thoughts on how this might be possible with Apple's ARKit framework in Xcode. I would like to use the measuring approach with the touchesBegan and addDot functions with the AR camera. Then I would like to automate touchesBegan with an object detection model from a Core ML model I have set up. When the Core ML model detects the image, it would use the bounding box as the end points. Is this possible? If it is, I would like some information on where to look for this type of code. Or if it's simple, just leave it here. Thanks!
Here is some starter code as a framework:
// Get a measurement by touching the screen
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // hit-test the touch location against the AR scene, then call addDot(at:)
}

func addDot(at hitResult: ARHitTestResult) {
    // build a small SCNNode (e.g. a sphere) at the hit position (an SCNVector3)
    // and append it to dotNodes
}

// Getting the measurement between two points
func calculate() {
    let start = dotNodes[0]
    let end = dotNodes[1]
    print(start.position)
    print(end.position)
}

// Adding in the vision component from a Core ML model:
// set up a VNCoreMLModel and a VNCoreMLRequest,
// then get the bounding box from the picture
How would I automate this and tell Swift that the bounding boxes are what I want the touches to be for my measurement, and report the output? Thanks!
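It should be possible. Here's a hedged sketch of the glue (YourModel stands in for your generated Core ML class, sceneView for your ARSCNView; the coordinate conversion ignores the camera image's aspect-fill cropping):

func measureDetectedObject(in sceneView: ARSCNView) {
    guard let pixelBuffer = sceneView.session.currentFrame?.capturedImage,
          let visionModel = try? VNCoreMLModel(for: YourModel().model) else { return } // YourModel = placeholder

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let box = (request.results as? [VNRecognizedObjectObservation])?.first?.boundingBox else { return }
        DispatchQueue.main.async {
            // boundingBox is normalized (0...1) with a bottom-left origin, so flip Y.
            let size = sceneView.bounds.size
            let corner1 = CGPoint(x: box.minX * size.width, y: (1 - box.minY) * size.height)
            let corner2 = CGPoint(x: box.maxX * size.width, y: (1 - box.maxY) * size.height)
            // Reuse the same hit test that touchesBegan/addDot would perform for a finger tap.
            if let hit1 = sceneView.hitTest(corner1, types: .featurePoint).first,
               let hit2 = sceneView.hitTest(corner2, types: .featurePoint).first {
                let a = hit1.worldTransform.columns.3
                let b = hit2.worldTransform.columns.3
                let meters = simd_distance(simd_float3(a.x, a.y, a.z), simd_float3(b.x, b.y, b.z))
                print("measured \(meters) m") // or place dot nodes here, as in calculate()
            }
        }
    }
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer).perform([request])
}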
I am close to completing my first project in SceneKit but I'm struggling with the last few steps. It is probably easiest to explain my progress by sharing a short screen capture video of the Xcode Simulator displaying my current scene.
As you can see by the screen capture my project is composed of three elements (this is all done in code, I do not import any external assets):
outside box (defined via six SCNBox objects per corner)
inside sun (defined via a SCNTube object for the circle and UIBezierPath objects per "ray")
position of camera
Based on feedback I have committed the code to GitHub.
Right now the camera is allowed to rotate, as seen in the screen capture, but the centre of rotation of the camera and that of the objects don't align, so it appears to spin off-axis.
Here's where I want to get to:
correct camera position so that the combined box & sun is positioned directly in front of the camera, filling the screen
maintain the sun's position as being fixed (already done I guess)
allow the box to rotate freely in x, y & z around the sun based on touch input - so the user can "flick" the box and watch it flip and spin around the sun
The code structure is straightforward:
class GameViewController: UIViewController {
    var gameView: SCNView!
    var gameScene: SCNScene!
    var cameraNode: SCNNode!
    var targetCreationTime: TimeInterval = 0

    override func viewDidLoad() {
        super.viewDidLoad()
        initView()
        initScene() // createSun() and createCube() called here
        initCamera()
    }
And with respect to the camera position:
func initCamera() {
    let camera = SCNCamera()
    cameraNode = SCNNode()
    cameraNode.camera = camera
    cameraNode.position = SCNVector3(x: 0, y: 0, z: 0)
    cameraNode.rotation = SCNVector4Make(1, 0, 0, .pi/2)
}
But what I've found is that despite playing around with various cameraNode.position and cameraNode.rotation values, the camera view doesn't seem to change.
My questions - any help will be greatly appreciated:
advice on repositioning the camera (what am I doing wrong?!) - once it's in the right place I can easily set "gameView.allowsCameraControl = false"
advice on how to enable the box to spin about its axis around the sun (while the sun remains fixed)
stretch goal! Any kind of general "check out this tutorial" type info on materials and lighting, and embedding this view into a SwiftUI view
Thanks!
I decided to stop fighting the point of rotation and instead reposition the elements around this.
One interesting thing, which I've mentioned in a comment at the start of the createBox() func:
// originally debugCube & debugNode were used for debugging the pivot point of the box
// but I found having this large node helped to balance out the centre of mass
// set to fully transparent and added to boxNode as final step after all other transformations
If you comment out lines 19-26 plus 117 you will completely remove debugNode. Funnily enough, when you do that the box stops spinning correctly; add it back in and everything is fixed. I'm guessing it's adding "mass" to the overall node and helping lock the point of rotation to the correct position. So in the end I just made it transparent!
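For anyone hitting the same problem, an alternative to the transparent debug node (a sketch, untested against this particular project) is to recentre the node's pivot on its own bounding box, so rotations happen around its visual centre:

func centerPivot(of node: SCNNode) {
    let (min, max) = node.boundingBox
    node.pivot = SCNMatrix4MakeTranslation(
        (min.x + max.x) / 2,
        (min.y + max.y) / 2,
        (min.z + max.z) / 2
    )
}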
The final (version 1.0) code is posted on GitHub at github.com/LedenMcLeden/logo
Use the answer in post 57586437 for your camera: remove the camera rotation and turn camera control off. Rotate your box with a simple spin (I'd do independent x, y, z spins just to verify it) so that you'll know if your pivot point is correct. It should be OK by default and spin in place right in front of the camera, but it depends on how you built your cube.
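Roughly, that camera setup boils down to something like this (a sketch; boxNode is assumed to be whatever createCube() returns, and note the camera node has to actually be added to the scene, which the initCamera() above never does):

func initCamera() {
    let camera = SCNCamera()
    cameraNode = SCNNode()
    cameraNode.camera = camera
    cameraNode.position = SCNVector3(x: 0, y: 0, z: 15) // back off along +Z so the origin sits in front of the camera
    gameScene.rootNode.addChildNode(cameraNode)         // the missing step
    gameView.allowsCameraControl = false                // no user camera rotation
}

// A test spin around one axis at a time to verify the pivot:
let spin = SCNAction.repeatForever(SCNAction.rotateBy(x: CGFloat.pi, y: 0, z: 0, duration: 3))
boxNode.runAction(spin)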
If you added the sun and stuff as a subnode of your box, then you're probably in decent shape and the pieces will rotate together.
If you want to do camera rotations similar to cameraControl, then you'll need to add a gesture recognizer and then you can start experimenting with it.
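As a starting point for that, a minimal pan-gesture sketch (gameView and boxNode assumed from the question's code; the 0.01 factor is an arbitrary sensitivity):

// In viewDidLoad:
gameView.addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:))))

@objc func handlePan(_ gesture: UIPanGestureRecognizer) {
    let translation = gesture.translation(in: gameView)
    boxNode.eulerAngles.y += Float(translation.x) * 0.01 // horizontal drag spins around Y
    boxNode.eulerAngles.x += Float(translation.y) * 0.01 // vertical drag spins around X
    gesture.setTranslation(.zero, in: gameView)
}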
Hope that helps!
[Image of the file hierarchy in Xcode of my animation file]
Issue:
Xcode is recognizing multiple animations for a single Collada (.dae) file, but I can't find any documentation on how to access these animations directly. I tried using the Fox game example, but it only loads one of the animations.
Here's my code:
let modelNode = self.addModel_DAE_return(x: 0, y: 0, z: 0, scaleFactor: 0.0005, fileName: "models.scnassets/export_014/export_014_model.dae")
// add the animation to the model
let modelAnim = self.loadAnim_DAE(fileName: "models.scnassets/export_014/export_014_anim.dae")
modelNode.addAnimationPlayer(modelAnim, forKey: "headMove")
modelAnim.play()
// add the model to the scene
node.addChildNode(modelNode)
How can I access and load in the other animations?
Context:
I'm making an AR app. I'm using the Apple Image Recognition example as a base.
Here's a link to it:
https://developer.apple.com/documentation/arkit/recognizing_images_in_an_ar_experience
I animated a skeleton in Maya and exported the animation to a COLLADA (.DAE) file using the OpenCollada extension for Maya.
I exported separate files for the model and the animation, because if I export them as a single file, none of my animations export and Xcode crashes every time I try to access the file to check whether it registers any animations.
I want to access the "Entities" of my animation file so I can just loop over, load, and attach the animations, but I can't find a way to do that.
I have also looked into GameKit a bit, but is there not a way to do this with SceneKit?
Each joint in the rig has animation information. Originally I was only getting the information from the first joint that had animations on it, and not from its children that also had animations attached.
I.e., let's say your rig hierarchy is
hips > rightShoulder > arm > elbow > wrist
and your animations are on the shoulder, arm, elbow, and wrist joints, with no animations on your hip joint.
Using enumerateChildNodes will only grab the animation information from the shoulder joint, while using enumerateHierarchy will grab it from the shoulder, arm, elbow, AND wrist joints.
See below for the function that I used. Note that I had separately loaded and saved my 3D model onto an SCNNode, which is passed in so the animations can be attached to it:
func loadAttachAnim_DAE(fileName: String, modelNode: SCNNode) {
    let scene = SCNScene(named: fileName)!
    // Walk the entire node hierarchy (not just the direct children) and
    // re-attach every animation player found to the model node.
    scene.rootNode.enumerateHierarchy { (child, stop) in
        if !child.animationKeys.isEmpty {
            print("child = \(child) -----------------")
            let animationPlayer = child.animationPlayer(forKey: child.animationKeys[0])!
            modelNode.addAnimationPlayer(animationPlayer, forKey: "\(child)")
            animationPlayer.play()
        }
    }
}
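For reference, calling it with the file names from the question would look roughly like this (addModel_DAE_return is the asker's own helper, not an SDK call):

let modelNode = self.addModel_DAE_return(x: 0, y: 0, z: 0, scaleFactor: 0.0005,
                                         fileName: "models.scnassets/export_014/export_014_model.dae")
loadAttachAnim_DAE(fileName: "models.scnassets/export_014/export_014_anim.dae", modelNode: modelNode)
node.addChildNode(modelNode)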
I need to find the distance between two points inside a room using ARKit.
I will explain my scenario. Inside a room I have one predefined point; say that point is (x1,y1,z1). I grabbed this value from the ARCamera.transform's current position. I then dynamically moved to another point whose ARCamera.transform current position is (x2,y2,z2). My intention is to find the real-world distance between the two.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // The camera pose for the current frame; column 3 is the position.
    let transform = frame.camera.transform
    let x = transform.columns.3.x
    let y = transform.columns.3.y
    let z = transform.columns.3.z
    print("camera transform: \(x), \(y), \(z)")
}
Please find above the code which I am using to get the camera's current position.
Now here is the problem: (x2,y2,z2) is different each time I test at the same physical point (note that my starting point is kept the same, so the ARKit session starts at the same point each time), so the distance varies largely. In short, I can't rely on ARKit to give my camera's current point in a useful manner; ARCamera.transform's current position seems to be a random value which varies depending on some unknown factors.
On googling I have seen that ARCamera.transform's current position gives the position of the device's camera.
Can anybody point me to a solution, or correct me if I am wrong? What exactly is ARCamera.transform's current position, and how can we use it for real-world positioning?
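For what it's worth, once you have two stable camera positions the real-world separation is just the Euclidean distance between the translation columns of the two transforms. A minimal sketch:

func distance(from a: simd_float4x4, to b: simd_float4x4) -> Float {
    // columns.3 holds the translation part of each camera transform
    let p1 = simd_float3(a.columns.3.x, a.columns.3.y, a.columns.3.z)
    let p2 = simd_float3(b.columns.3.x, b.columns.3.y, b.columns.3.z)
    return simd_distance(p1, p2) // ARKit world units are meters
}

Note that the world origin is set wherever tracking starts and can drift, so coordinates are only meaningful relative to each other within one session.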
So I have a bit of a project I am trying to do. I am trying to get the device's rotation relative to gravity, and its translation from where it started. So basically I'm getting "tracking" data for the device. I plan to apply this by making a 3D point that will mimic the data I record from the device later on.
Anyway, to attempt to achieve this I thought it would be best to work with SceneKit, that way I can see things in three dimensions just like the data I am trying to record. Right now I have been trying to get the ship to rotate so that it always looks like it's following gravity (like it's on the ground or something) no matter what the device rotation is. I figure once I have this down it will be a cinch to apply it to a point. So I made the following code:
if let attitude = motionManager.deviceMotion?.attitude {
    print(attitude)
    ship.eulerAngles.y = -Float(attitude.roll)
    ship.eulerAngles.z = -Float(attitude.yaw)
    ship.eulerAngles.x = -Float(attitude.pitch)
}
When you run only one of the rotation lines, everything works perfectly; it behaves properly on that axis. However, when I do all three axes at once it becomes chaotic and performs far from expected, with jitter and everything.
I guess my question is:
Does anyone know how to fix my code above so that the ship properly stays "upright" no matter what the orientation is?
J.Doe!
First, there is a slight trick. If you want to use the iPhone lying down as the default position, you have to notice that the axes used in SceneKit are different from those used by DeviceMotion. Check the axes:
[Image of the axes] (source: apple.com)
First thing you need to set is the camera position. When you start a SceneKit project it creates your camera in the position (0, 0, 15). There is a problem with that:
With eulerAngles = (0,0,0) the object would lie in the xz plane, but as long as you are looking from Z you just see it from the side. For that to be equivalent to the iPhone lying down, you need the camera to look from above, so it's as if you were looking at the scene from the phone (like a camera).
// create and add a camera to the scene
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
scene.rootNode.addChildNode(cameraNode)
// place the camera
cameraNode.position = SCNVector3(x: 0, y: 15, z: 0)
// but then you need to make the cameraNode face the ship (the origin of the axis), rotating it
cameraNode.eulerAngles.x = -Float(M_PI)*0.5 //or Float(M_PI)*1.5
With this we are going to see the ship from above, so the first part is done.
Now we gotta make the ship remain "still" (facing the ground) with the device rotation.
// First we need to conform to SCNSceneRendererDelegate
class GameViewController: UIViewController, SCNSceneRendererDelegate {
    private let motion = CMMotionManager()
    ...
Then on viewDidLoad:
// important: if you remove the SceneKit initial action from the ship,
// the scene would be static, and static scenes do not trigger the renderer update;
// setting the playing property to true forces that:
scnView.playing = true
if motion.deviceMotionAvailable {
    motion.startDeviceMotionUpdates()
    motion.deviceMotionUpdateInterval = 1.0/60.0
}
Then we go to the update method
Look at the axes: Y and Z are "switched" if you compare the SceneKit axes with the deviceMotion axes. Z is up on the phone but points to the side in the scene, while Y is up in the scene but points to the side on the phone. So pitch, roll and yaw, respectively associated with the X, Y and Z axes, will be applied as pitch, yaw and roll.
Notice I've made the roll value positive; that's because there is something else "switched". It's kinda hard to visualize. The Y axis of device motion is correlated with the Z axis of the scene. Now imagine an object rotating along this axis in the same direction (clockwise, for example): the two would be going in opposite directions because of the disposition of the axes. (You can set the roll negative to see how it goes wrong.)
func renderer(renderer: SCNSceneRenderer, updateAtTime time: NSTimeInterval) {
    if let rot = motion.deviceMotion?.attitude {
        print("\(rot.pitch) \(rot.roll) \(rot.yaw)")
        ship.eulerAngles.x = -Float(rot.pitch)
        ship.eulerAngles.y = -Float(rot.yaw)
        ship.eulerAngles.z = Float(rot.roll)
    }
}
Hope that helps! See ya!